Nov 29 01:17:14 np0005539563 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 01:17:14 np0005539563 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 01:17:14 np0005539563 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:17:14 np0005539563 kernel: BIOS-provided physical RAM map:
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 01:17:14 np0005539563 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 01:17:14 np0005539563 kernel: NX (Execute Disable) protection: active
Nov 29 01:17:14 np0005539563 kernel: APIC: Static calls initialized
Nov 29 01:17:14 np0005539563 kernel: SMBIOS 2.8 present.
Nov 29 01:17:14 np0005539563 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 01:17:14 np0005539563 kernel: Hypervisor detected: KVM
Nov 29 01:17:14 np0005539563 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 01:17:14 np0005539563 kernel: kvm-clock: using sched offset of 3280124491 cycles
Nov 29 01:17:14 np0005539563 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 01:17:14 np0005539563 kernel: tsc: Detected 2799.998 MHz processor
Nov 29 01:17:14 np0005539563 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 01:17:14 np0005539563 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 01:17:14 np0005539563 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 01:17:14 np0005539563 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 01:17:14 np0005539563 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 01:17:14 np0005539563 kernel: Using GB pages for direct mapping
Nov 29 01:17:14 np0005539563 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 01:17:14 np0005539563 kernel: ACPI: Early table checksum verification disabled
Nov 29 01:17:14 np0005539563 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 01:17:14 np0005539563 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:14 np0005539563 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:14 np0005539563 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:14 np0005539563 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 01:17:14 np0005539563 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:14 np0005539563 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:17:14 np0005539563 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 01:17:14 np0005539563 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 01:17:14 np0005539563 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 01:17:14 np0005539563 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 01:17:14 np0005539563 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 01:17:14 np0005539563 kernel: No NUMA configuration found
Nov 29 01:17:14 np0005539563 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 01:17:14 np0005539563 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 01:17:14 np0005539563 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 01:17:14 np0005539563 kernel: Zone ranges:
Nov 29 01:17:14 np0005539563 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 01:17:14 np0005539563 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 01:17:14 np0005539563 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:17:14 np0005539563 kernel:  Device   empty
Nov 29 01:17:14 np0005539563 kernel: Movable zone start for each node
Nov 29 01:17:14 np0005539563 kernel: Early memory node ranges
Nov 29 01:17:14 np0005539563 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 01:17:14 np0005539563 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 01:17:14 np0005539563 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:17:14 np0005539563 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 01:17:14 np0005539563 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 01:17:14 np0005539563 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 01:17:14 np0005539563 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 01:17:14 np0005539563 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 01:17:14 np0005539563 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 01:17:14 np0005539563 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 01:17:14 np0005539563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 01:17:14 np0005539563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 01:17:14 np0005539563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 01:17:14 np0005539563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 01:17:14 np0005539563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 01:17:14 np0005539563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 01:17:14 np0005539563 kernel: TSC deadline timer available
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Max. logical packages:   8
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Max. logical dies:       8
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Max. dies per package:   1
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Max. threads per core:   1
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Num. cores per package:     1
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Num. threads per package:   1
Nov 29 01:17:14 np0005539563 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 01:17:14 np0005539563 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 01:17:14 np0005539563 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 01:17:14 np0005539563 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 01:17:14 np0005539563 kernel: Booting paravirtualized kernel on KVM
Nov 29 01:17:14 np0005539563 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 01:17:14 np0005539563 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 01:17:14 np0005539563 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 01:17:14 np0005539563 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 01:17:14 np0005539563 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:17:14 np0005539563 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 01:17:14 np0005539563 kernel: random: crng init done
Nov 29 01:17:14 np0005539563 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: Fallback order for Node 0: 0 
Nov 29 01:17:14 np0005539563 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 01:17:14 np0005539563 kernel: Policy zone: Normal
Nov 29 01:17:14 np0005539563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 01:17:14 np0005539563 kernel: software IO TLB: area num 8.
Nov 29 01:17:14 np0005539563 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 01:17:14 np0005539563 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 01:17:14 np0005539563 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 01:17:14 np0005539563 kernel: Dynamic Preempt: voluntary
Nov 29 01:17:14 np0005539563 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 01:17:14 np0005539563 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 01:17:14 np0005539563 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 01:17:14 np0005539563 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 01:17:14 np0005539563 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 01:17:14 np0005539563 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 01:17:14 np0005539563 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 01:17:14 np0005539563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 01:17:14 np0005539563 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:17:14 np0005539563 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:17:14 np0005539563 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:17:14 np0005539563 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 01:17:14 np0005539563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 01:17:14 np0005539563 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 01:17:14 np0005539563 kernel: Console: colour VGA+ 80x25
Nov 29 01:17:14 np0005539563 kernel: printk: console [ttyS0] enabled
Nov 29 01:17:14 np0005539563 kernel: ACPI: Core revision 20230331
Nov 29 01:17:14 np0005539563 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 01:17:14 np0005539563 kernel: x2apic enabled
Nov 29 01:17:14 np0005539563 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 01:17:14 np0005539563 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 01:17:14 np0005539563 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 29 01:17:14 np0005539563 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 01:17:14 np0005539563 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 01:17:14 np0005539563 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 01:17:14 np0005539563 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 01:17:14 np0005539563 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 01:17:14 np0005539563 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 01:17:14 np0005539563 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 01:17:14 np0005539563 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 01:17:14 np0005539563 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 01:17:14 np0005539563 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 01:17:14 np0005539563 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 01:17:14 np0005539563 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 01:17:14 np0005539563 kernel: x86/bugs: return thunk changed
Nov 29 01:17:14 np0005539563 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 01:17:14 np0005539563 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 01:17:14 np0005539563 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 01:17:14 np0005539563 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 01:17:14 np0005539563 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 01:17:14 np0005539563 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 01:17:14 np0005539563 kernel: Freeing SMP alternatives memory: 40K
Nov 29 01:17:14 np0005539563 kernel: pid_max: default: 32768 minimum: 301
Nov 29 01:17:14 np0005539563 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 01:17:14 np0005539563 kernel: landlock: Up and running.
Nov 29 01:17:14 np0005539563 kernel: Yama: becoming mindful.
Nov 29 01:17:14 np0005539563 kernel: SELinux:  Initializing.
Nov 29 01:17:14 np0005539563 kernel: LSM support for eBPF active
Nov 29 01:17:14 np0005539563 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 01:17:14 np0005539563 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 01:17:14 np0005539563 kernel: ... version:                0
Nov 29 01:17:14 np0005539563 kernel: ... bit width:              48
Nov 29 01:17:14 np0005539563 kernel: ... generic registers:      6
Nov 29 01:17:14 np0005539563 kernel: ... value mask:             0000ffffffffffff
Nov 29 01:17:14 np0005539563 kernel: ... max period:             00007fffffffffff
Nov 29 01:17:14 np0005539563 kernel: ... fixed-purpose events:   0
Nov 29 01:17:14 np0005539563 kernel: ... event mask:             000000000000003f
Nov 29 01:17:14 np0005539563 kernel: signal: max sigframe size: 1776
Nov 29 01:17:14 np0005539563 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 01:17:14 np0005539563 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 01:17:14 np0005539563 kernel: smp: Bringing up secondary CPUs ...
Nov 29 01:17:14 np0005539563 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 01:17:14 np0005539563 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 01:17:14 np0005539563 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 01:17:14 np0005539563 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 29 01:17:14 np0005539563 kernel: node 0 deferred pages initialised in 11ms
Nov 29 01:17:14 np0005539563 kernel: Memory: 7765800K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 29 01:17:14 np0005539563 kernel: devtmpfs: initialized
Nov 29 01:17:14 np0005539563 kernel: x86/mm: Memory block size: 128MB
Nov 29 01:17:14 np0005539563 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 01:17:14 np0005539563 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 01:17:14 np0005539563 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 01:17:14 np0005539563 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 01:17:14 np0005539563 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 01:17:14 np0005539563 kernel: audit: initializing netlink subsys (disabled)
Nov 29 01:17:14 np0005539563 kernel: audit: type=2000 audit(1764397032.010:1): state=initialized audit_enabled=0 res=1
Nov 29 01:17:14 np0005539563 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 01:17:14 np0005539563 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 01:17:14 np0005539563 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 01:17:14 np0005539563 kernel: cpuidle: using governor menu
Nov 29 01:17:14 np0005539563 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 01:17:14 np0005539563 kernel: PCI: Using configuration type 1 for base access
Nov 29 01:17:14 np0005539563 kernel: PCI: Using configuration type 1 for extended access
Nov 29 01:17:14 np0005539563 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 01:17:14 np0005539563 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 01:17:14 np0005539563 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 01:17:14 np0005539563 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 01:17:14 np0005539563 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 01:17:14 np0005539563 kernel: Demotion targets for Node 0: null
Nov 29 01:17:14 np0005539563 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 01:17:14 np0005539563 kernel: ACPI: Added _OSI(Module Device)
Nov 29 01:17:14 np0005539563 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 01:17:14 np0005539563 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 01:17:14 np0005539563 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 01:17:14 np0005539563 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 01:17:14 np0005539563 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 01:17:14 np0005539563 kernel: ACPI: Interpreter enabled
Nov 29 01:17:14 np0005539563 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 01:17:14 np0005539563 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 01:17:14 np0005539563 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 01:17:14 np0005539563 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 01:17:14 np0005539563 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 01:17:14 np0005539563 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 01:17:14 np0005539563 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [3] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [4] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [5] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [6] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [7] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [8] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [9] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [10] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [11] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [12] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [13] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [14] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [15] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [16] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [17] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [18] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [19] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [20] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [21] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [22] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [23] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [24] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [25] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [26] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [27] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [28] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [29] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [30] registered
Nov 29 01:17:14 np0005539563 kernel: acpiphp: Slot [31] registered
Nov 29 01:17:14 np0005539563 kernel: PCI host bridge to bus 0000:00
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 01:17:14 np0005539563 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 01:17:14 np0005539563 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 01:17:14 np0005539563 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 01:17:14 np0005539563 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 01:17:14 np0005539563 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 01:17:14 np0005539563 kernel: iommu: Default domain type: Translated
Nov 29 01:17:14 np0005539563 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 01:17:14 np0005539563 kernel: SCSI subsystem initialized
Nov 29 01:17:14 np0005539563 kernel: ACPI: bus type USB registered
Nov 29 01:17:14 np0005539563 kernel: usbcore: registered new interface driver usbfs
Nov 29 01:17:14 np0005539563 kernel: usbcore: registered new interface driver hub
Nov 29 01:17:14 np0005539563 kernel: usbcore: registered new device driver usb
Nov 29 01:17:14 np0005539563 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 01:17:14 np0005539563 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 01:17:14 np0005539563 kernel: PTP clock support registered
Nov 29 01:17:14 np0005539563 kernel: EDAC MC: Ver: 3.0.0
Nov 29 01:17:14 np0005539563 kernel: NetLabel: Initializing
Nov 29 01:17:14 np0005539563 kernel: NetLabel:  domain hash size = 128
Nov 29 01:17:14 np0005539563 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 01:17:14 np0005539563 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 01:17:14 np0005539563 kernel: PCI: Using ACPI for IRQ routing
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 01:17:14 np0005539563 kernel: vgaarb: loaded
Nov 29 01:17:14 np0005539563 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 01:17:14 np0005539563 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 01:17:14 np0005539563 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 01:17:14 np0005539563 kernel: pnp: PnP ACPI init
Nov 29 01:17:14 np0005539563 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 01:17:14 np0005539563 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_INET protocol family
Nov 29 01:17:14 np0005539563 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 01:17:14 np0005539563 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_XDP protocol family
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 01:17:14 np0005539563 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 01:17:14 np0005539563 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 01:17:14 np0005539563 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71070 usecs
Nov 29 01:17:14 np0005539563 kernel: PCI: CLS 0 bytes, default 64
Nov 29 01:17:14 np0005539563 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 01:17:14 np0005539563 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 01:17:14 np0005539563 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 01:17:14 np0005539563 kernel: ACPI: bus type thunderbolt registered
Nov 29 01:17:14 np0005539563 kernel: Initialise system trusted keyrings
Nov 29 01:17:14 np0005539563 kernel: Key type blacklist registered
Nov 29 01:17:14 np0005539563 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 01:17:14 np0005539563 kernel: zbud: loaded
Nov 29 01:17:14 np0005539563 kernel: integrity: Platform Keyring initialized
Nov 29 01:17:14 np0005539563 kernel: integrity: Machine keyring initialized
Nov 29 01:17:14 np0005539563 kernel: Freeing initrd memory: 85868K
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_ALG protocol family
Nov 29 01:17:14 np0005539563 kernel: xor: automatically using best checksumming function   avx       
Nov 29 01:17:14 np0005539563 kernel: Key type asymmetric registered
Nov 29 01:17:14 np0005539563 kernel: Asymmetric key parser 'x509' registered
Nov 29 01:17:14 np0005539563 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 01:17:14 np0005539563 kernel: io scheduler mq-deadline registered
Nov 29 01:17:14 np0005539563 kernel: io scheduler kyber registered
Nov 29 01:17:14 np0005539563 kernel: io scheduler bfq registered
Nov 29 01:17:14 np0005539563 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 01:17:14 np0005539563 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 01:17:14 np0005539563 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 01:17:14 np0005539563 kernel: ACPI: button: Power Button [PWRF]
Nov 29 01:17:14 np0005539563 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 01:17:14 np0005539563 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 01:17:14 np0005539563 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 01:17:14 np0005539563 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 01:17:14 np0005539563 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 01:17:14 np0005539563 kernel: Non-volatile memory driver v1.3
Nov 29 01:17:14 np0005539563 kernel: rdac: device handler registered
Nov 29 01:17:14 np0005539563 kernel: hp_sw: device handler registered
Nov 29 01:17:14 np0005539563 kernel: emc: device handler registered
Nov 29 01:17:14 np0005539563 kernel: alua: device handler registered
Nov 29 01:17:14 np0005539563 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 01:17:14 np0005539563 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 01:17:14 np0005539563 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 01:17:14 np0005539563 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 01:17:14 np0005539563 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 01:17:14 np0005539563 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 01:17:14 np0005539563 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 01:17:14 np0005539563 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 01:17:14 np0005539563 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 01:17:14 np0005539563 kernel: hub 1-0:1.0: USB hub found
Nov 29 01:17:14 np0005539563 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 01:17:14 np0005539563 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 01:17:14 np0005539563 kernel: usbserial: USB Serial support registered for generic
Nov 29 01:17:14 np0005539563 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 01:17:14 np0005539563 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 01:17:14 np0005539563 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 01:17:14 np0005539563 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 01:17:14 np0005539563 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 01:17:14 np0005539563 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 01:17:14 np0005539563 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 01:17:14 np0005539563 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 01:17:14 np0005539563 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 01:17:14 np0005539563 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:17:13 UTC (1764397033)
Nov 29 01:17:14 np0005539563 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 01:17:14 np0005539563 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 01:17:14 np0005539563 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 01:17:14 np0005539563 kernel: usbcore: registered new interface driver usbhid
Nov 29 01:17:14 np0005539563 kernel: usbhid: USB HID core driver
Nov 29 01:17:14 np0005539563 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 01:17:14 np0005539563 kernel: Initializing XFRM netlink socket
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_INET6 protocol family
Nov 29 01:17:14 np0005539563 kernel: Segment Routing with IPv6
Nov 29 01:17:14 np0005539563 kernel: NET: Registered PF_PACKET protocol family
Nov 29 01:17:14 np0005539563 kernel: mpls_gso: MPLS GSO support
Nov 29 01:17:14 np0005539563 kernel: IPI shorthand broadcast: enabled
Nov 29 01:17:14 np0005539563 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 01:17:14 np0005539563 kernel: AES CTR mode by8 optimization enabled
Nov 29 01:17:14 np0005539563 kernel: sched_clock: Marking stable (1415015841, 147214472)->(1707152319, -144922006)
Nov 29 01:17:14 np0005539563 kernel: registered taskstats version 1
Nov 29 01:17:14 np0005539563 kernel: Loading compiled-in X.509 certificates
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 01:17:14 np0005539563 kernel: Demotion targets for Node 0: null
Nov 29 01:17:14 np0005539563 kernel: page_owner is disabled
Nov 29 01:17:14 np0005539563 kernel: Key type .fscrypt registered
Nov 29 01:17:14 np0005539563 kernel: Key type fscrypt-provisioning registered
Nov 29 01:17:14 np0005539563 kernel: Key type big_key registered
Nov 29 01:17:14 np0005539563 kernel: Key type encrypted registered
Nov 29 01:17:14 np0005539563 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 01:17:14 np0005539563 kernel: Loading compiled-in module X.509 certificates
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:17:14 np0005539563 kernel: ima: Allocated hash algorithm: sha256
Nov 29 01:17:14 np0005539563 kernel: ima: No architecture policies found
Nov 29 01:17:14 np0005539563 kernel: evm: Initialising EVM extended attributes:
Nov 29 01:17:14 np0005539563 kernel: evm: security.selinux
Nov 29 01:17:14 np0005539563 kernel: evm: security.SMACK64 (disabled)
Nov 29 01:17:14 np0005539563 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 01:17:14 np0005539563 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 01:17:14 np0005539563 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 01:17:14 np0005539563 kernel: evm: security.apparmor (disabled)
Nov 29 01:17:14 np0005539563 kernel: evm: security.ima
Nov 29 01:17:14 np0005539563 kernel: evm: security.capability
Nov 29 01:17:14 np0005539563 kernel: evm: HMAC attrs: 0x1
Nov 29 01:17:14 np0005539563 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 01:17:14 np0005539563 kernel: Running certificate verification RSA selftest
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 01:17:14 np0005539563 kernel: Running certificate verification ECDSA selftest
Nov 29 01:17:14 np0005539563 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 01:17:14 np0005539563 kernel: clk: Disabling unused clocks
Nov 29 01:17:14 np0005539563 kernel: Freeing unused decrypted memory: 2028K
Nov 29 01:17:14 np0005539563 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 01:17:14 np0005539563 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 01:17:14 np0005539563 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 01:17:14 np0005539563 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 01:17:14 np0005539563 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 01:17:14 np0005539563 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 01:17:14 np0005539563 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 01:17:14 np0005539563 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 01:17:14 np0005539563 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 01:17:14 np0005539563 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 01:17:14 np0005539563 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 01:17:14 np0005539563 kernel: Run /init as init process
Nov 29 01:17:14 np0005539563 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:17:14 np0005539563 systemd: Detected virtualization kvm.
Nov 29 01:17:14 np0005539563 systemd: Detected architecture x86-64.
Nov 29 01:17:14 np0005539563 systemd: Running in initrd.
Nov 29 01:17:14 np0005539563 systemd: No hostname configured, using default hostname.
Nov 29 01:17:14 np0005539563 systemd: Hostname set to <localhost>.
Nov 29 01:17:14 np0005539563 systemd: Initializing machine ID from VM UUID.
Nov 29 01:17:14 np0005539563 systemd: Queued start job for default target Initrd Default Target.
Nov 29 01:17:14 np0005539563 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:17:14 np0005539563 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:17:14 np0005539563 systemd: Reached target Initrd /usr File System.
Nov 29 01:17:14 np0005539563 systemd: Reached target Local File Systems.
Nov 29 01:17:14 np0005539563 systemd: Reached target Path Units.
Nov 29 01:17:14 np0005539563 systemd: Reached target Slice Units.
Nov 29 01:17:14 np0005539563 systemd: Reached target Swaps.
Nov 29 01:17:14 np0005539563 systemd: Reached target Timer Units.
Nov 29 01:17:14 np0005539563 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 01:17:14 np0005539563 systemd: Listening on Journal Socket (/dev/log).
Nov 29 01:17:14 np0005539563 systemd: Listening on Journal Socket.
Nov 29 01:17:14 np0005539563 systemd: Listening on udev Control Socket.
Nov 29 01:17:14 np0005539563 systemd: Listening on udev Kernel Socket.
Nov 29 01:17:14 np0005539563 systemd: Reached target Socket Units.
Nov 29 01:17:14 np0005539563 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:17:14 np0005539563 systemd: Starting Journal Service...
Nov 29 01:17:14 np0005539563 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:17:14 np0005539563 systemd: Starting Apply Kernel Variables...
Nov 29 01:17:14 np0005539563 systemd: Starting Create System Users...
Nov 29 01:17:14 np0005539563 systemd: Starting Setup Virtual Console...
Nov 29 01:17:14 np0005539563 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:17:14 np0005539563 systemd: Finished Apply Kernel Variables.
Nov 29 01:17:14 np0005539563 systemd: Finished Create System Users.
Nov 29 01:17:14 np0005539563 systemd-journald[307]: Journal started
Nov 29 01:17:14 np0005539563 systemd-journald[307]: Runtime Journal (/run/log/journal/9fe1370835784487abe99bea2dcb1209) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:17:14 np0005539563 systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 29 01:17:14 np0005539563 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 29 01:17:14 np0005539563 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 01:17:14 np0005539563 systemd: Started Journal Service.
Nov 29 01:17:14 np0005539563 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:17:14 np0005539563 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:17:14 np0005539563 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:17:14 np0005539563 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:17:14 np0005539563 systemd[1]: Finished Setup Virtual Console.
Nov 29 01:17:14 np0005539563 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 01:17:14 np0005539563 systemd[1]: Starting dracut cmdline hook...
Nov 29 01:17:14 np0005539563 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 01:17:14 np0005539563 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:17:14 np0005539563 systemd[1]: Finished dracut cmdline hook.
Nov 29 01:17:14 np0005539563 systemd[1]: Starting dracut pre-udev hook...
Nov 29 01:17:14 np0005539563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 01:17:14 np0005539563 kernel: device-mapper: uevent: version 1.0.3
Nov 29 01:17:14 np0005539563 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 01:17:14 np0005539563 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 01:17:14 np0005539563 kernel: RPC: Registered udp transport module.
Nov 29 01:17:14 np0005539563 kernel: RPC: Registered tcp transport module.
Nov 29 01:17:14 np0005539563 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 01:17:14 np0005539563 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 01:17:14 np0005539563 rpc.statd[443]: Version 2.5.4 starting
Nov 29 01:17:14 np0005539563 rpc.statd[443]: Initializing NSM state
Nov 29 01:17:14 np0005539563 rpc.idmapd[448]: Setting log level to 0
Nov 29 01:17:14 np0005539563 systemd[1]: Finished dracut pre-udev hook.
Nov 29 01:17:14 np0005539563 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:17:14 np0005539563 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:17:14 np0005539563 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:17:14 np0005539563 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 01:17:14 np0005539563 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 01:17:14 np0005539563 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 01:17:15 np0005539563 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 01:17:15 np0005539563 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:17:15 np0005539563 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:17:15 np0005539563 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:17:15 np0005539563 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:17:15 np0005539563 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target Network.
Nov 29 01:17:15 np0005539563 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:17:15 np0005539563 systemd[1]: Starting dracut initqueue hook...
Nov 29 01:17:15 np0005539563 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 01:17:15 np0005539563 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 01:17:15 np0005539563 kernel: scsi host0: ata_piix
Nov 29 01:17:15 np0005539563 kernel: scsi host1: ata_piix
Nov 29 01:17:15 np0005539563 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 01:17:15 np0005539563 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 01:17:15 np0005539563 kernel: vda: vda1
Nov 29 01:17:15 np0005539563 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 01:17:15 np0005539563 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target System Initialization.
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target Basic System.
Nov 29 01:17:15 np0005539563 kernel: ata1: found unknown device (class 0)
Nov 29 01:17:15 np0005539563 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 01:17:15 np0005539563 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 01:17:15 np0005539563 systemd-udevd[500]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:17:15 np0005539563 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 01:17:15 np0005539563 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target Initrd Root Device.
Nov 29 01:17:15 np0005539563 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 01:17:15 np0005539563 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 01:17:15 np0005539563 systemd[1]: Finished dracut initqueue hook.
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 01:17:15 np0005539563 systemd[1]: Reached target Remote File Systems.
Nov 29 01:17:15 np0005539563 systemd[1]: Starting dracut pre-mount hook...
Nov 29 01:17:15 np0005539563 systemd[1]: Finished dracut pre-mount hook.
Nov 29 01:17:15 np0005539563 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 01:17:15 np0005539563 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 01:17:15 np0005539563 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:17:15 np0005539563 systemd[1]: Mounting /sysroot...
Nov 29 01:17:16 np0005539563 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 01:17:16 np0005539563 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 01:17:16 np0005539563 kernel: XFS (vda1): Ending clean mount
Nov 29 01:17:16 np0005539563 systemd[1]: Mounted /sysroot.
Nov 29 01:17:16 np0005539563 systemd[1]: Reached target Initrd Root File System.
Nov 29 01:17:16 np0005539563 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 01:17:16 np0005539563 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 01:17:16 np0005539563 systemd[1]: Reached target Initrd File Systems.
Nov 29 01:17:16 np0005539563 systemd[1]: Reached target Initrd Default Target.
Nov 29 01:17:16 np0005539563 systemd[1]: Starting dracut mount hook...
Nov 29 01:17:16 np0005539563 systemd[1]: Finished dracut mount hook.
Nov 29 01:17:16 np0005539563 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 01:17:16 np0005539563 rpc.idmapd[448]: exiting on signal 15
Nov 29 01:17:16 np0005539563 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 01:17:16 np0005539563 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Network.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Timer Units.
Nov 29 01:17:16 np0005539563 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Initrd Default Target.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Basic System.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Initrd Root Device.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Path Units.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Remote File Systems.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Slice Units.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Socket Units.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target System Initialization.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Local File Systems.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Swaps.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut mount hook.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut initqueue hook.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Setup Virtual Console.
Nov 29 01:17:16 np0005539563 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Closed udev Control Socket.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Closed udev Kernel Socket.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 01:17:16 np0005539563 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped dracut cmdline hook.
Nov 29 01:17:16 np0005539563 systemd[1]: Starting Cleanup udev Database...
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 01:17:16 np0005539563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 01:17:16 np0005539563 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Stopped Create System Users.
Nov 29 01:17:16 np0005539563 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 01:17:16 np0005539563 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 01:17:16 np0005539563 systemd[1]: Finished Cleanup udev Database.
Nov 29 01:17:16 np0005539563 systemd[1]: Reached target Switch Root.
Nov 29 01:17:16 np0005539563 systemd[1]: Starting Switch Root...
Nov 29 01:17:16 np0005539563 systemd[1]: Switching root.
Nov 29 01:17:16 np0005539563 systemd-journald[307]: Journal stopped
Nov 29 01:17:17 np0005539563 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 29 01:17:17 np0005539563 kernel: audit: type=1404 audit(1764397036.551:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:17:17 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:17:17 np0005539563 kernel: audit: type=1403 audit(1764397036.681:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 01:17:17 np0005539563 systemd: Successfully loaded SELinux policy in 133.212ms.
Nov 29 01:17:17 np0005539563 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.693ms.
Nov 29 01:17:17 np0005539563 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:17:17 np0005539563 systemd: Detected virtualization kvm.
Nov 29 01:17:17 np0005539563 systemd: Detected architecture x86-64.
Nov 29 01:17:17 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:17:17 np0005539563 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd: Stopped Switch Root.
Nov 29 01:17:17 np0005539563 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 01:17:17 np0005539563 systemd: Created slice Slice /system/getty.
Nov 29 01:17:17 np0005539563 systemd: Created slice Slice /system/serial-getty.
Nov 29 01:17:17 np0005539563 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 01:17:17 np0005539563 systemd: Created slice User and Session Slice.
Nov 29 01:17:17 np0005539563 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:17:17 np0005539563 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 01:17:17 np0005539563 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 01:17:17 np0005539563 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:17:17 np0005539563 systemd: Stopped target Switch Root.
Nov 29 01:17:17 np0005539563 systemd: Stopped target Initrd File Systems.
Nov 29 01:17:17 np0005539563 systemd: Stopped target Initrd Root File System.
Nov 29 01:17:17 np0005539563 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 01:17:17 np0005539563 systemd: Reached target Path Units.
Nov 29 01:17:17 np0005539563 systemd: Reached target rpc_pipefs.target.
Nov 29 01:17:17 np0005539563 systemd: Reached target Slice Units.
Nov 29 01:17:17 np0005539563 systemd: Reached target Swaps.
Nov 29 01:17:17 np0005539563 systemd: Reached target Local Verity Protected Volumes.
Nov 29 01:17:17 np0005539563 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 01:17:17 np0005539563 systemd: Reached target RPC Port Mapper.
Nov 29 01:17:17 np0005539563 systemd: Listening on Process Core Dump Socket.
Nov 29 01:17:17 np0005539563 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 01:17:17 np0005539563 systemd: Listening on udev Control Socket.
Nov 29 01:17:17 np0005539563 systemd: Listening on udev Kernel Socket.
Nov 29 01:17:17 np0005539563 systemd: Mounting Huge Pages File System...
Nov 29 01:17:17 np0005539563 systemd: Mounting POSIX Message Queue File System...
Nov 29 01:17:17 np0005539563 systemd: Mounting Kernel Debug File System...
Nov 29 01:17:17 np0005539563 systemd: Mounting Kernel Trace File System...
Nov 29 01:17:17 np0005539563 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:17:17 np0005539563 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:17:17 np0005539563 systemd: Starting Load Kernel Module configfs...
Nov 29 01:17:17 np0005539563 systemd: Starting Load Kernel Module drm...
Nov 29 01:17:17 np0005539563 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 01:17:17 np0005539563 systemd: Starting Load Kernel Module fuse...
Nov 29 01:17:17 np0005539563 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 01:17:17 np0005539563 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd: Stopped File System Check on Root Device.
Nov 29 01:17:17 np0005539563 systemd: Stopped Journal Service.
Nov 29 01:17:17 np0005539563 systemd: Starting Journal Service...
Nov 29 01:17:17 np0005539563 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:17:17 np0005539563 systemd: Starting Generate network units from Kernel command line...
Nov 29 01:17:17 np0005539563 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:17:17 np0005539563 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 01:17:17 np0005539563 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 01:17:17 np0005539563 systemd: Starting Apply Kernel Variables...
Nov 29 01:17:17 np0005539563 kernel: fuse: init (API version 7.37)
Nov 29 01:17:17 np0005539563 systemd: Starting Coldplug All udev Devices...
Nov 29 01:17:17 np0005539563 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:17 np0005539563 systemd-journald[681]: Journal started
Nov 29 01:17:17 np0005539563 systemd-journald[681]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:17:17 np0005539563 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 01:17:17 np0005539563 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd: Started Journal Service.
Nov 29 01:17:17 np0005539563 systemd[1]: Mounted Huge Pages File System.
Nov 29 01:17:17 np0005539563 systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 01:17:17 np0005539563 systemd[1]: Mounted Kernel Debug File System.
Nov 29 01:17:17 np0005539563 systemd[1]: Mounted Kernel Trace File System.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 01:17:17 np0005539563 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:17:17 np0005539563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 01:17:17 np0005539563 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:17:17 np0005539563 kernel: ACPI: bus type drm_connector registered
Nov 29 01:17:17 np0005539563 systemd[1]: Mounting FUSE Control File System...
Nov 29 01:17:17 np0005539563 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 01:17:17 np0005539563 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Create System Users...
Nov 29 01:17:17 np0005539563 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Load Kernel Module drm.
Nov 29 01:17:17 np0005539563 systemd-journald[681]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:17:17 np0005539563 systemd-journald[681]: Received client request to flush runtime journal.
Nov 29 01:17:17 np0005539563 systemd[1]: Mounted FUSE Control File System.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 01:17:17 np0005539563 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Create System Users.
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:17:17 np0005539563 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 01:17:17 np0005539563 systemd[1]: Reached target Local File Systems.
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 01:17:17 np0005539563 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 01:17:17 np0005539563 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 01:17:17 np0005539563 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 01:17:17 np0005539563 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:17:17 np0005539563 bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Security Auditing Service...
Nov 29 01:17:17 np0005539563 systemd[1]: Starting RPC Bind...
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 01:17:17 np0005539563 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 01:17:17 np0005539563 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 01:17:17 np0005539563 augenrules[708]: /sbin/augenrules: No change
Nov 29 01:17:17 np0005539563 augenrules[723]: No rules
Nov 29 01:17:17 np0005539563 augenrules[723]: enabled 1
Nov 29 01:17:17 np0005539563 augenrules[723]: failure 1
Nov 29 01:17:17 np0005539563 augenrules[723]: pid 703
Nov 29 01:17:17 np0005539563 augenrules[723]: rate_limit 0
Nov 29 01:17:17 np0005539563 augenrules[723]: backlog_limit 8192
Nov 29 01:17:17 np0005539563 augenrules[723]: lost 0
Nov 29 01:17:17 np0005539563 augenrules[723]: backlog 0
Nov 29 01:17:17 np0005539563 augenrules[723]: backlog_wait_time 60000
Nov 29 01:17:17 np0005539563 augenrules[723]: backlog_wait_time_actual 0
Nov 29 01:17:17 np0005539563 systemd[1]: Started Security Auditing Service.
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 01:17:17 np0005539563 systemd[1]: Started RPC Bind.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:17:17 np0005539563 systemd[1]: Starting Update is Completed...
Nov 29 01:17:17 np0005539563 systemd[1]: Finished Update is Completed.
Nov 29 01:17:17 np0005539563 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:17:17 np0005539563 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:17:17 np0005539563 systemd[1]: Reached target System Initialization.
Nov 29 01:17:17 np0005539563 systemd[1]: Started dnf makecache --timer.
Nov 29 01:17:17 np0005539563 systemd[1]: Started Daily rotation of log files.
Nov 29 01:17:17 np0005539563 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 01:17:17 np0005539563 systemd[1]: Reached target Timer Units.
Nov 29 01:17:17 np0005539563 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 01:17:17 np0005539563 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 01:17:18 np0005539563 systemd[1]: Reached target Socket Units.
Nov 29 01:17:18 np0005539563 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 01:17:18 np0005539563 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:17:18 np0005539563 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 01:17:18 np0005539563 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:17:18 np0005539563 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:17:18 np0005539563 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:17:18 np0005539563 systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:17:18 np0005539563 systemd[1]: Started D-Bus System Message Bus.
Nov 29 01:17:18 np0005539563 systemd[1]: Reached target Basic System.
Nov 29 01:17:18 np0005539563 dbus-broker-lau[765]: Ready
Nov 29 01:17:18 np0005539563 systemd[1]: Starting NTP client/server...
Nov 29 01:17:18 np0005539563 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 01:17:18 np0005539563 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 01:17:18 np0005539563 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 01:17:18 np0005539563 systemd[1]: Started irqbalance daemon.
Nov 29 01:17:18 np0005539563 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 01:17:18 np0005539563 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:17:18 np0005539563 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:17:18 np0005539563 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:17:18 np0005539563 systemd[1]: Reached target sshd-keygen.target.
Nov 29 01:17:18 np0005539563 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 01:17:18 np0005539563 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 01:17:18 np0005539563 systemd[1]: Starting User Login Management...
Nov 29 01:17:18 np0005539563 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 01:17:18 np0005539563 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 01:17:18 np0005539563 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 01:17:18 np0005539563 systemd[1]: Started NTP client/server.
Nov 29 01:17:18 np0005539563 chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 01:17:18 np0005539563 chronyd[792]: Loaded 0 symmetric keys
Nov 29 01:17:18 np0005539563 chronyd[792]: Using right/UTC timezone to obtain leap second data
Nov 29 01:17:18 np0005539563 chronyd[792]: Loaded seccomp filter (level 2)
Nov 29 01:17:18 np0005539563 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 01:17:18 np0005539563 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 01:17:18 np0005539563 systemd-logind[785]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 01:17:18 np0005539563 systemd-logind[785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 01:17:18 np0005539563 systemd-logind[785]: New seat seat0.
Nov 29 01:17:18 np0005539563 systemd[1]: Started User Login Management.
Nov 29 01:17:18 np0005539563 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 01:17:18 np0005539563 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 01:17:18 np0005539563 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 01:17:18 np0005539563 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 01:17:18 np0005539563 kernel: Console: switching to colour dummy device 80x25
Nov 29 01:17:18 np0005539563 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 01:17:18 np0005539563 kernel: [drm] features: -context_init
Nov 29 01:17:18 np0005539563 kernel: [drm] number of scanouts: 1
Nov 29 01:17:18 np0005539563 kernel: [drm] number of cap sets: 0
Nov 29 01:17:18 np0005539563 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 01:17:18 np0005539563 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 01:17:18 np0005539563 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 01:17:18 np0005539563 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 01:17:18 np0005539563 kernel: kvm_amd: TSC scaling supported
Nov 29 01:17:18 np0005539563 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 01:17:18 np0005539563 kernel: kvm_amd: Nested Paging enabled
Nov 29 01:17:18 np0005539563 kernel: kvm_amd: LBR virtualization supported
Nov 29 01:17:18 np0005539563 iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Nov 29 01:17:18 np0005539563 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 01:17:18 np0005539563 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:17:18 +0000. Up 6.36 seconds.
Nov 29 01:17:18 np0005539563 systemd[1]: run-cloud\x2dinit-tmp-tmp_666gcg3.mount: Deactivated successfully.
Nov 29 01:17:18 np0005539563 systemd[1]: Starting Hostname Service...
Nov 29 01:17:18 np0005539563 systemd[1]: Started Hostname Service.
Nov 29 01:17:18 np0005539563 systemd-hostnamed[853]: Hostname set to <np0005539563.novalocal> (static)
Nov 29 01:17:18 np0005539563 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 01:17:18 np0005539563 systemd[1]: Reached target Preparation for Network.
Nov 29 01:17:19 np0005539563 systemd[1]: Starting Network Manager...
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0576] NetworkManager (version 1.54.1-1.el9) is starting... (boot:0c7ae4db-0f1a-4da3-8ef6-e4098d4e22b0)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0581] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0695] manager[0x55c488b17080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0739] hostname: hostname: using hostnamed
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0740] hostname: static hostname changed from (none) to "np0005539563.novalocal"
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0746] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0840] manager[0x55c488b17080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0840] manager[0x55c488b17080]: rfkill: WWAN hardware radio set enabled
Nov 29 01:17:19 np0005539563 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0885] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0885] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0886] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0887] manager: Networking is enabled by state file
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0889] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0899] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0924] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0942] dhcp: init: Using DHCP client 'internal'
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0954] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0973] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0982] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.0991] device (lo): Activation: starting connection 'lo' (3aadc541-76b7-4062-89e5-f28944387640)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1001] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1004] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1042] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1048] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1051] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1053] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1056] device (eth0): carrier: link connected
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1060] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:17:19 np0005539563 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1070] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1079] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1084] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1085] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1088] manager: NetworkManager state is now CONNECTING
Nov 29 01:17:19 np0005539563 systemd[1]: Started Network Manager.
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1091] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1103] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1107] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:17:19 np0005539563 systemd[1]: Reached target Network.
Nov 29 01:17:19 np0005539563 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1166] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1176] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:17:19 np0005539563 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1199] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1210] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1212] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:17:19 np0005539563 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1219] device (lo): Activation: successful, device activated.
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1229] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1232] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1236] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1240] device (eth0): Activation: successful, device activated.
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1247] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:17:19 np0005539563 NetworkManager[857]: <info>  [1764397039.1250] manager: startup complete
Nov 29 01:17:19 np0005539563 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 01:17:19 np0005539563 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:17:19 np0005539563 systemd[1]: Reached target NFS client services.
Nov 29 01:17:19 np0005539563 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:17:19 np0005539563 systemd[1]: Reached target Remote File Systems.
Nov 29 01:17:19 np0005539563 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:17:19 np0005539563 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:17:19 np0005539563 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 01:17:19 np0005539563 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:17:19 +0000. Up 7.28 seconds.
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.184         | 255.255.255.0 | global | fa:16:3e:fa:1b:89 |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fefa:1b89/64 |       .       |  link  | fa:16:3e:fa:1b:89 |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 01:17:19 np0005539563 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:17:20 np0005539563 cloud-init[920]: Generating public/private rsa key pair.
Nov 29 01:17:20 np0005539563 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 01:17:20 np0005539563 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 01:17:20 np0005539563 cloud-init[920]: The key fingerprint is:
Nov 29 01:17:20 np0005539563 cloud-init[920]: SHA256:iNL+0xv9eNXsuiwoEFf9XlbtCLZk1YIW5Lx0gYiZom0 root@np0005539563.novalocal
Nov 29 01:17:20 np0005539563 cloud-init[920]: The key's randomart image is:
Nov 29 01:17:20 np0005539563 cloud-init[920]: +---[RSA 3072]----+
Nov 29 01:17:20 np0005539563 cloud-init[920]: |         + +o=o..|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |      . + ooX ..+|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |     o . . =++.+.|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |   ...E..  ..oo +|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |  . o..oS   .. = |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |   o  .  .    o o|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |    .  o. .. . . |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |     .. o..oo.  .|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |      ...o....+o |
Nov 29 01:17:20 np0005539563 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:17:20 np0005539563 cloud-init[920]: Generating public/private ecdsa key pair.
Nov 29 01:17:20 np0005539563 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 01:17:20 np0005539563 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 01:17:20 np0005539563 cloud-init[920]: The key fingerprint is:
Nov 29 01:17:20 np0005539563 cloud-init[920]: SHA256:H6AEQ94U3C3a2vmTPGlledMGgzZ/EGd2s2mUpCImlPE root@np0005539563.novalocal
Nov 29 01:17:20 np0005539563 cloud-init[920]: The key's randomart image is:
Nov 29 01:17:20 np0005539563 cloud-init[920]: +---[ECDSA 256]---+
Nov 29 01:17:20 np0005539563 cloud-init[920]: |   .+.o=o.    ...|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |   . =o.+ .   o+=|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |    . o+.E . o.==|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |     ...+.. = ++ |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |      .oS... +.= |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |      . o. .+ + +|
Nov 29 01:17:20 np0005539563 cloud-init[920]: |         o.= . + |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |          O      |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |         . o     |
Nov 29 01:17:20 np0005539563 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:17:20 np0005539563 cloud-init[920]: Generating public/private ed25519 key pair.
Nov 29 01:17:20 np0005539563 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 01:17:20 np0005539563 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 01:17:20 np0005539563 cloud-init[920]: The key fingerprint is:
Nov 29 01:17:20 np0005539563 cloud-init[920]: SHA256:6TGyUA02v8TwUoSiCvgSWovt7nxI+GWuz16oLFtv13U root@np0005539563.novalocal
Nov 29 01:17:20 np0005539563 cloud-init[920]: The key's randomart image is:
Nov 29 01:17:20 np0005539563 cloud-init[920]: +--[ED25519 256]--+
Nov 29 01:17:20 np0005539563 cloud-init[920]: |      =oo        |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |    ...X         |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |.  . .o *        |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |+ o  . o o       |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |oO .. . S        |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |* = oo + o. E    |
Nov 29 01:17:20 np0005539563 cloud-init[920]: | =.=. o... .     |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |.++++.. .        |
Nov 29 01:17:20 np0005539563 cloud-init[920]: |.=B*=.           |
Nov 29 01:17:20 np0005539563 cloud-init[920]: +----[SHA256]-----+
Nov 29 01:17:20 np0005539563 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 01:17:20 np0005539563 systemd[1]: Reached target Cloud-config availability.
Nov 29 01:17:20 np0005539563 systemd[1]: Reached target Network is Online.
Nov 29 01:17:20 np0005539563 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 01:17:20 np0005539563 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 01:17:20 np0005539563 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 01:17:20 np0005539563 systemd[1]: Starting System Logging Service...
Nov 29 01:17:20 np0005539563 sm-notify[1002]: Version 2.5.4 starting
Nov 29 01:17:20 np0005539563 systemd[1]: Starting OpenSSH server daemon...
Nov 29 01:17:20 np0005539563 systemd[1]: Starting Permit User Sessions...
Nov 29 01:17:20 np0005539563 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 01:17:20 np0005539563 systemd[1]: Started OpenSSH server daemon.
Nov 29 01:17:20 np0005539563 systemd[1]: Finished Permit User Sessions.
Nov 29 01:17:20 np0005539563 systemd[1]: Started Command Scheduler.
Nov 29 01:17:20 np0005539563 systemd[1]: Started Getty on tty1.
Nov 29 01:17:20 np0005539563 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 01:17:20 np0005539563 systemd[1]: Reached target Login Prompts.
Nov 29 01:17:20 np0005539563 rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Nov 29 01:17:20 np0005539563 systemd[1]: Started System Logging Service.
Nov 29 01:17:20 np0005539563 rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 01:17:20 np0005539563 systemd[1]: Reached target Multi-User System.
Nov 29 01:17:20 np0005539563 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 01:17:20 np0005539563 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 01:17:20 np0005539563 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 01:17:20 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:17:20 np0005539563 kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Nov 29 01:17:20 np0005539563 kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 01:17:21 np0005539563 cloud-init[1129]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:17:21 +0000. Up 8.88 seconds.
Nov 29 01:17:21 np0005539563 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 01:17:21 np0005539563 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 01:17:21 np0005539563 dracut[1281]: dracut-057-102.git20250818.el9
Nov 29 01:17:21 np0005539563 cloud-init[1299]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:17:21 +0000. Up 9.29 seconds.
Nov 29 01:17:21 np0005539563 dracut[1283]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 01:17:21 np0005539563 cloud-init[1317]: #############################################################
Nov 29 01:17:21 np0005539563 cloud-init[1320]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 01:17:21 np0005539563 cloud-init[1326]: 256 SHA256:H6AEQ94U3C3a2vmTPGlledMGgzZ/EGd2s2mUpCImlPE root@np0005539563.novalocal (ECDSA)
Nov 29 01:17:21 np0005539563 cloud-init[1333]: 256 SHA256:6TGyUA02v8TwUoSiCvgSWovt7nxI+GWuz16oLFtv13U root@np0005539563.novalocal (ED25519)
Nov 29 01:17:21 np0005539563 cloud-init[1338]: 3072 SHA256:iNL+0xv9eNXsuiwoEFf9XlbtCLZk1YIW5Lx0gYiZom0 root@np0005539563.novalocal (RSA)
Nov 29 01:17:21 np0005539563 cloud-init[1341]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 01:17:21 np0005539563 cloud-init[1344]: #############################################################
Nov 29 01:17:21 np0005539563 cloud-init[1299]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:17:21 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.45 seconds
Nov 29 01:17:21 np0005539563 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 01:17:21 np0005539563 systemd[1]: Reached target Cloud-init target.
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: memstrack is not available
Nov 29 01:17:22 np0005539563 dracut[1283]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:17:22 np0005539563 dracut[1283]: memstrack is not available
Nov 29 01:17:22 np0005539563 dracut[1283]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:17:23 np0005539563 dracut[1283]: *** Including module: systemd ***
Nov 29 01:17:23 np0005539563 dracut[1283]: *** Including module: fips ***
Nov 29 01:17:23 np0005539563 dracut[1283]: *** Including module: systemd-initrd ***
Nov 29 01:17:23 np0005539563 dracut[1283]: *** Including module: i18n ***
Nov 29 01:17:23 np0005539563 dracut[1283]: *** Including module: drm ***
Nov 29 01:17:24 np0005539563 chronyd[792]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Nov 29 01:17:24 np0005539563 chronyd[792]: System clock TAI offset set to 37 seconds
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: prefixdevname ***
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: kernel-modules ***
Nov 29 01:17:24 np0005539563 kernel: block vda: the capability attribute has been deprecated.
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: kernel-modules-extra ***
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: qemu ***
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: fstab-sys ***
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: rootfs-block ***
Nov 29 01:17:24 np0005539563 dracut[1283]: *** Including module: terminfo ***
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: udev-rules ***
Nov 29 01:17:25 np0005539563 dracut[1283]: Skipping udev rule: 91-permissions.rules
Nov 29 01:17:25 np0005539563 dracut[1283]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: virtiofs ***
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: dracut-systemd ***
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: usrmount ***
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: base ***
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: fs-lib ***
Nov 29 01:17:25 np0005539563 dracut[1283]: *** Including module: kdumpbase ***
Nov 29 01:17:26 np0005539563 dracut[1283]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 01:17:26 np0005539563 dracut[1283]:  microcode_ctl module: mangling fw_dir
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 01:17:26 np0005539563 dracut[1283]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 01:17:26 np0005539563 dracut[1283]: *** Including module: openssl ***
Nov 29 01:17:26 np0005539563 dracut[1283]: *** Including module: shutdown ***
Nov 29 01:17:27 np0005539563 dracut[1283]: *** Including module: squash ***
Nov 29 01:17:27 np0005539563 dracut[1283]: *** Including modules done ***
Nov 29 01:17:27 np0005539563 dracut[1283]: *** Installing kernel module dependencies ***
Nov 29 01:17:27 np0005539563 dracut[1283]: *** Installing kernel module dependencies done ***
Nov 29 01:17:27 np0005539563 dracut[1283]: *** Resolving executable dependencies ***
Nov 29 01:17:28 np0005539563 irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 01:17:28 np0005539563 irqbalance[780]: IRQ 25 affinity is now unmanaged
Nov 29 01:17:28 np0005539563 irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 01:17:28 np0005539563 irqbalance[780]: IRQ 31 affinity is now unmanaged
Nov 29 01:17:28 np0005539563 irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 01:17:28 np0005539563 irqbalance[780]: IRQ 28 affinity is now unmanaged
Nov 29 01:17:28 np0005539563 irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 01:17:28 np0005539563 irqbalance[780]: IRQ 32 affinity is now unmanaged
Nov 29 01:17:28 np0005539563 irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 01:17:28 np0005539563 irqbalance[780]: IRQ 30 affinity is now unmanaged
Nov 29 01:17:28 np0005539563 irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 01:17:28 np0005539563 irqbalance[780]: IRQ 29 affinity is now unmanaged
Nov 29 01:17:29 np0005539563 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:17:29 np0005539563 dracut[1283]: *** Resolving executable dependencies done ***
Nov 29 01:17:29 np0005539563 dracut[1283]: *** Generating early-microcode cpio image ***
Nov 29 01:17:29 np0005539563 dracut[1283]: *** Store current command line parameters ***
Nov 29 01:17:29 np0005539563 dracut[1283]: Stored kernel commandline:
Nov 29 01:17:29 np0005539563 dracut[1283]: No dracut internal kernel commandline stored in the initramfs
Nov 29 01:17:29 np0005539563 dracut[1283]: *** Install squash loader ***
Nov 29 01:17:30 np0005539563 dracut[1283]: *** Squashing the files inside the initramfs ***
Nov 29 01:17:31 np0005539563 dracut[1283]: *** Squashing the files inside the initramfs done ***
Nov 29 01:17:31 np0005539563 dracut[1283]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 01:17:31 np0005539563 dracut[1283]: *** Hardlinking files ***
Nov 29 01:17:31 np0005539563 dracut[1283]: *** Hardlinking files done ***
Nov 29 01:17:31 np0005539563 dracut[1283]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 01:17:32 np0005539563 kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Nov 29 01:17:32 np0005539563 kdumpctl[1012]: kdump: Starting kdump: [OK]
Nov 29 01:17:32 np0005539563 systemd[1]: Finished Crash recovery kernel arming.
Nov 29 01:17:32 np0005539563 systemd[1]: Startup finished in 1.776s (kernel) + 2.603s (initrd) + 15.957s (userspace) = 20.336s.
Nov 29 01:17:44 np0005539563 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 01:17:44 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 01:17:44 np0005539563 systemd-logind[785]: New session 1 of user zuul.
Nov 29 01:17:44 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 01:17:44 np0005539563 systemd[1]: Starting User Manager for UID 1000...
Nov 29 01:17:45 np0005539563 systemd[4297]: Queued start job for default target Main User Target.
Nov 29 01:17:45 np0005539563 systemd[4297]: Created slice User Application Slice.
Nov 29 01:17:45 np0005539563 systemd[4297]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:17:45 np0005539563 systemd[4297]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:17:45 np0005539563 systemd[4297]: Reached target Paths.
Nov 29 01:17:45 np0005539563 systemd[4297]: Reached target Timers.
Nov 29 01:17:45 np0005539563 systemd[4297]: Starting D-Bus User Message Bus Socket...
Nov 29 01:17:45 np0005539563 systemd[4297]: Starting Create User's Volatile Files and Directories...
Nov 29 01:17:45 np0005539563 systemd[4297]: Finished Create User's Volatile Files and Directories.
Nov 29 01:17:45 np0005539563 systemd[4297]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:17:45 np0005539563 systemd[4297]: Reached target Sockets.
Nov 29 01:17:45 np0005539563 systemd[4297]: Reached target Basic System.
Nov 29 01:17:45 np0005539563 systemd[4297]: Reached target Main User Target.
Nov 29 01:17:45 np0005539563 systemd[4297]: Startup finished in 120ms.
Nov 29 01:17:45 np0005539563 systemd[1]: Started User Manager for UID 1000.
Nov 29 01:17:45 np0005539563 systemd[1]: Started Session 1 of User zuul.
Nov 29 01:17:45 np0005539563 python3[4379]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:17:48 np0005539563 python3[4407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:17:49 np0005539563 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:17:57 np0005539563 python3[4467]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:17:59 np0005539563 python3[4507]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 01:18:01 np0005539563 python3[4533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO009q8XvhgDCH/CYntn/Nj7apUjGycgerKyxcYKwqlrsQqtgZ+4b1AwoiDJ6ACRb/89P698Zu8SgdnR/v9pn0LFMXEa2g1lWeFaQovDGpqBz4mYtyZIbvWOJAPw3VQm6HJnXakvw8LrVDql95W2i6anqAeBFXq/hs4EAkNzhNR4pua8lJHwAgkexNQ+7fdWwTNsd+E5A23VTA0NzgPyGjZyo5PcuqueNFdk/JaekH4GB/BVWyh0KIH6JnPu98++RaPl1C8BRj9wWE/zvooiZsXPQCOfW1oql3StPekqBwJti2jRygs685e4eHPE+tO1VzwfPTyXZfQAe9dOPlZsWdnKtIw5H/2tajn7DELzA77VUbsuA1U+jNJ9sE0PwaWj6JsBqDB9tBbb31S7B12ZvrS250Qc0Q/c4Qv/WdSE87jti5CrwLfsjPX2DOo37gqMfu2EB90zV1L+h9vMlmkg3g8rOzpQK5jspXBfUIO2Pq0Nyyj9IORN7HLSKyZmK+teE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:01 np0005539563 python3[4557]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:02 np0005539563 python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:02 np0005539563 python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397081.8811133-251-156951404667540/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=6e6c58d2ce3447e2bcc44a9308b07ccb_id_rsa follow=False checksum=d281ebb5f24e5d8783693a36170923a3c25cbd23 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:03 np0005539563 python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:03 np0005539563 python3[4921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397082.8545675-306-273819162489781/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=6e6c58d2ce3447e2bcc44a9308b07ccb_id_rsa.pub follow=False checksum=96e05a798ef30e23c5626e997638db7097fc90b9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:04 np0005539563 python3[4969]: ansible-ping Invoked with data=pong
Nov 29 01:18:05 np0005539563 python3[4993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:18:08 np0005539563 python3[5051]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 01:18:09 np0005539563 python3[5083]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:09 np0005539563 python3[5107]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:09 np0005539563 python3[5131]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:10 np0005539563 python3[5155]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:10 np0005539563 python3[5179]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:10 np0005539563 python3[5203]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:12 np0005539563 python3[5229]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:13 np0005539563 python3[5307]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:13 np0005539563 python3[5380]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397092.7450562-31-1672324398605/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:14 np0005539563 python3[5428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:14 np0005539563 python3[5452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539563 python3[5476]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539563 python3[5500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539563 python3[5524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:15 np0005539563 python3[5548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:16 np0005539563 python3[5572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:16 np0005539563 python3[5596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:16 np0005539563 python3[5620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539563 python3[5644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539563 python3[5668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539563 python3[5692]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:17 np0005539563 python3[5716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539563 python3[5740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539563 python3[5764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539563 python3[5788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:18 np0005539563 python3[5812]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539563 python3[5836]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539563 python3[5860]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:19 np0005539563 python3[5884]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:20 np0005539563 python3[5908]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:20 np0005539563 python3[5932]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:20 np0005539563 python3[5956]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:21 np0005539563 python3[5980]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:21 np0005539563 python3[6004]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:21 np0005539563 python3[6028]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:18:24 np0005539563 python3[6054]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:18:24 np0005539563 systemd[1]: Starting Time & Date Service...
Nov 29 01:18:24 np0005539563 systemd[1]: Started Time & Date Service.
Nov 29 01:18:25 np0005539563 systemd-timedated[6056]: Changed time zone to 'UTC' (UTC).
Nov 29 01:18:26 np0005539563 python3[6085]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:26 np0005539563 python3[6161]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:26 np0005539563 python3[6232]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397106.4023392-251-144510910740304/source _original_basename=tmpqy31ry3p follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:27 np0005539563 python3[6332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:27 np0005539563 python3[6403]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397107.303086-301-238102445686067/source _original_basename=tmpb1ncher8 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:28 np0005539563 python3[6505]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:29 np0005539563 python3[6578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397108.4695811-381-248118839082868/source _original_basename=tmpm90s_3lz follow=False checksum=faf3cf8b99afe143012f152c554f05a914a3872d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:29 np0005539563 python3[6626]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:30 np0005539563 python3[6652]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:30 np0005539563 python3[6732]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:30 np0005539563 python3[6805]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397110.2621467-451-92166564075929/source _original_basename=tmp72n1fkzo follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:31 np0005539563 python3[6856]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-beaa-304e-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:32 np0005539563 python3[6883]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-beaa-304e-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 01:18:33 np0005539563 python3[6912]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:51 np0005539563 python3[6938]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:55 np0005539563 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 01:19:35 np0005539563 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 01:19:35 np0005539563 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8202] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:19:35 np0005539563 systemd-udevd[6941]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8378] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8413] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8416] device (eth1): carrier: link connected
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8417] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8422] policy: auto-activating connection 'Wired connection 1' (85ce326f-4f7b-326a-be5d-00bceb3dd984)
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8426] device (eth1): Activation: starting connection 'Wired connection 1' (85ce326f-4f7b-326a-be5d-00bceb3dd984)
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8427] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8429] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8432] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:19:35 np0005539563 NetworkManager[857]: <info>  [1764397175.8436] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:36 np0005539563 python3[6968]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-957e-49b2-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:47 np0005539563 python3[7048]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:47 np0005539563 python3[7121]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397186.701274-104-45419568471226/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1c8e43f5351c6bad183609dd588445ac610a71cf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:48 np0005539563 python3[7171]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:19:48 np0005539563 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 01:19:48 np0005539563 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 01:19:48 np0005539563 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 01:19:48 np0005539563 systemd[1]: Stopping Network Manager...
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3479] caught SIGTERM, shutting down normally.
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3490] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3490] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3490] dhcp4 (eth0): state changed no lease
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3495] manager: NetworkManager state is now CONNECTING
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3607] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3608] dhcp4 (eth1): state changed no lease
Nov 29 01:19:48 np0005539563 NetworkManager[857]: <info>  [1764397188.3647] exiting (success)
Nov 29 01:19:48 np0005539563 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:19:48 np0005539563 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:19:48 np0005539563 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 01:19:48 np0005539563 systemd[1]: Stopped Network Manager.
Nov 29 01:19:48 np0005539563 systemd[1]: NetworkManager.service: Consumed 1.209s CPU time, 9.9M memory peak.
Nov 29 01:19:48 np0005539563 systemd[1]: Starting Network Manager...
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.4008] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0c7ae4db-0f1a-4da3-8ef6-e4098d4e22b0)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.4011] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.4060] manager[0x55fc9675f070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:19:48 np0005539563 systemd[1]: Starting Hostname Service...
Nov 29 01:19:48 np0005539563 systemd[1]: Started Hostname Service.
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5115] hostname: hostname: using hostnamed
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5117] hostname: static hostname changed from (none) to "np0005539563.novalocal"
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5124] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5132] manager[0x55fc9675f070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5133] manager[0x55fc9675f070]: rfkill: WWAN hardware radio set enabled
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5180] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5181] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5184] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5186] manager: Networking is enabled by state file
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5191] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5199] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5246] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5261] dhcp: init: Using DHCP client 'internal'
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5267] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5275] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5284] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5296] device (lo): Activation: starting connection 'lo' (3aadc541-76b7-4062-89e5-f28944387640)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5309] device (eth0): carrier: link connected
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5317] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5326] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5328] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5339] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5350] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5360] device (eth1): carrier: link connected
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5368] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5375] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (85ce326f-4f7b-326a-be5d-00bceb3dd984) (indicated)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5377] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5385] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5398] device (eth1): Activation: starting connection 'Wired connection 1' (85ce326f-4f7b-326a-be5d-00bceb3dd984)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5410] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:19:48 np0005539563 systemd[1]: Started Network Manager.
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5418] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5424] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5429] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5434] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5440] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5448] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5453] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5460] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5470] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5475] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5492] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5498] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5525] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5533] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5543] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5555] device (lo): Activation: successful, device activated.
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5575] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:19:48 np0005539563 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5656] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5679] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5681] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5684] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5689] device (eth0): Activation: successful, device activated.
Nov 29 01:19:48 np0005539563 NetworkManager[7180]: <info>  [1764397188.5694] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:19:48 np0005539563 python3[7256]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-957e-49b2-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:53 np0005539563 systemd[4297]: Starting Mark boot as successful...
Nov 29 01:19:53 np0005539563 systemd[4297]: Finished Mark boot as successful.
Nov 29 01:19:58 np0005539563 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:20:18 np0005539563 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.1700] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:20:34 np0005539563 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:20:34 np0005539563 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2019] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2023] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2039] device (eth1): Activation: successful, device activated.
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2049] manager: startup complete
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2052] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <warn>  [1764397234.2060] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2070] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2148] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2148] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2149] dhcp4 (eth1): state changed no lease
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2170] policy: auto-activating connection 'ci-private-network' (bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8)
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2176] device (eth1): Activation: starting connection 'ci-private-network' (bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8)
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2178] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2182] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2192] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2204] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2717] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2721] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:20:34 np0005539563 NetworkManager[7180]: <info>  [1764397234.2734] device (eth1): Activation: successful, device activated.
Nov 29 01:20:44 np0005539563 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:20:49 np0005539563 systemd-logind[785]: Session 1 logged out. Waiting for processes to exit.
Nov 29 01:21:55 np0005539563 systemd-logind[785]: New session 3 of user zuul.
Nov 29 01:21:55 np0005539563 systemd[1]: Started Session 3 of User zuul.
Nov 29 01:21:55 np0005539563 python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:21:56 np0005539563 python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397315.6178637-373-209787194191259/source _original_basename=tmppuem0wx_ follow=False checksum=c97a8eb9e2d79dc37ef10c85c5553d4edb376b92 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:22:00 np0005539563 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 01:22:00 np0005539563 systemd-logind[785]: Session 3 logged out. Waiting for processes to exit.
Nov 29 01:22:00 np0005539563 systemd-logind[785]: Removed session 3.
Nov 29 01:22:53 np0005539563 systemd[4297]: Created slice User Background Tasks Slice.
Nov 29 01:22:53 np0005539563 systemd[4297]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 01:22:53 np0005539563 systemd[4297]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 01:29:00 np0005539563 systemd-logind[785]: New session 4 of user zuul.
Nov 29 01:29:00 np0005539563 systemd[1]: Started Session 4 of User zuul.
Nov 29 01:29:00 np0005539563 python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-534d-d776-000000000ca8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:01 np0005539563 python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:01 np0005539563 python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:01 np0005539563 python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:02 np0005539563 python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:02 np0005539563 python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:03 np0005539563 python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:29:04 np0005539563 python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397743.3936303-368-35333619114012/source _original_basename=tmp89wwvm89 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:05 np0005539563 python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:29:05 np0005539563 systemd[1]: Reloading.
Nov 29 01:29:05 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:29:44 np0005539563 python3[7894]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 01:29:46 np0005539563 python3[7920]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:47 np0005539563 python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:47 np0005539563 python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:47 np0005539563 python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:48 np0005539563 python3[8031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-534d-d776-000000000caf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:48 np0005539563 python3[8061]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:29:51 np0005539563 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 01:29:51 np0005539563 systemd[1]: session-4.scope: Consumed 4.374s CPU time.
Nov 29 01:29:51 np0005539563 systemd-logind[785]: Session 4 logged out. Waiting for processes to exit.
Nov 29 01:29:51 np0005539563 systemd-logind[785]: Removed session 4.
Nov 29 01:29:53 np0005539563 systemd-logind[785]: New session 5 of user zuul.
Nov 29 01:29:53 np0005539563 systemd[1]: Started Session 5 of User zuul.
Nov 29 01:29:54 np0005539563 python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:30:10 np0005539563 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:10 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:20 np0005539563 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:20 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:31 np0005539563 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:31 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:30:33 np0005539563 setsebool[8162]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 01:30:33 np0005539563 setsebool[8162]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 01:30:44 np0005539563 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:30:44 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:31:05 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:31:05 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:31:05 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:31:05 np0005539563 systemd[1]: Reloading.
Nov 29 01:31:05 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:31:05 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:31:14 np0005539563 python3[14606]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-59ff-55d2-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:31:15 np0005539563 kernel: evm: overlay not supported
Nov 29 01:31:15 np0005539563 systemd[4297]: Starting D-Bus User Message Bus...
Nov 29 01:31:15 np0005539563 dbus-broker-launch[15050]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 01:31:15 np0005539563 dbus-broker-launch[15050]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 01:31:15 np0005539563 systemd[4297]: Started D-Bus User Message Bus.
Nov 29 01:31:15 np0005539563 dbus-broker-lau[15050]: Ready
Nov 29 01:31:15 np0005539563 systemd[4297]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:31:15 np0005539563 systemd[4297]: Created slice Slice /user.
Nov 29 01:31:15 np0005539563 systemd[4297]: podman-14968.scope: unit configures an IP firewall, but not running as root.
Nov 29 01:31:15 np0005539563 systemd[4297]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 01:31:15 np0005539563 systemd[4297]: Started podman-14968.scope.
Nov 29 01:31:15 np0005539563 systemd[4297]: Started podman-pause-5c8d3b3e.scope.
Nov 29 01:31:16 np0005539563 python3[15410]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.39:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.39:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:16 np0005539563 python3[15410]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 01:31:16 np0005539563 systemd-logind[785]: Session 5 logged out. Waiting for processes to exit.
Nov 29 01:31:16 np0005539563 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 01:31:16 np0005539563 systemd[1]: session-5.scope: Consumed 1min 6.107s CPU time.
Nov 29 01:31:16 np0005539563 systemd-logind[785]: Removed session 5.
Nov 29 01:31:40 np0005539563 systemd-logind[785]: New session 6 of user zuul.
Nov 29 01:31:40 np0005539563 systemd[1]: Started Session 6 of User zuul.
Nov 29 01:31:41 np0005539563 python3[23868]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:41 np0005539563 python3[24020]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:42 np0005539563 python3[24418]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539563.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 01:31:42 np0005539563 python3[24618]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERfnCyU4nTYUWmFTAniOJOUOEv7Xw4lXfUipogpfAF7ccSxhjTd6NQ6pvVg1ljPzhdmgBHnd+DTf/btXVVCAJo= zuul@np0005539562.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:43 np0005539563 python3[24864]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:31:43 np0005539563 python3[25135]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397903.0053568-167-211944400932700/source _original_basename=tmpe7sg370a follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:44 np0005539563 python3[25278]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 01:31:44 np0005539563 systemd[1]: Starting Hostname Service...
Nov 29 01:31:44 np0005539563 systemd[1]: Started Hostname Service.
Nov 29 01:31:44 np0005539563 systemd-hostnamed[25372]: Changed pretty hostname to 'compute-0'
Nov 29 01:31:44 np0005539563 systemd-hostnamed[25372]: Hostname set to <compute-0> (static)
Nov 29 01:31:44 np0005539563 NetworkManager[7180]: <info>  [1764397904.9784] hostname: static hostname changed from "np0005539563.novalocal" to "compute-0"
Nov 29 01:31:44 np0005539563 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:31:45 np0005539563 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:31:45 np0005539563 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 01:31:45 np0005539563 systemd[1]: session-6.scope: Consumed 2.360s CPU time.
Nov 29 01:31:45 np0005539563 systemd-logind[785]: Session 6 logged out. Waiting for processes to exit.
Nov 29 01:31:45 np0005539563 systemd-logind[785]: Removed session 6.
Nov 29 01:31:55 np0005539563 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:32:01 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:32:01 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:32:01 np0005539563 systemd[1]: man-db-cache-update.service: Consumed 58.235s CPU time.
Nov 29 01:32:01 np0005539563 systemd[1]: run-r5e360402f4e44962bdcb914507868eae.service: Deactivated successfully.
Nov 29 01:32:15 np0005539563 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 01:32:15 np0005539563 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:32:15 np0005539563 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 01:32:15 np0005539563 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 01:32:15 np0005539563 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 01:35:48 np0005539563 systemd-logind[785]: New session 7 of user zuul.
Nov 29 01:35:48 np0005539563 systemd[1]: Started Session 7 of User zuul.
Nov 29 01:35:49 np0005539563 python3[29994]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:35:51 np0005539563 python3[30110]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:51 np0005539563 python3[30183]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:52 np0005539563 python3[30209]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:53 np0005539563 python3[30282]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:53 np0005539563 python3[30308]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:53 np0005539563 python3[30381]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:54 np0005539563 python3[30407]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:54 np0005539563 python3[30480]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:54 np0005539563 python3[30506]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:54 np0005539563 python3[30579]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:55 np0005539563 python3[30605]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:55 np0005539563 python3[30678]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:35:55 np0005539563 python3[30704]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:35:56 np0005539563 python3[30777]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398151.1437914-34060-169409061108337/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:09 np0005539563 python3[30835]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:08 np0005539563 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 01:41:08 np0005539563 systemd[1]: session-7.scope: Consumed 4.679s CPU time.
Nov 29 01:41:08 np0005539563 systemd-logind[785]: Session 7 logged out. Waiting for processes to exit.
Nov 29 01:41:08 np0005539563 systemd-logind[785]: Removed session 7.
Nov 29 01:57:49 np0005539563 systemd-logind[785]: New session 8 of user zuul.
Nov 29 01:57:49 np0005539563 systemd[1]: Started Session 8 of User zuul.
Nov 29 01:57:50 np0005539563 python3.9[30997]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:57:51 np0005539563 python3.9[31178]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:58:18 np0005539563 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 01:58:18 np0005539563 systemd[1]: session-8.scope: Consumed 8.096s CPU time.
Nov 29 01:58:18 np0005539563 systemd-logind[785]: Session 8 logged out. Waiting for processes to exit.
Nov 29 01:58:18 np0005539563 systemd-logind[785]: Removed session 8.
Nov 29 01:58:43 np0005539563 systemd-logind[785]: New session 9 of user zuul.
Nov 29 01:58:43 np0005539563 systemd[1]: Started Session 9 of User zuul.
Nov 29 01:58:43 np0005539563 python3.9[31390]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 01:58:45 np0005539563 python3.9[31564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:58:46 np0005539563 python3.9[31716]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:58:49 np0005539563 python3.9[31869]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:58:50 np0005539563 python3.9[32021]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:58:51 np0005539563 python3.9[32173]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:58:51 np0005539563 python3.9[32296]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399530.679292-182-237626594153859/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:58:52 np0005539563 python3.9[32448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:58:53 np0005539563 python3.9[32604]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:58:54 np0005539563 python3.9[32756]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:58:55 np0005539563 python3.9[32906]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:58:58 np0005539563 irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 01:58:58 np0005539563 irqbalance[780]: IRQ 27 affinity is now unmanaged
Nov 29 01:58:59 np0005539563 python3.9[33159]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:59:00 np0005539563 python3.9[33309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:59:02 np0005539563 python3.9[33463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:59:03 np0005539563 python3.9[33621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:59:04 np0005539563 python3.9[33705]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:59:51 np0005539563 systemd[1]: Reloading.
Nov 29 01:59:51 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:59:51 np0005539563 systemd[1]: Starting dnf makecache...
Nov 29 01:59:51 np0005539563 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 01:59:51 np0005539563 dnf[33912]: Failed determining last makecache time.
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-barbican-42b4c41831408a8e323 156 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 157 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-cinder-1c00d6490d88e436f26ef 154 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-python-stevedore-c4acc5639fd2329372142 158 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-python-cloudkitty-tests-tempest-2c80f8 159 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 systemd[1]: Reloading.
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-os-net-config-9758ab42364673d01bc5014e 178 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 188 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-python-designate-tests-tempest-347fdbc 153 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-glance-1fd12c29b339f30fe823e 142 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 162 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-manila-3c01b7181572c95dac462 162 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-python-whitebox-neutron-tests-tempest- 141 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-octavia-ba397f07a7331190208c 104 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-watcher-c014f81a8647287f6dcc 148 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-python-tcib-1124124ec06aadbac34f0d340b 127 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 117 kB/s | 3.0 kB     00:00
Nov 29 01:59:51 np0005539563 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 01:59:51 np0005539563 dnf[33912]: delorean-openstack-swift-dc98a8463506ac520c469a 183 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 systemd[1]: Reloading.
Nov 29 01:59:52 np0005539563 dnf[33912]: delorean-python-tempestconf-8515371b7cceebd4282 176 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: delorean-openstack-heat-ui-013accbfd179753bc3f0 186 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:59:52 np0005539563 dnf[33912]: CentOS Stream 9 - BaseOS                         78 kB/s | 7.3 kB     00:00
Nov 29 01:59:52 np0005539563 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 01:59:52 np0005539563 dnf[33912]: CentOS Stream 9 - AppStream                      33 kB/s | 7.4 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: CentOS Stream 9 - CRB                            74 kB/s | 7.2 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: CentOS Stream 9 - Extras packages                85 kB/s | 8.3 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: dlrn-antelope-testing                           172 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: dlrn-antelope-build-deps                        192 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: centos9-rabbitmq                                120 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: centos9-storage                                 123 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: centos9-opstools                                133 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: NFV SIG OpenvSwitch                             122 kB/s | 3.0 kB     00:00
Nov 29 01:59:52 np0005539563 dnf[33912]: repo-setup-centos-appstream                     229 kB/s | 4.4 kB     00:00
Nov 29 01:59:53 np0005539563 dnf[33912]: repo-setup-centos-baseos                        176 kB/s | 3.9 kB     00:00
Nov 29 01:59:53 np0005539563 dnf[33912]: repo-setup-centos-highavailability              170 kB/s | 3.9 kB     00:00
Nov 29 01:59:53 np0005539563 dnf[33912]: repo-setup-centos-powertools                    183 kB/s | 4.3 kB     00:00
Nov 29 01:59:53 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 01:59:53 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 01:59:53 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 01:59:53 np0005539563 dnf[33912]: Extra Packages for Enterprise Linux 9 - x86_64  262 kB/s |  33 kB     00:00
Nov 29 01:59:53 np0005539563 dnf[33912]: Metadata cache created.
Nov 29 01:59:53 np0005539563 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 01:59:53 np0005539563 systemd[1]: Finished dnf makecache.
Nov 29 01:59:53 np0005539563 systemd[1]: dnf-makecache.service: Consumed 1.775s CPU time.
Nov 29 02:01:12 np0005539563 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:01:12 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:01:12 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 02:01:13 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:01:13 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:01:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:01:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:01:13 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:01:14 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:01:14 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:01:14 np0005539563 systemd[1]: man-db-cache-update.service: Consumed 1.097s CPU time.
Nov 29 02:01:14 np0005539563 systemd[1]: run-rb17f2b5905a346968607874c5bd52b84.service: Deactivated successfully.
Nov 29 02:01:18 np0005539563 python3.9[35288]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:01:21 np0005539563 python3.9[35569]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:01:22 np0005539563 python3.9[35721]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:01:28 np0005539563 python3.9[35874]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:31 np0005539563 python3.9[36027]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:01:38 np0005539563 python3.9[36179]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:01:40 np0005539563 python3.9[36331]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:01:40 np0005539563 python3.9[36454]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399699.5101726-671-274518424959775/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:47 np0005539563 python3.9[36606]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:01:48 np0005539563 python3.9[36758]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:01:49 np0005539563 python3.9[36911]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:01:50 np0005539563 python3.9[37063]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:01:50 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:01:50 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:01:51 np0005539563 python3.9[37217]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:01:52 np0005539563 python3.9[37375]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:01:54 np0005539563 python3.9[37535]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:01:54 np0005539563 python3.9[37688]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:01:55 np0005539563 python3.9[37846]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:01:56 np0005539563 python3.9[37998]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:02:00 np0005539563 python3.9[38152]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:02:01 np0005539563 python3.9[38304]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:02:02 np0005539563 python3.9[38427]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399720.9133415-1028-280706437214928/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:02:03 np0005539563 python3.9[38579]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:02:03 np0005539563 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:02:03 np0005539563 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 02:02:03 np0005539563 systemd-modules-load[38583]: Inserted module 'br_netfilter'
Nov 29 02:02:03 np0005539563 kernel: Bridge firewalling registered
Nov 29 02:02:03 np0005539563 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:02:05 np0005539563 python3.9[38739]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:02:05 np0005539563 python3.9[38862]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399724.7365363-1097-96371144438433/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:02:07 np0005539563 python3.9[39014]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:02:10 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 02:02:10 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 02:02:10 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:02:10 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:02:10 np0005539563 systemd[1]: Reloading.
Nov 29 02:02:11 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:11 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:02:13 np0005539563 python3.9[41443]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:02:14 np0005539563 python3.9[42521]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:02:15 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:02:15 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:02:15 np0005539563 systemd[1]: man-db-cache-update.service: Consumed 5.033s CPU time.
Nov 29 02:02:15 np0005539563 systemd[1]: run-rf0a9380defa74e1c99c458de3ef3625a.service: Deactivated successfully.
Nov 29 02:02:15 np0005539563 python3.9[43093]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:02:16 np0005539563 python3.9[43245]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:16 np0005539563 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:02:17 np0005539563 systemd[1]: Starting Authorization Manager...
Nov 29 02:02:17 np0005539563 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:02:17 np0005539563 polkitd[43462]: Started polkitd version 0.117
Nov 29 02:02:17 np0005539563 systemd[1]: Started Authorization Manager.
Nov 29 02:02:18 np0005539563 python3.9[43632]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:02:18 np0005539563 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:02:18 np0005539563 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:02:18 np0005539563 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:02:18 np0005539563 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:02:18 np0005539563 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:02:19 np0005539563 python3.9[43794]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:02:23 np0005539563 python3.9[43946]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:02:23 np0005539563 systemd[1]: Reloading.
Nov 29 02:02:23 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:24 np0005539563 python3.9[44136]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:02:24 np0005539563 systemd[1]: Reloading.
Nov 29 02:02:24 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:26 np0005539563 python3.9[44325]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:26 np0005539563 python3.9[44478]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:26 np0005539563 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 02:02:27 np0005539563 python3.9[44631]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:29 np0005539563 python3.9[44793]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:02:30 np0005539563 python3.9[44946]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:02:30 np0005539563 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 02:02:30 np0005539563 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 02:02:30 np0005539563 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 02:02:30 np0005539563 systemd[1]: Starting Apply Kernel Variables...
Nov 29 02:02:30 np0005539563 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 02:02:30 np0005539563 systemd[1]: Finished Apply Kernel Variables.
Nov 29 02:02:31 np0005539563 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 02:02:31 np0005539563 systemd[1]: session-9.scope: Consumed 2min 21.348s CPU time.
Nov 29 02:02:31 np0005539563 systemd-logind[785]: Session 9 logged out. Waiting for processes to exit.
Nov 29 02:02:31 np0005539563 systemd-logind[785]: Removed session 9.
Nov 29 02:02:36 np0005539563 systemd-logind[785]: New session 10 of user zuul.
Nov 29 02:02:36 np0005539563 systemd[1]: Started Session 10 of User zuul.
Nov 29 02:02:38 np0005539563 python3.9[45129]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:02:39 np0005539563 python3.9[45285]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:02:40 np0005539563 python3.9[45438]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:02:42 np0005539563 python3.9[45596]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:02:43 np0005539563 python3.9[45756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:02:44 np0005539563 python3.9[45840]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:02:48 np0005539563 python3.9[46004]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:03 np0005539563 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:03:03 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:03:04 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 02:03:04 np0005539563 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 02:03:06 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:06 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:06 np0005539563 systemd[1]: Reloading.
Nov 29 02:03:06 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:06 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:06 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:09 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:09 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:09 np0005539563 systemd[1]: run-rab27f258d4e74076883e447693382d16.service: Deactivated successfully.
Nov 29 02:03:13 np0005539563 python3.9[47103]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:03:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:03:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:13 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:13 np0005539563 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 02:03:13 np0005539563 chown[47144]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 02:03:14 np0005539563 ovs-ctl[47149]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 02:03:14 np0005539563 ovs-ctl[47149]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 02:03:14 np0005539563 ovs-ctl[47149]: Starting ovsdb-server [  OK  ]
Nov 29 02:03:14 np0005539563 ovs-vsctl[47198]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 02:03:14 np0005539563 ovs-vsctl[47218]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"cb98fb5a-8fde-4aab-9a19-a76cfc927075\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 02:03:14 np0005539563 ovs-ctl[47149]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 02:03:14 np0005539563 ovs-ctl[47149]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:03:14 np0005539563 ovs-vsctl[47224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 02:03:14 np0005539563 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 02:03:14 np0005539563 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 02:03:14 np0005539563 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 02:03:14 np0005539563 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 02:03:14 np0005539563 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 02:03:14 np0005539563 ovs-ctl[47269]: Inserting openvswitch module [  OK  ]
Nov 29 02:03:14 np0005539563 ovs-ctl[47238]: Starting ovs-vswitchd [  OK  ]
Nov 29 02:03:14 np0005539563 ovs-vsctl[47286]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 02:03:14 np0005539563 ovs-ctl[47238]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:03:14 np0005539563 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 02:03:14 np0005539563 systemd[1]: Starting Open vSwitch...
Nov 29 02:03:14 np0005539563 systemd[1]: Finished Open vSwitch.
Nov 29 02:03:15 np0005539563 python3.9[47438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:03:16 np0005539563 python3.9[47590]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:03:18 np0005539563 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:03:18 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:03:20 np0005539563 python3.9[47745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:03:20 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 02:03:21 np0005539563 python3.9[47903]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:23 np0005539563 python3.9[48056]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:03:25 np0005539563 python3.9[48343]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:03:26 np0005539563 python3.9[48493]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:03:26 np0005539563 python3.9[48647]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:29 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:29 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:29 np0005539563 systemd[1]: Reloading.
Nov 29 02:03:29 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:29 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:29 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:32 np0005539563 python3.9[48963]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:03:32 np0005539563 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 02:03:32 np0005539563 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 02:03:32 np0005539563 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 02:03:32 np0005539563 NetworkManager[7180]: <info>  [1764399812.4812] caught SIGTERM, shutting down normally.
Nov 29 02:03:32 np0005539563 systemd[1]: Stopping Network Manager...
Nov 29 02:03:32 np0005539563 NetworkManager[7180]: <info>  [1764399812.4823] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:03:32 np0005539563 NetworkManager[7180]: <info>  [1764399812.4823] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:03:32 np0005539563 NetworkManager[7180]: <info>  [1764399812.4823] dhcp4 (eth0): state changed no lease
Nov 29 02:03:32 np0005539563 NetworkManager[7180]: <info>  [1764399812.4825] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:03:32 np0005539563 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:03:32 np0005539563 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:03:32 np0005539563 NetworkManager[7180]: <info>  [1764399812.8358] exiting (success)
Nov 29 02:03:32 np0005539563 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 02:03:32 np0005539563 systemd[1]: Stopped Network Manager.
Nov 29 02:03:32 np0005539563 systemd[1]: NetworkManager.service: Consumed 17.604s CPU time, 4.1M memory peak, read 0B from disk, written 39.0K to disk.
Nov 29 02:03:32 np0005539563 systemd[1]: Starting Network Manager...
Nov 29 02:03:32 np0005539563 NetworkManager[48981]: <info>  [1764399812.8974] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0c7ae4db-0f1a-4da3-8ef6-e4098d4e22b0)
Nov 29 02:03:32 np0005539563 NetworkManager[48981]: <info>  [1764399812.8975] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 02:03:32 np0005539563 NetworkManager[48981]: <info>  [1764399812.9024] manager[0x55b4f0700090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 02:03:32 np0005539563 systemd[1]: Starting Hostname Service...
Nov 29 02:03:32 np0005539563 systemd[1]: Started Hostname Service.
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0016] hostname: hostname: using hostnamed
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0017] hostname: static hostname changed from (none) to "compute-0"
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0021] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0025] manager[0x55b4f0700090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0025] manager[0x55b4f0700090]: rfkill: WWAN hardware radio set enabled
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0043] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0051] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0051] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0052] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0052] manager: Networking is enabled by state file
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0054] settings: Loaded settings plugin: keyfile (internal)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0064] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0097] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0108] dhcp: init: Using DHCP client 'internal'
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0111] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0118] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0124] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0134] device (lo): Activation: starting connection 'lo' (3aadc541-76b7-4062-89e5-f28944387640)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0142] device (eth0): carrier: link connected
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0148] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0154] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0154] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0163] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0171] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0179] device (eth1): carrier: link connected
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0185] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0192] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8) (indicated)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0193] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0200] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0208] device (eth1): Activation: starting connection 'ci-private-network' (bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8)
Nov 29 02:03:33 np0005539563 systemd[1]: Started Network Manager.
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0214] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0221] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0223] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0224] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0226] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0229] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0231] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0232] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0235] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0239] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0240] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0249] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0259] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0285] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.0291] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 02:03:33 np0005539563 systemd[1]: Starting Network Manager Wait Online...
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2697] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2708] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2710] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2711] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2716] device (lo): Activation: successful, device activated.
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2722] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2725] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2728] device (eth1): Activation: successful, device activated.
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2769] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2771] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2775] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2780] device (eth0): Activation: successful, device activated.
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2786] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 02:03:33 np0005539563 NetworkManager[48981]: <info>  [1764399813.2789] manager: startup complete
Nov 29 02:03:33 np0005539563 systemd[1]: Finished Network Manager Wait Online.
Nov 29 02:03:33 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:33 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:33 np0005539563 systemd[1]: run-re38df74e09ad442a8f75762856bd41fc.service: Deactivated successfully.
Nov 29 02:03:33 np0005539563 python3.9[49189]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:03:43 np0005539563 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:03:46 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:03:46 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:03:46 np0005539563 systemd[1]: Reloading.
Nov 29 02:03:46 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:03:46 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:03:46 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:03:53 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:03:53 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:03:53 np0005539563 systemd[1]: run-r45ad9e70c8d74b838f4acf9ca016eff2.service: Deactivated successfully.
Nov 29 02:03:54 np0005539563 python3.9[49649]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:03:55 np0005539563 python3.9[49801]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:56 np0005539563 python3.9[49955]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:57 np0005539563 python3.9[50107]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:58 np0005539563 python3.9[50259]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:58 np0005539563 python3.9[50411]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:03:59 np0005539563 python3.9[50563]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:00 np0005539563 python3.9[50686]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399839.1651492-652-164151201990706/.source _original_basename=.g7lyoiep follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:01 np0005539563 python3.9[50838]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:01 np0005539563 python3.9[50990]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 02:04:02 np0005539563 python3.9[51142]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:03 np0005539563 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 02:04:05 np0005539563 python3.9[51571]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 02:04:06 np0005539563 ansible-async_wrapper.py[51746]: Invoked with j222540268594 300 /home/zuul/.ansible/tmp/ansible-tmp-1764399845.4710412-850-209448405420287/AnsiballZ_edpm_os_net_config.py _
Nov 29 02:04:06 np0005539563 ansible-async_wrapper.py[51749]: Starting module and watcher
Nov 29 02:04:06 np0005539563 ansible-async_wrapper.py[51749]: Start watching 51750 (300)
Nov 29 02:04:06 np0005539563 ansible-async_wrapper.py[51750]: Start module (51750)
Nov 29 02:04:06 np0005539563 ansible-async_wrapper.py[51746]: Return async_wrapper task started.
Nov 29 02:04:06 np0005539563 python3.9[51751]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 02:04:07 np0005539563 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 02:04:07 np0005539563 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 02:04:07 np0005539563 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 02:04:07 np0005539563 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 02:04:07 np0005539563 kernel: cfg80211: failed to load regulatory.db
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3426] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3441] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3910] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3912] audit: op="connection-add" uuid="d160e03e-194d-4449-b49f-eaac2038f69a" name="br-ex-br" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3928] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3929] audit: op="connection-add" uuid="7c435863-a0a6-42ce-8972-f67915454845" name="br-ex-port" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3940] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3942] audit: op="connection-add" uuid="416f0528-7057-4113-a840-7fea97aa35aa" name="eth1-port" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3954] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3955] audit: op="connection-add" uuid="8e03ff0e-b65b-40fa-9ea7-e0a512307030" name="vlan20-port" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3968] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3970] audit: op="connection-add" uuid="1a058103-74b2-4530-9e7d-73ea7a52ede4" name="vlan21-port" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3981] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3982] audit: op="connection-add" uuid="1f1351ea-178b-4eba-98ae-072b20c19ea9" name="vlan22-port" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3993] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.3995] audit: op="connection-add" uuid="5fbce65c-4bfd-4918-ba88-ad2b429915cd" name="vlan23-port" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4013] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4029] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4031] audit: op="connection-add" uuid="22e520a5-cde7-4c5e-a9cb-deadb56c187b" name="br-ex-if" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4081] audit: op="connection-update" uuid="bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8" name="ci-private-network" args="connection.master,connection.controller,connection.slave-type,connection.port-type,connection.timestamp,ovs-interface.type,ipv6.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.method,ipv4.routing-rules,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ovs-external-ids.data" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4097] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4099] audit: op="connection-add" uuid="2485752c-8292-4be6-80ad-553dc3f7d63f" name="vlan20-if" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4114] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4116] audit: op="connection-add" uuid="98922a73-2a13-4b1b-9d16-b908d48bcd36" name="vlan21-if" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4131] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4133] audit: op="connection-add" uuid="7ca03043-999f-412d-a78f-bc03e59a1c71" name="vlan22-if" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4148] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4149] audit: op="connection-add" uuid="23f9c88d-e6cb-4164-ae3e-aef372461180" name="vlan23-if" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4161] audit: op="connection-delete" uuid="85ce326f-4f7b-326a-be5d-00bceb3dd984" name="Wired connection 1" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4171] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4182] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4185] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d160e03e-194d-4449-b49f-eaac2038f69a)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4186] audit: op="connection-activate" uuid="d160e03e-194d-4449-b49f-eaac2038f69a" name="br-ex-br" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4189] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4196] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4201] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (7c435863-a0a6-42ce-8972-f67915454845)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4203] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4209] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4214] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (416f0528-7057-4113-a840-7fea97aa35aa)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4216] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4222] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4226] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8e03ff0e-b65b-40fa-9ea7-e0a512307030)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4229] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4235] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4240] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1a058103-74b2-4530-9e7d-73ea7a52ede4)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4242] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4249] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4254] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (1f1351ea-178b-4eba-98ae-072b20c19ea9)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4256] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4262] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4267] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (5fbce65c-4bfd-4918-ba88-ad2b429915cd)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4268] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4270] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4272] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4279] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4284] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4289] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (22e520a5-cde7-4c5e-a9cb-deadb56c187b)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4289] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4292] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4294] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4295] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4296] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4307] device (eth1): disconnecting for new activation request.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4307] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4309] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4311] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4311] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4313] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4316] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4319] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (2485752c-8292-4be6-80ad-553dc3f7d63f)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4319] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4321] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4323] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4324] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4326] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4330] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4333] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (98922a73-2a13-4b1b-9d16-b908d48bcd36)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4334] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4336] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4337] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4338] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4341] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4344] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4346] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (7ca03043-999f-412d-a78f-bc03e59a1c71)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4347] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4349] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4351] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4352] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4354] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4360] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4366] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (23f9c88d-e6cb-4164-ae3e-aef372461180)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4367] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4371] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4373] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4374] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4377] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4392] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4394] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4398] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4401] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4408] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4411] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4415] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4421] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4424] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 kernel: ovs-system: entered promiscuous mode
Nov 29 02:04:08 np0005539563 systemd-udevd[51756]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:04:08 np0005539563 kernel: Timeout policy base is empty
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4446] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4452] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4458] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4460] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4464] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4468] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4473] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4475] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4479] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4484] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4487] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4488] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4492] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4495] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4495] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4495] dhcp4 (eth0): state changed no lease
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4497] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4506] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4508] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51752 uid=0 result="fail" reason="Device is not activated"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4511] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4516] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4548] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4551] dhcp4 (eth0): state changed new lease, address=38.102.83.184
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4556] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4596] device (eth1): disconnecting for new activation request.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4597] audit: op="connection-activate" uuid="bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8" name="ci-private-network" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4622] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51752 uid=0 result="success"
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4625] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4732] device (eth1): Activation: starting connection 'ci-private-network' (bd2fbe2d-a1e7-585d-9bdf-307f4a13e0f8)
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4743] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4746] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4750] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4751] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4752] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4753] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4754] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4755] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4755] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4763] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4767] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4770] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 kernel: br-ex: entered promiscuous mode
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4788] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4793] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4797] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4802] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4805] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4809] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4813] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4817] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4821] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4825] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4829] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4833] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4840] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4845] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4889] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4890] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4892] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4899] device (eth1): Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 kernel: vlan22: entered promiscuous mode
Nov 29 02:04:08 np0005539563 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 02:04:08 np0005539563 systemd-udevd[51758]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4915] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 kernel: vlan20: entered promiscuous mode
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4979] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4980] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.4986] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 kernel: vlan23: entered promiscuous mode
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5025] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5046] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5073] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5075] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5080] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 systemd-udevd[51757]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:04:08 np0005539563 kernel: vlan21: entered promiscuous mode
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5089] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5103] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5222] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5223] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5227] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5230] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5237] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5272] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5281] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5323] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5325] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5326] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5333] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5341] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:04:08 np0005539563 NetworkManager[48981]: <info>  [1764399848.5346] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:04:09 np0005539563 NetworkManager[48981]: <info>  [1764399849.6555] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51752 uid=0 result="success"
Nov 29 02:04:09 np0005539563 NetworkManager[48981]: <info>  [1764399849.8320] checkpoint[0x55b4f06d5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 02:04:09 np0005539563 NetworkManager[48981]: <info>  [1764399849.8324] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 python3.9[52111]: ansible-ansible.legacy.async_status Invoked with jid=j222540268594.51746 mode=status _async_dir=/root/.ansible_async
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.1131] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.1141] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.3282] audit: op="networking-control" arg="global-dns-configuration" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.3320] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.3352] audit: op="networking-control" arg="global-dns-configuration" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.3661] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.5004] checkpoint[0x55b4f06d5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 02:04:10 np0005539563 NetworkManager[48981]: <info>  [1764399850.5008] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51752 uid=0 result="success"
Nov 29 02:04:10 np0005539563 ansible-async_wrapper.py[51750]: Module complete (51750)
Nov 29 02:04:11 np0005539563 ansible-async_wrapper.py[51749]: Done in kid B.
Nov 29 02:04:13 np0005539563 python3.9[52216]: ansible-ansible.legacy.async_status Invoked with jid=j222540268594.51746 mode=status _async_dir=/root/.ansible_async
Nov 29 02:04:14 np0005539563 python3.9[52316]: ansible-ansible.legacy.async_status Invoked with jid=j222540268594.51746 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:04:14 np0005539563 python3.9[52468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:15 np0005539563 python3.9[52591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399854.4467618-931-166273946260746/.source.returncode _original_basename=.x4h01bv0 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:16 np0005539563 python3.9[52743]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:16 np0005539563 python3.9[52866]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399856.0325792-979-221001308321224/.source.cfg _original_basename=.swm9mmqv follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:17 np0005539563 python3.9[53019]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:04:17 np0005539563 systemd[1]: Reloading Network Manager...
Nov 29 02:04:18 np0005539563 NetworkManager[48981]: <info>  [1764399858.0549] audit: op="reload" arg="0" pid=53023 uid=0 result="success"
Nov 29 02:04:18 np0005539563 NetworkManager[48981]: <info>  [1764399858.0556] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 02:04:18 np0005539563 irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 02:04:18 np0005539563 irqbalance[780]: IRQ 26 affinity is now unmanaged
Nov 29 02:04:18 np0005539563 systemd[1]: Reloaded Network Manager.
Nov 29 02:04:18 np0005539563 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 02:04:18 np0005539563 systemd[1]: session-10.scope: Consumed 52.817s CPU time.
Nov 29 02:04:18 np0005539563 systemd-logind[785]: Session 10 logged out. Waiting for processes to exit.
Nov 29 02:04:18 np0005539563 systemd-logind[785]: Removed session 10.
Nov 29 02:04:23 np0005539563 systemd-logind[785]: New session 11 of user zuul.
Nov 29 02:04:23 np0005539563 systemd[1]: Started Session 11 of User zuul.
Nov 29 02:04:24 np0005539563 python3.9[53207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:25 np0005539563 python3.9[53361]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:04:27 np0005539563 python3.9[53555]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:27 np0005539563 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 02:04:27 np0005539563 systemd[1]: session-11.scope: Consumed 2.174s CPU time.
Nov 29 02:04:27 np0005539563 systemd-logind[785]: Session 11 logged out. Waiting for processes to exit.
Nov 29 02:04:27 np0005539563 systemd-logind[785]: Removed session 11.
Nov 29 02:04:28 np0005539563 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:04:33 np0005539563 systemd-logind[785]: New session 12 of user zuul.
Nov 29 02:04:33 np0005539563 systemd[1]: Started Session 12 of User zuul.
Nov 29 02:04:34 np0005539563 python3.9[53738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:36 np0005539563 python3.9[53892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:37 np0005539563 python3.9[54049]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:04:37 np0005539563 python3.9[54133]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:40 np0005539563 python3.9[54287]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:04:41 np0005539563 python3.9[54482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:42 np0005539563 python3.9[54634]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-compat4089351787-merged.mount: Deactivated successfully.
Nov 29 02:04:43 np0005539563 podman[54635]: 2025-11-29 07:04:43.229096637 +0000 UTC m=+0.545593324 system refresh
Nov 29 02:04:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:04:44 np0005539563 python3.9[54797]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:45 np0005539563 python3.9[54920]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399884.0997097-202-196972655106667/.source.json follow=False _original_basename=podman_network_config.j2 checksum=271e4b9a22b50d305e0e38dbdab68262da312bb7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:46 np0005539563 python3.9[55072]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:46 np0005539563 python3.9[55195]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399885.7892919-247-42757808719528/.source.conf follow=False _original_basename=registries.conf.j2 checksum=f27f86218e398aa50b444b0bf8b9e443f3d2c120 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:48 np0005539563 python3.9[55347]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:49 np0005539563 python3.9[55499]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:49 np0005539563 python3.9[55651]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:50 np0005539563 python3.9[55803]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:51 np0005539563 python3.9[55955]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:54 np0005539563 python3.9[56108]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:04:55 np0005539563 python3.9[56262]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:56 np0005539563 python3.9[56414]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:57 np0005539563 python3.9[56566]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:58 np0005539563 python3.9[56719]: ansible-service_facts Invoked
Nov 29 02:04:58 np0005539563 network[56736]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:04:58 np0005539563 network[56737]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:04:58 np0005539563 network[56738]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:05:03 np0005539563 python3.9[57190]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:05:06 np0005539563 python3.9[57343]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:05:08 np0005539563 python3.9[57495]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:08 np0005539563 python3.9[57620]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399907.616093-679-51642701648917/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:10 np0005539563 python3.9[57774]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:10 np0005539563 python3.9[57899]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399909.2567985-724-244157074502518/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:12 np0005539563 python3.9[58053]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:14 np0005539563 python3.9[58207]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:15 np0005539563 python3.9[58291]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:16 np0005539563 python3.9[58445]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:17 np0005539563 python3.9[58529]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:05:17 np0005539563 chronyd[792]: chronyd exiting
Nov 29 02:05:17 np0005539563 systemd[1]: Stopping NTP client/server...
Nov 29 02:05:17 np0005539563 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 02:05:17 np0005539563 systemd[1]: Stopped NTP client/server.
Nov 29 02:05:17 np0005539563 systemd[1]: Starting NTP client/server...
Nov 29 02:05:17 np0005539563 chronyd[58537]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 02:05:17 np0005539563 chronyd[58537]: Frequency -26.781 +/- 0.126 ppm read from /var/lib/chrony/drift
Nov 29 02:05:17 np0005539563 chronyd[58537]: Loaded seccomp filter (level 2)
Nov 29 02:05:17 np0005539563 systemd[1]: Started NTP client/server.
Nov 29 02:05:19 np0005539563 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 02:05:19 np0005539563 systemd[1]: session-12.scope: Consumed 27.876s CPU time.
Nov 29 02:05:19 np0005539563 systemd-logind[785]: Session 12 logged out. Waiting for processes to exit.
Nov 29 02:05:19 np0005539563 systemd-logind[785]: Removed session 12.
Nov 29 02:05:24 np0005539563 systemd-logind[785]: New session 13 of user zuul.
Nov 29 02:05:24 np0005539563 systemd[1]: Started Session 13 of User zuul.
Nov 29 02:05:25 np0005539563 python3.9[58718]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:26 np0005539563 python3.9[58870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:26 np0005539563 python3.9[58993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399925.4315445-67-31554115007232/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:27 np0005539563 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 02:05:27 np0005539563 systemd[1]: session-13.scope: Consumed 1.684s CPU time.
Nov 29 02:05:27 np0005539563 systemd-logind[785]: Session 13 logged out. Waiting for processes to exit.
Nov 29 02:05:27 np0005539563 systemd-logind[785]: Removed session 13.
Nov 29 02:05:32 np0005539563 systemd-logind[785]: New session 14 of user zuul.
Nov 29 02:05:32 np0005539563 systemd[1]: Started Session 14 of User zuul.
Nov 29 02:05:33 np0005539563 python3.9[59171]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:05:34 np0005539563 python3.9[59327]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:35 np0005539563 python3.9[59502]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:36 np0005539563 python3.9[59625]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764399934.8377073-88-168196750753974/.source.json _original_basename=.aeks7py2 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:37 np0005539563 python3.9[59777]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:38 np0005539563 python3.9[59900]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399937.2960694-157-252564218629935/.source _original_basename=._wcw79sq follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:39 np0005539563 python3.9[60052]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:05:39 np0005539563 python3.9[60204]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:40 np0005539563 python3.9[60327]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399939.518136-229-226481332944457/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:05:41 np0005539563 python3.9[60479]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:41 np0005539563 python3.9[60602]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399940.6713493-229-55253934862656/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:05:42 np0005539563 python3.9[60754]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:43 np0005539563 python3.9[60906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:43 np0005539563 python3.9[61029]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399942.8244138-340-75812013694685/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:44 np0005539563 python3.9[61181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:45 np0005539563 python3.9[61304]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399944.1457434-385-62580483972122/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:46 np0005539563 python3.9[61456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:46 np0005539563 systemd[1]: Reloading.
Nov 29 02:05:46 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:46 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:46 np0005539563 systemd[1]: Reloading.
Nov 29 02:05:46 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:46 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:47 np0005539563 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 02:05:47 np0005539563 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 02:05:48 np0005539563 python3.9[61685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:48 np0005539563 python3.9[61808]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399947.8071666-454-11123743919130/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:49 np0005539563 python3.9[61960]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:05:50 np0005539563 python3.9[62083]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399949.2475982-499-7418322848853/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:05:51 np0005539563 python3.9[62235]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:51 np0005539563 systemd[1]: Reloading.
Nov 29 02:05:51 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:51 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:51 np0005539563 systemd[1]: Reloading.
Nov 29 02:05:51 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:51 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:51 np0005539563 systemd[1]: Starting Create netns directory...
Nov 29 02:05:51 np0005539563 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:05:51 np0005539563 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:05:51 np0005539563 systemd[1]: Finished Create netns directory.
Nov 29 02:05:52 np0005539563 python3.9[62461]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:05:52 np0005539563 network[62478]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:05:52 np0005539563 network[62479]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:05:52 np0005539563 network[62480]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:05:57 np0005539563 python3.9[62742]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:57 np0005539563 systemd[1]: Reloading.
Nov 29 02:05:57 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:57 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:05:58 np0005539563 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 02:05:58 np0005539563 iptables.init[62782]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 02:05:58 np0005539563 iptables.init[62782]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 02:05:58 np0005539563 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 02:05:58 np0005539563 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 02:05:59 np0005539563 python3.9[62979]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:06:00 np0005539563 python3.9[63133]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:06:00 np0005539563 systemd[1]: Reloading.
Nov 29 02:06:00 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:00 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:00 np0005539563 systemd[1]: Starting Netfilter Tables...
Nov 29 02:06:00 np0005539563 systemd[1]: Finished Netfilter Tables.
Nov 29 02:06:01 np0005539563 python3.9[63325]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:02 np0005539563 python3.9[63478]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:03 np0005539563 python3.9[63603]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399962.2328587-706-165682442651537/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:04 np0005539563 python3.9[63756]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:04 np0005539563 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 02:06:04 np0005539563 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 02:06:05 np0005539563 python3.9[63912]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:05 np0005539563 python3.9[64064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:06 np0005539563 python3.9[64187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399965.3618433-799-83105134818163/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:07 np0005539563 python3.9[64339]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:06:07 np0005539563 systemd[1]: Starting Time & Date Service...
Nov 29 02:06:07 np0005539563 systemd[1]: Started Time & Date Service.
Nov 29 02:06:08 np0005539563 python3.9[64495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:09 np0005539563 python3.9[64647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:09 np0005539563 python3.9[64770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399968.8209717-904-254720771426804/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:10 np0005539563 python3.9[64922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:11 np0005539563 python3.9[65045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399970.2747831-949-250614929772597/.source.yaml _original_basename=.bc3woep7 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:12 np0005539563 python3.9[65197]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:12 np0005539563 python3.9[65320]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399971.7686663-994-62405728237699/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:13 np0005539563 python3.9[65472]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:14 np0005539563 python3.9[65625]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:15 np0005539563 python3[65778]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:06:16 np0005539563 python3.9[65930]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:16 np0005539563 python3.9[66053]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399975.5037775-1111-84157387199196/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:17 np0005539563 python3.9[66205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:18 np0005539563 python3.9[66328]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399976.908722-1156-150078073934293/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:19 np0005539563 python3.9[66480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:19 np0005539563 python3.9[66603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399978.7073667-1201-233284959042208/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:20 np0005539563 python3.9[66755]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:21 np0005539563 python3.9[66878]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399979.9874196-1246-104410983782671/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:21 np0005539563 python3.9[67030]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:06:22 np0005539563 python3.9[67153]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399981.286629-1291-222457209412473/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:23 np0005539563 python3.9[67305]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:23 np0005539563 python3.9[67457]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:24 np0005539563 python3.9[67616]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:25 np0005539563 python3.9[67769]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:26 np0005539563 python3.9[67921]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:27 np0005539563 python3.9[68073]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:06:28 np0005539563 python3.9[68226]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:06:28 np0005539563 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 02:06:28 np0005539563 systemd[1]: session-14.scope: Consumed 34.593s CPU time.
Nov 29 02:06:28 np0005539563 systemd-logind[785]: Session 14 logged out. Waiting for processes to exit.
Nov 29 02:06:28 np0005539563 systemd-logind[785]: Removed session 14.
Nov 29 02:06:33 np0005539563 systemd-logind[785]: New session 15 of user zuul.
Nov 29 02:06:33 np0005539563 systemd[1]: Started Session 15 of User zuul.
Nov 29 02:06:34 np0005539563 python3.9[68407]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:06:35 np0005539563 python3.9[68559]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:36 np0005539563 python3.9[68711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:37 np0005539563 python3.9[68863]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsLXbFhjUoBaTkhKZlhlr4wo49zgbzeJBequh3eUPlExtzdjrm/R47hkAJGagw+KhipRZ6XygyvP7g0rFG4kdUV8ZbW7HpIhvM2LCuDhFHJGta5IbLQDOAA3QuuNA4DyzfWhW146Q2aOja0AoRZOxjBRKO37fhEgGVJO/UZQHoJZFXHQPBPhZ27Wtt4Jfhz0G/t7WgxqsHTg9pnZL3PKV8yC/Ety9V+G9Hjrbwv8GblAazAMvnYcN6Hhh0mKKJ41E1++cy2nN9Lr6iU9KXS4BN73PkapyN75SJK4/2HEELgi7XCGQtXkdc+cnS1nYdtqW5aUS8fONsji8bdoy4AvRQrTsNWbXNcQXBesHoKNiBaUZjzaW0LhwQ2HTD36wG2FW/thgjrlU0AY8aqut/tcB7sjUacgNn8XfqibZb07x75HvbixT1G+V9ax63HLyfAiLCZquwpnl7CuyQvBAe+UNPLU4Kegtn+KKw2+3BoNkkAKkAoDdKd5fQKWFavTllfU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEesPYkFXAKa2jD/XHieFXe2/NLZG5BPNBvLebxF7i4V#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3fAbGbewc62wcP/ANYyTDYdWflUi4LqSZ2pYXEDgbyEIKVn6IU7ulNV9i7b7SvxrtzT5K34kYv1WsU3bRd5RM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc04fosxiJMz9URZzfwgW2kqQvT/wRjkGRSpo8InnYlU+RAljr+QL8e1C8DPu41m+HGkgDmV4uDikwXF3b0w/6D0/P6iPUsexRy4OkOFgOqlzl7+pNzQ1p5SMgMoaKslyPA1DEUc0bxHjIpTHyjq/X8YamvXJO4KLpZ42Ii0c6RyWcejiRw4wZQWh2s6egN8in6cEVODGcWVseYKhFaPjdUDBtuQy4LaGwosJIkR1OCy9coVbEdcv2vOxdpLby9ssC7nEDAKg2X+0rmcdpImSt43KnAXiuMegm5A7FvAas99jVOYawKyostqRzEOId/1TnbBGDEabjKYlPEOLSFiMsBWLwTkN5loBfqwpLWlheJWPYP90mvfiENFN4W+ut6nx4zBVHQYvGts86HDkcSVipUVxaYaWf37c/GMXcee85lI//k2lNWe0yYOJGU7P1jyU+ug0Cn1MeQghj1V8Gcnax0b58J+Ttp4a7UnYek2q2w2h6nbIbZT5m+yw/KYeNtE8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgIAlZsupHHlO1a9ydDFIdgMGgwYqu0xx1PBhB1cRGz#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZLPbvNXmCCAW6hZosm19hA5j7Lbr0PZCizVLJXvz0y88L5bXrAQVln7SscOXMnvFy6P8Fn/54/gijC9Rd2rDs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9IXwkB2kbuJv6AXS7YRKSa74/LXNdMPGOs9WAzsnePFq78YtNX+JkgkhS6H4PtKZr7d8zGldcUVTXsG54r7DHIiEhjiunXArwm7nxPCcvRVmU6kntuiJbAOObaZlgrdlGcNsB0gEt5E4YWVNxiiRnsA60PvQbLyfN0/+99rmyMLcT4z9DL+dZj8kNH54PFTeXByeUArORk1qkPj734Ru+RP82qH26PyeJz2HlCsq7qPKepCgiVDKLbjXnLqt58qEzzVFKx3gfIhpvZ8PiUoFSS6UJlk/70XVp+og+tU/Dv952UWQMOHkfsIfqvdJgcy2hYuLbI03ZOF/NRU1FEUEPIhfU7kM2KzkqoDLyu+ntXGTBE6vWBuqrH+KUMqrAGGXZPnoTS8zb3H1izaYqN48vVE10jDHjkhWEEIuwN5AVGsCBjpRkQ+rZ+gDb/z4loN29WMX/KmqYAy+qsu7X8gFojfnlrv4DYVd1lxYZPnqS8bCkeBF8txjMVUD5EpNVGVU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpx0/R+UH9iWt0hByjYOi11MmeoOEV/RM05Qq0CkR6T#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcAFq3gx5S+bCbh1b0B1Plh9X3nnDc+14hmd4HK59tBD1jd/VrvEVcg/jrioqZJxPOiBK8QMTq5htAcmQbIjnM=#012 create=True mode=0644 path=/tmp/ansible.462fjaul state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:37 np0005539563 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:06:39 np0005539563 python3.9[69017]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.462fjaul' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:40 np0005539563 python3.9[69171]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.462fjaul state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:40 np0005539563 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 02:06:40 np0005539563 systemd[1]: session-15.scope: Consumed 3.484s CPU time.
Nov 29 02:06:40 np0005539563 systemd-logind[785]: Session 15 logged out. Waiting for processes to exit.
Nov 29 02:06:40 np0005539563 systemd-logind[785]: Removed session 15.
Nov 29 02:06:46 np0005539563 systemd-logind[785]: New session 16 of user zuul.
Nov 29 02:06:46 np0005539563 systemd[1]: Started Session 16 of User zuul.
Nov 29 02:06:47 np0005539563 python3.9[69349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:48 np0005539563 python3.9[69505]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:06:49 np0005539563 python3.9[69659]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:50 np0005539563 python3.9[69812]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:51 np0005539563 python3.9[69965]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:52 np0005539563 python3.9[70119]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:53 np0005539563 python3.9[70274]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:06:53 np0005539563 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 02:06:53 np0005539563 systemd[1]: session-16.scope: Consumed 4.280s CPU time.
Nov 29 02:06:53 np0005539563 systemd-logind[785]: Session 16 logged out. Waiting for processes to exit.
Nov 29 02:06:53 np0005539563 systemd-logind[785]: Removed session 16.
Nov 29 02:06:59 np0005539563 systemd-logind[785]: New session 17 of user zuul.
Nov 29 02:06:59 np0005539563 systemd[1]: Started Session 17 of User zuul.
Nov 29 02:07:00 np0005539563 python3.9[70452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:01 np0005539563 python3.9[70608]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:02 np0005539563 python3.9[70692]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:07:04 np0005539563 python3.9[70843]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:05 np0005539563 python3.9[70994]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:07:06 np0005539563 python3.9[71144]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:06 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:07:07 np0005539563 python3.9[71295]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:07 np0005539563 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 02:07:07 np0005539563 systemd[1]: session-17.scope: Consumed 5.817s CPU time.
Nov 29 02:07:07 np0005539563 systemd-logind[785]: Session 17 logged out. Waiting for processes to exit.
Nov 29 02:07:07 np0005539563 systemd-logind[785]: Removed session 17.
Nov 29 02:07:16 np0005539563 systemd-logind[785]: New session 18 of user zuul.
Nov 29 02:07:16 np0005539563 systemd[1]: Started Session 18 of User zuul.
Nov 29 02:07:23 np0005539563 python3[72062]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:25 np0005539563 python3[72157]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:07:27 np0005539563 python3[72184]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:07:27 np0005539563 chronyd[58537]: Selected source 23.159.16.194 (pool.ntp.org)
Nov 29 02:07:27 np0005539563 python3[72210]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:27 np0005539563 kernel: loop: module loaded
Nov 29 02:07:27 np0005539563 kernel: loop3: detected capacity change from 0 to 14680064
Nov 29 02:07:28 np0005539563 python3[72245]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:28 np0005539563 lvm[72248]: PV /dev/loop3 not used.
Nov 29 02:07:28 np0005539563 lvm[72250]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:07:28 np0005539563 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 02:07:28 np0005539563 lvm[72257]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 02:07:28 np0005539563 lvm[72260]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:07:28 np0005539563 lvm[72260]: VG ceph_vg0 finished
Nov 29 02:07:28 np0005539563 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 02:07:28 np0005539563 python3[72338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:07:29 np0005539563 python3[72411]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400048.6695204-37017-178182030695901/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:30 np0005539563 python3[72461]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:07:30 np0005539563 systemd[1]: Reloading.
Nov 29 02:07:30 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:07:30 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:07:30 np0005539563 systemd[1]: Starting Ceph OSD losetup...
Nov 29 02:07:30 np0005539563 bash[72502]: /dev/loop3: [64513]:4327951 (/var/lib/ceph-osd-0.img)
Nov 29 02:07:30 np0005539563 systemd[1]: Finished Ceph OSD losetup.
Nov 29 02:07:30 np0005539563 lvm[72504]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:07:30 np0005539563 lvm[72504]: VG ceph_vg0 finished
Nov 29 02:07:32 np0005539563 python3[72528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:35 np0005539563 python3[72621]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:07:38 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:07:38 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:07:39 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:07:39 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:07:39 np0005539563 systemd[1]: run-rdaf9d5c48f79453486ccead14b96a21d.service: Deactivated successfully.
Nov 29 02:07:39 np0005539563 python3[72731]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:07:39 np0005539563 python3[72760]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:07:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:07:40 np0005539563 python3[72823]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:40 np0005539563 python3[72849]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:07:41 np0005539563 python3[72927]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:07:41 np0005539563 python3[73000]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400061.066641-37209-7275457734523/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:42 np0005539563 python3[73102]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:07:42 np0005539563 python3[73175]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400062.2277443-37227-124989738465630/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:43 np0005539563 python3[73225]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:07:43 np0005539563 python3[73253]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:07:44 np0005539563 python3[73281]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:07:44 np0005539563 python3[73309]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
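Expanding the `#012` newline escapes in the entry above gives the bootstrap invocation that was actually executed. This is a reconstruction of the logged command for readability, not a generic recipe: the FSID, key paths, config path, and monitor IP are the values recorded in this log (the stray backslash before `--skip-monitoring-stack` appears verbatim in the logged string and is dropped here):

```shell
# Decoded from the ansible.legacy.command entry above (#012 -> newline).
/usr/sbin/cephadm bootstrap \
    --skip-firewalld \
    --skip-prepare-host \
    --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
    --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
    --ssh-user ceph-admin \
    --allow-fqdn-hostname \
    --output-keyring /etc/ceph/ceph.client.admin.keyring \
    --output-config /etc/ceph/ceph.conf \
    --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 \
    --config /home/ceph-admin/assimilate_ceph.conf \
    --skip-monitoring-stack \
    --skip-dashboard \
    --mon-ip 192.168.122.100
```

The container activity that follows (the `quay.io/ceph/ceph:v18` image pull and the short-lived `ceph version` / keyring containers) is this bootstrap run probing the image and generating initial credentials.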
Nov 29 02:07:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:07:44 np0005539563 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:07:44 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:07:44 np0005539563 systemd-logind[785]: New session 19 of user ceph-admin.
Nov 29 02:07:44 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:07:44 np0005539563 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:07:44 np0005539563 systemd[73329]: Queued start job for default target Main User Target.
Nov 29 02:07:44 np0005539563 systemd[73329]: Created slice User Application Slice.
Nov 29 02:07:44 np0005539563 systemd[73329]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:07:44 np0005539563 systemd[73329]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:07:44 np0005539563 systemd[73329]: Reached target Paths.
Nov 29 02:07:44 np0005539563 systemd[73329]: Reached target Timers.
Nov 29 02:07:44 np0005539563 systemd[73329]: Starting D-Bus User Message Bus Socket...
Nov 29 02:07:44 np0005539563 systemd[73329]: Starting Create User's Volatile Files and Directories...
Nov 29 02:07:44 np0005539563 systemd[73329]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:07:44 np0005539563 systemd[73329]: Finished Create User's Volatile Files and Directories.
Nov 29 02:07:44 np0005539563 systemd[73329]: Reached target Sockets.
Nov 29 02:07:44 np0005539563 systemd[73329]: Reached target Basic System.
Nov 29 02:07:44 np0005539563 systemd[73329]: Reached target Main User Target.
Nov 29 02:07:44 np0005539563 systemd[73329]: Startup finished in 125ms.
Nov 29 02:07:44 np0005539563 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:07:44 np0005539563 systemd[1]: Started Session 19 of User ceph-admin.
Nov 29 02:07:44 np0005539563 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 02:07:44 np0005539563 systemd-logind[785]: Session 19 logged out. Waiting for processes to exit.
Nov 29 02:07:44 np0005539563 systemd-logind[785]: Removed session 19.
Nov 29 02:07:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-compat1513190192-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 02:07:54 np0005539563 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 02:07:54 np0005539563 systemd[73329]: Activating special unit Exit the Session...
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped target Main User Target.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped target Basic System.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped target Paths.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped target Sockets.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped target Timers.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:07:54 np0005539563 systemd[73329]: Closed D-Bus User Message Bus Socket.
Nov 29 02:07:54 np0005539563 systemd[73329]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:07:54 np0005539563 systemd[73329]: Removed slice User Application Slice.
Nov 29 02:07:54 np0005539563 systemd[73329]: Reached target Shutdown.
Nov 29 02:07:54 np0005539563 systemd[73329]: Finished Exit the Session.
Nov 29 02:07:54 np0005539563 systemd[73329]: Reached target Exit the Session.
Nov 29 02:07:54 np0005539563 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 02:07:54 np0005539563 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 02:07:54 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 02:07:54 np0005539563 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 02:07:54 np0005539563 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 02:07:54 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 02:07:54 np0005539563 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 02:08:10 np0005539563 podman[73381]: 2025-11-29 07:08:10.321068084 +0000 UTC m=+25.356689702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.404446547 +0000 UTC m=+0.050121512 container create a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00 (image=quay.io/ceph/ceph:v18, name=distracted_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:10 np0005539563 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 02:08:10 np0005539563 systemd[1]: Started libpod-conmon-a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00.scope.
Nov 29 02:08:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.382422059 +0000 UTC m=+0.028097044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.51145735 +0000 UTC m=+0.157132345 container init a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00 (image=quay.io/ceph/ceph:v18, name=distracted_edison, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.519449818 +0000 UTC m=+0.165124783 container start a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00 (image=quay.io/ceph/ceph:v18, name=distracted_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.523837727 +0000 UTC m=+0.169512722 container attach a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00 (image=quay.io/ceph/ceph:v18, name=distracted_edison, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:10 np0005539563 distracted_edison[73470]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 02:08:10 np0005539563 systemd[1]: libpod-a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00.scope: Deactivated successfully.
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.872254312 +0000 UTC m=+0.517929317 container died a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00 (image=quay.io/ceph/ceph:v18, name=distracted_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-918fd4304de95bb3f521a2c2f700d42efea0c0c4c03da3c26528a0355457c9d7-merged.mount: Deactivated successfully.
Nov 29 02:08:10 np0005539563 podman[73453]: 2025-11-29 07:08:10.927225634 +0000 UTC m=+0.572900599 container remove a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00 (image=quay.io/ceph/ceph:v18, name=distracted_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:10 np0005539563 systemd[1]: libpod-conmon-a7aa45e48f596ef9fb9cdb02ae69b355eb292e6d19049504d9f1365b1db82a00.scope: Deactivated successfully.
Nov 29 02:08:10 np0005539563 podman[73490]: 2025-11-29 07:08:10.992497095 +0000 UTC m=+0.040493220 container create e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586 (image=quay.io/ceph/ceph:v18, name=brave_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:08:11 np0005539563 systemd[1]: Started libpod-conmon-e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586.scope.
Nov 29 02:08:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:11 np0005539563 podman[73490]: 2025-11-29 07:08:11.060412648 +0000 UTC m=+0.108408803 container init e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586 (image=quay.io/ceph/ceph:v18, name=brave_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:11 np0005539563 podman[73490]: 2025-11-29 07:08:11.066435701 +0000 UTC m=+0.114431816 container start e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586 (image=quay.io/ceph/ceph:v18, name=brave_lichterman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:11 np0005539563 podman[73490]: 2025-11-29 07:08:10.973483989 +0000 UTC m=+0.021480134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:11 np0005539563 podman[73490]: 2025-11-29 07:08:11.069945426 +0000 UTC m=+0.117941571 container attach e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586 (image=quay.io/ceph/ceph:v18, name=brave_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:11 np0005539563 brave_lichterman[73507]: 167 167
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73490]: 2025-11-29 07:08:11.073216536 +0000 UTC m=+0.121212661 container died e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586 (image=quay.io/ceph/ceph:v18, name=brave_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:08:11 np0005539563 podman[73490]: 2025-11-29 07:08:11.105515662 +0000 UTC m=+0.153511787 container remove e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586 (image=quay.io/ceph/ceph:v18, name=brave_lichterman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-conmon-e3e2fba275b37bc8a59cb5c727b0c684cbe499f5b154f3e8d51f6a08e1bd1586.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.174002241 +0000 UTC m=+0.044240402 container create dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea (image=quay.io/ceph/ceph:v18, name=festive_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:08:11 np0005539563 systemd[1]: Started libpod-conmon-dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea.scope.
Nov 29 02:08:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.235097638 +0000 UTC m=+0.105335799 container init dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea (image=quay.io/ceph/ceph:v18, name=festive_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.243316571 +0000 UTC m=+0.113554732 container start dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea (image=quay.io/ceph/ceph:v18, name=festive_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.248284176 +0000 UTC m=+0.118522347 container attach dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea (image=quay.io/ceph/ceph:v18, name=festive_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.153670308 +0000 UTC m=+0.023908519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:11 np0005539563 festive_jackson[73539]: AQDbmyppggA9EBAARSO9EOh3FkKw7pfa2Gx5Qw==
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.278107206 +0000 UTC m=+0.148345367 container died dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea (image=quay.io/ceph/ceph:v18, name=festive_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:11 np0005539563 podman[73522]: 2025-11-29 07:08:11.31882708 +0000 UTC m=+0.189065241 container remove dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea (image=quay.io/ceph/ceph:v18, name=festive_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:08:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-conmon-dff003bad56ee7867a48c040fbeb09862fa99bcfb154c909303422aca8f3dbea.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.377066871 +0000 UTC m=+0.036865511 container create e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a (image=quay.io/ceph/ceph:v18, name=eager_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:08:11 np0005539563 systemd[1]: Started libpod-conmon-e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a.scope.
Nov 29 02:08:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.360308926 +0000 UTC m=+0.020107596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.457676029 +0000 UTC m=+0.117474699 container init e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a (image=quay.io/ceph/ceph:v18, name=eager_feistel, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.462923671 +0000 UTC m=+0.122722321 container start e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a (image=quay.io/ceph/ceph:v18, name=eager_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.46655607 +0000 UTC m=+0.126354730 container attach e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a (image=quay.io/ceph/ceph:v18, name=eager_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:11 np0005539563 eager_feistel[73576]: AQDbmypp/4hCHRAALl2AY9ieoNkLwMVWAXaIGQ==
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.495378172 +0000 UTC m=+0.155176832 container died e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a (image=quay.io/ceph/ceph:v18, name=eager_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:08:11 np0005539563 podman[73559]: 2025-11-29 07:08:11.530494355 +0000 UTC m=+0.190292995 container remove e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a (image=quay.io/ceph/ceph:v18, name=eager_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-conmon-e1d8c0fb8326ba888cb38330ce681a8739aacf048ba42721be50e7ab9041f31a.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.591949863 +0000 UTC m=+0.040798079 container create 3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd (image=quay.io/ceph/ceph:v18, name=magical_gagarin, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:08:11 np0005539563 systemd[1]: Started libpod-conmon-3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd.scope.
Nov 29 02:08:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.661042117 +0000 UTC m=+0.109890363 container init 3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd (image=quay.io/ceph/ceph:v18, name=magical_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.667263117 +0000 UTC m=+0.116111333 container start 3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd (image=quay.io/ceph/ceph:v18, name=magical_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.573773539 +0000 UTC m=+0.022621775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.67106305 +0000 UTC m=+0.119911266 container attach 3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd (image=quay.io/ceph/ceph:v18, name=magical_gagarin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:08:11 np0005539563 magical_gagarin[73611]: AQDbmyppI8vcKBAA2f2xexg7XiIXytCWUTrPsw==
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.689262874 +0000 UTC m=+0.138111090 container died 3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd (image=quay.io/ceph/ceph:v18, name=magical_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:08:11 np0005539563 podman[73594]: 2025-11-29 07:08:11.723823191 +0000 UTC m=+0.172671407 container remove 3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd (image=quay.io/ceph/ceph:v18, name=magical_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-conmon-3733284a901e6f19e4e21c81982d53e4a93fe678b8d671cd959d2900cf620dbd.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.783953943 +0000 UTC m=+0.037683143 container create eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50 (image=quay.io/ceph/ceph:v18, name=quizzical_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:08:11 np0005539563 systemd[1]: Started libpod-conmon-eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50.scope.
Nov 29 02:08:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd8a5e08faa06817f9ac172420f9ddb4a0ded95124344f016159af2a54dd51b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.845302428 +0000 UTC m=+0.099031628 container init eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50 (image=quay.io/ceph/ceph:v18, name=quizzical_goldwasser, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.852122203 +0000 UTC m=+0.105851403 container start eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50 (image=quay.io/ceph/ceph:v18, name=quizzical_goldwasser, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.855437033 +0000 UTC m=+0.109166263 container attach eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50 (image=quay.io/ceph/ceph:v18, name=quizzical_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.767450565 +0000 UTC m=+0.021179785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:11 np0005539563 quizzical_goldwasser[73646]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 02:08:11 np0005539563 quizzical_goldwasser[73646]: setting min_mon_release = pacific
Nov 29 02:08:11 np0005539563 quizzical_goldwasser[73646]: /usr/bin/monmaptool: set fsid to 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:11 np0005539563 quizzical_goldwasser[73646]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.884223884 +0000 UTC m=+0.137953084 container died eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50 (image=quay.io/ceph/ceph:v18, name=quizzical_goldwasser, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:11 np0005539563 podman[73630]: 2025-11-29 07:08:11.916233853 +0000 UTC m=+0.169963053 container remove eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50 (image=quay.io/ceph/ceph:v18, name=quizzical_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:11 np0005539563 systemd[1]: libpod-conmon-eee080cc8296ce46193035564f3833edcc2538f9cbfe33e8d834f4f152577f50.scope: Deactivated successfully.
Nov 29 02:08:11 np0005539563 podman[73665]: 2025-11-29 07:08:11.98135256 +0000 UTC m=+0.043359248 container create 56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6 (image=quay.io/ceph/ceph:v18, name=funny_dirac, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:08:12 np0005539563 systemd[1]: Started libpod-conmon-56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6.scope.
Nov 29 02:08:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd34c3e5380da04d4db1ddd95b9d0cad0aab34f45e3a1f8a343feddf3276c4/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd34c3e5380da04d4db1ddd95b9d0cad0aab34f45e3a1f8a343feddf3276c4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd34c3e5380da04d4db1ddd95b9d0cad0aab34f45e3a1f8a343feddf3276c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94bd34c3e5380da04d4db1ddd95b9d0cad0aab34f45e3a1f8a343feddf3276c4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:12 np0005539563 podman[73665]: 2025-11-29 07:08:12.056990033 +0000 UTC m=+0.118996751 container init 56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6 (image=quay.io/ceph/ceph:v18, name=funny_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:08:12 np0005539563 podman[73665]: 2025-11-29 07:08:11.961525822 +0000 UTC m=+0.023532530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:12 np0005539563 podman[73665]: 2025-11-29 07:08:12.062713308 +0000 UTC m=+0.124719996 container start 56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6 (image=quay.io/ceph/ceph:v18, name=funny_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:12 np0005539563 podman[73665]: 2025-11-29 07:08:12.065968846 +0000 UTC m=+0.127975564 container attach 56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6 (image=quay.io/ceph/ceph:v18, name=funny_dirac, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:12 np0005539563 systemd[1]: libpod-56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6.scope: Deactivated successfully.
Nov 29 02:08:12 np0005539563 podman[73707]: 2025-11-29 07:08:12.198656467 +0000 UTC m=+0.024002142 container died 56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6 (image=quay.io/ceph/ceph:v18, name=funny_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:08:12 np0005539563 podman[73707]: 2025-11-29 07:08:12.239843805 +0000 UTC m=+0.065189480 container remove 56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6 (image=quay.io/ceph/ceph:v18, name=funny_dirac, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:08:12 np0005539563 systemd[1]: libpod-conmon-56055be944adfbb01fce43890c12defb44d61d6264885c31bf9ae09b6895f9e6.scope: Deactivated successfully.
Nov 29 02:08:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-aa8eb40e5de7f3e516bb47a40e5a953c42d347818d1afc7acc5f4d8a14315ce0-merged.mount: Deactivated successfully.
Nov 29 02:08:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:12 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:12 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:12 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:12 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:12 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:12 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:12 np0005539563 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 02:08:12 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:12 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:12 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:12 np0005539563 systemd[1]: Reached target Ceph cluster 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:08:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:13 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:13 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:13 np0005539563 systemd[1]: Created slice Slice /system/ceph-38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:08:13 np0005539563 systemd[1]: Reached target System Time Set.
Nov 29 02:08:13 np0005539563 systemd[1]: Reached target System Time Synchronized.
Nov 29 02:08:13 np0005539563 systemd[1]: Starting Ceph mon.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:08:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:13 np0005539563 podman[73963]: 2025-11-29 07:08:13.847465342 +0000 UTC m=+0.033570812 container create 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:08:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ec73c3f6fbe99019f7d7bfed4ff7ae51d7c8c2a3fc1b5cd3741a3a2ec0efd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ec73c3f6fbe99019f7d7bfed4ff7ae51d7c8c2a3fc1b5cd3741a3a2ec0efd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ec73c3f6fbe99019f7d7bfed4ff7ae51d7c8c2a3fc1b5cd3741a3a2ec0efd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ec73c3f6fbe99019f7d7bfed4ff7ae51d7c8c2a3fc1b5cd3741a3a2ec0efd0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:13 np0005539563 podman[73963]: 2025-11-29 07:08:13.896505503 +0000 UTC m=+0.082611023 container init 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:08:13 np0005539563 podman[73963]: 2025-11-29 07:08:13.901222121 +0000 UTC m=+0.087327611 container start 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:08:13 np0005539563 bash[73963]: 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee
Nov 29 02:08:13 np0005539563 podman[73963]: 2025-11-29 07:08:13.833773261 +0000 UTC m=+0.019878731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:13 np0005539563 systemd[1]: Started Ceph mon.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: pidfile_write: ignore empty --pid-file
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: load: jerasure load: lrc 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Git sha 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: DB SUMMARY
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: DB Session ID:  549RK22J9KW77OCPFARS
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                                     Options.env: 0x5628e1107c40
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                                Options.info_log: 0x5628e2360ec0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                    Options.write_buffer_manager: 0x5628e2370b40
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                               Options.row_cache: None
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                              Options.wal_filter: None
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.wal_compression: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Compression algorithms supported:
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kZSTD supported: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kXpressCompression supported: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kZlibCompression supported: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:           Options.merge_operator: 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:        Options.compaction_filter: None
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5628e2360aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5628e23591f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.compression: NoCompression
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.num_levels: 7
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c1f3df60-9c98-4a69-b455-f1b44f64a88d
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400093939468, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400093941399, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "549RK22J9KW77OCPFARS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400093941515, "job": 1, "event": "recovery_finished"}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5628e2382e00
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: DB pointer 0x5628e240c000
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5628e23591f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@-1(???) e0 preinit fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:08:13 np0005539563 podman[73983]: 2025-11-29 07:08:13.977793479 +0000 UTC m=+0.040965722 container create cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15 (image=quay.io/ceph/ceph:v18, name=laughing_davinci, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:08:12.104557Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).mds e1 new map
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mkfs 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:08:13 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 02:08:14 np0005539563 ceph-mon[73982]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 02:08:14 np0005539563 ceph-mon[73982]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 02:08:14 np0005539563 ceph-mon[73982]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:08:14 np0005539563 systemd[1]: Started libpod-conmon-cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15.scope.
Nov 29 02:08:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4764405092e0e876afca78843efb12b30a67ddcfeb53f7a969589406f096c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4764405092e0e876afca78843efb12b30a67ddcfeb53f7a969589406f096c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed4764405092e0e876afca78843efb12b30a67ddcfeb53f7a969589406f096c5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 podman[73983]: 2025-11-29 07:08:13.961391744 +0000 UTC m=+0.024564007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:14 np0005539563 podman[73983]: 2025-11-29 07:08:14.06370724 +0000 UTC m=+0.126879493 container init cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15 (image=quay.io/ceph/ceph:v18, name=laughing_davinci, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:08:14 np0005539563 podman[73983]: 2025-11-29 07:08:14.071518453 +0000 UTC m=+0.134690696 container start cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15 (image=quay.io/ceph/ceph:v18, name=laughing_davinci, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:08:14 np0005539563 podman[73983]: 2025-11-29 07:08:14.075070519 +0000 UTC m=+0.138242762 container attach cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15 (image=quay.io/ceph/ceph:v18, name=laughing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:14 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 02:08:14 np0005539563 ceph-mon[73982]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491516469' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:  cluster:
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    id:     38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    health: HEALTH_OK
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]: 
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:  services:
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    mon: 1 daemons, quorum compute-0 (age 0.511895s)
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    mgr: no daemons active
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    osd: 0 osds: 0 up, 0 in
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]: 
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:  data:
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    pools:   0 pools, 0 pgs
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    objects: 0 objects, 0 B
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    usage:   0 B used, 0 B / 0 B avail
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]:    pgs:     
Nov 29 02:08:14 np0005539563 laughing_davinci[74038]: 
Nov 29 02:08:14 np0005539563 systemd[1]: libpod-cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15.scope: Deactivated successfully.
Nov 29 02:08:14 np0005539563 conmon[74038]: conmon cb7bf5c481a5574f2ed1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15.scope/container/memory.events
Nov 29 02:08:14 np0005539563 podman[73983]: 2025-11-29 07:08:14.505945522 +0000 UTC m=+0.569117755 container died cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15 (image=quay.io/ceph/ceph:v18, name=laughing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:14 np0005539563 podman[73983]: 2025-11-29 07:08:14.552873156 +0000 UTC m=+0.616045399 container remove cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15 (image=quay.io/ceph/ceph:v18, name=laughing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:08:14 np0005539563 systemd[1]: libpod-conmon-cb7bf5c481a5574f2ed180fa79b4762f54959ea3d93541b59695f5f7888f4d15.scope: Deactivated successfully.
Nov 29 02:08:14 np0005539563 podman[74076]: 2025-11-29 07:08:14.620627764 +0000 UTC m=+0.044089258 container create a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9 (image=quay.io/ceph/ceph:v18, name=xenodochial_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:14 np0005539563 systemd[1]: Started libpod-conmon-a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9.scope.
Nov 29 02:08:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4386ebf21e4c0ef47ad470ec8725fb146aeaeeea4f9e620eea0a8e8b6ba80d34/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4386ebf21e4c0ef47ad470ec8725fb146aeaeeea4f9e620eea0a8e8b6ba80d34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4386ebf21e4c0ef47ad470ec8725fb146aeaeeea4f9e620eea0a8e8b6ba80d34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4386ebf21e4c0ef47ad470ec8725fb146aeaeeea4f9e620eea0a8e8b6ba80d34/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:14 np0005539563 podman[74076]: 2025-11-29 07:08:14.696413721 +0000 UTC m=+0.119875205 container init a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9 (image=quay.io/ceph/ceph:v18, name=xenodochial_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:14 np0005539563 podman[74076]: 2025-11-29 07:08:14.60018055 +0000 UTC m=+0.023642074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:14 np0005539563 podman[74076]: 2025-11-29 07:08:14.703966396 +0000 UTC m=+0.127427880 container start a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9 (image=quay.io/ceph/ceph:v18, name=xenodochial_bhabha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:08:14 np0005539563 podman[74076]: 2025-11-29 07:08:14.707726488 +0000 UTC m=+0.131187972 container attach a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9 (image=quay.io/ceph/ceph:v18, name=xenodochial_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:08:15 np0005539563 ceph-mon[73982]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:08:15 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 02:08:15 np0005539563 ceph-mon[73982]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/290110849' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:08:15 np0005539563 ceph-mon[73982]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/290110849' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:08:15 np0005539563 xenodochial_bhabha[74093]: 
Nov 29 02:08:15 np0005539563 xenodochial_bhabha[74093]: [global]
Nov 29 02:08:15 np0005539563 xenodochial_bhabha[74093]: 	fsid = 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:15 np0005539563 xenodochial_bhabha[74093]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 02:08:15 np0005539563 systemd[1]: libpod-a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9.scope: Deactivated successfully.
Nov 29 02:08:15 np0005539563 podman[74076]: 2025-11-29 07:08:15.374435781 +0000 UTC m=+0.797897265 container died a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9 (image=quay.io/ceph/ceph:v18, name=xenodochial_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4386ebf21e4c0ef47ad470ec8725fb146aeaeeea4f9e620eea0a8e8b6ba80d34-merged.mount: Deactivated successfully.
Nov 29 02:08:15 np0005539563 podman[74076]: 2025-11-29 07:08:15.468556955 +0000 UTC m=+0.892018439 container remove a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9 (image=quay.io/ceph/ceph:v18, name=xenodochial_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:08:15 np0005539563 systemd[1]: libpod-conmon-a163196ba8e487f748422b6834b1a422764f76836e120651e672172ab39543b9.scope: Deactivated successfully.
Nov 29 02:08:15 np0005539563 podman[74131]: 2025-11-29 07:08:15.524799531 +0000 UTC m=+0.037784547 container create 31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6 (image=quay.io/ceph/ceph:v18, name=stupefied_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:15 np0005539563 systemd[1]: Started libpod-conmon-31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6.scope.
Nov 29 02:08:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecffa013d591122838fd724162933e1026912faefe930dec4674cd22e859d03e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecffa013d591122838fd724162933e1026912faefe930dec4674cd22e859d03e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecffa013d591122838fd724162933e1026912faefe930dec4674cd22e859d03e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecffa013d591122838fd724162933e1026912faefe930dec4674cd22e859d03e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:15 np0005539563 podman[74131]: 2025-11-29 07:08:15.594255465 +0000 UTC m=+0.107240491 container init 31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6 (image=quay.io/ceph/ceph:v18, name=stupefied_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:15 np0005539563 podman[74131]: 2025-11-29 07:08:15.599776535 +0000 UTC m=+0.112761551 container start 31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6 (image=quay.io/ceph/ceph:v18, name=stupefied_panini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:08:15 np0005539563 podman[74131]: 2025-11-29 07:08:15.603568079 +0000 UTC m=+0.116553125 container attach 31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6 (image=quay.io/ceph/ceph:v18, name=stupefied_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:15 np0005539563 podman[74131]: 2025-11-29 07:08:15.509536857 +0000 UTC m=+0.022521893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:15 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:08:15 np0005539563 ceph-mon[73982]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821688344' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:08:15 np0005539563 systemd[1]: libpod-31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6.scope: Deactivated successfully.
Nov 29 02:08:15 np0005539563 podman[74131]: 2025-11-29 07:08:15.992881794 +0000 UTC m=+0.505866810 container died 31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6 (image=quay.io/ceph/ceph:v18, name=stupefied_panini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: from='client.? 192.168.122.100:0/290110849' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: from='client.? 192.168.122.100:0/290110849' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:08:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ecffa013d591122838fd724162933e1026912faefe930dec4674cd22e859d03e-merged.mount: Deactivated successfully.
Nov 29 02:08:16 np0005539563 podman[74131]: 2025-11-29 07:08:16.040298961 +0000 UTC m=+0.553283977 container remove 31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6 (image=quay.io/ceph/ceph:v18, name=stupefied_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:08:16 np0005539563 systemd[1]: libpod-conmon-31f12a76e4b36e85b13ea7d0c225e06db2118aaee846eda5ff28f2c4702393e6.scope: Deactivated successfully.
Nov 29 02:08:16 np0005539563 systemd[1]: Stopping Ceph mon.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: mon.compute-0@0(leader) e1 shutdown
Nov 29 02:08:16 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0[73978]: 2025-11-29T07:08:16.212+0000 7ff418810640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 02:08:16 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0[73978]: 2025-11-29T07:08:16.212+0000 7ff418810640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:08:16 np0005539563 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:08:16 np0005539563 podman[74217]: 2025-11-29 07:08:16.375916329 +0000 UTC m=+0.197358828 container stop 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:16 np0005539563 podman[74217]: 2025-11-29 07:08:16.40727177 +0000 UTC m=+0.228714289 container died 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d6ec73c3f6fbe99019f7d7bfed4ff7ae51d7c8c2a3fc1b5cd3741a3a2ec0efd0-merged.mount: Deactivated successfully.
Nov 29 02:08:16 np0005539563 podman[74217]: 2025-11-29 07:08:16.445814605 +0000 UTC m=+0.267257104 container remove 3dc308814d63fd6fe99ce8bd997ecc3531c8bd131eb40fa71121a5e57e448dee (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:08:16 np0005539563 bash[74217]: ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0
Nov 29 02:08:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:16 np0005539563 systemd[1]: ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@mon.compute-0.service: Deactivated successfully.
Nov 29 02:08:16 np0005539563 systemd[1]: Stopped Ceph mon.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:08:16 np0005539563 systemd[1]: Starting Ceph mon.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:08:16 np0005539563 podman[74319]: 2025-11-29 07:08:16.765332117 +0000 UTC m=+0.035633608 container create ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e95debec92698e611a9743615081c8d53e8fed06a8832119ccf03d430a46453/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e95debec92698e611a9743615081c8d53e8fed06a8832119ccf03d430a46453/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e95debec92698e611a9743615081c8d53e8fed06a8832119ccf03d430a46453/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e95debec92698e611a9743615081c8d53e8fed06a8832119ccf03d430a46453/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 podman[74319]: 2025-11-29 07:08:16.819466206 +0000 UTC m=+0.089767707 container init ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:16 np0005539563 podman[74319]: 2025-11-29 07:08:16.82441026 +0000 UTC m=+0.094711751 container start ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:08:16 np0005539563 bash[74319]: ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4
Nov 29 02:08:16 np0005539563 podman[74319]: 2025-11-29 07:08:16.750076932 +0000 UTC m=+0.020378433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:16 np0005539563 systemd[1]: Started Ceph mon.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: pidfile_write: ignore empty --pid-file
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: load: jerasure load: lrc 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Git sha 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: DB SUMMARY
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: DB Session ID:  JQEAAIL9DQ5VZJBMNK3S
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55198 ; 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                                     Options.env: 0x564870b11c40
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                                Options.info_log: 0x564872da1040
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                    Options.write_buffer_manager: 0x564872db0b40
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                               Options.row_cache: None
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                              Options.wal_filter: None
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.wal_compression: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Compression algorithms supported:
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kZSTD supported: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:           Options.merge_operator: 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:        Options.compaction_filter: None
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564872da0c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564872d991f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.compression: NoCompression
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.num_levels: 7
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c1f3df60-9c98-4a69-b455-f1b44f64a88d
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400096865168, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400096870465, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53373, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51015, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400096, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400096870568, "job": 1, "event": "recovery_finished"}
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564872dc2e00
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: DB pointer 0x564872e4c000
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   55.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???) e1 preinit fsid 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).mds e1 new map
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 02:08:16 np0005539563 podman[74339]: 2025-11-29 07:08:16.898253164 +0000 UTC m=+0.043155912 container create 1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf (image=quay.io/ceph/ceph:v18, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:08:16 np0005539563 systemd[1]: Started libpod-conmon-1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf.scope.
Nov 29 02:08:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f036ea1feabc594a9a312939e547e57527a2f9c891cb6067db4b8608223ce5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f036ea1feabc594a9a312939e547e57527a2f9c891cb6067db4b8608223ce5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f036ea1feabc594a9a312939e547e57527a2f9c891cb6067db4b8608223ce5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:16 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:08:16 np0005539563 podman[74339]: 2025-11-29 07:08:16.967075722 +0000 UTC m=+0.111978460 container init 1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf (image=quay.io/ceph/ceph:v18, name=gallant_edison, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:16 np0005539563 podman[74339]: 2025-11-29 07:08:16.973684131 +0000 UTC m=+0.118586879 container start 1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf (image=quay.io/ceph/ceph:v18, name=gallant_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:08:16 np0005539563 podman[74339]: 2025-11-29 07:08:16.879072183 +0000 UTC m=+0.023974961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:16 np0005539563 podman[74339]: 2025-11-29 07:08:16.976931339 +0000 UTC m=+0.121834107 container attach 1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf (image=quay.io/ceph/ceph:v18, name=gallant_edison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 02:08:17 np0005539563 systemd[1]: libpod-1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf.scope: Deactivated successfully.
Nov 29 02:08:17 np0005539563 podman[74339]: 2025-11-29 07:08:17.356661844 +0000 UTC m=+0.501564592 container died 1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf (image=quay.io/ceph/ceph:v18, name=gallant_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-92f036ea1feabc594a9a312939e547e57527a2f9c891cb6067db4b8608223ce5-merged.mount: Deactivated successfully.
Nov 29 02:08:17 np0005539563 podman[74339]: 2025-11-29 07:08:17.409302932 +0000 UTC m=+0.554205680 container remove 1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf (image=quay.io/ceph/ceph:v18, name=gallant_edison, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:17 np0005539563 systemd[1]: libpod-conmon-1959e4ca9687e65e28234da352bdf375ea23383572697431f1221069100a0cdf.scope: Deactivated successfully.
Nov 29 02:08:17 np0005539563 podman[74434]: 2025-11-29 07:08:17.484541404 +0000 UTC m=+0.054092779 container create 78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00 (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:17 np0005539563 systemd[1]: Started libpod-conmon-78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00.scope.
Nov 29 02:08:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10db073d4abf0925523baca9e20fb9b484b329cd80d5640213a56a48360d98e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10db073d4abf0925523baca9e20fb9b484b329cd80d5640213a56a48360d98e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a10db073d4abf0925523baca9e20fb9b484b329cd80d5640213a56a48360d98e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:17 np0005539563 podman[74434]: 2025-11-29 07:08:17.458397555 +0000 UTC m=+0.027949010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:17 np0005539563 podman[74434]: 2025-11-29 07:08:17.565115561 +0000 UTC m=+0.134666976 container init 78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00 (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:17 np0005539563 podman[74434]: 2025-11-29 07:08:17.57172341 +0000 UTC m=+0.141274785 container start 78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00 (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:08:17 np0005539563 podman[74434]: 2025-11-29 07:08:17.575899274 +0000 UTC m=+0.145450709 container attach 78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00 (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:08:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 02:08:18 np0005539563 systemd[1]: libpod-78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00.scope: Deactivated successfully.
Nov 29 02:08:18 np0005539563 podman[74476]: 2025-11-29 07:08:18.045828747 +0000 UTC m=+0.025415391 container died 78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00 (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:08:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a10db073d4abf0925523baca9e20fb9b484b329cd80d5640213a56a48360d98e-merged.mount: Deactivated successfully.
Nov 29 02:08:18 np0005539563 podman[74476]: 2025-11-29 07:08:18.088783102 +0000 UTC m=+0.068369726 container remove 78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00 (image=quay.io/ceph/ceph:v18, name=jovial_agnesi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:18 np0005539563 systemd[1]: libpod-conmon-78f6e4ede605f6445a7e08f41f34dd18384b76a445965432bb61059199ba2a00.scope: Deactivated successfully.
Nov 29 02:08:18 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:18 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:18 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:18 np0005539563 systemd[1]: Reloading.
Nov 29 02:08:18 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:08:18 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:08:18 np0005539563 systemd[1]: Starting Ceph mgr.compute-0.rotard for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:08:18 np0005539563 podman[74617]: 2025-11-29 07:08:18.847248065 +0000 UTC m=+0.020776215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:18 np0005539563 podman[74617]: 2025-11-29 07:08:18.989004951 +0000 UTC m=+0.162533051 container create 2a2429c70809bc0b8be7521b8057c64cb973e07d128df4b35b51e02d7c8fa988 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f85558922ea90cfd58aa7c2a3c9276bb5d81ad2cb5491a2e0f72089898050c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f85558922ea90cfd58aa7c2a3c9276bb5d81ad2cb5491a2e0f72089898050c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f85558922ea90cfd58aa7c2a3c9276bb5d81ad2cb5491a2e0f72089898050c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92f85558922ea90cfd58aa7c2a3c9276bb5d81ad2cb5491a2e0f72089898050c/merged/var/lib/ceph/mgr/ceph-compute-0.rotard supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 podman[74617]: 2025-11-29 07:08:19.059240047 +0000 UTC m=+0.232768167 container init 2a2429c70809bc0b8be7521b8057c64cb973e07d128df4b35b51e02d7c8fa988 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:08:19 np0005539563 podman[74617]: 2025-11-29 07:08:19.063939045 +0000 UTC m=+0.237467145 container start 2a2429c70809bc0b8be7521b8057c64cb973e07d128df4b35b51e02d7c8fa988 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:08:19 np0005539563 bash[74617]: 2a2429c70809bc0b8be7521b8057c64cb973e07d128df4b35b51e02d7c8fa988
Nov 29 02:08:19 np0005539563 systemd[1]: Started Ceph mgr.compute-0.rotard for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.143568356 +0000 UTC m=+0.038294400 container create 6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec (image=quay.io/ceph/ceph:v18, name=confident_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:19 np0005539563 systemd[1]: Started libpod-conmon-6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec.scope.
Nov 29 02:08:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7729fac8ea806591de4db8133e093d3bcac014e84c32d974407a73fa32a9b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7729fac8ea806591de4db8133e093d3bcac014e84c32d974407a73fa32a9b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7729fac8ea806591de4db8133e093d3bcac014e84c32d974407a73fa32a9b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.127970603 +0000 UTC m=+0.022696647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.226552378 +0000 UTC m=+0.121278442 container init 6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec (image=quay.io/ceph/ceph:v18, name=confident_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.233862337 +0000 UTC m=+0.128588391 container start 6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec (image=quay.io/ceph/ceph:v18, name=confident_varahamihira, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.242713107 +0000 UTC m=+0.137439151 container attach 6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec (image=quay.io/ceph/ceph:v18, name=confident_varahamihira, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 29 02:08:19 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:19.529+0000 7f1e25ab9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:08:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258336102' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]: 
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]: {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "health": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "status": "HEALTH_OK",
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "checks": {},
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "mutes": []
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "election_epoch": 5,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "quorum": [
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        0
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    ],
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "quorum_names": [
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "compute-0"
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    ],
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "quorum_age": 2,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "monmap": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "epoch": 1,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "min_mon_release_name": "reef",
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_mons": 1
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "osdmap": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "epoch": 1,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_osds": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_up_osds": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "osd_up_since": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_in_osds": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "osd_in_since": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_remapped_pgs": 0
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "pgmap": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "pgs_by_state": [],
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_pgs": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_pools": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_objects": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "data_bytes": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "bytes_used": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "bytes_avail": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "bytes_total": 0
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "fsmap": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "epoch": 1,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "by_rank": [],
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "up:standby": 0
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "mgrmap": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "available": false,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "num_standbys": 0,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "modules": [
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:            "iostat",
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:            "nfs",
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:            "restful"
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        ],
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "services": {}
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "servicemap": {
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "epoch": 1,
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:        "services": {}
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    },
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]:    "progress_events": {}
Nov 29 02:08:19 np0005539563 confident_varahamihira[74677]: }
Nov 29 02:08:19 np0005539563 systemd[1]: libpod-6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec.scope: Deactivated successfully.
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.638396355 +0000 UTC m=+0.533122419 container died 6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec (image=quay.io/ceph/ceph:v18, name=confident_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ac7729fac8ea806591de4db8133e093d3bcac014e84c32d974407a73fa32a9b3-merged.mount: Deactivated successfully.
Nov 29 02:08:19 np0005539563 podman[74637]: 2025-11-29 07:08:19.681508215 +0000 UTC m=+0.576234259 container remove 6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec (image=quay.io/ceph/ceph:v18, name=confident_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:08:19 np0005539563 systemd[1]: libpod-conmon-6445a92b605b0799d8486d30586b9d41b6981fb814b2bc2b7dbe30e5dd3311ec.scope: Deactivated successfully.
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:08:19 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 29 02:08:19 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:19.820+0000 7f1e25ab9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:08:21 np0005539563 podman[74729]: 2025-11-29 07:08:21.744641624 +0000 UTC m=+0.040083039 container create 1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7 (image=quay.io/ceph/ceph:v18, name=reverent_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:08:21 np0005539563 systemd[1]: Started libpod-conmon-1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7.scope.
Nov 29 02:08:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbabec6eec6e0efe395a85fba05491767d01313a149d16c30eb0ac014b76c960/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbabec6eec6e0efe395a85fba05491767d01313a149d16c30eb0ac014b76c960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbabec6eec6e0efe395a85fba05491767d01313a149d16c30eb0ac014b76c960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:21 np0005539563 podman[74729]: 2025-11-29 07:08:21.808089766 +0000 UTC m=+0.103531181 container init 1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7 (image=quay.io/ceph/ceph:v18, name=reverent_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:21 np0005539563 podman[74729]: 2025-11-29 07:08:21.814055828 +0000 UTC m=+0.109497253 container start 1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7 (image=quay.io/ceph/ceph:v18, name=reverent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:21 np0005539563 podman[74729]: 2025-11-29 07:08:21.818108937 +0000 UTC m=+0.113550382 container attach 1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7 (image=quay.io/ceph/ceph:v18, name=reverent_kalam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:08:21 np0005539563 podman[74729]: 2025-11-29 07:08:21.725785402 +0000 UTC m=+0.021226837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:21 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 29 02:08:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351428727' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]: 
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]: {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "health": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "status": "HEALTH_OK",
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "checks": {},
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "mutes": []
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "election_epoch": 5,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "quorum": [
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        0
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    ],
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "quorum_names": [
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "compute-0"
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    ],
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "quorum_age": 5,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "monmap": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "epoch": 1,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "min_mon_release_name": "reef",
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_mons": 1
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "osdmap": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "epoch": 1,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_osds": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_up_osds": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "osd_up_since": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_in_osds": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "osd_in_since": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_remapped_pgs": 0
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "pgmap": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "pgs_by_state": [],
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_pgs": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_pools": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_objects": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "data_bytes": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "bytes_used": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "bytes_avail": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "bytes_total": 0
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "fsmap": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "epoch": 1,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "by_rank": [],
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "up:standby": 0
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "mgrmap": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "available": false,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "num_standbys": 0,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "modules": [
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:            "iostat",
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:            "nfs",
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:            "restful"
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        ],
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "services": {}
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "servicemap": {
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "epoch": 1,
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:        "services": {}
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    },
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]:    "progress_events": {}
Nov 29 02:08:22 np0005539563 reverent_kalam[74746]: }
Nov 29 02:08:22 np0005539563 systemd[1]: libpod-1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7.scope: Deactivated successfully.
Nov 29 02:08:22 np0005539563 podman[74729]: 2025-11-29 07:08:22.224715522 +0000 UTC m=+0.520157017 container died 1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7 (image=quay.io/ceph/ceph:v18, name=reverent_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:08:22 np0005539563 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:08:22 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 29 02:08:22 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:22.237+0000 7f1e25ab9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:08:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fbabec6eec6e0efe395a85fba05491767d01313a149d16c30eb0ac014b76c960-merged.mount: Deactivated successfully.
Nov 29 02:08:22 np0005539563 podman[74729]: 2025-11-29 07:08:22.399716381 +0000 UTC m=+0.695157786 container remove 1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7 (image=quay.io/ceph/ceph:v18, name=reverent_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:08:22 np0005539563 systemd[1]: libpod-conmon-1e89dd809f56b9ba8625985f5f7137648c278aaa56f77df27e18baa8e09d24c7.scope: Deactivated successfully.
Nov 29 02:08:23 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:08:24 np0005539563 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:08:24 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:08:24 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:24.015+0000 7f1e25ab9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:08:24 np0005539563 podman[74786]: 2025-11-29 07:08:24.464181726 +0000 UTC m=+0.043729298 container create 567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d (image=quay.io/ceph/ceph:v18, name=happy_khayyam, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:24 np0005539563 systemd[1]: Started libpod-conmon-567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d.scope.
Nov 29 02:08:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4285c541a709512370a2b6bdd382b32fb5138256b7a2f52a09ce25458fda666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4285c541a709512370a2b6bdd382b32fb5138256b7a2f52a09ce25458fda666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4285c541a709512370a2b6bdd382b32fb5138256b7a2f52a09ce25458fda666/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:24 np0005539563 podman[74786]: 2025-11-29 07:08:24.535341347 +0000 UTC m=+0.114888939 container init 567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d (image=quay.io/ceph/ceph:v18, name=happy_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:24 np0005539563 podman[74786]: 2025-11-29 07:08:24.444216974 +0000 UTC m=+0.023764566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:24 np0005539563 podman[74786]: 2025-11-29 07:08:24.541394841 +0000 UTC m=+0.120942413 container start 567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d (image=quay.io/ceph/ceph:v18, name=happy_khayyam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:08:24 np0005539563 podman[74786]: 2025-11-29 07:08:24.54613887 +0000 UTC m=+0.125686472 container attach 567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d (image=quay.io/ceph/ceph:v18, name=happy_khayyam, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:24 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:08:24 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:08:24 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  from numpy import show_config as show_numpy_config
Nov 29 02:08:24 np0005539563 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:08:24 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 29 02:08:24 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:24.631+0000 7f1e25ab9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:08:24 np0005539563 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:08:24 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 29 02:08:24 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:24.944+0000 7f1e25ab9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:08:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435592158' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]: 
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]: {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "health": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "status": "HEALTH_OK",
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "checks": {},
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "mutes": []
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "election_epoch": 5,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "quorum": [
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        0
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    ],
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "quorum_names": [
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "compute-0"
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    ],
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "quorum_age": 8,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "monmap": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "epoch": 1,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "min_mon_release_name": "reef",
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_mons": 1
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "osdmap": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "epoch": 1,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_osds": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_up_osds": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "osd_up_since": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_in_osds": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "osd_in_since": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_remapped_pgs": 0
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "pgmap": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "pgs_by_state": [],
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_pgs": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_pools": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_objects": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "data_bytes": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "bytes_used": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "bytes_avail": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "bytes_total": 0
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "fsmap": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "epoch": 1,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "by_rank": [],
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "up:standby": 0
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "mgrmap": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "available": false,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "num_standbys": 0,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "modules": [
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:            "iostat",
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:            "nfs",
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:            "restful"
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        ],
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "services": {}
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "servicemap": {
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "epoch": 1,
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:        "services": {}
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    },
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]:    "progress_events": {}
Nov 29 02:08:25 np0005539563 happy_khayyam[74802]: }
Nov 29 02:08:25 np0005539563 systemd[1]: libpod-567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d.scope: Deactivated successfully.
Nov 29 02:08:25 np0005539563 podman[74786]: 2025-11-29 07:08:25.042585072 +0000 UTC m=+0.622132644 container died 567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d (image=quay.io/ceph/ceph:v18, name=happy_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f4285c541a709512370a2b6bdd382b32fb5138256b7a2f52a09ce25458fda666-merged.mount: Deactivated successfully.
Nov 29 02:08:25 np0005539563 podman[74786]: 2025-11-29 07:08:25.094555593 +0000 UTC m=+0.674103185 container remove 567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d (image=quay.io/ceph/ceph:v18, name=happy_khayyam, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:08:25 np0005539563 systemd[1]: libpod-conmon-567962a5acaab48bec4b0e9313a81bf7c630812998765d4899e346d81a42851d.scope: Deactivated successfully.
Nov 29 02:08:25 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 29 02:08:25 np0005539563 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:08:25 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:08:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:25.538+0000 7f1e25ab9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.16204269 +0000 UTC m=+0.043582724 container create 701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549 (image=quay.io/ceph/ceph:v18, name=reverent_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:08:27 np0005539563 systemd[1]: Started libpod-conmon-701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549.scope.
Nov 29 02:08:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ca385c2edc1fa8d11ba86b614d98e989384e27d32449a63d26225caf53887e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ca385c2edc1fa8d11ba86b614d98e989384e27d32449a63d26225caf53887e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ca385c2edc1fa8d11ba86b614d98e989384e27d32449a63d26225caf53887e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.2236031 +0000 UTC m=+0.105143144 container init 701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549 (image=quay.io/ceph/ceph:v18, name=reverent_elbakyan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.229417018 +0000 UTC m=+0.110957052 container start 701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549 (image=quay.io/ceph/ceph:v18, name=reverent_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.139710224 +0000 UTC m=+0.021250258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.233946521 +0000 UTC m=+0.115486565 container attach 701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549 (image=quay.io/ceph/ceph:v18, name=reverent_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:08:27 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 29 02:08:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805613351' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]: 
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]: {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "health": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "status": "HEALTH_OK",
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "checks": {},
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "mutes": []
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "election_epoch": 5,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "quorum": [
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        0
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    ],
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "quorum_names": [
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "compute-0"
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    ],
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "quorum_age": 10,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "monmap": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "epoch": 1,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "min_mon_release_name": "reef",
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_mons": 1
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "osdmap": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "epoch": 1,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_osds": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_up_osds": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "osd_up_since": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_in_osds": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "osd_in_since": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_remapped_pgs": 0
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "pgmap": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "pgs_by_state": [],
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_pgs": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_pools": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_objects": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "data_bytes": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "bytes_used": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "bytes_avail": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "bytes_total": 0
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "fsmap": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "epoch": 1,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "by_rank": [],
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "up:standby": 0
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "mgrmap": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "available": false,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "num_standbys": 0,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "modules": [
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:            "iostat",
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:            "nfs",
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:            "restful"
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        ],
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "services": {}
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "servicemap": {
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "epoch": 1,
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:        "services": {}
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    },
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]:    "progress_events": {}
Nov 29 02:08:27 np0005539563 reverent_elbakyan[74857]: }
Nov 29 02:08:27 np0005539563 systemd[1]: libpod-701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549.scope: Deactivated successfully.
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.679324117 +0000 UTC m=+0.560864171 container died 701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549 (image=quay.io/ceph/ceph:v18, name=reverent_elbakyan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:08:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c5ca385c2edc1fa8d11ba86b614d98e989384e27d32449a63d26225caf53887e-merged.mount: Deactivated successfully.
Nov 29 02:08:27 np0005539563 podman[74841]: 2025-11-29 07:08:27.723483876 +0000 UTC m=+0.605023910 container remove 701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549 (image=quay.io/ceph/ceph:v18, name=reverent_elbakyan, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:27 np0005539563 systemd[1]: libpod-conmon-701d5058f163dadc15c34a125b74f6a27612f423a2bb21fa040efa44ec19d549.scope: Deactivated successfully.
Nov 29 02:08:27 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:08:28 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 29 02:08:29 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 29 02:08:29 np0005539563 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:08:29 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:08:29 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:29.806+0000 7f1e25ab9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:08:29 np0005539563 podman[74897]: 2025-11-29 07:08:29.774094275 +0000 UTC m=+0.030191681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:29 np0005539563 podman[74897]: 2025-11-29 07:08:29.980304061 +0000 UTC m=+0.236401437 container create eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83 (image=quay.io/ceph/ceph:v18, name=priceless_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:08:30 np0005539563 systemd[1]: Started libpod-conmon-eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83.scope.
Nov 29 02:08:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ff2bb1935abf26ee61ce1bdcc590dd97f17cc92fc5b101740a0067750345803/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ff2bb1935abf26ee61ce1bdcc590dd97f17cc92fc5b101740a0067750345803/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ff2bb1935abf26ee61ce1bdcc590dd97f17cc92fc5b101740a0067750345803/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:30 np0005539563 podman[74897]: 2025-11-29 07:08:30.060102816 +0000 UTC m=+0.316200212 container init eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:08:30 np0005539563 podman[74897]: 2025-11-29 07:08:30.06576312 +0000 UTC m=+0.321860496 container start eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83 (image=quay.io/ceph/ceph:v18, name=priceless_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:08:30 np0005539563 podman[74897]: 2025-11-29 07:08:30.069647566 +0000 UTC m=+0.325744952 container attach eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:08:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224241735' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:30 np0005539563 priceless_black[74913]: 
Nov 29 02:08:30 np0005539563 priceless_black[74913]: {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "health": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "status": "HEALTH_OK",
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "checks": {},
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "mutes": []
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "election_epoch": 5,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "quorum": [
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        0
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    ],
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "quorum_names": [
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "compute-0"
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    ],
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "quorum_age": 13,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "monmap": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "epoch": 1,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "min_mon_release_name": "reef",
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_mons": 1
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "osdmap": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "epoch": 1,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_osds": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_up_osds": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "osd_up_since": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_in_osds": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "osd_in_since": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_remapped_pgs": 0
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "pgmap": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "pgs_by_state": [],
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_pgs": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_pools": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_objects": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "data_bytes": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "bytes_used": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "bytes_avail": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "bytes_total": 0
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "fsmap": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "epoch": 1,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "by_rank": [],
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "up:standby": 0
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "mgrmap": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "available": false,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "num_standbys": 0,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "modules": [
Nov 29 02:08:30 np0005539563 priceless_black[74913]:            "iostat",
Nov 29 02:08:30 np0005539563 priceless_black[74913]:            "nfs",
Nov 29 02:08:30 np0005539563 priceless_black[74913]:            "restful"
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        ],
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "services": {}
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "servicemap": {
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "epoch": 1,
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:30 np0005539563 priceless_black[74913]:        "services": {}
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    },
Nov 29 02:08:30 np0005539563 priceless_black[74913]:    "progress_events": {}
Nov 29 02:08:30 np0005539563 priceless_black[74913]: }
Nov 29 02:08:30 np0005539563 systemd[1]: libpod-eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83.scope: Deactivated successfully.
Nov 29 02:08:30 np0005539563 podman[74897]: 2025-11-29 07:08:30.565545943 +0000 UTC m=+0.821643319 container died eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83 (image=quay.io/ceph/ceph:v18, name=priceless_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:08:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4ff2bb1935abf26ee61ce1bdcc590dd97f17cc92fc5b101740a0067750345803-merged.mount: Deactivated successfully.
Nov 29 02:08:30 np0005539563 podman[74897]: 2025-11-29 07:08:30.626677112 +0000 UTC m=+0.882774488 container remove eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83 (image=quay.io/ceph/ceph:v18, name=priceless_black, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:08:30 np0005539563 systemd[1]: libpod-conmon-eff1c5cc4f0ff62ecfd4a54e352b34892278feb14a4a65efedcab162b4f91c83.scope: Deactivated successfully.
Nov 29 02:08:30 np0005539563 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:08:30 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:08:30 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:30.647+0000 7f1e25ab9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:08:30 np0005539563 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:08:30 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 29 02:08:30 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:30.960+0000 7f1e25ab9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:08:31 np0005539563 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:08:31 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:08:31 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:31.267+0000 7f1e25ab9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:08:31 np0005539563 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:08:31 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 29 02:08:31 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:31.589+0000 7f1e25ab9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:08:31 np0005539563 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:08:31 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 29 02:08:31 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:31.892+0000 7f1e25ab9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:08:32 np0005539563 podman[74949]: 2025-11-29 07:08:32.703506599 +0000 UTC m=+0.054660028 container create 12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13 (image=quay.io/ceph/ceph:v18, name=elastic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:08:32 np0005539563 systemd[1]: Started libpod-conmon-12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13.scope.
Nov 29 02:08:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823475d50617cad25116d3d0856f53af7516677537c368c02cd59547887e54b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823475d50617cad25116d3d0856f53af7516677537c368c02cd59547887e54b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5823475d50617cad25116d3d0856f53af7516677537c368c02cd59547887e54b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:32 np0005539563 podman[74949]: 2025-11-29 07:08:32.765975243 +0000 UTC m=+0.117128702 container init 12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13 (image=quay.io/ceph/ceph:v18, name=elastic_ptolemy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:08:32 np0005539563 podman[74949]: 2025-11-29 07:08:32.673166032 +0000 UTC m=+0.024319491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:32 np0005539563 podman[74949]: 2025-11-29 07:08:32.771919776 +0000 UTC m=+0.123073215 container start 12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13 (image=quay.io/ceph/ceph:v18, name=elastic_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:32 np0005539563 podman[74949]: 2025-11-29 07:08:32.775568444 +0000 UTC m=+0.126721913 container attach 12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13 (image=quay.io/ceph/ceph:v18, name=elastic_ptolemy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:08:33 np0005539563 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:08:33 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:08:33 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:33.159+0000 7f1e25ab9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:08:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992814569' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]: 
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]: {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "health": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "status": "HEALTH_OK",
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "checks": {},
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "mutes": []
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "election_epoch": 5,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "quorum": [
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        0
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    ],
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "quorum_names": [
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "compute-0"
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    ],
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "quorum_age": 16,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "monmap": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "epoch": 1,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "min_mon_release_name": "reef",
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_mons": 1
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "osdmap": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "epoch": 1,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_osds": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_up_osds": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "osd_up_since": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_in_osds": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "osd_in_since": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_remapped_pgs": 0
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "pgmap": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "pgs_by_state": [],
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_pgs": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_pools": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_objects": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "data_bytes": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "bytes_used": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "bytes_avail": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "bytes_total": 0
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "fsmap": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "epoch": 1,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "by_rank": [],
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "up:standby": 0
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "mgrmap": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "available": false,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "num_standbys": 0,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "modules": [
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:            "iostat",
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:            "nfs",
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:            "restful"
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        ],
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "services": {}
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "servicemap": {
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "epoch": 1,
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:        "services": {}
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    },
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]:    "progress_events": {}
Nov 29 02:08:33 np0005539563 elastic_ptolemy[74966]: }
Nov 29 02:08:33 np0005539563 systemd[1]: libpod-12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13.scope: Deactivated successfully.
Nov 29 02:08:33 np0005539563 podman[74949]: 2025-11-29 07:08:33.250187195 +0000 UTC m=+0.601340634 container died 12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13 (image=quay.io/ceph/ceph:v18, name=elastic_ptolemy, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:08:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5823475d50617cad25116d3d0856f53af7516677537c368c02cd59547887e54b-merged.mount: Deactivated successfully.
Nov 29 02:08:33 np0005539563 podman[74949]: 2025-11-29 07:08:33.328044289 +0000 UTC m=+0.679197718 container remove 12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13 (image=quay.io/ceph/ceph:v18, name=elastic_ptolemy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:08:33 np0005539563 systemd[1]: libpod-conmon-12d81e3e46a14497a217e0ac7ecadb42e7488b7286da89c3a4543fc811ed8f13.scope: Deactivated successfully.
Nov 29 02:08:33 np0005539563 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:08:33 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:33.480+0000 7f1e25ab9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:08:33 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 29 02:08:34 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 29 02:08:35 np0005539563 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:08:35 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:35.233+0000 7f1e25ab9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:08:35 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.391803835 +0000 UTC m=+0.041191665 container create 2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb (image=quay.io/ceph/ceph:v18, name=competent_allen, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:08:35 np0005539563 systemd[1]: Started libpod-conmon-2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb.scope.
Nov 29 02:08:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c340865f13f5fd607a2cc531790457a949033c3553c36facfcb7a87089d047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c340865f13f5fd607a2cc531790457a949033c3553c36facfcb7a87089d047/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c340865f13f5fd607a2cc531790457a949033c3553c36facfcb7a87089d047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.465445345 +0000 UTC m=+0.114833195 container init 2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb (image=quay.io/ceph/ceph:v18, name=competent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.373567341 +0000 UTC m=+0.022955201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.471010308 +0000 UTC m=+0.120398138 container start 2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb (image=quay.io/ceph/ceph:v18, name=competent_allen, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.476000444 +0000 UTC m=+0.125388294 container attach 2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb (image=quay.io/ceph/ceph:v18, name=competent_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322642908' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:35 np0005539563 competent_allen[75022]: 
Nov 29 02:08:35 np0005539563 competent_allen[75022]: {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "health": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "status": "HEALTH_OK",
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "checks": {},
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "mutes": []
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "election_epoch": 5,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "quorum": [
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        0
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    ],
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "quorum_names": [
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "compute-0"
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    ],
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "quorum_age": 18,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "monmap": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "epoch": 1,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "min_mon_release_name": "reef",
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_mons": 1
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "osdmap": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "epoch": 1,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_osds": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_up_osds": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "osd_up_since": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_in_osds": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "osd_in_since": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_remapped_pgs": 0
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "pgmap": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "pgs_by_state": [],
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_pgs": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_pools": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_objects": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "data_bytes": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "bytes_used": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "bytes_avail": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "bytes_total": 0
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "fsmap": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "epoch": 1,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "by_rank": [],
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "up:standby": 0
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "mgrmap": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "available": false,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "num_standbys": 0,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "modules": [
Nov 29 02:08:35 np0005539563 competent_allen[75022]:            "iostat",
Nov 29 02:08:35 np0005539563 competent_allen[75022]:            "nfs",
Nov 29 02:08:35 np0005539563 competent_allen[75022]:            "restful"
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        ],
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "services": {}
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "servicemap": {
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "epoch": 1,
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:35 np0005539563 competent_allen[75022]:        "services": {}
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    },
Nov 29 02:08:35 np0005539563 competent_allen[75022]:    "progress_events": {}
Nov 29 02:08:35 np0005539563 competent_allen[75022]: }
Nov 29 02:08:35 np0005539563 systemd[1]: libpod-2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb.scope: Deactivated successfully.
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.907173067 +0000 UTC m=+0.556560907 container died 2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb (image=quay.io/ceph/ceph:v18, name=competent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:08:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-76c340865f13f5fd607a2cc531790457a949033c3553c36facfcb7a87089d047-merged.mount: Deactivated successfully.
Nov 29 02:08:35 np0005539563 podman[75006]: 2025-11-29 07:08:35.945397842 +0000 UTC m=+0.594785672 container remove 2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb (image=quay.io/ceph/ceph:v18, name=competent_allen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:35 np0005539563 systemd[1]: libpod-conmon-2f7c9c8e49b84140da6fa49c1ae97af1619894c38d0b469289c2f5be9d0ca9fb.scope: Deactivated successfully.
Nov 29 02:08:37 np0005539563 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:08:37 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:37.620+0000 7f1e25ab9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:08:37 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 29 02:08:37 np0005539563 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:08:37 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:37.886+0000 7f1e25ab9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:08:37 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:08:38 np0005539563 podman[75059]: 2025-11-29 07:08:37.999278229 +0000 UTC m=+0.031323956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:08:38 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:38.163+0000 7f1e25ab9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:08:38 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:38.707+0000 7f1e25ab9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:08:38 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:38.959+0000 7f1e25ab9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:08:38 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 29 02:08:39 np0005539563 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:08:39 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:39.607+0000 7f1e25ab9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:08:39 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:08:40 np0005539563 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:08:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:40.346+0000 7f1e25ab9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:08:40 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 29 02:08:41 np0005539563 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:08:41 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:41.157+0000 7f1e25ab9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:08:41 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 29 02:08:41 np0005539563 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:08:41 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:41.435+0000 7f1e25ab9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:08:41 np0005539563 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x55d48babef20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 02:08:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rotard
Nov 29 02:08:43 np0005539563 podman[75059]: 2025-11-29 07:08:43.12143671 +0000 UTC m=+5.153482417 container create f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04 (image=quay.io/ceph/ceph:v18, name=keen_torvalds, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.rotard(active, starting, since 1.68212s)
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rotard", "id": "compute-0.rotard"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rotard", "id": "compute-0.rotard"}]: dispatch
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.rotard is now available
Nov 29 02:08:43 np0005539563 systemd[1]: Started libpod-conmon-f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04.scope.
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:08:43
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [balancer INFO root] No pools available
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [progress INFO root] No stored events to load
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [progress INFO root] Loaded [] historic events
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 02:08:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e75281fbd5c3e132b8b68236258b1ef0147d33d3abe7b35f8d733e8397affe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e75281fbd5c3e132b8b68236258b1ef0147d33d3abe7b35f8d733e8397affe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e75281fbd5c3e132b8b68236258b1ef0147d33d3abe7b35f8d733e8397affe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: Activating manager daemon compute-0.rotard
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: Manager daemon compute-0.rotard is now available
Nov 29 02:08:43 np0005539563 podman[75059]: 2025-11-29 07:08:43.189655442 +0000 UTC m=+5.221701169 container init f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04 (image=quay.io/ceph/ceph:v18, name=keen_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:08:43 np0005539563 podman[75059]: 2025-11-29 07:08:43.194822793 +0000 UTC m=+5.226868500 container start f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04 (image=quay.io/ceph/ceph:v18, name=keen_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/mirror_snapshot_schedule"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/trash_purge_schedule"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/trash_purge_schedule"}]: dispatch
Nov 29 02:08:43 np0005539563 podman[75059]: 2025-11-29 07:08:43.22243801 +0000 UTC m=+5.254483837 container attach f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04 (image=quay.io/ceph/ceph:v18, name=keen_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' 
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' 
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' 
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187582889' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]: 
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]: {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "health": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "status": "HEALTH_OK",
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "checks": {},
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "mutes": []
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "election_epoch": 5,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "quorum": [
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        0
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    ],
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "quorum_names": [
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "compute-0"
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    ],
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "quorum_age": 26,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "monmap": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "epoch": 1,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "min_mon_release_name": "reef",
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_mons": 1
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "osdmap": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "epoch": 1,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_osds": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_up_osds": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "osd_up_since": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_in_osds": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "osd_in_since": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_remapped_pgs": 0
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "pgmap": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "pgs_by_state": [],
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_pgs": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_pools": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_objects": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "data_bytes": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "bytes_used": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "bytes_avail": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "bytes_total": 0
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "fsmap": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "epoch": 1,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "by_rank": [],
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "up:standby": 0
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "mgrmap": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "available": false,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "num_standbys": 0,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "modules": [
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:            "iostat",
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:            "nfs",
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:            "restful"
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        ],
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "services": {}
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "servicemap": {
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "epoch": 1,
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:        "services": {}
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    },
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]:    "progress_events": {}
Nov 29 02:08:43 np0005539563 keen_torvalds[75093]: }
Nov 29 02:08:43 np0005539563 systemd[1]: libpod-f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04.scope: Deactivated successfully.
Nov 29 02:08:43 np0005539563 podman[75059]: 2025-11-29 07:08:43.642342453 +0000 UTC m=+5.674388160 container died f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04 (image=quay.io/ceph/ceph:v18, name=keen_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a6e75281fbd5c3e132b8b68236258b1ef0147d33d3abe7b35f8d733e8397affe-merged.mount: Deactivated successfully.
Nov 29 02:08:43 np0005539563 podman[75059]: 2025-11-29 07:08:43.687464641 +0000 UTC m=+5.719510348 container remove f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04 (image=quay.io/ceph/ceph:v18, name=keen_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:08:43 np0005539563 systemd[1]: libpod-conmon-f881a375c0c9e5c50369e7b846dd2a1f2907c310c11185f0b802c863ef398c04.scope: Deactivated successfully.
Nov 29 02:08:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.rotard(active, since 2s)
Nov 29 02:08:44 np0005539563 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:08:44 np0005539563 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/trash_purge_schedule"}]: dispatch
Nov 29 02:08:44 np0005539563 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' 
Nov 29 02:08:44 np0005539563 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' 
Nov 29 02:08:44 np0005539563 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/3984272795' entity='mgr.compute-0.rotard' 
Nov 29 02:08:45 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:08:45 np0005539563 podman[75192]: 2025-11-29 07:08:45.748641241 +0000 UTC m=+0.036953171 container create 86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177 (image=quay.io/ceph/ceph:v18, name=musing_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:08:45 np0005539563 systemd[1]: Started libpod-conmon-86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177.scope.
Nov 29 02:08:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d67898866cef8ba847a6f2e06f6e8d514ad7eea32db1e8eb03a6f270d6029f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d67898866cef8ba847a6f2e06f6e8d514ad7eea32db1e8eb03a6f270d6029f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d67898866cef8ba847a6f2e06f6e8d514ad7eea32db1e8eb03a6f270d6029f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:45 np0005539563 podman[75192]: 2025-11-29 07:08:45.817427829 +0000 UTC m=+0.105739779 container init 86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177 (image=quay.io/ceph/ceph:v18, name=musing_shannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:08:45 np0005539563 podman[75192]: 2025-11-29 07:08:45.822149617 +0000 UTC m=+0.110461547 container start 86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177 (image=quay.io/ceph/ceph:v18, name=musing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:45 np0005539563 podman[75192]: 2025-11-29 07:08:45.826484993 +0000 UTC m=+0.114796943 container attach 86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177 (image=quay.io/ceph/ceph:v18, name=musing_shannon, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:08:45 np0005539563 podman[75192]: 2025-11-29 07:08:45.732598122 +0000 UTC m=+0.020910072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:08:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4080795752' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:08:46 np0005539563 musing_shannon[75209]: 
Nov 29 02:08:46 np0005539563 musing_shannon[75209]: {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "health": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "status": "HEALTH_OK",
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "checks": {},
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "mutes": []
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "election_epoch": 5,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "quorum": [
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        0
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    ],
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "quorum_names": [
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "compute-0"
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    ],
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "quorum_age": 29,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "monmap": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "epoch": 1,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "min_mon_release_name": "reef",
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_mons": 1
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "osdmap": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "epoch": 1,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_osds": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_up_osds": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "osd_up_since": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_in_osds": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "osd_in_since": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_remapped_pgs": 0
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "pgmap": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "pgs_by_state": [],
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_pgs": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_pools": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_objects": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "data_bytes": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "bytes_used": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "bytes_avail": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "bytes_total": 0
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "fsmap": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "epoch": 1,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "by_rank": [],
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "up:standby": 0
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "mgrmap": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "available": true,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "num_standbys": 0,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "modules": [
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:            "iostat",
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:            "nfs",
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:            "restful"
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        ],
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "services": {}
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "servicemap": {
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "epoch": 1,
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "modified": "2025-11-29T07:08:13.981497+0000",
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:        "services": {}
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    },
Nov 29 02:08:46 np0005539563 musing_shannon[75209]:    "progress_events": {}
Nov 29 02:08:46 np0005539563 musing_shannon[75209]: }
Nov 29 02:08:46 np0005539563 systemd[1]: libpod-86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177.scope: Deactivated successfully.
Nov 29 02:08:46 np0005539563 conmon[75209]: conmon 86e13f1cbe48ee43100b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177.scope/container/memory.events
Nov 29 02:08:46 np0005539563 podman[75192]: 2025-11-29 07:08:46.449211221 +0000 UTC m=+0.737523151 container died 86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177 (image=quay.io/ceph/ceph:v18, name=musing_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:08:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d4d67898866cef8ba847a6f2e06f6e8d514ad7eea32db1e8eb03a6f270d6029f-merged.mount: Deactivated successfully.
Nov 29 02:08:46 np0005539563 podman[75192]: 2025-11-29 07:08:46.4998249 +0000 UTC m=+0.788136830 container remove 86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177 (image=quay.io/ceph/ceph:v18, name=musing_shannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:08:46 np0005539563 systemd[1]: libpod-conmon-86e13f1cbe48ee43100b16b48b07dbe1c6716ec1d64c4d1dcd03dd2c81fa4177.scope: Deactivated successfully.
Nov 29 02:08:46 np0005539563 podman[75250]: 2025-11-29 07:08:46.557867564 +0000 UTC m=+0.039033100 container create 782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347 (image=quay.io/ceph/ceph:v18, name=funny_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:08:46 np0005539563 systemd[1]: Started libpod-conmon-782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347.scope.
Nov 29 02:08:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac1120459250752e3ba07d84ddbbe028e899add4a17f524081bae95357a7ea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac1120459250752e3ba07d84ddbbe028e899add4a17f524081bae95357a7ea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac1120459250752e3ba07d84ddbbe028e899add4a17f524081bae95357a7ea9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac1120459250752e3ba07d84ddbbe028e899add4a17f524081bae95357a7ea9/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:46 np0005539563 podman[75250]: 2025-11-29 07:08:46.622061759 +0000 UTC m=+0.103227305 container init 782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347 (image=quay.io/ceph/ceph:v18, name=funny_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:08:46 np0005539563 podman[75250]: 2025-11-29 07:08:46.626808998 +0000 UTC m=+0.107974524 container start 782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347 (image=quay.io/ceph/ceph:v18, name=funny_saha, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:46 np0005539563 podman[75250]: 2025-11-29 07:08:46.630828115 +0000 UTC m=+0.111993641 container attach 782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347 (image=quay.io/ceph/ceph:v18, name=funny_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:46 np0005539563 podman[75250]: 2025-11-29 07:08:46.540895349 +0000 UTC m=+0.022060895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:47 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:08:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 02:08:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3124806934' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:08:47 np0005539563 systemd[1]: libpod-782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347.scope: Deactivated successfully.
Nov 29 02:08:47 np0005539563 podman[75250]: 2025-11-29 07:08:47.16203206 +0000 UTC m=+0.643197586 container died 782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347 (image=quay.io/ceph/ceph:v18, name=funny_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-aac1120459250752e3ba07d84ddbbe028e899add4a17f524081bae95357a7ea9-merged.mount: Deactivated successfully.
Nov 29 02:08:47 np0005539563 podman[75250]: 2025-11-29 07:08:47.204048227 +0000 UTC m=+0.685213753 container remove 782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347 (image=quay.io/ceph/ceph:v18, name=funny_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:08:47 np0005539563 systemd[1]: libpod-conmon-782fc561e0615ff7a95ceb197e70bbebcf56e54cb9d7b162d69e279508804347.scope: Deactivated successfully.
Nov 29 02:08:47 np0005539563 podman[75306]: 2025-11-29 07:08:47.275451023 +0000 UTC m=+0.048961661 container create adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d (image=quay.io/ceph/ceph:v18, name=great_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:08:47 np0005539563 systemd[1]: Started libpod-conmon-adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d.scope.
Nov 29 02:08:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b88f85bab2ca43e51587cdec68f5172c645fadd0b4cfdfd5a02066430604fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b88f85bab2ca43e51587cdec68f5172c645fadd0b4cfdfd5a02066430604fa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b88f85bab2ca43e51587cdec68f5172c645fadd0b4cfdfd5a02066430604fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:47 np0005539563 podman[75306]: 2025-11-29 07:08:47.347400854 +0000 UTC m=+0.120911512 container init adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d (image=quay.io/ceph/ceph:v18, name=great_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:08:47 np0005539563 podman[75306]: 2025-11-29 07:08:47.256105768 +0000 UTC m=+0.029616426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:47 np0005539563 podman[75306]: 2025-11-29 07:08:47.353612146 +0000 UTC m=+0.127122784 container start adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d (image=quay.io/ceph/ceph:v18, name=great_yalow, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:08:47 np0005539563 podman[75306]: 2025-11-29 07:08:47.357149539 +0000 UTC m=+0.130660167 container attach adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d (image=quay.io/ceph/ceph:v18, name=great_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:08:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3124806934' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:08:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 02:08:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3578145442' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:08:49 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3578145442' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 02:08:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3578145442' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  1: '-n'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  2: 'mgr.compute-0.rotard'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  3: '-f'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  4: '--setuser'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  5: 'ceph'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  6: '--setgroup'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  7: 'ceph'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  8: '--default-log-to-file=false'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  9: '--default-log-to-journald=true'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: mgr respawn  exe_path /proc/self/exe
Nov 29 02:08:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.rotard(active, since 8s)
Nov 29 02:08:49 np0005539563 systemd[1]: libpod-adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d.scope: Deactivated successfully.
Nov 29 02:08:49 np0005539563 podman[75306]: 2025-11-29 07:08:49.869170316 +0000 UTC m=+2.642680954 container died adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d (image=quay.io/ceph/ceph:v18, name=great_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:08:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-62b88f85bab2ca43e51587cdec68f5172c645fadd0b4cfdfd5a02066430604fa-merged.mount: Deactivated successfully.
Nov 29 02:08:49 np0005539563 podman[75306]: 2025-11-29 07:08:49.937598915 +0000 UTC m=+2.711109553 container remove adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d (image=quay.io/ceph/ceph:v18, name=great_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:08:49 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: ignoring --setuser ceph since I am not root
Nov 29 02:08:49 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: ignoring --setgroup ceph since I am not root
Nov 29 02:08:49 np0005539563 systemd[1]: libpod-conmon-adcbe5ed777dc74aa44e0177a43de0f485018e49f99c6851f7cada17b3921a2d.scope: Deactivated successfully.
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:08:49 np0005539563 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 29 02:08:49 np0005539563 podman[75368]: 2025-11-29 07:08:49.995272178 +0000 UTC m=+0.038093193 container create 6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65 (image=quay.io/ceph/ceph:v18, name=hopeful_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:08:50 np0005539563 systemd[1]: Started libpod-conmon-6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65.scope.
Nov 29 02:08:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/412bc74ba193319ebb863857f90782cc3db1eca85e17d6ed1c799e5413d5ab45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/412bc74ba193319ebb863857f90782cc3db1eca85e17d6ed1c799e5413d5ab45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/412bc74ba193319ebb863857f90782cc3db1eca85e17d6ed1c799e5413d5ab45/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:50 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 29 02:08:50 np0005539563 podman[75368]: 2025-11-29 07:08:49.977467209 +0000 UTC m=+0.020288244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:50 np0005539563 podman[75368]: 2025-11-29 07:08:50.085154354 +0000 UTC m=+0.127975399 container init 6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65 (image=quay.io/ceph/ceph:v18, name=hopeful_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:08:50 np0005539563 podman[75368]: 2025-11-29 07:08:50.09116521 +0000 UTC m=+0.133986225 container start 6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65 (image=quay.io/ceph/ceph:v18, name=hopeful_bhaskara, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:08:50 np0005539563 podman[75368]: 2025-11-29 07:08:50.09528239 +0000 UTC m=+0.138103435 container attach 6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65 (image=quay.io/ceph/ceph:v18, name=hopeful_bhaskara, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:08:50 np0005539563 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:08:50 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 29 02:08:50 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:50.418+0000 7f878d430140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:08:50 np0005539563 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:08:50 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 29 02:08:50 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:50.722+0000 7f878d430140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:08:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 02:08:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203184022' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 02:08:50 np0005539563 hopeful_bhaskara[75404]: {
Nov 29 02:08:50 np0005539563 hopeful_bhaskara[75404]:    "epoch": 4,
Nov 29 02:08:50 np0005539563 hopeful_bhaskara[75404]:    "available": true,
Nov 29 02:08:50 np0005539563 hopeful_bhaskara[75404]:    "active_name": "compute-0.rotard",
Nov 29 02:08:50 np0005539563 hopeful_bhaskara[75404]:    "num_standby": 0
Nov 29 02:08:50 np0005539563 hopeful_bhaskara[75404]: }
Nov 29 02:08:50 np0005539563 systemd[1]: libpod-6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65.scope: Deactivated successfully.
Nov 29 02:08:50 np0005539563 podman[75368]: 2025-11-29 07:08:50.776779823 +0000 UTC m=+0.819600858 container died 6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65 (image=quay.io/ceph/ceph:v18, name=hopeful_bhaskara, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:08:51 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3578145442' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 02:08:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-412bc74ba193319ebb863857f90782cc3db1eca85e17d6ed1c799e5413d5ab45-merged.mount: Deactivated successfully.
Nov 29 02:08:51 np0005539563 podman[75368]: 2025-11-29 07:08:51.421636848 +0000 UTC m=+1.464457863 container remove 6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65 (image=quay.io/ceph/ceph:v18, name=hopeful_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:08:51 np0005539563 systemd[1]: libpod-conmon-6907498f50a5977780721415b4406361dd4b64ef3b80441fc0081e7270f8df65.scope: Deactivated successfully.
Nov 29 02:08:51 np0005539563 podman[75443]: 2025-11-29 07:08:51.476343825 +0000 UTC m=+0.033271572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:08:51 np0005539563 podman[75443]: 2025-11-29 07:08:51.574799771 +0000 UTC m=+0.131727508 container create 1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28 (image=quay.io/ceph/ceph:v18, name=elated_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:08:51 np0005539563 systemd[1]: Started libpod-conmon-1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28.scope.
Nov 29 02:08:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:08:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d91450c8c2c7b0e299bda9a872f791b297c29d4a1cd253ead8eeb17f88542b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d91450c8c2c7b0e299bda9a872f791b297c29d4a1cd253ead8eeb17f88542b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d91450c8c2c7b0e299bda9a872f791b297c29d4a1cd253ead8eeb17f88542b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:08:51 np0005539563 podman[75443]: 2025-11-29 07:08:51.861504045 +0000 UTC m=+0.418431812 container init 1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28 (image=quay.io/ceph/ceph:v18, name=elated_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:08:51 np0005539563 podman[75443]: 2025-11-29 07:08:51.866578132 +0000 UTC m=+0.423505869 container start 1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28 (image=quay.io/ceph/ceph:v18, name=elated_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:08:51 np0005539563 podman[75443]: 2025-11-29 07:08:51.871515057 +0000 UTC m=+0.428442834 container attach 1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28 (image=quay.io/ceph/ceph:v18, name=elated_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:08:52 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 29 02:08:53 np0005539563 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:08:53 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 29 02:08:53 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:53.103+0000 7f878d430140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:08:54 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:08:54 np0005539563 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:08:54 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:08:54 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:54.872+0000 7f878d430140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:08:55 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:08:55 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:08:55 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  from numpy import show_config as show_numpy_config
Nov 29 02:08:55 np0005539563 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:08:55 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:55.408+0000 7f878d430140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:08:55 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 29 02:08:55 np0005539563 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:08:55 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:55.661+0000 7f878d430140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:08:55 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 29 02:08:55 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 29 02:08:56 np0005539563 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:08:56 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:08:56 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:08:56.229+0000 7f878d430140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:08:58 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 29 02:08:58 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:08:59 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 29 02:08:59 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 29 02:09:00 np0005539563 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:09:00 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:09:00 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:00.048+0000 7f878d430140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:09:00 np0005539563 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:09:00 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:09:00 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:00.752+0000 7f878d430140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 29 02:09:01 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:01.053+0000 7f878d430140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:09:01 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:01.337+0000 7f878d430140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 29 02:09:01 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:01.643+0000 7f878d430140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:09:01 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 29 02:09:01 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:01.908+0000 7f878d430140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:09:03 np0005539563 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:09:03 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:09:03 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:03.015+0000 7f878d430140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:09:03 np0005539563 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:09:03 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 29 02:09:03 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:03.343+0000 7f878d430140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:09:04 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 29 02:09:04 np0005539563 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:09:04 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 29 02:09:04 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:04.836+0000 7f878d430140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 29 02:09:07 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:07.166+0000 7f878d430140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:09:07 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:07.446+0000 7f878d430140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 29 02:09:07 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:07.700+0000 7f878d430140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:09:07 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 29 02:09:08 np0005539563 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:09:08 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 29 02:09:08 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:08.269+0000 7f878d430140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:09:08 np0005539563 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:09:08 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 29 02:09:08 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:08.531+0000 7f878d430140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:09:09 np0005539563 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:09:09 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:09:09 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:09.229+0000 7f878d430140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:09:09 np0005539563 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:09:09 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 29 02:09:09 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:09.960+0000 7f878d430140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:09:10 np0005539563 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:09:10 np0005539563 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 29 02:09:10 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:10.677+0000 7f878d430140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:09:10 np0005539563 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:09:10 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:09:10.947+0000 7f878d430140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:09:10 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Active manager daemon compute-0.rotard restarted
Nov 29 02:09:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 02:09:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:09:10 np0005539563 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x55cb94194420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 02:09:10 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.rotard
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.rotard(active, starting, since 1.57751s)
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.rotard", "id": "compute-0.rotard"} v 0) v1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr metadata", "who": "compute-0.rotard", "id": "compute-0.rotard"}]: dispatch
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:09:12
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] No pools available
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 02:09:12 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.rotard is now available
Nov 29 02:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: Active manager daemon compute-0.rotard restarted
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: Activating manager daemon compute-0.rotard
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: cephadm
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [progress INFO root] No stored events to load
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [progress INFO root] Loaded [] historic events
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/mirror_snapshot_schedule"} v 0) v1
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/trash_purge_schedule"} v 0) v1
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/trash_purge_schedule"}]: dispatch
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 29 02:09:13 np0005539563 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 29 02:09:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019917290 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:14 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 02:09:15 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.rotard(active, since 4s)
Nov 29 02:09:15 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 02:09:15 np0005539563 elated_heisenberg[75459]: {
Nov 29 02:09:15 np0005539563 elated_heisenberg[75459]:    "mgrmap_epoch": 6,
Nov 29 02:09:15 np0005539563 elated_heisenberg[75459]:    "initialized": true
Nov 29 02:09:15 np0005539563 elated_heisenberg[75459]: }
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: Found migration_current of "None". Setting to last migration.
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: Manager daemon compute-0.rotard is now available
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.rotard/trash_purge_schedule"}]: dispatch
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 02:09:15 np0005539563 systemd[1]: libpod-1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28.scope: Deactivated successfully.
Nov 29 02:09:15 np0005539563 podman[75443]: 2025-11-29 07:09:15.827428761 +0000 UTC m=+24.384356518 container died 1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28 (image=quay.io/ceph/ceph:v18, name=elated_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:09:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d4d91450c8c2c7b0e299bda9a872f791b297c29d4a1cd253ead8eeb17f88542b-merged.mount: Deactivated successfully.
Nov 29 02:09:15 np0005539563 podman[75443]: 2025-11-29 07:09:15.904568245 +0000 UTC m=+24.461495982 container remove 1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28 (image=quay.io/ceph/ceph:v18, name=elated_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:09:15 np0005539563 systemd[1]: libpod-conmon-1ad0a5481c40eadf4caddd524a92e379edded2e57c354a1f81aca7a84d280f28.scope: Deactivated successfully.
Nov 29 02:09:15 np0005539563 podman[75620]: 2025-11-29 07:09:15.967209296 +0000 UTC m=+0.040981854 container create 061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df (image=quay.io/ceph/ceph:v18, name=hopeful_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:09:16 np0005539563 systemd[1]: Started libpod-conmon-061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df.scope.
Nov 29 02:09:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0446e9dff1c5d0b88a11e5d5bc1a0fa257225f12c0bd00044b2495586d005a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0446e9dff1c5d0b88a11e5d5bc1a0fa257225f12c0bd00044b2495586d005a4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:16 np0005539563 podman[75620]: 2025-11-29 07:09:15.94859345 +0000 UTC m=+0.022366028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0446e9dff1c5d0b88a11e5d5bc1a0fa257225f12c0bd00044b2495586d005a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:16 np0005539563 podman[75620]: 2025-11-29 07:09:16.060013744 +0000 UTC m=+0.133786332 container init 061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df (image=quay.io/ceph/ceph:v18, name=hopeful_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:09:16 np0005539563 podman[75620]: 2025-11-29 07:09:16.065430801 +0000 UTC m=+0.139203359 container start 061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df (image=quay.io/ceph/ceph:v18, name=hopeful_ramanujan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:09:16 np0005539563 podman[75620]: 2025-11-29 07:09:16.069386108 +0000 UTC m=+0.143158696 container attach 061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df (image=quay.io/ceph/ceph:v18, name=hopeful_ramanujan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:16 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:16 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:09:16 np0005539563 systemd[1]: libpod-061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df.scope: Deactivated successfully.
Nov 29 02:09:16 np0005539563 podman[75620]: 2025-11-29 07:09:16.742909077 +0000 UTC m=+0.816681625 container died 061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df (image=quay.io/ceph/ceph:v18, name=hopeful_ramanujan, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.rotard(active, since 5s)
Nov 29 02:09:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a0446e9dff1c5d0b88a11e5d5bc1a0fa257225f12c0bd00044b2495586d005a4-merged.mount: Deactivated successfully.
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:16 np0005539563 podman[75620]: 2025-11-29 07:09:16.849118629 +0000 UTC m=+0.922891187 container remove 061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df (image=quay.io/ceph/ceph:v18, name=hopeful_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:09:16 np0005539563 systemd[1]: libpod-conmon-061265190c07b42c397e10e1e392564ca7688663cd3dced96ba7b68c0f0946df.scope: Deactivated successfully.
Nov 29 02:09:16 np0005539563 podman[75674]: 2025-11-29 07:09:16.909813817 +0000 UTC m=+0.042807733 container create 0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9 (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:16 np0005539563 systemd[1]: Started libpod-conmon-0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9.scope.
Nov 29 02:09:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4981ea3c7fec4778f1aee1eb75c93dcc498d19570358df31ab5da94bf0f6fe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4981ea3c7fec4778f1aee1eb75c93dcc498d19570358df31ab5da94bf0f6fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4981ea3c7fec4778f1aee1eb75c93dcc498d19570358df31ab5da94bf0f6fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:16 np0005539563 podman[75674]: 2025-11-29 07:09:16.890917104 +0000 UTC m=+0.023911040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:16 np0005539563 podman[75674]: 2025-11-29 07:09:16.995618136 +0000 UTC m=+0.128612082 container init 0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9 (image=quay.io/ceph/ceph:v18, name=elegant_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:17 np0005539563 podman[75674]: 2025-11-29 07:09:17.001246318 +0000 UTC m=+0.134240244 container start 0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9 (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:17 np0005539563 podman[75674]: 2025-11-29 07:09:17.005860733 +0000 UTC m=+0.138854679 container attach 0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9 (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:09:17] ENGINE Bus STARTING
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:09:17] ENGINE Bus STARTING
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:09:17] ENGINE Serving on http://192.168.122.100:8765
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:09:17] ENGINE Serving on http://192.168.122.100:8765
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:09:17] ENGINE Serving on https://192.168.122.100:7150
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:09:17] ENGINE Serving on https://192.168.122.100:7150
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:09:17] ENGINE Bus STARTED
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:09:17] ENGINE Bus STARTED
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:09:17] ENGINE Client ('192.168.122.100', 45722) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:09:17] ENGINE Client ('192.168.122.100', 45722) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_user
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_config
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 02:09:17 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 02:09:17 np0005539563 elegant_beaver[75691]: ssh user set to ceph-admin. sudo will be used
Nov 29 02:09:17 np0005539563 systemd[1]: libpod-0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9.scope: Deactivated successfully.
Nov 29 02:09:17 np0005539563 podman[75674]: 2025-11-29 07:09:17.588223858 +0000 UTC m=+0.721217774 container died 0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9 (image=quay.io/ceph/ceph:v18, name=elegant_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:09:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6d4981ea3c7fec4778f1aee1eb75c93dcc498d19570358df31ab5da94bf0f6fe-merged.mount: Deactivated successfully.
Nov 29 02:09:17 np0005539563 podman[75674]: 2025-11-29 07:09:17.629235871 +0000 UTC m=+0.762229787 container remove 0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9 (image=quay.io/ceph/ceph:v18, name=elegant_beaver, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:17 np0005539563 systemd[1]: libpod-conmon-0d3f35e9d4a5f214efa4779a5e534dd6438cd9276951f6d03ad3bbaa199dc3e9.scope: Deactivated successfully.
Nov 29 02:09:17 np0005539563 podman[75752]: 2025-11-29 07:09:17.688995893 +0000 UTC m=+0.041196890 container create 0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9 (image=quay.io/ceph/ceph:v18, name=sad_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 29 02:09:17 np0005539563 systemd[1]: Started libpod-conmon-0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9.scope.
Nov 29 02:09:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a4f6a8afebf88f06f7ddf3d7e24ac7f15aa57f127d531f0e5ed49e5d6f35e9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a4f6a8afebf88f06f7ddf3d7e24ac7f15aa57f127d531f0e5ed49e5d6f35e9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a4f6a8afebf88f06f7ddf3d7e24ac7f15aa57f127d531f0e5ed49e5d6f35e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a4f6a8afebf88f06f7ddf3d7e24ac7f15aa57f127d531f0e5ed49e5d6f35e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a4f6a8afebf88f06f7ddf3d7e24ac7f15aa57f127d531f0e5ed49e5d6f35e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:17 np0005539563 podman[75752]: 2025-11-29 07:09:17.75557542 +0000 UTC m=+0.107776427 container init 0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9 (image=quay.io/ceph/ceph:v18, name=sad_swirles, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:17 np0005539563 podman[75752]: 2025-11-29 07:09:17.763139685 +0000 UTC m=+0.115340682 container start 0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9 (image=quay.io/ceph/ceph:v18, name=sad_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:09:17 np0005539563 podman[75752]: 2025-11-29 07:09:17.766605799 +0000 UTC m=+0.118806806 container attach 0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9 (image=quay.io/ceph/ceph:v18, name=sad_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:17 np0005539563 podman[75752]: 2025-11-29 07:09:17.672341101 +0000 UTC m=+0.024542118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052894 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:18 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 02:09:18 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:20 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 02:09:20 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 02:09:20 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Set ssh private key
Nov 29 02:09:20 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 02:09:20 np0005539563 systemd[1]: libpod-0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9.scope: Deactivated successfully.
Nov 29 02:09:20 np0005539563 podman[75794]: 2025-11-29 07:09:20.087212428 +0000 UTC m=+0.023849658 container died 0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9 (image=quay.io/ceph/ceph:v18, name=sad_swirles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: [29/Nov/2025:07:09:17] ENGINE Bus STARTING
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: [29/Nov/2025:07:09:17] ENGINE Serving on http://192.168.122.100:8765
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: [29/Nov/2025:07:09:17] ENGINE Serving on https://192.168.122.100:7150
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: [29/Nov/2025:07:09:17] ENGINE Bus STARTED
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: [29/Nov/2025:07:09:17] ENGINE Client ('192.168.122.100', 45722) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: Set ssh ssh_user
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: Set ssh ssh_config
Nov 29 02:09:20 np0005539563 ceph-mon[74338]: ssh user set to ceph-admin. sudo will be used
Nov 29 02:09:20 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b6a4f6a8afebf88f06f7ddf3d7e24ac7f15aa57f127d531f0e5ed49e5d6f35e9-merged.mount: Deactivated successfully.
Nov 29 02:09:20 np0005539563 podman[75794]: 2025-11-29 07:09:20.991728356 +0000 UTC m=+0.928365566 container remove 0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9 (image=quay.io/ceph/ceph:v18, name=sad_swirles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:20 np0005539563 systemd[1]: libpod-conmon-0b219af427e8bd2efdc143e770c71a8d1b7aca4024dabaa838d8bc9c6316d3f9.scope: Deactivated successfully.
Nov 29 02:09:21 np0005539563 podman[75808]: 2025-11-29 07:09:21.060473272 +0000 UTC m=+0.043944684 container create f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb (image=quay.io/ceph/ceph:v18, name=infallible_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:09:21 np0005539563 systemd[1]: Started libpod-conmon-f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb.scope.
Nov 29 02:09:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2140e27dc6a824784fd6e67f5475cbf698049408d340c655aae883f9addb53c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2140e27dc6a824784fd6e67f5475cbf698049408d340c655aae883f9addb53c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2140e27dc6a824784fd6e67f5475cbf698049408d340c655aae883f9addb53c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2140e27dc6a824784fd6e67f5475cbf698049408d340c655aae883f9addb53c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2140e27dc6a824784fd6e67f5475cbf698049408d340c655aae883f9addb53c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:21 np0005539563 podman[75808]: 2025-11-29 07:09:21.039907333 +0000 UTC m=+0.023378765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:21 np0005539563 podman[75808]: 2025-11-29 07:09:21.217721669 +0000 UTC m=+0.201193111 container init f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb (image=quay.io/ceph/ceph:v18, name=infallible_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:21 np0005539563 podman[75808]: 2025-11-29 07:09:21.22367598 +0000 UTC m=+0.207147392 container start f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb (image=quay.io/ceph/ceph:v18, name=infallible_blackburn, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:21 np0005539563 podman[75808]: 2025-11-29 07:09:21.266559234 +0000 UTC m=+0.250030666 container attach f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb (image=quay.io/ceph/ceph:v18, name=infallible_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:09:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:21 np0005539563 ceph-mon[74338]: Set ssh ssh_identity_key
Nov 29 02:09:21 np0005539563 ceph-mon[74338]: Set ssh private key
Nov 29 02:09:21 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 02:09:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:22 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 02:09:22 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 02:09:22 np0005539563 systemd[1]: libpod-f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb.scope: Deactivated successfully.
Nov 29 02:09:22 np0005539563 podman[75808]: 2025-11-29 07:09:22.082424397 +0000 UTC m=+1.065895809 container died f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb (image=quay.io/ceph/ceph:v18, name=infallible_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a2140e27dc6a824784fd6e67f5475cbf698049408d340c655aae883f9addb53c-merged.mount: Deactivated successfully.
Nov 29 02:09:22 np0005539563 podman[75808]: 2025-11-29 07:09:22.132847365 +0000 UTC m=+1.116318777 container remove f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb (image=quay.io/ceph/ceph:v18, name=infallible_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:09:22 np0005539563 systemd[1]: libpod-conmon-f02b5a02fc64a5a2206607fe834f87fb163ad6677c3ecb46c929723d6793f4cb.scope: Deactivated successfully.
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.202450213 +0000 UTC m=+0.046632916 container create 3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937 (image=quay.io/ceph/ceph:v18, name=amazing_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:22 np0005539563 systemd[1]: Started libpod-conmon-3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937.scope.
Nov 29 02:09:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046dc61d5f3b259b6f0f8855713f6235d6454917da36ae47c9abd4e958fe43dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046dc61d5f3b259b6f0f8855713f6235d6454917da36ae47c9abd4e958fe43dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046dc61d5f3b259b6f0f8855713f6235d6454917da36ae47c9abd4e958fe43dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.266671526 +0000 UTC m=+0.110854239 container init 3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937 (image=quay.io/ceph/ceph:v18, name=amazing_euclid, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.272246317 +0000 UTC m=+0.116429020 container start 3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937 (image=quay.io/ceph/ceph:v18, name=amazing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.277653654 +0000 UTC m=+0.121836367 container attach 3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937 (image=quay.io/ceph/ceph:v18, name=amazing_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.183421908 +0000 UTC m=+0.027604631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:22 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:22 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:22 np0005539563 amazing_euclid[75880]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCW2bIFWl9RVqqcqTp8kHPPONzxPU2OFucMqyuwSo+EvpNQpXD0jfnukmzwFup++SulNoMinzITIcbzAjsu6QG/WDRlLXUQLQ/yiZ06eq8FMsD3p5kQEKBcwoO+LrPiQwoYqk42tLtXcNeok21IUVRmC9MzkRkjyb3sF7EjnEHP/VkIFS6AC7WV7yrv1gu92e4RSpRSpBIhDL2kfnBVCYfxgN23HgJbfEmZg/hvdLchGfJsMYkdtrOTi389fCncIy6btJ48Xkw+YAN6Ozcj6plMilq+f43HT7+/RWPjoM/g3XkJr8HnFc7B2RzKU3ijXuI2WMmBEEpcozk8k30sRF+HG2v1VvwfApaK6ayJbM+PcC1qtxVwLx48UE1cOERxn7xgM15AxM24Xdw+GVrQo0FnFsAXVAYFe7D88rXmKsXMNEzc0NG0M7oiOKUBOdkWZSPF3qLJH78js7RTl6YIcRcERBrmHCu++1B+ZMWHWO9Q+nyr8bZmwdGJST1Rd49XHV0= zuul@controller
Nov 29 02:09:22 np0005539563 systemd[1]: libpod-3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937.scope: Deactivated successfully.
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.823249782 +0000 UTC m=+0.667432485 container died 3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937 (image=quay.io/ceph/ceph:v18, name=amazing_euclid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-046dc61d5f3b259b6f0f8855713f6235d6454917da36ae47c9abd4e958fe43dd-merged.mount: Deactivated successfully.
Nov 29 02:09:22 np0005539563 podman[75864]: 2025-11-29 07:09:22.863120313 +0000 UTC m=+0.707303016 container remove 3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937 (image=quay.io/ceph/ceph:v18, name=amazing_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:09:22 np0005539563 systemd[1]: libpod-conmon-3c8cb9df58f5b880695a2f1b4eb0a110d0986293633b4eda2f4dbf1188c36937.scope: Deactivated successfully.
Nov 29 02:09:22 np0005539563 podman[75917]: 2025-11-29 07:09:22.930143362 +0000 UTC m=+0.041749424 container create c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717 (image=quay.io/ceph/ceph:v18, name=suspicious_robinson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:09:22 np0005539563 systemd[1]: Started libpod-conmon-c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717.scope.
Nov 29 02:09:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea854c5e46d0245a68dcd0f64f8c2a8cfff5bc39983e31296ed008ba1f90525/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea854c5e46d0245a68dcd0f64f8c2a8cfff5bc39983e31296ed008ba1f90525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea854c5e46d0245a68dcd0f64f8c2a8cfff5bc39983e31296ed008ba1f90525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:23 np0005539563 podman[75917]: 2025-11-29 07:09:22.910642753 +0000 UTC m=+0.022248855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:23 np0005539563 podman[75917]: 2025-11-29 07:09:23.009318711 +0000 UTC m=+0.120924783 container init c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717 (image=quay.io/ceph/ceph:v18, name=suspicious_robinson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:09:23 np0005539563 podman[75917]: 2025-11-29 07:09:23.014218634 +0000 UTC m=+0.125824706 container start c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717 (image=quay.io/ceph/ceph:v18, name=suspicious_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:09:23 np0005539563 podman[75917]: 2025-11-29 07:09:23.018542672 +0000 UTC m=+0.130148774 container attach c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717 (image=quay.io/ceph/ceph:v18, name=suspicious_robinson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 02:09:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054707 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:23 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:23 np0005539563 systemd-logind[785]: New session 21 of user ceph-admin.
Nov 29 02:09:23 np0005539563 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:09:23 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:09:23 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:09:23 np0005539563 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:09:23 np0005539563 systemd[75963]: Queued start job for default target Main User Target.
Nov 29 02:09:23 np0005539563 systemd-logind[785]: New session 23 of user ceph-admin.
Nov 29 02:09:23 np0005539563 systemd[75963]: Created slice User Application Slice.
Nov 29 02:09:23 np0005539563 systemd[75963]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:09:23 np0005539563 systemd[75963]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:09:23 np0005539563 systemd[75963]: Reached target Paths.
Nov 29 02:09:23 np0005539563 systemd[75963]: Reached target Timers.
Nov 29 02:09:23 np0005539563 systemd[75963]: Starting D-Bus User Message Bus Socket...
Nov 29 02:09:23 np0005539563 systemd[75963]: Starting Create User's Volatile Files and Directories...
Nov 29 02:09:24 np0005539563 systemd[75963]: Finished Create User's Volatile Files and Directories.
Nov 29 02:09:24 np0005539563 systemd[75963]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:09:24 np0005539563 systemd[75963]: Reached target Sockets.
Nov 29 02:09:24 np0005539563 systemd[75963]: Reached target Basic System.
Nov 29 02:09:24 np0005539563 systemd[75963]: Reached target Main User Target.
Nov 29 02:09:24 np0005539563 systemd[75963]: Startup finished in 121ms.
Nov 29 02:09:24 np0005539563 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:09:24 np0005539563 systemd[1]: Started Session 21 of User ceph-admin.
Nov 29 02:09:24 np0005539563 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 02:09:24 np0005539563 systemd-logind[785]: New session 24 of user ceph-admin.
Nov 29 02:09:24 np0005539563 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 02:09:24 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:24 np0005539563 ceph-mon[74338]: Set ssh ssh_identity_pub
Nov 29 02:09:24 np0005539563 systemd-logind[785]: New session 25 of user ceph-admin.
Nov 29 02:09:24 np0005539563 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 02:09:24 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 02:09:24 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 02:09:25 np0005539563 systemd-logind[785]: New session 26 of user ceph-admin.
Nov 29 02:09:25 np0005539563 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 02:09:25 np0005539563 systemd-logind[785]: New session 27 of user ceph-admin.
Nov 29 02:09:25 np0005539563 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 02:09:25 np0005539563 systemd-logind[785]: New session 28 of user ceph-admin.
Nov 29 02:09:25 np0005539563 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 02:09:25 np0005539563 ceph-mon[74338]: Deploying cephadm binary to compute-0
Nov 29 02:09:26 np0005539563 systemd-logind[785]: New session 29 of user ceph-admin.
Nov 29 02:09:26 np0005539563 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 02:09:26 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:26 np0005539563 systemd-logind[785]: New session 30 of user ceph-admin.
Nov 29 02:09:26 np0005539563 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 02:09:27 np0005539563 systemd-logind[785]: New session 31 of user ceph-admin.
Nov 29 02:09:27 np0005539563 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 02:09:27 np0005539563 systemd-logind[785]: New session 32 of user ceph-admin.
Nov 29 02:09:27 np0005539563 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 02:09:27 np0005539563 systemd-logind[785]: New session 33 of user ceph-admin.
Nov 29 02:09:27 np0005539563 systemd[1]: Started Session 33 of User ceph-admin.
Nov 29 02:09:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:28 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Added host compute-0
Nov 29 02:09:28 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 02:09:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:09:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:09:28 np0005539563 suspicious_robinson[75933]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 02:09:28 np0005539563 systemd[1]: libpod-c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717.scope: Deactivated successfully.
Nov 29 02:09:28 np0005539563 podman[75917]: 2025-11-29 07:09:28.348698456 +0000 UTC m=+5.460304528 container died c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717 (image=quay.io/ceph/ceph:v18, name=suspicious_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:09:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9ea854c5e46d0245a68dcd0f64f8c2a8cfff5bc39983e31296ed008ba1f90525-merged.mount: Deactivated successfully.
Nov 29 02:09:28 np0005539563 podman[75917]: 2025-11-29 07:09:28.395311121 +0000 UTC m=+5.506917193 container remove c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717 (image=quay.io/ceph/ceph:v18, name=suspicious_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:09:28 np0005539563 systemd[1]: libpod-conmon-c34852fe761d78dad3a938d9d545e7651856b8cce806afe09456e0ee9c683717.scope: Deactivated successfully.
Nov 29 02:09:28 np0005539563 podman[76608]: 2025-11-29 07:09:28.463676376 +0000 UTC m=+0.045184487 container create a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:28 np0005539563 systemd[1]: Started libpod-conmon-a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7.scope.
Nov 29 02:09:28 np0005539563 podman[76608]: 2025-11-29 07:09:28.443440997 +0000 UTC m=+0.024949128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:28 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c856792204007997c2020ddcd2a7463ceea0ed80d723ee29a688d8e8b0f30f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c856792204007997c2020ddcd2a7463ceea0ed80d723ee29a688d8e8b0f30f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c856792204007997c2020ddcd2a7463ceea0ed80d723ee29a688d8e8b0f30f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:28 np0005539563 podman[76608]: 2025-11-29 07:09:28.564893744 +0000 UTC m=+0.146401885 container init a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:28 np0005539563 podman[76608]: 2025-11-29 07:09:28.572673954 +0000 UTC m=+0.154182065 container start a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:28 np0005539563 podman[76608]: 2025-11-29 07:09:28.575309856 +0000 UTC m=+0.156817967 container attach a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7 (image=quay.io/ceph/ceph:v18, name=distracted_panini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:28 np0005539563 podman[76725]: 2025-11-29 07:09:28.820909232 +0000 UTC m=+0.038724623 container create 64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57 (image=quay.io/ceph/ceph:v18, name=jolly_tu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:09:28 np0005539563 systemd[1]: Started libpod-conmon-64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57.scope.
Nov 29 02:09:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:28 np0005539563 podman[76725]: 2025-11-29 07:09:28.890929342 +0000 UTC m=+0.108744753 container init 64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57 (image=quay.io/ceph/ceph:v18, name=jolly_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:09:28 np0005539563 podman[76725]: 2025-11-29 07:09:28.897004167 +0000 UTC m=+0.114819558 container start 64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57 (image=quay.io/ceph/ceph:v18, name=jolly_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:28 np0005539563 podman[76725]: 2025-11-29 07:09:28.80352536 +0000 UTC m=+0.021340781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:28 np0005539563 podman[76725]: 2025-11-29 07:09:28.901449858 +0000 UTC m=+0.119265249 container attach 64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57 (image=quay.io/ceph/ceph:v18, name=jolly_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:29 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 02:09:29 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 distracted_panini[76668]: Scheduled mon update...
Nov 29 02:09:29 np0005539563 systemd[1]: libpod-a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7.scope: Deactivated successfully.
Nov 29 02:09:29 np0005539563 podman[76608]: 2025-11-29 07:09:29.140075393 +0000 UTC m=+0.721583514 container died a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7 (image=quay.io/ceph/ceph:v18, name=distracted_panini, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:09:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-09c856792204007997c2020ddcd2a7463ceea0ed80d723ee29a688d8e8b0f30f-merged.mount: Deactivated successfully.
Nov 29 02:09:29 np0005539563 podman[76608]: 2025-11-29 07:09:29.181890558 +0000 UTC m=+0.763398659 container remove a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7 (image=quay.io/ceph/ceph:v18, name=distracted_panini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:09:29 np0005539563 systemd[1]: libpod-conmon-a661b33c128670d7ac9f6b62093f10d3b9539498912b5a9f0a4434a5b50c1ff7.scope: Deactivated successfully.
Nov 29 02:09:29 np0005539563 jolly_tu[76742]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 02:09:29 np0005539563 systemd[1]: libpod-64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57.scope: Deactivated successfully.
Nov 29 02:09:29 np0005539563 podman[76725]: 2025-11-29 07:09:29.209543569 +0000 UTC m=+0.427358960 container died 64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57 (image=quay.io/ceph/ceph:v18, name=jolly_tu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:09:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1a62ac0b6fd1a0224aec13109c5df4c179be9b878d42b9d85761948e7a74328f-merged.mount: Deactivated successfully.
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.248370203 +0000 UTC m=+0.049923526 container create ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb (image=quay.io/ceph/ceph:v18, name=stupefied_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:29 np0005539563 podman[76725]: 2025-11-29 07:09:29.254141869 +0000 UTC m=+0.471957250 container remove 64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57 (image=quay.io/ceph/ceph:v18, name=jolly_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:29 np0005539563 systemd[1]: libpod-conmon-64ae97cf855be95d49544c29c4cfa491fa105ef644c96ce9c64e3c0e23dfcc57.scope: Deactivated successfully.
Nov 29 02:09:29 np0005539563 systemd[1]: Started libpod-conmon-ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb.scope.
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 02:09:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95309218d3a39e947b4f986a6590ebc641420ea46ee1206f5bbcd0c79b41b8ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95309218d3a39e947b4f986a6590ebc641420ea46ee1206f5bbcd0c79b41b8ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95309218d3a39e947b4f986a6590ebc641420ea46ee1206f5bbcd0c79b41b8ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.31054603 +0000 UTC m=+0.112099383 container init ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb (image=quay.io/ceph/ceph:v18, name=stupefied_ride, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.22210846 +0000 UTC m=+0.023661803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.316943883 +0000 UTC m=+0.118497206 container start ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb (image=quay.io/ceph/ceph:v18, name=stupefied_ride, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.320726616 +0000 UTC m=+0.122279939 container attach ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb (image=quay.io/ceph/ceph:v18, name=stupefied_ride, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: Added host compute-0
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:29 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 02:09:29 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:09:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:29 np0005539563 stupefied_ride[76809]: Scheduled mgr update...
Nov 29 02:09:29 np0005539563 systemd[1]: libpod-ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb.scope: Deactivated successfully.
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.93471977 +0000 UTC m=+0.736273093 container died ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb (image=quay.io/ceph/ceph:v18, name=stupefied_ride, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-95309218d3a39e947b4f986a6590ebc641420ea46ee1206f5bbcd0c79b41b8ef-merged.mount: Deactivated successfully.
Nov 29 02:09:29 np0005539563 podman[76782]: 2025-11-29 07:09:29.97489594 +0000 UTC m=+0.776449263 container remove ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb (image=quay.io/ceph/ceph:v18, name=stupefied_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:09:29 np0005539563 systemd[1]: libpod-conmon-ebfd804adb4cf25b468979c7a611134d5cf279f2d0915a6f81c1d0b72c3b33cb.scope: Deactivated successfully.
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.035608158 +0000 UTC m=+0.042565607 container create a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded (image=quay.io/ceph/ceph:v18, name=eager_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:09:30 np0005539563 systemd[1]: Started libpod-conmon-a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded.scope.
Nov 29 02:09:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b25a643fee88b83f7625b581f95b03dccd0614a9563ba8c38b51995718196b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b25a643fee88b83f7625b581f95b03dccd0614a9563ba8c38b51995718196b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b25a643fee88b83f7625b581f95b03dccd0614a9563ba8c38b51995718196b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.110108539 +0000 UTC m=+0.117066008 container init a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded (image=quay.io/ceph/ceph:v18, name=eager_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.017754942 +0000 UTC m=+0.024712411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.117054848 +0000 UTC m=+0.124012297 container start a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded (image=quay.io/ceph/ceph:v18, name=eager_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.120619534 +0000 UTC m=+0.127576983 container attach a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded (image=quay.io/ceph/ceph:v18, name=eager_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 02:09:30 np0005539563 podman[77161]: 2025-11-29 07:09:30.403137212 +0000 UTC m=+0.053669288 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:09:30 np0005539563 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:09:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:30 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 02:09:30 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:30 np0005539563 eager_tu[77085]: Scheduled crash update...
Nov 29 02:09:30 np0005539563 podman[77161]: 2025-11-29 07:09:30.724174974 +0000 UTC m=+0.374706960 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:30 np0005539563 systemd[1]: libpod-a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded.scope: Deactivated successfully.
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.733903058 +0000 UTC m=+0.740860507 container died a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded (image=quay.io/ceph/ceph:v18, name=eager_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: Saving service mon spec with placement count:5
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: Saving service mgr spec with placement count:2
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-69b25a643fee88b83f7625b581f95b03dccd0614a9563ba8c38b51995718196b-merged.mount: Deactivated successfully.
Nov 29 02:09:30 np0005539563 podman[77069]: 2025-11-29 07:09:30.783012071 +0000 UTC m=+0.789969520 container remove a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded (image=quay.io/ceph/ceph:v18, name=eager_tu, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:30 np0005539563 systemd[1]: libpod-conmon-a9db47f19c13097c51e894be286c5f46d5a934fb313c820913f8b24ed5c2bded.scope: Deactivated successfully.
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:30 np0005539563 podman[77229]: 2025-11-29 07:09:30.863726521 +0000 UTC m=+0.061457019 container create 0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9 (image=quay.io/ceph/ceph:v18, name=nice_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:30 np0005539563 systemd[1]: Started libpod-conmon-0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9.scope.
Nov 29 02:09:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:30 np0005539563 podman[77229]: 2025-11-29 07:09:30.825425761 +0000 UTC m=+0.023156299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0300d45373f958db8f3ad8c73d6cf53fcd9e8c84c9e9d3973443f014aa71946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0300d45373f958db8f3ad8c73d6cf53fcd9e8c84c9e9d3973443f014aa71946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0300d45373f958db8f3ad8c73d6cf53fcd9e8c84c9e9d3973443f014aa71946/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:30 np0005539563 podman[77229]: 2025-11-29 07:09:30.931635104 +0000 UTC m=+0.129365612 container init 0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9 (image=quay.io/ceph/ceph:v18, name=nice_meninsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:09:30 np0005539563 podman[77229]: 2025-11-29 07:09:30.937274077 +0000 UTC m=+0.135004585 container start 0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9 (image=quay.io/ceph/ceph:v18, name=nice_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:09:30 np0005539563 podman[77229]: 2025-11-29 07:09:30.942463858 +0000 UTC m=+0.140194366 container attach 0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9 (image=quay.io/ceph/ceph:v18, name=nice_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:31 np0005539563 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77375 (sysctl)
Nov 29 02:09:31 np0005539563 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 02:09:31 np0005539563 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 02:09:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 02:09:32 np0005539563 ceph-mgr[74636]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 02:09:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/604059744' entity='client.admin' 
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: Saving service crash spec with placement *
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:33 np0005539563 systemd[1]: libpod-0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9.scope: Deactivated successfully.
Nov 29 02:09:33 np0005539563 podman[77229]: 2025-11-29 07:09:33.798659643 +0000 UTC m=+2.996390151 container died 0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9 (image=quay.io/ceph/ceph:v18, name=nice_meninsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:09:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f0300d45373f958db8f3ad8c73d6cf53fcd9e8c84c9e9d3973443f014aa71946-merged.mount: Deactivated successfully.
Nov 29 02:09:34 np0005539563 ceph-mon[74338]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 02:09:34 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/604059744' entity='client.admin' 
Nov 29 02:09:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:34 np0005539563 podman[77229]: 2025-11-29 07:09:34.989805469 +0000 UTC m=+4.187535967 container remove 0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9 (image=quay.io/ceph/ceph:v18, name=nice_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.055621345 +0000 UTC m=+0.044865069 container create 8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38 (image=quay.io/ceph/ceph:v18, name=pensive_buck, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:35 np0005539563 systemd[1]: Started libpod-conmon-8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38.scope.
Nov 29 02:09:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456adc537c8c6638433e54050bae24fc49561b9c800586666450575860ca87b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456adc537c8c6638433e54050bae24fc49561b9c800586666450575860ca87b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456adc537c8c6638433e54050bae24fc49561b9c800586666450575860ca87b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.035567761 +0000 UTC m=+0.024811495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.136766547 +0000 UTC m=+0.126010291 container init 8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38 (image=quay.io/ceph/ceph:v18, name=pensive_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.143456379 +0000 UTC m=+0.132700103 container start 8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38 (image=quay.io/ceph/ceph:v18, name=pensive_buck, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.147207161 +0000 UTC m=+0.136450885 container attach 8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38 (image=quay.io/ceph/ceph:v18, name=pensive_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:09:35 np0005539563 systemd[1]: libpod-conmon-0503e3e6e0becfefa81e4de7f6d88872e4c4e0ef1a5be4c5a8565b7a8fc670f9.scope: Deactivated successfully.
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.258860491 +0000 UTC m=+0.038235149 container create 5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:35 np0005539563 systemd[1]: Started libpod-conmon-5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4.scope.
Nov 29 02:09:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.327107433 +0000 UTC m=+0.106482091 container init 5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.332301304 +0000 UTC m=+0.111675962 container start 5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.336096078 +0000 UTC m=+0.115470756 container attach 5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:09:35 np0005539563 epic_ellis[77725]: 167 167
Nov 29 02:09:35 np0005539563 systemd[1]: libpod-5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4.scope: Deactivated successfully.
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.242039695 +0000 UTC m=+0.021414373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.338598956 +0000 UTC m=+0.117973624 container died 5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:09:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-de67a988a8f93c98205c49261b0b6b3ecf40c59bb795d5cd2c39540623aaf802-merged.mount: Deactivated successfully.
Nov 29 02:09:35 np0005539563 podman[77709]: 2025-11-29 07:09:35.489187362 +0000 UTC m=+0.268562020 container remove 5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:35 np0005539563 systemd[1]: libpod-conmon-5ea8c9a3b610eed3956d2699016eb880250a3664782bb9e103111bcf49c2f4e4.scope: Deactivated successfully.
Nov 29 02:09:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 02:09:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:35 np0005539563 systemd[1]: libpod-8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38.scope: Deactivated successfully.
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.757285308 +0000 UTC m=+0.746529032 container died 8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38 (image=quay.io/ceph/ceph:v18, name=pensive_buck, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:09:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7456adc537c8c6638433e54050bae24fc49561b9c800586666450575860ca87b-merged.mount: Deactivated successfully.
Nov 29 02:09:35 np0005539563 podman[77661]: 2025-11-29 07:09:35.799324989 +0000 UTC m=+0.788568713 container remove 8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38 (image=quay.io/ceph/ceph:v18, name=pensive_buck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:09:35 np0005539563 systemd[1]: libpod-conmon-8b3159800b1facad2aa05a1c78e500355453cf69fcf7e4abcd2d4605449ecb38.scope: Deactivated successfully.
Nov 29 02:09:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:35 np0005539563 podman[77775]: 2025-11-29 07:09:35.860508659 +0000 UTC m=+0.042567576 container create 59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3 (image=quay.io/ceph/ceph:v18, name=distracted_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:35 np0005539563 systemd[1]: Started libpod-conmon-59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3.scope.
Nov 29 02:09:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97daab538b037c45712570a7117b0f3660659a87faa9d9f084ce4b1055908ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97daab538b037c45712570a7117b0f3660659a87faa9d9f084ce4b1055908ff9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97daab538b037c45712570a7117b0f3660659a87faa9d9f084ce4b1055908ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:35 np0005539563 podman[77775]: 2025-11-29 07:09:35.84282522 +0000 UTC m=+0.024884167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:35 np0005539563 podman[77775]: 2025-11-29 07:09:35.944358215 +0000 UTC m=+0.126417152 container init 59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3 (image=quay.io/ceph/ceph:v18, name=distracted_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:09:35 np0005539563 podman[77775]: 2025-11-29 07:09:35.949517425 +0000 UTC m=+0.131576332 container start 59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3 (image=quay.io/ceph/ceph:v18, name=distracted_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:09:35 np0005539563 podman[77775]: 2025-11-29 07:09:35.955127777 +0000 UTC m=+0.137186724 container attach 59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3 (image=quay.io/ceph/ceph:v18, name=distracted_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:09:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:36 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 02:09:36 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 02:09:36 np0005539563 distracted_merkle[77791]: Added label _admin to host compute-0
Nov 29 02:09:36 np0005539563 systemd[1]: libpod-59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3.scope: Deactivated successfully.
Nov 29 02:09:36 np0005539563 conmon[77791]: conmon 59bce9327c7d9e33311c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3.scope/container/memory.events
Nov 29 02:09:36 np0005539563 podman[77775]: 2025-11-29 07:09:36.649271717 +0000 UTC m=+0.831330634 container died 59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3 (image=quay.io/ceph/ceph:v18, name=distracted_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:09:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-97daab538b037c45712570a7117b0f3660659a87faa9d9f084ce4b1055908ff9-merged.mount: Deactivated successfully.
Nov 29 02:09:36 np0005539563 podman[77775]: 2025-11-29 07:09:36.691003981 +0000 UTC m=+0.873062898 container remove 59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3 (image=quay.io/ceph/ceph:v18, name=distracted_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:09:36 np0005539563 systemd[1]: libpod-conmon-59bce9327c7d9e33311cf258cbc83707f415504b524035115040774e220981c3.scope: Deactivated successfully.
Nov 29 02:09:36 np0005539563 podman[77830]: 2025-11-29 07:09:36.757595898 +0000 UTC m=+0.044799038 container create b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b (image=quay.io/ceph/ceph:v18, name=jolly_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:36 np0005539563 systemd[1]: Started libpod-conmon-b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b.scope.
Nov 29 02:09:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cfb1b71e3a00203887b1248b167bc2563206e40e66c081e785ae9298dc0bbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cfb1b71e3a00203887b1248b167bc2563206e40e66c081e785ae9298dc0bbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48cfb1b71e3a00203887b1248b167bc2563206e40e66c081e785ae9298dc0bbd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:36 np0005539563 podman[77830]: 2025-11-29 07:09:36.736981534 +0000 UTC m=+0.024184484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:36 np0005539563 podman[77830]: 2025-11-29 07:09:36.838594101 +0000 UTC m=+0.125797021 container init b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b (image=quay.io/ceph/ceph:v18, name=jolly_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:09:36 np0005539563 podman[77830]: 2025-11-29 07:09:36.843886631 +0000 UTC m=+0.131089551 container start b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b (image=quay.io/ceph/ceph:v18, name=jolly_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:09:36 np0005539563 podman[77830]: 2025-11-29 07:09:36.850208862 +0000 UTC m=+0.137411992 container attach b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b (image=quay.io/ceph/ceph:v18, name=jolly_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 02:09:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3560123169' entity='client.admin' 
Nov 29 02:09:37 np0005539563 systemd[1]: libpod-b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b.scope: Deactivated successfully.
Nov 29 02:09:37 np0005539563 podman[77830]: 2025-11-29 07:09:37.441925903 +0000 UTC m=+0.729128823 container died b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b (image=quay.io/ceph/ceph:v18, name=jolly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-48cfb1b71e3a00203887b1248b167bc2563206e40e66c081e785ae9298dc0bbd-merged.mount: Deactivated successfully.
Nov 29 02:09:37 np0005539563 podman[77830]: 2025-11-29 07:09:37.483586644 +0000 UTC m=+0.770789564 container remove b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b (image=quay.io/ceph/ceph:v18, name=jolly_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:09:37 np0005539563 systemd[1]: libpod-conmon-b8701ae79a05848875d8dc7dc3d0a2ce5a18d97cbd3386deea3c194163c42f0b.scope: Deactivated successfully.
Nov 29 02:09:37 np0005539563 podman[77884]: 2025-11-29 07:09:37.545358775 +0000 UTC m=+0.041124886 container create 343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd (image=quay.io/ceph/ceph:v18, name=tender_maxwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:09:37 np0005539563 systemd[1]: Started libpod-conmon-343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd.scope.
Nov 29 02:09:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1033da14346cc76c244c6f6d37b3aadf045d6f4049542633f5867554ddd450d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1033da14346cc76c244c6f6d37b3aadf045d6f4049542633f5867554ddd450d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1033da14346cc76c244c6f6d37b3aadf045d6f4049542633f5867554ddd450d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:37 np0005539563 podman[77884]: 2025-11-29 07:09:37.528166755 +0000 UTC m=+0.023932896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:37 np0005539563 podman[77884]: 2025-11-29 07:09:37.625364609 +0000 UTC m=+0.121130750 container init 343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd (image=quay.io/ceph/ceph:v18, name=tender_maxwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:37 np0005539563 podman[77884]: 2025-11-29 07:09:37.631129283 +0000 UTC m=+0.126895394 container start 343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd (image=quay.io/ceph/ceph:v18, name=tender_maxwell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:09:37 np0005539563 podman[77884]: 2025-11-29 07:09:37.634728422 +0000 UTC m=+0.130494533 container attach 343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd (image=quay.io/ceph/ceph:v18, name=tender_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:09:37 np0005539563 ceph-mon[74338]: Added label _admin to host compute-0
Nov 29 02:09:37 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3560123169' entity='client.admin' 
Nov 29 02:09:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 02:09:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2953019779' entity='client.admin' 
Nov 29 02:09:38 np0005539563 tender_maxwell[77901]: set mgr/dashboard/cluster/status
Nov 29 02:09:38 np0005539563 systemd[1]: libpod-343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd.scope: Deactivated successfully.
Nov 29 02:09:38 np0005539563 podman[77884]: 2025-11-29 07:09:38.332652019 +0000 UTC m=+0.828418130 container died 343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd (image=quay.io/ceph/ceph:v18, name=tender_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:09:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a1033da14346cc76c244c6f6d37b3aadf045d6f4049542633f5867554ddd450d-merged.mount: Deactivated successfully.
Nov 29 02:09:38 np0005539563 podman[77884]: 2025-11-29 07:09:38.378713654 +0000 UTC m=+0.874479765 container remove 343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd (image=quay.io/ceph/ceph:v18, name=tender_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:38 np0005539563 systemd[1]: libpod-conmon-343842dc7f2f1e65a660f0b41e2add34c01abda4c0d0fd237eedbfc25c893efd.scope: Deactivated successfully.
Nov 29 02:09:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:38 np0005539563 podman[77948]: 2025-11-29 07:09:38.554080495 +0000 UTC m=+0.039981512 container create cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:09:38 np0005539563 systemd[1]: Started libpod-conmon-cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438.scope.
Nov 29 02:09:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a419b5bc39a9821f9f2927d7848b53040a76b2914d10ef16739685331b1ec69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a419b5bc39a9821f9f2927d7848b53040a76b2914d10ef16739685331b1ec69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a419b5bc39a9821f9f2927d7848b53040a76b2914d10ef16739685331b1ec69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a419b5bc39a9821f9f2927d7848b53040a76b2914d10ef16739685331b1ec69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:38 np0005539563 podman[77948]: 2025-11-29 07:09:38.536255685 +0000 UTC m=+0.022156722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:09:38 np0005539563 podman[77948]: 2025-11-29 07:09:38.64271649 +0000 UTC m=+0.128617527 container init cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:38 np0005539563 podman[77948]: 2025-11-29 07:09:38.651409323 +0000 UTC m=+0.137310340 container start cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:38 np0005539563 podman[77948]: 2025-11-29 07:09:38.654183577 +0000 UTC m=+0.140084664 container attach cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:09:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:38 np0005539563 python3[77995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:39 np0005539563 podman[77996]: 2025-11-29 07:09:38.933916829 +0000 UTC m=+0.025200294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:39 np0005539563 podman[77996]: 2025-11-29 07:09:39.173950079 +0000 UTC m=+0.265233524 container create 31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae (image=quay.io/ceph/ceph:v18, name=competent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:09:39 np0005539563 systemd[1]: Started libpod-conmon-31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae.scope.
Nov 29 02:09:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9499aea2a4f701d2f7dca0f00081e1bb2c9f098cf1685f3265ec1453fc997bdd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9499aea2a4f701d2f7dca0f00081e1bb2c9f098cf1685f3265ec1453fc997bdd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:39 np0005539563 podman[77996]: 2025-11-29 07:09:39.265191481 +0000 UTC m=+0.356474946 container init 31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae (image=quay.io/ceph/ceph:v18, name=competent_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:39 np0005539563 podman[77996]: 2025-11-29 07:09:39.271616276 +0000 UTC m=+0.362899721 container start 31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae (image=quay.io/ceph/ceph:v18, name=competent_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:09:39 np0005539563 podman[77996]: 2025-11-29 07:09:39.321921079 +0000 UTC m=+0.413204514 container attach 31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae (image=quay.io/ceph/ceph:v18, name=competent_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2953019779' entity='client.admin' 
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]: [
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:    {
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "available": false,
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "ceph_device": false,
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "lsm_data": {},
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "lvs": [],
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "path": "/dev/sr0",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "rejected_reasons": [
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "Has a FileSystem",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "Insufficient space (<5GB)"
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        ],
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        "sys_api": {
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "actuators": null,
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "device_nodes": "sr0",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "devname": "sr0",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "human_readable_size": "482.00 KB",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "id_bus": "ata",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "model": "QEMU DVD-ROM",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "nr_requests": "2",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "parent": "/dev/sr0",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "partitions": {},
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "path": "/dev/sr0",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "removable": "1",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "rev": "2.5+",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "ro": "0",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "rotational": "1",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "sas_address": "",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "sas_device_handle": "",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "scheduler_mode": "mq-deadline",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "sectors": 0,
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "sectorsize": "2048",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "size": 493568.0,
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "support_discard": "2048",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "type": "disk",
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:            "vendor": "QEMU"
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:        }
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]:    }
Nov 29 02:09:39 np0005539563 condescending_diffie[77965]: ]
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 02:09:39 np0005539563 systemd[1]: libpod-cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438.scope: Deactivated successfully.
Nov 29 02:09:39 np0005539563 systemd[1]: libpod-cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438.scope: Consumed 1.182s CPU time.
Nov 29 02:09:39 np0005539563 podman[77948]: 2025-11-29 07:09:39.838115663 +0000 UTC m=+1.324016700 container died cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2471743229' entity='client.admin' 
Nov 29 02:09:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0a419b5bc39a9821f9f2927d7848b53040a76b2914d10ef16739685331b1ec69-merged.mount: Deactivated successfully.
Nov 29 02:09:39 np0005539563 systemd[1]: libpod-31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae.scope: Deactivated successfully.
Nov 29 02:09:39 np0005539563 podman[77996]: 2025-11-29 07:09:39.87798816 +0000 UTC m=+0.969271605 container died 31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae (image=quay.io/ceph/ceph:v18, name=competent_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:39 np0005539563 podman[77948]: 2025-11-29 07:09:39.912362261 +0000 UTC m=+1.398263288 container remove cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:39 np0005539563 systemd[1]: libpod-conmon-cb3c488e50a2f4b369e5776b5b368675a537ff587f04e247afda2e1663ef8438.scope: Deactivated successfully.
Nov 29 02:09:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9499aea2a4f701d2f7dca0f00081e1bb2c9f098cf1685f3265ec1453fc997bdd-merged.mount: Deactivated successfully.
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:39 np0005539563 podman[78974]: 2025-11-29 07:09:39.95357298 +0000 UTC m=+0.072983072 container remove 31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae (image=quay.io/ceph/ceph:v18, name=competent_swartz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:39 np0005539563 systemd[1]: libpod-conmon-31e1957864d198ca907f30860b2ca04b357cb27a0582e8ce365f5741de0739ae.scope: Deactivated successfully.
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:09:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:39 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:09:39 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:09:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:40 np0005539563 ansible-async_wrapper.py[79492]: Invoked with j476917150789 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400180.3196895-37272-12739946542236/AnsiballZ_command.py _
Nov 29 02:09:40 np0005539563 ansible-async_wrapper.py[79568]: Starting module and watcher
Nov 29 02:09:40 np0005539563 ansible-async_wrapper.py[79568]: Start watching 79569 (30)
Nov 29 02:09:40 np0005539563 ansible-async_wrapper.py[79569]: Start module (79569)
Nov 29 02:09:40 np0005539563 ansible-async_wrapper.py[79492]: Return async_wrapper task started.
Nov 29 02:09:40 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:09:40 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2471743229' entity='client.admin' 
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:09:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:41 np0005539563 python3[79571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:41 np0005539563 podman[79646]: 2025-11-29 07:09:41.202002829 +0000 UTC m=+0.097088061 container create 7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06 (image=quay.io/ceph/ceph:v18, name=festive_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:09:41 np0005539563 podman[79646]: 2025-11-29 07:09:41.129526454 +0000 UTC m=+0.024611706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:41 np0005539563 systemd[1]: Started libpod-conmon-7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06.scope.
Nov 29 02:09:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8964bc737963c665d2acab3c7760d4bdbed8c62868e20df02d7899ea20d6ecd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8964bc737963c665d2acab3c7760d4bdbed8c62868e20df02d7899ea20d6ecd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:41 np0005539563 podman[79646]: 2025-11-29 07:09:41.367929544 +0000 UTC m=+0.263014776 container init 7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06 (image=quay.io/ceph/ceph:v18, name=festive_wozniak, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:09:41 np0005539563 podman[79646]: 2025-11-29 07:09:41.375174453 +0000 UTC m=+0.270259685 container start 7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06 (image=quay.io/ceph/ceph:v18, name=festive_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:09:41 np0005539563 podman[79646]: 2025-11-29 07:09:41.55072647 +0000 UTC m=+0.445811722 container attach 7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06 (image=quay.io/ceph/ceph:v18, name=festive_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:09:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:09:41 np0005539563 festive_wozniak[79712]: 
Nov 29 02:09:41 np0005539563 festive_wozniak[79712]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 02:09:41 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:09:41 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:09:41 np0005539563 systemd[1]: libpod-7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06.scope: Deactivated successfully.
Nov 29 02:09:41 np0005539563 podman[79646]: 2025-11-29 07:09:41.978686862 +0000 UTC m=+0.873772094 container died 7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06 (image=quay.io/ceph/ceph:v18, name=festive_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:42 np0005539563 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:09:42 np0005539563 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:09:42 np0005539563 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:09:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d8964bc737963c665d2acab3c7760d4bdbed8c62868e20df02d7899ea20d6ecd-merged.mount: Deactivated successfully.
Nov 29 02:09:42 np0005539563 podman[79646]: 2025-11-29 07:09:42.226075034 +0000 UTC m=+1.121160266 container remove 7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06 (image=quay.io/ceph/ceph:v18, name=festive_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:09:42 np0005539563 systemd[1]: libpod-conmon-7f72e6156d857fdea442815941712cccf3abaa2b057c50e72ed9fa493f411e06.scope: Deactivated successfully.
Nov 29 02:09:42 np0005539563 ansible-async_wrapper.py[79569]: Module complete (79569)
Nov 29 02:09:42 np0005539563 python3[80220]: ansible-ansible.legacy.async_status Invoked with jid=j476917150789.79492 mode=status _async_dir=/root/.ansible_async
Nov 29 02:09:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:42 np0005539563 python3[80394]: ansible-ansible.legacy.async_status Invoked with jid=j476917150789.79492 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:09:42 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:09:42 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:09:43 np0005539563 python3[80642]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:09:43 np0005539563 python3[80884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:43 np0005539563 podman[80946]: 2025-11-29 07:09:43.713574064 +0000 UTC m=+0.039408085 container create a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851 (image=quay.io/ceph/ceph:v18, name=xenodochial_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:09:43 np0005539563 systemd[1]: Started libpod-conmon-a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851.scope.
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5672f719790c398c319cdb67050ea866a3a9bc8ce76da694443a65bf368d933e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5672f719790c398c319cdb67050ea866a3a9bc8ce76da694443a65bf368d933e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5672f719790c398c319cdb67050ea866a3a9bc8ce76da694443a65bf368d933e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:43 np0005539563 podman[80946]: 2025-11-29 07:09:43.696522637 +0000 UTC m=+0.022356688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:43 np0005539563 podman[80946]: 2025-11-29 07:09:43.801105724 +0000 UTC m=+0.126939765 container init a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851 (image=quay.io/ceph/ceph:v18, name=xenodochial_keller, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:43 np0005539563 podman[80946]: 2025-11-29 07:09:43.806592131 +0000 UTC m=+0.132426152 container start a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851 (image=quay.io/ceph/ceph:v18, name=xenodochial_keller, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:09:43 np0005539563 podman[80946]: 2025-11-29 07:09:43.809744056 +0000 UTC m=+0.135578087 container attach a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851 (image=quay.io/ceph/ceph:v18, name=xenodochial_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 8740cf2e-ae5b-4cc5-b466-c245d94baf95 (Updating crash deployment (+1 -> 1))
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 02:09:43 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 02:09:44 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:09:44 np0005539563 xenodochial_keller[81009]: 
Nov 29 02:09:44 np0005539563 xenodochial_keller[81009]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 02:09:44 np0005539563 systemd[1]: libpod-a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851.scope: Deactivated successfully.
Nov 29 02:09:44 np0005539563 podman[80946]: 2025-11-29 07:09:44.393797565 +0000 UTC m=+0.719631596 container died a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851 (image=quay.io/ceph/ceph:v18, name=xenodochial_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5672f719790c398c319cdb67050ea866a3a9bc8ce76da694443a65bf368d933e-merged.mount: Deactivated successfully.
Nov 29 02:09:44 np0005539563 podman[80946]: 2025-11-29 07:09:44.585675795 +0000 UTC m=+0.911509826 container remove a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851 (image=quay.io/ceph/ceph:v18, name=xenodochial_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:09:44 np0005539563 systemd[1]: libpod-conmon-a717614a166cb4a4ad3405fa45b1e44f54db219e4f4e467ef4e31b642a488851.scope: Deactivated successfully.
Nov 29 02:09:44 np0005539563 podman[81239]: 2025-11-29 07:09:44.666031249 +0000 UTC m=+0.045380295 container create ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:09:44 np0005539563 systemd[1]: Started libpod-conmon-ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b.scope.
Nov 29 02:09:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:44 np0005539563 podman[81239]: 2025-11-29 07:09:44.646038354 +0000 UTC m=+0.025387440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:09:44 np0005539563 podman[81239]: 2025-11-29 07:09:44.888618941 +0000 UTC m=+0.267968017 container init ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:09:44 np0005539563 podman[81239]: 2025-11-29 07:09:44.894799588 +0000 UTC m=+0.274148644 container start ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:44 np0005539563 nostalgic_lamarr[81255]: 167 167
Nov 29 02:09:44 np0005539563 systemd[1]: libpod-ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b.scope: Deactivated successfully.
Nov 29 02:09:44 np0005539563 podman[81239]: 2025-11-29 07:09:44.911576206 +0000 UTC m=+0.290925262 container attach ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:44 np0005539563 podman[81239]: 2025-11-29 07:09:44.912163664 +0000 UTC m=+0.291512720 container died ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:09:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:09:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:09:44 np0005539563 ceph-mon[74338]: Deploying daemon crash.compute-0 on compute-0
Nov 29 02:09:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0dac0c9d2e375195767e715c6be18e24d602d797def6bd0c72a4fffbe75008cd-merged.mount: Deactivated successfully.
Nov 29 02:09:45 np0005539563 python3[81288]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:45 np0005539563 podman[81239]: 2025-11-29 07:09:45.385804458 +0000 UTC m=+0.765153514 container remove ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:09:45 np0005539563 podman[81299]: 2025-11-29 07:09:45.49283017 +0000 UTC m=+0.398610613 container create fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e (image=quay.io/ceph/ceph:v18, name=distracted_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:45 np0005539563 systemd[1]: Reloading.
Nov 29 02:09:45 np0005539563 podman[81299]: 2025-11-29 07:09:45.449075895 +0000 UTC m=+0.354856368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:45 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:45 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:45 np0005539563 systemd[1]: libpod-conmon-ceea0e15ef97d6903783a008a509643b1591228c5f61ebba7ff95378a07ca66b.scope: Deactivated successfully.
Nov 29 02:09:45 np0005539563 systemd[1]: Started libpod-conmon-fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e.scope.
Nov 29 02:09:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34603d0e2e19c7cee9410901c6080d6e99fa562e86c79eb71bd46e24156dc9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34603d0e2e19c7cee9410901c6080d6e99fa562e86c79eb71bd46e24156dc9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34603d0e2e19c7cee9410901c6080d6e99fa562e86c79eb71bd46e24156dc9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:45 np0005539563 systemd[1]: Reloading.
Nov 29 02:09:45 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:45 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:45 np0005539563 ansible-async_wrapper.py[79568]: Done in kid B.
Nov 29 02:09:45 np0005539563 podman[81299]: 2025-11-29 07:09:45.945478339 +0000 UTC m=+0.851258812 container init fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e (image=quay.io/ceph/ceph:v18, name=distracted_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:45 np0005539563 podman[81299]: 2025-11-29 07:09:45.954202483 +0000 UTC m=+0.859982936 container start fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e (image=quay.io/ceph/ceph:v18, name=distracted_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:09:45 np0005539563 podman[81299]: 2025-11-29 07:09:45.958997008 +0000 UTC m=+0.864777491 container attach fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e (image=quay.io/ceph/ceph:v18, name=distracted_lovelace, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:46 np0005539563 systemd[1]: Starting Ceph crash.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:09:46 np0005539563 podman[81445]: 2025-11-29 07:09:46.289683553 +0000 UTC m=+0.021624596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 02:09:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:46 np0005539563 podman[81445]: 2025-11-29 07:09:46.64517497 +0000 UTC m=+0.377115983 container create 14251f2dd995a89033cd5195e9621ed4f29304c8b2b85ed3eaa956a23e8c6327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/202386828' entity='client.admin' 
Nov 29 02:09:46 np0005539563 systemd[1]: libpod-fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e.scope: Deactivated successfully.
Nov 29 02:09:46 np0005539563 podman[81299]: 2025-11-29 07:09:46.672306751 +0000 UTC m=+1.578087204 container died fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e (image=quay.io/ceph/ceph:v18, name=distracted_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c66233aa900e3db4947abe0bd1f1fa407e699834364a209f39ecb22bc89b228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c66233aa900e3db4947abe0bd1f1fa407e699834364a209f39ecb22bc89b228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c66233aa900e3db4947abe0bd1f1fa407e699834364a209f39ecb22bc89b228/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c66233aa900e3db4947abe0bd1f1fa407e699834364a209f39ecb22bc89b228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ed34603d0e2e19c7cee9410901c6080d6e99fa562e86c79eb71bd46e24156dc9-merged.mount: Deactivated successfully.
Nov 29 02:09:46 np0005539563 podman[81299]: 2025-11-29 07:09:46.738410613 +0000 UTC m=+1.644191066 container remove fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e (image=quay.io/ceph/ceph:v18, name=distracted_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:09:46 np0005539563 systemd[1]: libpod-conmon-fad5d74fdf71f17ed78a099362cb1a7e81b5cb372f7072fbbd182956c6b25c8e.scope: Deactivated successfully.
Nov 29 02:09:46 np0005539563 podman[81445]: 2025-11-29 07:09:46.754276724 +0000 UTC m=+0.486217767 container init 14251f2dd995a89033cd5195e9621ed4f29304c8b2b85ed3eaa956a23e8c6327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:46 np0005539563 podman[81445]: 2025-11-29 07:09:46.759353767 +0000 UTC m=+0.491294780 container start 14251f2dd995a89033cd5195e9621ed4f29304c8b2b85ed3eaa956a23e8c6327 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:09:46 np0005539563 bash[81445]: 14251f2dd995a89033cd5195e9621ed4f29304c8b2b85ed3eaa956a23e8c6327
Nov 29 02:09:46 np0005539563 systemd[1]: Started Ceph crash.compute-0 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:46 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 8740cf2e-ae5b-4cc5-b466-c245d94baf95 (Updating crash deployment (+1 -> 1))
Nov 29 02:09:46 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 8740cf2e-ae5b-4cc5-b466-c245d94baf95 (Updating crash deployment (+1 -> 1)) in 3 seconds
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0bfaea23-25ef-463e-bb00-99830ec53b96 does not exist
Nov 29 02:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:09:46 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 85e7b5a0-6ed3-4992-9096-03cd77754d2d does not exist
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:09:47 np0005539563 python3[81521]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 podman[81524]: 2025-11-29 07:09:47.119488614 +0000 UTC m=+0.072588169 container create f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77 (image=quay.io/ceph/ceph:v18, name=charming_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: 2025-11-29T07:09:47.136+0000 7f495a5e4640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: 2025-11-29T07:09:47.136+0000 7f495a5e4640 -1 AuthRegistry(0x7f4954067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: 2025-11-29T07:09:47.137+0000 7f495a5e4640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: 2025-11-29T07:09:47.137+0000 7f495a5e4640 -1 AuthRegistry(0x7f495a5e3000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: 2025-11-29T07:09:47.138+0000 7f4953fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: 2025-11-29T07:09:47.138+0000 7f495a5e4640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 02:09:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-crash-compute-0[81488]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 02:09:47 np0005539563 systemd[1]: Started libpod-conmon-f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77.scope.
Nov 29 02:09:47 np0005539563 podman[81524]: 2025-11-29 07:09:47.076502363 +0000 UTC m=+0.029601918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850898ba574a28da05548cf0a22daeb5b8290ba853e4d0d67096b4d7e3ee1f04/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850898ba574a28da05548cf0a22daeb5b8290ba853e4d0d67096b4d7e3ee1f04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850898ba574a28da05548cf0a22daeb5b8290ba853e4d0d67096b4d7e3ee1f04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:47 np0005539563 podman[81524]: 2025-11-29 07:09:47.35808685 +0000 UTC m=+0.311186435 container init f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77 (image=quay.io/ceph/ceph:v18, name=charming_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:09:47 np0005539563 podman[81524]: 2025-11-29 07:09:47.370190446 +0000 UTC m=+0.323290021 container start f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77 (image=quay.io/ceph/ceph:v18, name=charming_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:09:47 np0005539563 podman[81524]: 2025-11-29 07:09:47.37526892 +0000 UTC m=+0.328368475 container attach f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77 (image=quay.io/ceph/ceph:v18, name=charming_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/202386828' entity='client.admin' 
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:47 np0005539563 podman[81791]: 2025-11-29 07:09:47.853545585 +0000 UTC m=+0.049106188 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:47 np0005539563 podman[81791]: 2025-11-29 07:09:47.947396717 +0000 UTC m=+0.142957330 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 02:09:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576660291' entity='client.admin' 
Nov 29 02:09:48 np0005539563 systemd[1]: libpod-f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77.scope: Deactivated successfully.
Nov 29 02:09:48 np0005539563 podman[81524]: 2025-11-29 07:09:48.012635314 +0000 UTC m=+0.965734859 container died f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77 (image=quay.io/ceph/ceph:v18, name=charming_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:09:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-850898ba574a28da05548cf0a22daeb5b8290ba853e4d0d67096b4d7e3ee1f04-merged.mount: Deactivated successfully.
Nov 29 02:09:48 np0005539563 podman[81524]: 2025-11-29 07:09:48.065117773 +0000 UTC m=+1.018217328 container remove f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77 (image=quay.io/ceph/ceph:v18, name=charming_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:09:48 np0005539563 systemd[1]: libpod-conmon-f0f4faccdf4319025638e5c091a57de5cfb467f58300b68e66c61f56da4cac77.scope: Deactivated successfully.
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 1 completed events
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 26ebf660-eac5-4e0a-8c69-37ddfc5c66ca does not exist
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f0101685-85de-4ae4-b3c9-717a4dda9d6d does not exist
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ed498338-bbf4-4481-8e65-0b7e316a655c does not exist
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:09:48 np0005539563 python3[81920]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:48 np0005539563 podman[81980]: 2025-11-29 07:09:48.460310902 +0000 UTC m=+0.023798912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:49 np0005539563 podman[81980]: 2025-11-29 07:09:49.910581134 +0000 UTC m=+1.474069114 container create 5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405 (image=quay.io/ceph/ceph:v18, name=recursing_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/1576660291' entity='client.admin' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:09:49 np0005539563 ceph-mon[74338]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:09:50 np0005539563 systemd[1]: Started libpod-conmon-5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405.scope.
Nov 29 02:09:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1749e4538f66e0f2008e9e3dfbfb9914a4934dec79e15bc498cceeac2295b10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1749e4538f66e0f2008e9e3dfbfb9914a4934dec79e15bc498cceeac2295b10/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1749e4538f66e0f2008e9e3dfbfb9914a4934dec79e15bc498cceeac2295b10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:50 np0005539563 podman[81980]: 2025-11-29 07:09:50.129617717 +0000 UTC m=+1.693105777 container init 5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405 (image=quay.io/ceph/ceph:v18, name=recursing_wu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:09:50 np0005539563 podman[81980]: 2025-11-29 07:09:50.135534197 +0000 UTC m=+1.699022177 container start 5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405 (image=quay.io/ceph/ceph:v18, name=recursing_wu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:50 np0005539563 podman[81980]: 2025-11-29 07:09:50.138685603 +0000 UTC m=+1.702173613 container attach 5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405 (image=quay.io/ceph/ceph:v18, name=recursing_wu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.166813624 +0000 UTC m=+0.069209437 container create 49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_morse, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:09:50 np0005539563 systemd[1]: Started libpod-conmon-49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4.scope.
Nov 29 02:09:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.142019393 +0000 UTC m=+0.044415306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.241374882 +0000 UTC m=+0.143770715 container init 49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.245767625 +0000 UTC m=+0.148163448 container start 49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.248710324 +0000 UTC m=+0.151106157 container attach 49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:50 np0005539563 cool_morse[82098]: 167 167
Nov 29 02:09:50 np0005539563 systemd[1]: libpod-49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4.scope: Deactivated successfully.
Nov 29 02:09:50 np0005539563 conmon[82098]: conmon 49fbe71932e4b77473b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4.scope/container/memory.events
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.250309362 +0000 UTC m=+0.152705175 container died 49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 02:09:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0a86417097022fec2163c4d38b876a2e61a3c47975ea683c84442c9bfae855fb-merged.mount: Deactivated successfully.
Nov 29 02:09:50 np0005539563 podman[82082]: 2025-11-29 07:09:50.282783286 +0000 UTC m=+0.185179099 container remove 49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:50 np0005539563 systemd[1]: libpod-conmon-49fbe71932e4b77473b94ec66e7d1a1e2e287f7e25bb38d2c1363934a69f26e4.scope: Deactivated successfully.
Nov 29 02:09:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681525608' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:50 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rotard (unknown last config time)...
Nov 29 02:09:50 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rotard (unknown last config time)...
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:50 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:09:50 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/681525608' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: Reconfiguring mgr.compute-0.rotard (unknown last config time)...
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681525608' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 02:09:51 np0005539563 recursing_wu[82077]: set require_min_compat_client to mimic
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 02:09:51 np0005539563 systemd[1]: libpod-5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405.scope: Deactivated successfully.
Nov 29 02:09:51 np0005539563 podman[81980]: 2025-11-29 07:09:51.049907549 +0000 UTC m=+2.613395529 container died 5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405 (image=quay.io/ceph/ceph:v18, name=recursing_wu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c1749e4538f66e0f2008e9e3dfbfb9914a4934dec79e15bc498cceeac2295b10-merged.mount: Deactivated successfully.
Nov 29 02:09:51 np0005539563 podman[81980]: 2025-11-29 07:09:51.09417163 +0000 UTC m=+2.657659610 container remove 5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405 (image=quay.io/ceph/ceph:v18, name=recursing_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:51 np0005539563 systemd[1]: libpod-conmon-5f0e8cc8cf24137c989b1913e4b1ea34e982b6d882cf27cc3493dbf16085c405.scope: Deactivated successfully.
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.159988663 +0000 UTC m=+0.038872029 container create 1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_roentgen, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:09:51 np0005539563 systemd[1]: Started libpod-conmon-1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1.scope.
Nov 29 02:09:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.219399312 +0000 UTC m=+0.098282678 container init 1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.224967541 +0000 UTC m=+0.103850907 container start 1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_roentgen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:51 np0005539563 hardcore_roentgen[82278]: 167 167
Nov 29 02:09:51 np0005539563 systemd[1]: libpod-1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1.scope: Deactivated successfully.
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.228363954 +0000 UTC m=+0.107247340 container attach 1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.229472327 +0000 UTC m=+0.108355693 container died 1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.142825323 +0000 UTC m=+0.021708709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:09:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ba7188460e815d2e9c5bc19d00a2ac7d8447e2bdb2b0f68c329c9cc65404f179-merged.mount: Deactivated successfully.
Nov 29 02:09:51 np0005539563 podman[82262]: 2025-11-29 07:09:51.267381806 +0000 UTC m=+0.146265172 container remove 1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:09:51 np0005539563 systemd[1]: libpod-conmon-1745045e82d5ac3a07e13d83b7a9f48acfa34ff8ab27f85a40913eac439d18b1.scope: Deactivated successfully.
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:09:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cdcd9548-1268-4d28-83c2-1cf0bbc0e24b does not exist
Nov 29 02:09:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ba18a932-e949-4fc4-8269-0612a3588f8c does not exist
Nov 29 02:09:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fb294beb-0eea-438b-a13f-b29e0a118c66 does not exist
Nov 29 02:09:51 np0005539563 python3[82371]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:51 np0005539563 podman[82372]: 2025-11-29 07:09:51.759240531 +0000 UTC m=+0.046829409 container create 22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259 (image=quay.io/ceph/ceph:v18, name=recursing_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:09:51 np0005539563 systemd[1]: Started libpod-conmon-22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259.scope.
Nov 29 02:09:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:09:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2e5d878bd78e84acfc8fa782307ecfb1b35f8c17aac082e014c15962babfbf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2e5d878bd78e84acfc8fa782307ecfb1b35f8c17aac082e014c15962babfbf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2e5d878bd78e84acfc8fa782307ecfb1b35f8c17aac082e014c15962babfbf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:09:51 np0005539563 podman[82372]: 2025-11-29 07:09:51.740099632 +0000 UTC m=+0.027688530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:09:51 np0005539563 podman[82372]: 2025-11-29 07:09:51.840097711 +0000 UTC m=+0.127686579 container init 22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259 (image=quay.io/ceph/ceph:v18, name=recursing_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:09:51 np0005539563 podman[82372]: 2025-11-29 07:09:51.84536967 +0000 UTC m=+0.132958548 container start 22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259 (image=quay.io/ceph/ceph:v18, name=recursing_wilbur, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:09:51 np0005539563 podman[82372]: 2025-11-29 07:09:51.848743302 +0000 UTC m=+0.136332210 container attach 22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259 (image=quay.io/ceph/ceph:v18, name=recursing_wilbur, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/681525608' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Added host compute-0
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:09:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7db62543-dbfd-4a42-b132-4e95fe48f962 does not exist
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e3e23c64-ddb1-429d-b24b-ba394de1ddaa does not exist
Nov 29 02:09:52 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3a3c8d8c-5db8-4c6c-bb3a-5adec444752a does not exist
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: Added host compute-0
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:09:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:54 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 29 02:09:54 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 29 02:09:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:55 np0005539563 ceph-mon[74338]: Deploying cephadm binary to compute-1
Nov 29 02:09:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:09:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:58 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Added host compute-1
Nov 29 02:09:58 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 29 02:09:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:09:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:09:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:09:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:59 np0005539563 ceph-mon[74338]: Added host compute-1
Nov 29 02:09:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:09:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:09:59 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 29 02:09:59 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 29 02:10:00 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:00 np0005539563 ceph-mon[74338]: Deploying cephadm binary to compute-2
Nov 29 02:10:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Added host compute-2
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:03 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 recursing_wilbur[82388]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 02:10:03 np0005539563 recursing_wilbur[82388]: Added host 'compute-1' with addr '192.168.122.101'
Nov 29 02:10:03 np0005539563 recursing_wilbur[82388]: Added host 'compute-2' with addr '192.168.122.102'
Nov 29 02:10:03 np0005539563 recursing_wilbur[82388]: Scheduled mon update...
Nov 29 02:10:03 np0005539563 recursing_wilbur[82388]: Scheduled mgr update...
Nov 29 02:10:03 np0005539563 recursing_wilbur[82388]: Scheduled osd.default_drive_group update...
Nov 29 02:10:03 np0005539563 systemd[1]: libpod-22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259.scope: Deactivated successfully.
Nov 29 02:10:03 np0005539563 podman[82372]: 2025-11-29 07:10:03.169446936 +0000 UTC m=+11.457035824 container died 22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259 (image=quay.io/ceph/ceph:v18, name=recursing_wilbur, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-db2e5d878bd78e84acfc8fa782307ecfb1b35f8c17aac082e014c15962babfbf-merged.mount: Deactivated successfully.
Nov 29 02:10:03 np0005539563 podman[82372]: 2025-11-29 07:10:03.22736218 +0000 UTC m=+11.514951058 container remove 22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259 (image=quay.io/ceph/ceph:v18, name=recursing_wilbur, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:03 np0005539563 systemd[1]: libpod-conmon-22d5b35a25e64be030289ac44312b3fb3d1de7dd4d6ef3a444b98633d867d259.scope: Deactivated successfully.
Nov 29 02:10:03 np0005539563 python3[82622]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:03 np0005539563 podman[82624]: 2025-11-29 07:10:03.681513364 +0000 UTC m=+0.020206542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:10:03 np0005539563 podman[82624]: 2025-11-29 07:10:03.805282964 +0000 UTC m=+0.143976112 container create 82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488 (image=quay.io/ceph/ceph:v18, name=gifted_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:03 np0005539563 systemd[1]: Started libpod-conmon-82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488.scope.
Nov 29 02:10:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895bc1ed1e72cce31f95eac55c526d250dcf5651d9beee75e19a794dd633d563/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895bc1ed1e72cce31f95eac55c526d250dcf5651d9beee75e19a794dd633d563/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895bc1ed1e72cce31f95eac55c526d250dcf5651d9beee75e19a794dd633d563/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:03 np0005539563 podman[82624]: 2025-11-29 07:10:03.872938392 +0000 UTC m=+0.211631560 container init 82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488 (image=quay.io/ceph/ceph:v18, name=gifted_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:03 np0005539563 podman[82624]: 2025-11-29 07:10:03.878373937 +0000 UTC m=+0.217067085 container start 82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488 (image=quay.io/ceph/ceph:v18, name=gifted_swirles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:10:03 np0005539563 podman[82624]: 2025-11-29 07:10:03.881438249 +0000 UTC m=+0.220131407 container attach 82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488 (image=quay.io/ceph/ceph:v18, name=gifted_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905345280' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:10:04 np0005539563 gifted_swirles[82640]: 
Nov 29 02:10:04 np0005539563 gifted_swirles[82640]: {"fsid":"38a37ed2-442a-5e0d-a69a-881fdd186450","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":107,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T07:08:13.981497+0000","services":{}},"progress_events":{}}
Nov 29 02:10:04 np0005539563 systemd[1]: libpod-82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488.scope: Deactivated successfully.
Nov 29 02:10:04 np0005539563 podman[82624]: 2025-11-29 07:10:04.514069459 +0000 UTC m=+0.852762597 container died 82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488 (image=quay.io/ceph/ceph:v18, name=gifted_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:10:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-895bc1ed1e72cce31f95eac55c526d250dcf5651d9beee75e19a794dd633d563-merged.mount: Deactivated successfully.
Nov 29 02:10:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:04 np0005539563 podman[82624]: 2025-11-29 07:10:04.571659653 +0000 UTC m=+0.910352801 container remove 82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488 (image=quay.io/ceph/ceph:v18, name=gifted_swirles, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:04 np0005539563 systemd[1]: libpod-conmon-82e5d4c6a9c95a4d6a860967070210370f8392997b9ad5c7472b7769b824c488.scope: Deactivated successfully.
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: Added host compute-2
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 02:10:04 np0005539563 ceph-mon[74338]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 02:10:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:10:12
Nov 29 02:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] No pools available
Nov 29 02:10:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:10:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:10:21 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:10:21 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:10:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:10:22 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:10:22 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:10:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:22 np0005539563 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:10:23 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:10:23 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:10:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:23 np0005539563 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:10:24 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:10:24 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:10:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:24 np0005539563 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev f5eaeb94-c44c-441b-9b79-554064037fa6 (Updating crash deployment (+1 -> 2))
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:10:25.421+0000 7f871c636640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: service_name: mon
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: placement:
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  hosts:
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  - compute-0
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  - compute-1
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  - compute-2
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:10:25.422+0000 7f871c636640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: service_name: mgr
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: placement:
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  hosts:
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  - compute-0
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  - compute-1
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]:  - compute-2
Nov 29 02:10:25 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 29 02:10:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:10:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:10:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 02:10:26 np0005539563 ceph-mon[74338]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:10:26 np0005539563 ceph-mon[74338]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:10:26 np0005539563 ceph-mon[74338]: Deploying daemon crash.compute-1 on compute-1
Nov 29 02:10:26 np0005539563 ceph-mon[74338]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 02:10:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev f5eaeb94-c44c-441b-9b79-554064037fa6 (Updating crash deployment (+1 -> 2))
Nov 29 02:10:28 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event f5eaeb94-c44c-441b-9b79-554064037fa6 (Updating crash deployment (+1 -> 2)) in 3 seconds
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 2 completed events
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.530654988 +0000 UTC m=+0.035558726 container create b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:10:28 np0005539563 systemd[1]: Started libpod-conmon-b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea.scope.
Nov 29 02:10:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.601438619 +0000 UTC m=+0.106342407 container init b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.607592106 +0000 UTC m=+0.112495874 container start b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:28 np0005539563 hungry_kalam[82833]: 167 167
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.514696965 +0000 UTC m=+0.019600733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.611279146 +0000 UTC m=+0.116182894 container attach b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:28 np0005539563 systemd[1]: libpod-b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea.scope: Deactivated successfully.
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.611997395 +0000 UTC m=+0.116901143 container died b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c63a6198e02580fd8d54539a1fec95972f388a63fcf7f11bbf2b536392f6f8eb-merged.mount: Deactivated successfully.
Nov 29 02:10:28 np0005539563 podman[82817]: 2025-11-29 07:10:28.646847712 +0000 UTC m=+0.151751460 container remove b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:28 np0005539563 systemd[1]: libpod-conmon-b5d39ee5a017910be60adcea6edc6c49861ecb18a459d4a3852c8e4808369cea.scope: Deactivated successfully.
Nov 29 02:10:28 np0005539563 podman[82857]: 2025-11-29 07:10:28.800872352 +0000 UTC m=+0.042815704 container create 250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jepsen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:28 np0005539563 systemd[1]: Started libpod-conmon-250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb.scope.
Nov 29 02:10:28 np0005539563 podman[82857]: 2025-11-29 07:10:28.781143207 +0000 UTC m=+0.023086549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83ab8ff7e09c23172ab55f7160de88585a03e1324c8245bffa465c7a15c82ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83ab8ff7e09c23172ab55f7160de88585a03e1324c8245bffa465c7a15c82ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83ab8ff7e09c23172ab55f7160de88585a03e1324c8245bffa465c7a15c82ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83ab8ff7e09c23172ab55f7160de88585a03e1324c8245bffa465c7a15c82ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f83ab8ff7e09c23172ab55f7160de88585a03e1324c8245bffa465c7a15c82ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:28 np0005539563 podman[82857]: 2025-11-29 07:10:28.898873602 +0000 UTC m=+0.140816944 container init 250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:28 np0005539563 podman[82857]: 2025-11-29 07:10:28.904636758 +0000 UTC m=+0.146580070 container start 250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:28 np0005539563 podman[82857]: 2025-11-29 07:10:28.907614289 +0000 UTC m=+0.149557601 container attach 250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:10:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:29 np0005539563 youthful_jepsen[82873]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:10:29 np0005539563 youthful_jepsen[82873]: --> relative data size: 1.0
Nov 29 02:10:29 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:10:29 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 975ae8ee-a376-4084-87fd-232acabcaa54
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "975ae8ee-a376-4084-87fd-232acabcaa54"} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/913899739' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "975ae8ee-a376-4084-87fd-232acabcaa54"}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/913899739' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "975ae8ee-a376-4084-87fd-232acabcaa54"}]': finished
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860"} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2694231546' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860"}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2694231546' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860"}]': finished
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:30 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:10:30 np0005539563 lvm[82920]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:30 np0005539563 lvm[82920]: VG ceph_vg0 finished
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769719368' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: stderr: got monmap epoch 1
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: --> Creating keyring file for osd.0
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 29 02:10:30 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 975ae8ee-a376-4084-87fd-232acabcaa54 --setuser ceph --setgroup ceph
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/913899739' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "975ae8ee-a376-4084-87fd-232acabcaa54"}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/913899739' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "975ae8ee-a376-4084-87fd-232acabcaa54"}]': finished
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.101:0/2694231546' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860"}]: dispatch
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.101:0/2694231546' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0bd7e18e-b0cb-49d8-9a2b-77b4b562d860"}]': finished
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 02:10:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/572845785' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 02:10:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 02:10:32 np0005539563 ceph-mon[74338]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: stderr: 2025-11-29T07:10:30.964+0000 7f7902183740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: stderr: 2025-11-29T07:10:30.964+0000 7f7902183740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: stderr: 2025-11-29T07:10:30.964+0000 7f7902183740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: stderr: 2025-11-29T07:10:30.964+0000 7f7902183740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 02:10:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 29 02:10:33 np0005539563 youthful_jepsen[82873]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 02:10:33 np0005539563 systemd[1]: libpod-250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb.scope: Deactivated successfully.
Nov 29 02:10:33 np0005539563 systemd[1]: libpod-250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb.scope: Consumed 2.708s CPU time.
Nov 29 02:10:33 np0005539563 podman[82857]: 2025-11-29 07:10:33.57326961 +0000 UTC m=+4.815212942 container died 250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f83ab8ff7e09c23172ab55f7160de88585a03e1324c8245bffa465c7a15c82ba-merged.mount: Deactivated successfully.
Nov 29 02:10:33 np0005539563 podman[82857]: 2025-11-29 07:10:33.637587036 +0000 UTC m=+4.879530358 container remove 250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jepsen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:10:33 np0005539563 systemd[1]: libpod-conmon-250ee4c2c010ada6c06070b2409027d9c2781c87f6f5a11b013e5f8491e074bb.scope: Deactivated successfully.
Nov 29 02:10:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.309519623 +0000 UTC m=+0.040292724 container create d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noether, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:10:34 np0005539563 systemd[1]: Started libpod-conmon-d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad.scope.
Nov 29 02:10:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.289549431 +0000 UTC m=+0.020322552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.398526249 +0000 UTC m=+0.129299370 container init d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.408072388 +0000 UTC m=+0.138845489 container start d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.412606801 +0000 UTC m=+0.143379912 container attach d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:34 np0005539563 interesting_noether[84008]: 167 167
Nov 29 02:10:34 np0005539563 systemd[1]: libpod-d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad.scope: Deactivated successfully.
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.415622943 +0000 UTC m=+0.146396054 container died d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:10:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-427a33c929596871831fbf44e189066bfc664313cc5c30af385277f11014303e-merged.mount: Deactivated successfully.
Nov 29 02:10:34 np0005539563 podman[83991]: 2025-11-29 07:10:34.461942 +0000 UTC m=+0.192715101 container remove d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:34 np0005539563 systemd[1]: libpod-conmon-d0e756dd6746b5aeee9d59873063c8e8c1a5a147b2dc74840de0718ddefb40ad.scope: Deactivated successfully.
Nov 29 02:10:34 np0005539563 podman[84032]: 2025-11-29 07:10:34.624213144 +0000 UTC m=+0.046386140 container create 432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:34 np0005539563 systemd[1]: Started libpod-conmon-432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975.scope.
Nov 29 02:10:34 np0005539563 podman[84032]: 2025-11-29 07:10:34.60340374 +0000 UTC m=+0.025576786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938f7e8bb00462b012456079826ee1d3e20094f42c2870ed83b2949409edf17f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938f7e8bb00462b012456079826ee1d3e20094f42c2870ed83b2949409edf17f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938f7e8bb00462b012456079826ee1d3e20094f42c2870ed83b2949409edf17f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938f7e8bb00462b012456079826ee1d3e20094f42c2870ed83b2949409edf17f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:34 np0005539563 podman[84032]: 2025-11-29 07:10:34.73753494 +0000 UTC m=+0.159707956 container init 432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:10:34 np0005539563 podman[84032]: 2025-11-29 07:10:34.746582185 +0000 UTC m=+0.168755181 container start 432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:34 np0005539563 podman[84032]: 2025-11-29 07:10:34.750329347 +0000 UTC m=+0.172502343 container attach 432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 02:10:34 np0005539563 python3[84077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:34 np0005539563 podman[84081]: 2025-11-29 07:10:34.9520044 +0000 UTC m=+0.057578173 container create 8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802 (image=quay.io/ceph/ceph:v18, name=xenodochial_euclid, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:10:35 np0005539563 podman[84081]: 2025-11-29 07:10:34.9228743 +0000 UTC m=+0.028448133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:10:35 np0005539563 systemd[1]: Started libpod-conmon-8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802.scope.
Nov 29 02:10:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a997dd5c5823a5a0d89952020a197468d237133490618d8199a14cda949a7b16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a997dd5c5823a5a0d89952020a197468d237133490618d8199a14cda949a7b16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a997dd5c5823a5a0d89952020a197468d237133490618d8199a14cda949a7b16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]: {
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:    "0": [
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:        {
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "devices": [
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "/dev/loop3"
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            ],
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "lv_name": "ceph_lv0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "lv_size": "7511998464",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "name": "ceph_lv0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "tags": {
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.cluster_name": "ceph",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.crush_device_class": "",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.encrypted": "0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.osd_id": "0",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.type": "block",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:                "ceph.vdo": "0"
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            },
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "type": "block",
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:            "vg_name": "ceph_vg0"
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:        }
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]:    ]
Nov 29 02:10:35 np0005539563 upbeat_heisenberg[84063]: }
Nov 29 02:10:35 np0005539563 systemd[1]: libpod-432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975.scope: Deactivated successfully.
Nov 29 02:10:35 np0005539563 podman[84081]: 2025-11-29 07:10:35.835331146 +0000 UTC m=+0.940905009 container init 8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802 (image=quay.io/ceph/ceph:v18, name=xenodochial_euclid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:10:35 np0005539563 podman[84032]: 2025-11-29 07:10:35.836812086 +0000 UTC m=+1.258985092 container died 432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heisenberg, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:35 np0005539563 podman[84081]: 2025-11-29 07:10:35.850433775 +0000 UTC m=+0.956007588 container start 8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802 (image=quay.io/ceph/ceph:v18, name=xenodochial_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:10:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-938f7e8bb00462b012456079826ee1d3e20094f42c2870ed83b2949409edf17f-merged.mount: Deactivated successfully.
Nov 29 02:10:35 np0005539563 podman[84081]: 2025-11-29 07:10:35.960855993 +0000 UTC m=+1.066429776 container attach 8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802 (image=quay.io/ceph/ceph:v18, name=xenodochial_euclid, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:35 np0005539563 podman[84032]: 2025-11-29 07:10:35.966460915 +0000 UTC m=+1.388633911 container remove 432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:10:35 np0005539563 systemd[1]: libpod-conmon-432435e18502169599d45548b08b2b6441996029e696680c2fa9b7afcfbae975.scope: Deactivated successfully.
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:10:36 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 29 02:10:36 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050962536' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:10:36 np0005539563 xenodochial_euclid[84097]: 
Nov 29 02:10:36 np0005539563 xenodochial_euclid[84097]: {"fsid":"38a37ed2-442a-5e0d-a69a-881fdd186450","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":139,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1764400230,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:10:16.552509+0000","services":{}},"progress_events":{}}
Nov 29 02:10:36 np0005539563 systemd[1]: libpod-8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802.scope: Deactivated successfully.
Nov 29 02:10:36 np0005539563 podman[84081]: 2025-11-29 07:10:36.512959398 +0000 UTC m=+1.618533171 container died 8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802 (image=quay.io/ceph/ceph:v18, name=xenodochial_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:10:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a997dd5c5823a5a0d89952020a197468d237133490618d8199a14cda949a7b16-merged.mount: Deactivated successfully.
Nov 29 02:10:36 np0005539563 podman[84081]: 2025-11-29 07:10:36.877151012 +0000 UTC m=+1.982724785 container remove 8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802 (image=quay.io/ceph/ceph:v18, name=xenodochial_euclid, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:10:36 np0005539563 systemd[1]: libpod-conmon-8d908f99abb23000907c7a95307d4e0479b9d869049b349acff91ffbe33c7802.scope: Deactivated successfully.
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.07084862 +0000 UTC m=+0.048682033 container create 4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_solomon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:37 np0005539563 systemd[1]: Started libpod-conmon-4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534.scope.
Nov 29 02:10:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.049092139 +0000 UTC m=+0.026925582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.153889313 +0000 UTC m=+0.131722736 container init 4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_solomon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.160130443 +0000 UTC m=+0.137963856 container start 4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.164252614 +0000 UTC m=+0.142086047 container attach 4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_solomon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:10:37 np0005539563 unruffled_solomon[84308]: 167 167
Nov 29 02:10:37 np0005539563 systemd[1]: libpod-4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534.scope: Deactivated successfully.
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.166835754 +0000 UTC m=+0.144669177 container died 4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:10:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-957bc70b689cebb41cfd492e7e316fbc852cefc5e57c689411f7c021f2de4e9b-merged.mount: Deactivated successfully.
Nov 29 02:10:37 np0005539563 podman[84292]: 2025-11-29 07:10:37.20388724 +0000 UTC m=+0.181720653 container remove 4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:10:37 np0005539563 systemd[1]: libpod-conmon-4df7ed25365e300ec476e29827b308a345e7ce23c10566005f0059c80c588534.scope: Deactivated successfully.
Nov 29 02:10:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 02:10:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:10:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:10:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:10:37 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Nov 29 02:10:37 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Nov 29 02:10:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:37 np0005539563 podman[84338]: 2025-11-29 07:10:37.531118062 +0000 UTC m=+0.055411416 container create a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:37 np0005539563 systemd[1]: Started libpod-conmon-a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237.scope.
Nov 29 02:10:37 np0005539563 podman[84338]: 2025-11-29 07:10:37.507151862 +0000 UTC m=+0.031445316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49772baed37590ab42638d6a30fa6d1a5655f333f64d74da4f5ac2a40a793575/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49772baed37590ab42638d6a30fa6d1a5655f333f64d74da4f5ac2a40a793575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49772baed37590ab42638d6a30fa6d1a5655f333f64d74da4f5ac2a40a793575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49772baed37590ab42638d6a30fa6d1a5655f333f64d74da4f5ac2a40a793575/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49772baed37590ab42638d6a30fa6d1a5655f333f64d74da4f5ac2a40a793575/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:37 np0005539563 podman[84338]: 2025-11-29 07:10:37.630801287 +0000 UTC m=+0.155094651 container init a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:10:37 np0005539563 podman[84338]: 2025-11-29 07:10:37.642617838 +0000 UTC m=+0.166911222 container start a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:37 np0005539563 podman[84338]: 2025-11-29 07:10:37.64639396 +0000 UTC m=+0.170687344 container attach a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:38 np0005539563 ceph-mon[74338]: Deploying daemon osd.0 on compute-0
Nov 29 02:10:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:10:38 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test[84356]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 02:10:38 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test[84356]:                            [--no-systemd] [--no-tmpfs]
Nov 29 02:10:38 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test[84356]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 02:10:38 np0005539563 systemd[1]: libpod-a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237.scope: Deactivated successfully.
Nov 29 02:10:38 np0005539563 podman[84338]: 2025-11-29 07:10:38.423328458 +0000 UTC m=+0.947621852 container died a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-49772baed37590ab42638d6a30fa6d1a5655f333f64d74da4f5ac2a40a793575-merged.mount: Deactivated successfully.
Nov 29 02:10:38 np0005539563 podman[84338]: 2025-11-29 07:10:38.483564442 +0000 UTC m=+1.007857786 container remove a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate-test, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:38 np0005539563 systemd[1]: libpod-conmon-a481e0f356369e3bec818fe61b806912b305106aea306e3cf0f7d463dd57d237.scope: Deactivated successfully.
Nov 29 02:10:38 np0005539563 systemd[1]: Reloading.
Nov 29 02:10:38 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:38 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:38 np0005539563 systemd[1]: Reloading.
Nov 29 02:10:39 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:39 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:39 np0005539563 systemd[1]: Starting Ceph osd.0 for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:10:39 np0005539563 ceph-mon[74338]: Deploying daemon osd.1 on compute-1
Nov 29 02:10:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:39 np0005539563 podman[84519]: 2025-11-29 07:10:39.484287033 +0000 UTC m=+0.047672325 container create 295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:39 np0005539563 podman[84519]: 2025-11-29 07:10:39.459585122 +0000 UTC m=+0.022970414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4a54b96139682ca8c11efb876eb59176bec5a371aa596a6843a816bce11323/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4a54b96139682ca8c11efb876eb59176bec5a371aa596a6843a816bce11323/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4a54b96139682ca8c11efb876eb59176bec5a371aa596a6843a816bce11323/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4a54b96139682ca8c11efb876eb59176bec5a371aa596a6843a816bce11323/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a4a54b96139682ca8c11efb876eb59176bec5a371aa596a6843a816bce11323/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:39 np0005539563 podman[84519]: 2025-11-29 07:10:39.572413095 +0000 UTC m=+0.135798387 container init 295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:10:39 np0005539563 podman[84519]: 2025-11-29 07:10:39.58477379 +0000 UTC m=+0.148159062 container start 295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:10:39 np0005539563 podman[84519]: 2025-11-29 07:10:39.59031188 +0000 UTC m=+0.153697152 container attach 295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:10:40 np0005539563 bash[84519]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:40 np0005539563 bash[84519]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:40 np0005539563 bash[84519]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:40 np0005539563 bash[84519]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:40 np0005539563 bash[84519]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:10:40 np0005539563 bash[84519]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:10:40 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate[84534]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 02:10:40 np0005539563 bash[84519]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 02:10:40 np0005539563 systemd[1]: libpod-295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0.scope: Deactivated successfully.
Nov 29 02:10:40 np0005539563 podman[84519]: 2025-11-29 07:10:40.654728987 +0000 UTC m=+1.218114289 container died 295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:40 np0005539563 systemd[1]: libpod-295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0.scope: Consumed 1.086s CPU time.
Nov 29 02:10:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7a4a54b96139682ca8c11efb876eb59176bec5a371aa596a6843a816bce11323-merged.mount: Deactivated successfully.
Nov 29 02:10:42 np0005539563 podman[84519]: 2025-11-29 07:10:42.368835776 +0000 UTC m=+2.932221068 container remove 295f3722cc64ea34a5dcf715dd4be275477c6755e861d003785f20fc7d939ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:10:42 np0005539563 podman[84705]: 2025-11-29 07:10:42.636329745 +0000 UTC m=+0.096222756 container create 95d4b7aa7fd594850fca653940ecdbd47e02943ac047dbbbc23f23701c95426a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:10:42 np0005539563 podman[84705]: 2025-11-29 07:10:42.570384966 +0000 UTC m=+0.030278077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70f3f9286b1ccc2c6ec0a992161c7bc10a59528c6c10c0795a8404622cdc123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70f3f9286b1ccc2c6ec0a992161c7bc10a59528c6c10c0795a8404622cdc123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70f3f9286b1ccc2c6ec0a992161c7bc10a59528c6c10c0795a8404622cdc123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70f3f9286b1ccc2c6ec0a992161c7bc10a59528c6c10c0795a8404622cdc123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70f3f9286b1ccc2c6ec0a992161c7bc10a59528c6c10c0795a8404622cdc123/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:42 np0005539563 podman[84705]: 2025-11-29 07:10:42.704165906 +0000 UTC m=+0.164058947 container init 95d4b7aa7fd594850fca653940ecdbd47e02943ac047dbbbc23f23701c95426a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:10:42 np0005539563 podman[84705]: 2025-11-29 07:10:42.712214905 +0000 UTC m=+0.172107916 container start 95d4b7aa7fd594850fca653940ecdbd47e02943ac047dbbbc23f23701c95426a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:10:42 np0005539563 bash[84705]: 95d4b7aa7fd594850fca653940ecdbd47e02943ac047dbbbc23f23701c95426a
Nov 29 02:10:42 np0005539563 systemd[1]: Started Ceph osd.0 for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: pidfile_write: ignore empty --pid-file
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bde639800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bde639800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bde639800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bde639800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bdf47b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bdf47b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bdf47b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bdf47b800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Nov 29 02:10:42 np0005539563 ceph-osd[84724]: bdev(0x561bdf47b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:10:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:10:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:10:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bde639800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: load: jerasure load: lrc 
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.403723334 +0000 UTC m=+0.044759062 container create 48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:10:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:43 np0005539563 systemd[1]: Started libpod-conmon-48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23.scope.
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.384498646 +0000 UTC m=+0.025534394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.507666102 +0000 UTC m=+0.148701850 container init 48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.518253606 +0000 UTC m=+0.159289334 container start 48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.522826455 +0000 UTC m=+0.163862183 container attach 48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_easley, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:10:43 np0005539563 peaceful_easley[84900]: 167 167
Nov 29 02:10:43 np0005539563 systemd[1]: libpod-48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23.scope: Deactivated successfully.
Nov 29 02:10:43 np0005539563 conmon[84900]: conmon 48e0ce4de544ca3a7a94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23.scope/container/memory.events
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.529284022 +0000 UTC m=+0.170319760 container died 48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d68777280d781624ee7e2639aea66222c7031de87153491b996984279d19c92a-merged.mount: Deactivated successfully.
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:10:43 np0005539563 podman[84884]: 2025-11-29 07:10:43.569961127 +0000 UTC m=+0.210996855 container remove 48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:10:43 np0005539563 systemd[1]: libpod-conmon-48e0ce4de544ca3a7a94385729116a94506ad75cc36c179bab560db58cabfc23.scope: Deactivated successfully.
Nov 29 02:10:43 np0005539563 podman[84927]: 2025-11-29 07:10:43.734129566 +0000 UTC m=+0.046512787 container create a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:10:43 np0005539563 systemd[1]: Started libpod-conmon-a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb.scope.
Nov 29 02:10:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:43 np0005539563 podman[84927]: 2025-11-29 07:10:43.713484791 +0000 UTC m=+0.025868032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d075e7ede94e4484b5aee055cb6cd8634b101c2be20b28ad867d4d35c6a170b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d075e7ede94e4484b5aee055cb6cd8634b101c2be20b28ad867d4d35c6a170b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d075e7ede94e4484b5aee055cb6cd8634b101c2be20b28ad867d4d35c6a170b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d075e7ede94e4484b5aee055cb6cd8634b101c2be20b28ad867d4d35c6a170b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:43 np0005539563 podman[84927]: 2025-11-29 07:10:43.822850698 +0000 UTC m=+0.135233919 container init a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goodall, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:10:43 np0005539563 podman[84927]: 2025-11-29 07:10:43.82948008 +0000 UTC m=+0.141863311 container start a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:10:43 np0005539563 podman[84927]: 2025-11-29 07:10:43.832726745 +0000 UTC m=+0.145109966 container attach a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:10:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fcc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluefs mount
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluefs mount shared_bdev_used = 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Git sha 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: DB SUMMARY
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: DB Session ID:  VHAMJ0D8V79S235PALZX
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                                     Options.env: 0x561bdf4cdc70
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                                Options.info_log: 0x561bde6b6ba0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.write_buffer_manager: 0x561bdf5d6460
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.row_cache: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                              Options.wal_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.wal_compression: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Compression algorithms supported:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kZSTD supported: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)
                                                          cache_index_and_filter_blocks: 1
                                                          cache_index_and_filter_blocks_with_high_priority: 0
                                                          pin_l0_filter_and_index_blocks_in_cache: 0
                                                          pin_top_level_index_and_filter: 1
                                                          index_type: 0
                                                          data_block_index_type: 0
                                                          index_shortening: 1
                                                          data_block_hash_table_util_ratio: 0.750000
                                                          checksum: 4
                                                          no_block_cache: 0
                                                          block_cache: 0x561bde6acdd0
                                                          block_cache_name: BinnedLRUCache
                                                          block_cache_options:
                                                            capacity : 483183820
                                                            num_shard_bits : 4
                                                            strict_capacity_limit : 0
                                                            high_pri_pool_ratio: 0.000
                                                          block_cache_compressed: (nil)
                                                          persistent_cache: (nil)
                                                          block_size: 4096
                                                          block_size_deviation: 10
                                                          block_restart_interval: 16
                                                          index_block_restart_interval: 1
                                                          metadata_block_size: 4096
                                                          partition_filters: 0
                                                          use_delta_encoding: 1
                                                          filter_policy: bloomfilter
                                                          whole_key_filtering: 1
                                                          verify_compression: 0
                                                          read_amp_bytes_per_bit: 0
                                                          format_version: 5
                                                          enable_index_compression: 1
                                                          block_align: 0
                                                          max_auto_readahead_size: 262144
                                                          prepopulate_block_cache: 0
                                                          initial_auto_readahead_size: 8192
                                                          num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561bde6acdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561bde6acdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561bde6acdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561bde6acdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561bde6acdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b6600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6acdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b65c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ac430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b65c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ac430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b65c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ac430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cec3133c-7918-4df2-8327-3cc48ba170ff
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400243861199, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400243861481, "job": 1, "event": "recovery_finished"}
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: freelist init
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: freelist _read_cfg
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bluefs umount
Nov 29 02:10:43 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bdev(0x561bdf4fd400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluefs mount
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluefs mount shared_bdev_used = 4718592
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Git sha 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: DB SUMMARY
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: DB Session ID:  VHAMJ0D8V79S235PALZW
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                                     Options.env: 0x561bde6f8690
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                                Options.info_log: 0x561bde6b78a0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.write_buffer_manager: 0x561bdf5d6460
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.row_cache: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                              Options.wal_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.wal_compression: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Compression algorithms supported:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kZSTD supported: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde693b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b7e40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b7e40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561bde6ad770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:           Options.merge_operator: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561bde6b7e40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561bde6ad770#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.compression: LZ4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.num_levels: 7
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cec3133c-7918-4df2-8327-3cc48ba170ff
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400244146842, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400244152872, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400244, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cec3133c-7918-4df2-8327-3cc48ba170ff", "db_session_id": "VHAMJ0D8V79S235PALZW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400244156248, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400244, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cec3133c-7918-4df2-8327-3cc48ba170ff", "db_session_id": "VHAMJ0D8V79S235PALZW", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400244159303, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400244, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cec3133c-7918-4df2-8327-3cc48ba170ff", "db_session_id": "VHAMJ0D8V79S235PALZW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400244161247, "job": 1, "event": "recovery_finished"}
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561bde77fc00
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: DB pointer 0x561bdf5bfa00
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Bloc
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: _get_class not permitted to load lua
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: _get_class not permitted to load sdk
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: _get_class not permitted to load test_remote_reads
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 load_pgs
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 load_pgs opened 0 pgs
Nov 29 02:10:44 np0005539563 ceph-osd[84724]: osd.0 0 log_to_monitors true
Nov 29 02:10:44 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:10:44.209+0000 7f1f0249f740 -1 osd.0 0 log_to_monitors true
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 02:10:44 np0005539563 determined_goodall[84943]: {
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:        "osd_id": 0,
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:        "type": "bluestore"
Nov 29 02:10:44 np0005539563 determined_goodall[84943]:    }
Nov 29 02:10:44 np0005539563 determined_goodall[84943]: }
Nov 29 02:10:44 np0005539563 systemd[1]: libpod-a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb.scope: Deactivated successfully.
Nov 29 02:10:44 np0005539563 podman[84927]: 2025-11-29 07:10:44.76844919 +0000 UTC m=+1.080832411 container died a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 02:10:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2d075e7ede94e4484b5aee055cb6cd8634b101c2be20b28ad867d4d35c6a170b-merged.mount: Deactivated successfully.
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:44 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:44 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 02:10:44 np0005539563 podman[84927]: 2025-11-29 07:10:44.840092728 +0000 UTC m=+1.152475949 container remove a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:44 np0005539563 systemd[1]: libpod-conmon-a57e2882f08ec0fd070d120e50af699828fc3ea4f19827bc1483a15f5d865bcb.scope: Deactivated successfully.
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 02:10:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0 done with init, starting boot process
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0 start_boot
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 02:10:45 np0005539563 ceph-osd[84724]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:45 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:45 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:45 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:45 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:45 np0005539563 ceph-mon[74338]: from='osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 02:10:46 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:46 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:46 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:46 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:47 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:47 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:48 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:48 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:48 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3322826423; not ready for session (expect reconnect)
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:48 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:48 np0005539563 podman[85606]: 2025-11-29 07:10:48.552270773 +0000 UTC m=+0.089930414 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:48 np0005539563 podman[85606]: 2025-11-29 07:10:48.705192201 +0000 UTC m=+0.242851812 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:48 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:10:49 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3322826423; not ready for session (expect reconnect)
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:49 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: from='osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:49 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:49 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:50 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3322826423; not ready for session (expect reconnect)
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:50 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:50 np0005539563 podman[85962]: 2025-11-29 07:10:50.735878153 +0000 UTC m=+0.073415776 container create 4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:10:50 np0005539563 podman[85962]: 2025-11-29 07:10:50.697328433 +0000 UTC m=+0.034866116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:50 np0005539563 systemd[1]: Started libpod-conmon-4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a.scope.
Nov 29 02:10:50 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:50 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:50 np0005539563 podman[85962]: 2025-11-29 07:10:50.846290427 +0000 UTC m=+0.183828070 container init 4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:50 np0005539563 podman[85962]: 2025-11-29 07:10:50.852668752 +0000 UTC m=+0.190206375 container start 4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:10:50 np0005539563 agitated_jones[85978]: 167 167
Nov 29 02:10:50 np0005539563 systemd[1]: libpod-4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a.scope: Deactivated successfully.
Nov 29 02:10:50 np0005539563 podman[85962]: 2025-11-29 07:10:50.897563077 +0000 UTC m=+0.235100700 container attach 4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:50 np0005539563 podman[85962]: 2025-11-29 07:10:50.898006869 +0000 UTC m=+0.235544492 container died 4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:10:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-56588d63369beff97badfd336f7a124c9b2010848b2cabec16932dbc45fd0535-merged.mount: Deactivated successfully.
Nov 29 02:10:51 np0005539563 podman[85962]: 2025-11-29 07:10:51.001569826 +0000 UTC m=+0.339107449 container remove 4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jones, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:10:51 np0005539563 systemd[1]: libpod-conmon-4250b718a4a1ac2f4ad138c80b39f1679402044378f42308d6dbee59ee43a22a.scope: Deactivated successfully.
Nov 29 02:10:51 np0005539563 podman[86000]: 2025-11-29 07:10:51.153331143 +0000 UTC m=+0.049728751 container create abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kirch, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:10:51 np0005539563 systemd[1]: Started libpod-conmon-abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f.scope.
Nov 29 02:10:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:10:51 np0005539563 podman[86000]: 2025-11-29 07:10:51.136495876 +0000 UTC m=+0.032893514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:10:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2c128365fceec8f16cb7a5f7814040efe9123da7ab59a823ea6b657ddae9d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2c128365fceec8f16cb7a5f7814040efe9123da7ab59a823ea6b657ddae9d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2c128365fceec8f16cb7a5f7814040efe9123da7ab59a823ea6b657ddae9d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2c128365fceec8f16cb7a5f7814040efe9123da7ab59a823ea6b657ddae9d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:10:51 np0005539563 podman[86000]: 2025-11-29 07:10:51.260969245 +0000 UTC m=+0.157366883 container init abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:10:51 np0005539563 podman[86000]: 2025-11-29 07:10:51.269481226 +0000 UTC m=+0.165878844 container start abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kirch, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3322826423; not ready for session (expect reconnect)
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:51 np0005539563 podman[86000]: 2025-11-29 07:10:51.274279941 +0000 UTC m=+0.170677569 container attach abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kirch, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.061 iops: 4111.593 elapsed_sec: 0.730
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : OSD bench result of 4111.593189 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 0 waiting for initial osdmap
Nov 29 02:10:51 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:10:51.416+0000 7f1efe41f640 -1 osd.0 0 waiting for initial osdmap
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:10:51 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:10:51.464+0000 7f1ef9a47640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 29 02:10:51 np0005539563 ceph-osd[84724]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3449398090; not ready for session (expect reconnect)
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:51 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3322826423; not ready for session (expect reconnect)
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: OSD bench result of 4111.593189 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 keen_kirch[86016]: [
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:    {
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "available": false,
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "ceph_device": false,
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "lsm_data": {},
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "lvs": [],
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "path": "/dev/sr0",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "rejected_reasons": [
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "Has a FileSystem",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "Insufficient space (<5GB)"
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        ],
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        "sys_api": {
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "actuators": null,
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "device_nodes": "sr0",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "devname": "sr0",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "human_readable_size": "482.00 KB",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "id_bus": "ata",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "model": "QEMU DVD-ROM",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "nr_requests": "2",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "parent": "/dev/sr0",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "partitions": {},
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "path": "/dev/sr0",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "removable": "1",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "rev": "2.5+",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "ro": "0",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "rotational": "1",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "sas_address": "",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "sas_device_handle": "",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "scheduler_mode": "mq-deadline",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "sectors": 0,
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "sectorsize": "2048",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "size": 493568.0,
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "support_discard": "2048",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "type": "disk",
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:            "vendor": "QEMU"
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:        }
Nov 29 02:10:52 np0005539563 keen_kirch[86016]:    }
Nov 29 02:10:52 np0005539563 keen_kirch[86016]: ]
Nov 29 02:10:52 np0005539563 ceph-osd[84724]: osd.0 9 tick checking mon for new map
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Nov 29 02:10:52 np0005539563 systemd[1]: libpod-abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f.scope: Deactivated successfully.
Nov 29 02:10:52 np0005539563 systemd[1]: libpod-abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f.scope: Consumed 1.244s CPU time.
Nov 29 02:10:52 np0005539563 conmon[86016]: conmon abaecfbd71b543e232a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f.scope/container/memory.events
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090] boot
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:52 np0005539563 ceph-osd[84724]: osd.0 10 state: booting -> active
Nov 29 02:10:52 np0005539563 podman[87180]: 2025-11-29 07:10:52.530547531 +0000 UTC m=+0.029185708 container died abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:10:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7c2c128365fceec8f16cb7a5f7814040efe9123da7ab59a823ea6b657ddae9d7-merged.mount: Deactivated successfully.
Nov 29 02:10:52 np0005539563 podman[87180]: 2025-11-29 07:10:52.577861398 +0000 UTC m=+0.076499545 container remove abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:10:52 np0005539563 systemd[1]: libpod-conmon-abaecfbd71b543e232a383dc05f2b6070d33566916f0348196fcf6731cdf773f.scope: Deactivated successfully.
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 02:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:10:52 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:10:53 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] creating mgr pool
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:10:53 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3322826423; not ready for session (expect reconnect)
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:53 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:10:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: osd.0 [v2:192.168.122.100:6802/3449398090,v1:192.168.122.100:6803/3449398090] boot
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: OSD bench result of 6630.780675 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423] boot
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:10:53 np0005539563 ceph-osd[84724]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 02:10:53 np0005539563 ceph-osd[84724]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 02:10:53 np0005539563 ceph-osd[84724]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 02:10:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: osd.1 [v2:192.168.122.101:6800/3322826423,v1:192.168.122.101:6801/3322826423] boot
Nov 29 02:10:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:10:54 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 02:10:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 02:10:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 02:10:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 02:10:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:10:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:10:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 02:10:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 02:10:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 02:10:56 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Nov 29 02:10:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 02:10:56 np0005539563 ceph-mon[74338]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 02:10:56 np0005539563 ceph-mon[74338]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 02:10:57 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rotard(active, since 106s)
Nov 29 02:10:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 02:10:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:10:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 02:11:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 02:11:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:07 np0005539563 python3[87237]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:07 np0005539563 podman[87239]: 2025-11-29 07:11:07.186840378 +0000 UTC m=+0.027053192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:09 np0005539563 podman[87239]: 2025-11-29 07:11:09.691991357 +0000 UTC m=+2.532204131 container create 9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:11:09 np0005539563 systemd[1]: Started libpod-conmon-9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee.scope.
Nov 29 02:11:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea221d4800c1c19406a186400420258969fb34b327561e94ce311576da70f07e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea221d4800c1c19406a186400420258969fb34b327561e94ce311576da70f07e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea221d4800c1c19406a186400420258969fb34b327561e94ce311576da70f07e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:09 np0005539563 podman[87239]: 2025-11-29 07:11:09.780289719 +0000 UTC m=+2.620502513 container init 9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:11:09 np0005539563 podman[87239]: 2025-11-29 07:11:09.79031815 +0000 UTC m=+2.630530924 container start 9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:09 np0005539563 podman[87239]: 2025-11-29 07:11:09.794455417 +0000 UTC m=+2.634668301 container attach 9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:11:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072314952' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:11:10 np0005539563 sweet_banach[87255]: 
Nov 29 02:11:10 np0005539563 sweet_banach[87255]: {"fsid":"38a37ed2-442a-5e0d-a69a-881fdd186450","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":173,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1764400253,"num_in_osds":2,"osd_in_since":1764400230,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475209728,"bytes_avail":14548787200,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:10:16.552509+0000","services":{}},"progress_events":{}}
Nov 29 02:11:10 np0005539563 systemd[1]: libpod-9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee.scope: Deactivated successfully.
Nov 29 02:11:10 np0005539563 conmon[87255]: conmon 9fa1fb14872172fe26ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee.scope/container/memory.events
Nov 29 02:11:10 np0005539563 podman[87239]: 2025-11-29 07:11:10.446748259 +0000 UTC m=+3.286961033 container died 9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:11:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ea221d4800c1c19406a186400420258969fb34b327561e94ce311576da70f07e-merged.mount: Deactivated successfully.
Nov 29 02:11:10 np0005539563 podman[87239]: 2025-11-29 07:11:10.509035805 +0000 UTC m=+3.349248589 container remove 9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee (image=quay.io/ceph/ceph:v18, name=sweet_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:10 np0005539563 systemd[1]: libpod-conmon-9fa1fb14872172fe26ec6d744036fce21389d265f2b925378e7fef1bc3e7f6ee.scope: Deactivated successfully.
Nov 29 02:11:11 np0005539563 python3[87318]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:11 np0005539563 podman[87319]: 2025-11-29 07:11:11.134889002 +0000 UTC m=+0.042168566 container create 5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29 (image=quay.io/ceph/ceph:v18, name=serene_wescoff, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:11 np0005539563 systemd[1]: Started libpod-conmon-5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29.scope.
Nov 29 02:11:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100084cb1ec27cc9c6ba824f9afbb51cd9c3b0a1ccb4e53d1f8147becaa0965/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4100084cb1ec27cc9c6ba824f9afbb51cd9c3b0a1ccb4e53d1f8147becaa0965/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:11 np0005539563 podman[87319]: 2025-11-29 07:11:11.1171182 +0000 UTC m=+0.024397834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:11 np0005539563 podman[87319]: 2025-11-29 07:11:11.225112612 +0000 UTC m=+0.132392186 container init 5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29 (image=quay.io/ceph/ceph:v18, name=serene_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:11 np0005539563 podman[87319]: 2025-11-29 07:11:11.230598875 +0000 UTC m=+0.137878439 container start 5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29 (image=quay.io/ceph/ceph:v18, name=serene_wescoff, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:11:11 np0005539563 podman[87319]: 2025-11-29 07:11:11.233765667 +0000 UTC m=+0.141045231 container attach 5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29 (image=quay.io/ceph/ceph:v18, name=serene_wescoff, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:11:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/871170505' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:11:12
Nov 29 02:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr']
Nov 29 02:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:11:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 02:11:12 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/871170505' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/871170505' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 02:11:12 np0005539563 serene_wescoff[87334]: pool 'vms' created
Nov 29 02:11:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Nov 29 02:11:12 np0005539563 systemd[1]: libpod-5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29.scope: Deactivated successfully.
Nov 29 02:11:12 np0005539563 podman[87362]: 2025-11-29 07:11:12.808292774 +0000 UTC m=+0.024366783 container died 5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29 (image=quay.io/ceph/ceph:v18, name=serene_wescoff, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:11:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4100084cb1ec27cc9c6ba824f9afbb51cd9c3b0a1ccb4e53d1f8147becaa0965-merged.mount: Deactivated successfully.
Nov 29 02:11:12 np0005539563 podman[87362]: 2025-11-29 07:11:12.853447486 +0000 UTC m=+0.069521475 container remove 5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29 (image=quay.io/ceph/ceph:v18, name=serene_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:12 np0005539563 systemd[1]: libpod-conmon-5eb2f54ce025c983347c9173864343b8c783b3e11d53e671dda48edeb9994e29.scope: Deactivated successfully.
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:11:13 np0005539563 python3[87402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:13 np0005539563 podman[87403]: 2025-11-29 07:11:13.227591552 +0000 UTC m=+0.048097249 container create 9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f (image=quay.io/ceph/ceph:v18, name=modest_austin, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:11:13 np0005539563 systemd[1]: Started libpod-conmon-9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f.scope.
Nov 29 02:11:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa7391b1e9d0b4b07a77c12738273ec3e67946e85b89cc6e0b17c911ffee960/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa7391b1e9d0b4b07a77c12738273ec3e67946e85b89cc6e0b17c911ffee960/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:13 np0005539563 podman[87403]: 2025-11-29 07:11:13.202589124 +0000 UTC m=+0.023094821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:13 np0005539563 podman[87403]: 2025-11-29 07:11:13.381528835 +0000 UTC m=+0.202034552 container init 9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f (image=quay.io/ceph/ceph:v18, name=modest_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:11:13 np0005539563 podman[87403]: 2025-11-29 07:11:13.387260225 +0000 UTC m=+0.207765922 container start 9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f (image=quay.io/ceph/ceph:v18, name=modest_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:13 np0005539563 podman[87403]: 2025-11-29 07:11:13.391221677 +0000 UTC m=+0.211727374 container attach 9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f (image=quay.io/ceph/ceph:v18, name=modest_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v67: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 2eace560-32f9-417e-9863-33e4e96de061 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 2eace560-32f9-417e-9863-33e4e96de061 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 02:11:13 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 2eace560-32f9-417e-9863-33e4e96de061 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 0 seconds
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/871170505' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:11:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2971159617' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2971159617' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2971159617' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 02:11:14 np0005539563 modest_austin[87418]: pool 'volumes' created
Nov 29 02:11:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 29 02:11:14 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:14 np0005539563 systemd[1]: libpod-9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f.scope: Deactivated successfully.
Nov 29 02:11:14 np0005539563 podman[87403]: 2025-11-29 07:11:14.858155264 +0000 UTC m=+1.678660971 container died 9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f (image=quay.io/ceph/ceph:v18, name=modest_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5fa7391b1e9d0b4b07a77c12738273ec3e67946e85b89cc6e0b17c911ffee960-merged.mount: Deactivated successfully.
Nov 29 02:11:14 np0005539563 podman[87403]: 2025-11-29 07:11:14.901936659 +0000 UTC m=+1.722442346 container remove 9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f (image=quay.io/ceph/ceph:v18, name=modest_austin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:11:14 np0005539563 systemd[1]: libpod-conmon-9debefd748e1b89f475916960a3f5be119033eaed8c91f199af1499d465d4e2f.scope: Deactivated successfully.
Nov 29 02:11:15 np0005539563 python3[87483]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:15 np0005539563 podman[87484]: 2025-11-29 07:11:15.276464295 +0000 UTC m=+0.110090796 container create efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e (image=quay.io/ceph/ceph:v18, name=adoring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:11:15 np0005539563 podman[87484]: 2025-11-29 07:11:15.189559641 +0000 UTC m=+0.023186132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:15 np0005539563 systemd[1]: Started libpod-conmon-efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e.scope.
Nov 29 02:11:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39d626187f5e1aa5d30fb84179ad461d9e7fc415287812ea872405d90f204a1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39d626187f5e1aa5d30fb84179ad461d9e7fc415287812ea872405d90f204a1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:15 np0005539563 podman[87484]: 2025-11-29 07:11:15.335453366 +0000 UTC m=+0.169079847 container init efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e (image=quay.io/ceph/ceph:v18, name=adoring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:11:15 np0005539563 podman[87484]: 2025-11-29 07:11:15.339649914 +0000 UTC m=+0.173276385 container start efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e (image=quay.io/ceph/ceph:v18, name=adoring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:15 np0005539563 podman[87484]: 2025-11-29 07:11:15.34291181 +0000 UTC m=+0.176538281 container attach efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e (image=quay.io/ceph/ceph:v18, name=adoring_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:11:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v70: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2971159617' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 29 02:11:15 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:11:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/306472305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 02:11:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:11:16 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/306472305' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/306472305' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 02:11:16 np0005539563 adoring_hugle[87500]: pool 'backups' created
Nov 29 02:11:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Nov 29 02:11:16 np0005539563 systemd[1]: libpod-efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e.scope: Deactivated successfully.
Nov 29 02:11:16 np0005539563 podman[87484]: 2025-11-29 07:11:16.884314168 +0000 UTC m=+1.717940649 container died efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e (image=quay.io/ceph/ceph:v18, name=adoring_hugle, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:11:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-39d626187f5e1aa5d30fb84179ad461d9e7fc415287812ea872405d90f204a1b-merged.mount: Deactivated successfully.
Nov 29 02:11:16 np0005539563 podman[87484]: 2025-11-29 07:11:16.930428354 +0000 UTC m=+1.764054825 container remove efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e (image=quay.io/ceph/ceph:v18, name=adoring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:16 np0005539563 systemd[1]: libpod-conmon-efba4c6139b28ba0b566161b490c74d262dcde9e3241f554bb40b05e0c35bb4e.scope: Deactivated successfully.
Nov 29 02:11:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:17 np0005539563 python3[87564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:17 np0005539563 podman[87565]: 2025-11-29 07:11:17.266649576 +0000 UTC m=+0.037644127 container create 6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0 (image=quay.io/ceph/ceph:v18, name=sleepy_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:11:17 np0005539563 systemd[1]: Started libpod-conmon-6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0.scope.
Nov 29 02:11:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71db5ba2fc851a8a8409447c06516659f958fd8b02e94fdd21abf0af5b3e4da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71db5ba2fc851a8a8409447c06516659f958fd8b02e94fdd21abf0af5b3e4da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:17 np0005539563 podman[87565]: 2025-11-29 07:11:17.326398807 +0000 UTC m=+0.097393368 container init 6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0 (image=quay.io/ceph/ceph:v18, name=sleepy_wilbur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 29 02:11:17 np0005539563 podman[87565]: 2025-11-29 07:11:17.332328581 +0000 UTC m=+0.103323132 container start 6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0 (image=quay.io/ceph/ceph:v18, name=sleepy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:17 np0005539563 podman[87565]: 2025-11-29 07:11:17.335998536 +0000 UTC m=+0.106993307 container attach 6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0 (image=quay.io/ceph/ceph:v18, name=sleepy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:11:17 np0005539563 podman[87565]: 2025-11-29 07:11:17.251142954 +0000 UTC m=+0.022137525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v73: 35 pgs: 33 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 02:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 02:11:17 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/306472305' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Nov 29 02:11:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:11:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3001830821' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:18 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 3 completed events
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3001830821' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3001830821' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 02:11:18 np0005539563 sleepy_wilbur[87580]: pool 'images' created
Nov 29 02:11:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Nov 29 02:11:18 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:18 np0005539563 systemd[1]: libpod-6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0.scope: Deactivated successfully.
Nov 29 02:11:18 np0005539563 podman[87565]: 2025-11-29 07:11:18.914779424 +0000 UTC m=+1.685773975 container died 6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0 (image=quay.io/ceph/ceph:v18, name=sleepy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:11:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a71db5ba2fc851a8a8409447c06516659f958fd8b02e94fdd21abf0af5b3e4da-merged.mount: Deactivated successfully.
Nov 29 02:11:18 np0005539563 podman[87565]: 2025-11-29 07:11:18.953863768 +0000 UTC m=+1.724858339 container remove 6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0 (image=quay.io/ceph/ceph:v18, name=sleepy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:11:18 np0005539563 systemd[1]: libpod-conmon-6b10e40ab98cdf9893187540dc3b777cc0eeddcd3efc6fdf406c62a8a431a4b0.scope: Deactivated successfully.
Nov 29 02:11:19 np0005539563 python3[87646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:19 np0005539563 podman[87647]: 2025-11-29 07:11:19.334189204 +0000 UTC m=+0.069180585 container create 6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3 (image=quay.io/ceph/ceph:v18, name=wizardly_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:19 np0005539563 systemd[1]: Started libpod-conmon-6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3.scope.
Nov 29 02:11:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c90a95c6246a5a65facfb14f1cd021feb27e14c03727a175eca66ae7941eff0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c90a95c6246a5a65facfb14f1cd021feb27e14c03727a175eca66ae7941eff0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:19 np0005539563 podman[87647]: 2025-11-29 07:11:19.405938775 +0000 UTC m=+0.140930176 container init 6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3 (image=quay.io/ceph/ceph:v18, name=wizardly_hugle, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:11:19 np0005539563 podman[87647]: 2025-11-29 07:11:19.311998968 +0000 UTC m=+0.046990399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:19 np0005539563 podman[87647]: 2025-11-29 07:11:19.411821628 +0000 UTC m=+0.146813009 container start 6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3 (image=quay.io/ceph/ceph:v18, name=wizardly_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:11:19 np0005539563 podman[87647]: 2025-11-29 07:11:19.41538405 +0000 UTC m=+0.150375461 container attach 6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3 (image=quay.io/ceph/ceph:v18, name=wizardly_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:11:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v76: 36 pgs: 1 creating+peering, 1 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3001830821' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Nov 29 02:11:19 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 02:11:20 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 02:11:20 np0005539563 wizardly_hugle[87662]: pool 'cephfs.cephfs.meta' created
Nov 29 02:11:20 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Nov 29 02:11:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:20 np0005539563 systemd[1]: libpod-6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3.scope: Deactivated successfully.
Nov 29 02:11:20 np0005539563 podman[87647]: 2025-11-29 07:11:20.941178064 +0000 UTC m=+1.676169455 container died 6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3 (image=quay.io/ceph/ceph:v18, name=wizardly_hugle, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2c90a95c6246a5a65facfb14f1cd021feb27e14c03727a175eca66ae7941eff0-merged.mount: Deactivated successfully.
Nov 29 02:11:20 np0005539563 podman[87647]: 2025-11-29 07:11:20.984477897 +0000 UTC m=+1.719469298 container remove 6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3 (image=quay.io/ceph/ceph:v18, name=wizardly_hugle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:20 np0005539563 systemd[1]: libpod-conmon-6189ac55db714ac05f50b355be1d3ddbf0ea98f24eb3051713cec0155a8d4dc3.scope: Deactivated successfully.
Nov 29 02:11:21 np0005539563 python3[87729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:21 np0005539563 podman[87730]: 2025-11-29 07:11:21.309423857 +0000 UTC m=+0.041509917 container create bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c (image=quay.io/ceph/ceph:v18, name=sharp_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:21 np0005539563 systemd[1]: Started libpod-conmon-bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c.scope.
Nov 29 02:11:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992eb13b35e2013ef3d1958c519c2689d5bd4a5ccbcb030c1fd91a6ad9679bf6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992eb13b35e2013ef3d1958c519c2689d5bd4a5ccbcb030c1fd91a6ad9679bf6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:21 np0005539563 podman[87730]: 2025-11-29 07:11:21.387869192 +0000 UTC m=+0.119955332 container init bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c (image=quay.io/ceph/ceph:v18, name=sharp_wu, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:21 np0005539563 podman[87730]: 2025-11-29 07:11:21.292144689 +0000 UTC m=+0.024230769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:21 np0005539563 podman[87730]: 2025-11-29 07:11:21.39471925 +0000 UTC m=+0.126805340 container start bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c (image=quay.io/ceph/ceph:v18, name=sharp_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:11:21 np0005539563 podman[87730]: 2025-11-29 07:11:21.405766657 +0000 UTC m=+0.137852777 container attach bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c (image=quay.io/ceph/ceph:v18, name=sharp_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:11:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v79: 37 pgs: 1 creating+peering, 2 unknown, 34 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 02:11:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 02:11:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Nov 29 02:11:21 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2863789023' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:11:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 02:11:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 02:11:22 np0005539563 sharp_wu[87745]: pool 'cephfs.cephfs.data' created
Nov 29 02:11:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Nov 29 02:11:22 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:11:22 np0005539563 systemd[1]: libpod-bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c.scope: Deactivated successfully.
Nov 29 02:11:22 np0005539563 podman[87730]: 2025-11-29 07:11:22.9662401 +0000 UTC m=+1.698326160 container died bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c (image=quay.io/ceph/ceph:v18, name=sharp_wu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:11:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-992eb13b35e2013ef3d1958c519c2689d5bd4a5ccbcb030c1fd91a6ad9679bf6-merged.mount: Deactivated successfully.
Nov 29 02:11:23 np0005539563 podman[87730]: 2025-11-29 07:11:23.02214025 +0000 UTC m=+1.754226320 container remove bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c (image=quay.io/ceph/ceph:v18, name=sharp_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:23 np0005539563 systemd[1]: libpod-conmon-bfed1181c70422b72bd910af8de9ba6b460904c3532e4dbd1e958ca526f29f9c.scope: Deactivated successfully.
Nov 29 02:11:23 np0005539563 python3[87812]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v82: 38 pgs: 1 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:23 np0005539563 podman[87813]: 2025-11-29 07:11:23.440389181 +0000 UTC m=+0.058745595 container create 56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01 (image=quay.io/ceph/ceph:v18, name=hungry_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:23 np0005539563 systemd[1]: Started libpod-conmon-56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01.scope.
Nov 29 02:11:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212828abc7f7ffc78d1e9bbeaca58a8d56a0448e697a601d8fe3bc610194afba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212828abc7f7ffc78d1e9bbeaca58a8d56a0448e697a601d8fe3bc610194afba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:23 np0005539563 podman[87813]: 2025-11-29 07:11:23.417639601 +0000 UTC m=+0.035996005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:23 np0005539563 podman[87813]: 2025-11-29 07:11:23.525616632 +0000 UTC m=+0.143973046 container init 56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01 (image=quay.io/ceph/ceph:v18, name=hungry_mahavira, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:23 np0005539563 podman[87813]: 2025-11-29 07:11:23.531236758 +0000 UTC m=+0.149593142 container start 56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01 (image=quay.io/ceph/ceph:v18, name=hungry_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:11:23 np0005539563 podman[87813]: 2025-11-29 07:11:23.56100987 +0000 UTC m=+0.179366254 container attach 56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01 (image=quay.io/ceph/ceph:v18, name=hungry_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:11:23 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:11:23 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2884731350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:11:23 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:24 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:11:24 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 02:11:24 np0005539563 hungry_mahavira[87829]: enabled application 'rbd' on pool 'vms'
Nov 29 02:11:24 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Nov 29 02:11:25 np0005539563 systemd[1]: libpod-56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01.scope: Deactivated successfully.
Nov 29 02:11:25 np0005539563 podman[87854]: 2025-11-29 07:11:25.048639333 +0000 UTC m=+0.028804778 container died 56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01 (image=quay.io/ceph/ceph:v18, name=hungry_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:11:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-212828abc7f7ffc78d1e9bbeaca58a8d56a0448e697a601d8fe3bc610194afba-merged.mount: Deactivated successfully.
Nov 29 02:11:25 np0005539563 systemd[75963]: Starting Mark boot as successful...
Nov 29 02:11:25 np0005539563 systemd[75963]: Finished Mark boot as successful.
Nov 29 02:11:25 np0005539563 podman[87854]: 2025-11-29 07:11:25.087882791 +0000 UTC m=+0.068048216 container remove 56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01 (image=quay.io/ceph/ceph:v18, name=hungry_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:11:25 np0005539563 systemd[1]: libpod-conmon-56e457eb8c51cde7e51647bc7c89a48b17a13aef25ffe48f4600fff67b734f01.scope: Deactivated successfully.
Nov 29 02:11:25 np0005539563 python3[87895]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:25 np0005539563 podman[87896]: 2025-11-29 07:11:25.425209173 +0000 UTC m=+0.045500702 container create c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66 (image=quay.io/ceph/ceph:v18, name=priceless_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v85: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:11:25 np0005539563 systemd[1]: Started libpod-conmon-c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66.scope.
Nov 29 02:11:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f1c9946abe09a1e56dd20cc6f232a7c1091ea337e664daa6b60686fbe5dd21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f1c9946abe09a1e56dd20cc6f232a7c1091ea337e664daa6b60686fbe5dd21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:25 np0005539563 podman[87896]: 2025-11-29 07:11:25.485938018 +0000 UTC m=+0.106229577 container init c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66 (image=quay.io/ceph/ceph:v18, name=priceless_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:11:25 np0005539563 podman[87896]: 2025-11-29 07:11:25.492521639 +0000 UTC m=+0.112813178 container start c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66 (image=quay.io/ceph/ceph:v18, name=priceless_noether, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:25 np0005539563 podman[87896]: 2025-11-29 07:11:25.495582439 +0000 UTC m=+0.115873988 container attach c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66 (image=quay.io/ceph/ceph:v18, name=priceless_noether, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:11:25 np0005539563 podman[87896]: 2025-11-29 07:11:25.404463224 +0000 UTC m=+0.024754793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:25 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:11:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/4139808608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:11:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 02:11:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 02:11:27 np0005539563 priceless_noether[87911]: enabled application 'rbd' on pool 'volumes'
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Nov 29 02:11:27 np0005539563 systemd[1]: libpod-c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66.scope: Deactivated successfully.
Nov 29 02:11:27 np0005539563 conmon[87911]: conmon c06bbe31d58045063e3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66.scope/container/memory.events
Nov 29 02:11:27 np0005539563 podman[87896]: 2025-11-29 07:11:27.038689372 +0000 UTC m=+1.658980931 container died c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66 (image=quay.io/ceph/ceph:v18, name=priceless_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:11:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b2f1c9946abe09a1e56dd20cc6f232a7c1091ea337e664daa6b60686fbe5dd21-merged.mount: Deactivated successfully.
Nov 29 02:11:27 np0005539563 podman[87896]: 2025-11-29 07:11:27.085007323 +0000 UTC m=+1.705298882 container remove c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66 (image=quay.io/ceph/ceph:v18, name=priceless_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:27 np0005539563 systemd[1]: libpod-conmon-c06bbe31d58045063e3df8b1766ffe5ef0e155bdea8b17fe53d76abadec95c66.scope: Deactivated successfully.
Nov 29 02:11:27 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:11:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:11:27 np0005539563 python3[87975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:27 np0005539563 podman[87976]: 2025-11-29 07:11:27.404235195 +0000 UTC m=+0.042961765 container create 8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90 (image=quay.io/ceph/ceph:v18, name=condescending_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:27 np0005539563 systemd[1]: Started libpod-conmon-8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90.scope.
Nov 29 02:11:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v88: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88122f615bf30f7939495dd7cc4c168953bba2ffbee2bb9218d1993c9fd77db5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88122f615bf30f7939495dd7cc4c168953bba2ffbee2bb9218d1993c9fd77db5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:27 np0005539563 podman[87976]: 2025-11-29 07:11:27.468260616 +0000 UTC m=+0.106987196 container init 8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90 (image=quay.io/ceph/ceph:v18, name=condescending_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:27 np0005539563 podman[87976]: 2025-11-29 07:11:27.473909973 +0000 UTC m=+0.112636533 container start 8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90 (image=quay.io/ceph/ceph:v18, name=condescending_villani, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:11:27 np0005539563 podman[87976]: 2025-11-29 07:11:27.47691505 +0000 UTC m=+0.115641620 container attach 8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90 (image=quay.io/ceph/ceph:v18, name=condescending_villani, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:11:27 np0005539563 podman[87976]: 2025-11-29 07:11:27.384915594 +0000 UTC m=+0.023642184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 02:11:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/653577093' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.client.admin.keyring
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 02:11:28 np0005539563 condescending_villani[87992]: enabled application 'rbd' on pool 'backups'
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 02:11:28 np0005539563 systemd[1]: libpod-8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90.scope: Deactivated successfully.
Nov 29 02:11:28 np0005539563 podman[87976]: 2025-11-29 07:11:28.050305836 +0000 UTC m=+0.689032396 container died 8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90 (image=quay.io/ceph/ceph:v18, name=condescending_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:11:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-88122f615bf30f7939495dd7cc4c168953bba2ffbee2bb9218d1993c9fd77db5-merged.mount: Deactivated successfully.
Nov 29 02:11:28 np0005539563 podman[87976]: 2025-11-29 07:11:28.087097531 +0000 UTC m=+0.725824091 container remove 8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90 (image=quay.io/ceph/ceph:v18, name=condescending_villani, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:28 np0005539563 systemd[1]: libpod-conmon-8460689a3e289fea3d91e3b868e36e767f276fd2b0cba4da98d0125048440d90.scope: Deactivated successfully.
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:28 np0005539563 python3[88054]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v90: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:28 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev ae2d2005-2684-40f6-bb23-189f84a933be (Updating mon deployment (+2 -> 3))
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:28 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 29 02:11:28 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 29 02:11:28 np0005539563 podman[88055]: 2025-11-29 07:11:28.399582217 +0000 UTC m=+0.039755852 container create eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff (image=quay.io/ceph/ceph:v18, name=dreamy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 systemd[1]: Started libpod-conmon-eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff.scope.
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:11:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e0c129216d4ea87850651d3d7ad1954ee1cf354490d1c47b3b1c90f9992c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e0c129216d4ea87850651d3d7ad1954ee1cf354490d1c47b3b1c90f9992c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:28 np0005539563 podman[88055]: 2025-11-29 07:11:28.46676959 +0000 UTC m=+0.106943255 container init eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff (image=quay.io/ceph/ceph:v18, name=dreamy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:28 np0005539563 podman[88055]: 2025-11-29 07:11:28.473061744 +0000 UTC m=+0.113235369 container start eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff (image=quay.io/ceph/ceph:v18, name=dreamy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:11:28 np0005539563 podman[88055]: 2025-11-29 07:11:28.477102748 +0000 UTC m=+0.117276413 container attach eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff (image=quay.io/ceph/ceph:v18, name=dreamy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:28 np0005539563 podman[88055]: 2025-11-29 07:11:28.383557112 +0000 UTC m=+0.023730777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 02:11:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/1575948781' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: Deploying daemon mon.compute-2 on compute-2
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 02:11:29 np0005539563 dreamy_banach[88071]: enabled application 'rbd' on pool 'images'
Nov 29 02:11:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=17/17 les/c/f=19/19/0 sis=27) [0] r=0 lpr=29 pi=[17,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:11:29 np0005539563 systemd[1]: libpod-eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff.scope: Deactivated successfully.
Nov 29 02:11:29 np0005539563 podman[88055]: 2025-11-29 07:11:29.38685505 +0000 UTC m=+1.027028685 container died eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff (image=quay.io/ceph/ceph:v18, name=dreamy_banach, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:11:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-740e0c129216d4ea87850651d3d7ad1954ee1cf354490d1c47b3b1c90f9992c0-merged.mount: Deactivated successfully.
Nov 29 02:11:29 np0005539563 podman[88055]: 2025-11-29 07:11:29.438610253 +0000 UTC m=+1.078783888 container remove eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff (image=quay.io/ceph/ceph:v18, name=dreamy_banach, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:11:29 np0005539563 systemd[1]: libpod-conmon-eaf2df6e02b7dbfbf74747b628f71f055bf2f14980a9e70570f93c84081fabff.scope: Deactivated successfully.
Nov 29 02:11:29 np0005539563 python3[88135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:29 np0005539563 podman[88136]: 2025-11-29 07:11:29.756598223 +0000 UTC m=+0.024380214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:29 np0005539563 podman[88136]: 2025-11-29 07:11:29.951666933 +0000 UTC m=+0.219448904 container create d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:11:29 np0005539563 systemd[1]: Started libpod-conmon-d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d.scope.
Nov 29 02:11:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f284e3b8339b2dde4e722e1cb41da3b9b77dca8fa525cbc80a8b596d71656af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f284e3b8339b2dde4e722e1cb41da3b9b77dca8fa525cbc80a8b596d71656af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:30 np0005539563 podman[88136]: 2025-11-29 07:11:30.022385308 +0000 UTC m=+0.290167299 container init d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:11:30 np0005539563 podman[88136]: 2025-11-29 07:11:30.027964603 +0000 UTC m=+0.295746574 container start d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:11:30 np0005539563 podman[88136]: 2025-11-29 07:11:30.031394342 +0000 UTC m=+0.299176313 container attach d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:11:30 np0005539563 ceph-mon[74338]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:30 np0005539563 ceph-mon[74338]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:11:30 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2619282733' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:11:30 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 02:11:30 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 02:11:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v92: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 02:11:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:11:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 02:11:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:11:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Nov 29 02:11:31 np0005539563 strange_goldwasser[88151]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 02:11:31 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:11:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Nov 29 02:11:31 np0005539563 systemd[1]: libpod-d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d.scope: Deactivated successfully.
Nov 29 02:11:31 np0005539563 podman[88176]: 2025-11-29 07:11:31.103401433 +0000 UTC m=+0.022470414 container died d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:11:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4f284e3b8339b2dde4e722e1cb41da3b9b77dca8fa525cbc80a8b596d71656af-merged.mount: Deactivated successfully.
Nov 29 02:11:31 np0005539563 podman[88176]: 2025-11-29 07:11:31.138075112 +0000 UTC m=+0.057144083 container remove d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d (image=quay.io/ceph/ceph:v18, name=strange_goldwasser, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:31 np0005539563 systemd[1]: libpod-conmon-d2a75bdafe16102865722dd8108dd131b668c9eadf1d0c0b2af25a0e8b60562d.scope: Deactivated successfully.
Nov 29 02:11:31 np0005539563 python3[88216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:31 np0005539563 podman[88217]: 2025-11-29 07:11:31.460910517 +0000 UTC m=+0.035463770 container create 99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f (image=quay.io/ceph/ceph:v18, name=charming_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 02:11:31 np0005539563 systemd[1]: Started libpod-conmon-99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f.scope.
Nov 29 02:11:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f4b79e0b3428b835e24734638596aafb665b21a9310f6b144096391325c0b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f4b79e0b3428b835e24734638596aafb665b21a9310f6b144096391325c0b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:31 np0005539563 podman[88217]: 2025-11-29 07:11:31.52580215 +0000 UTC m=+0.100355423 container init 99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f (image=quay.io/ceph/ceph:v18, name=charming_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:11:31 np0005539563 podman[88217]: 2025-11-29 07:11:31.532093794 +0000 UTC m=+0.106647047 container start 99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f (image=quay.io/ceph/ceph:v18, name=charming_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:31 np0005539563 podman[88217]: 2025-11-29 07:11:31.535992975 +0000 UTC m=+0.110546228 container attach 99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f (image=quay.io/ceph/ceph:v18, name=charming_lumiere, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:11:31 np0005539563 podman[88217]: 2025-11-29 07:11:31.445226981 +0000 UTC m=+0.019780264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2522884032' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 02:11:32 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 29 02:11:32 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 29 02:11:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v94: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 29 02:11:32 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:32 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 29 02:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:33 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:33 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:11:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v95: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:34 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:34 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:11:34 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:34 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:11:35 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:35 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:11:35 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:35 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:11:36 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.9 deep-scrub starts
Nov 29 02:11:36 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.9 deep-scrub ok
Nov 29 02:11:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v96: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:36 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:36 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:11:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:11:36 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:36 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rotard(active, since 2m)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev ae2d2005-2684-40f6-bb23-189f84a933be (Updating mon deployment (+2 -> 3))
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event ae2d2005-2684-40f6-bb23-189f84a933be (Updating mon deployment (+2 -> 3)) in 9 seconds
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev a9cf5752-7b12-4131-b04b-d71bcfa2a623 (Updating mgr deployment (+2 -> 3))
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Nov 29 02:11:37 np0005539563 charming_lumiere[88232]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: Deploying daemon mon.compute-1 on compute-1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Nov 29 02:11:37 np0005539563 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:11:37 np0005539563 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:11:37 np0005539563 ceph-mon[74338]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:11:37 np0005539563 systemd[1]: libpod-99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f.scope: Deactivated successfully.
Nov 29 02:11:37 np0005539563 podman[88217]: 2025-11-29 07:11:37.710793537 +0000 UTC m=+6.285346790 container died 99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f (image=quay.io/ceph/ceph:v18, name=charming_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:11:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-04f4b79e0b3428b835e24734638596aafb665b21a9310f6b144096391325c0b9-merged.mount: Deactivated successfully.
Nov 29 02:11:37 np0005539563 podman[88217]: 2025-11-29 07:11:37.755329022 +0000 UTC m=+6.329882285 container remove 99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f (image=quay.io/ceph/ceph:v18, name=charming_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:37 np0005539563 systemd[1]: libpod-conmon-99a814aa7599e5d121357172adb0c25ca89c9b4379baddc9b017115c6be2fc1f.scope: Deactivated successfully.
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:37 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:11:38 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 4 completed events
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v98: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:38 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1490416311; not ready for session (expect reconnect)
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2395674344' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: Deploying daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 29 02:11:38 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:38 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:11:38 np0005539563 python3[88346]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:11:38 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 29 02:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:39 np0005539563 python3[88417]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400298.5630705-37393-229691346948393/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:39 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 02:11:39 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 02:11:39 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:39 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:11:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:11:39 np0005539563 python3[88519]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:11:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:40 np0005539563 python3[88594]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400299.5866604-37407-196507230105000/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=39c3189fb27eb844c3cb4b08da26a8beb28f08be backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v99: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:40 np0005539563 python3[88644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:40 np0005539563 podman[88645]: 2025-11-29 07:11:40.619424655 +0000 UTC m=+0.044631369 container create 56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536 (image=quay.io/ceph/ceph:v18, name=great_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:11:40 np0005539563 systemd[1]: Started libpod-conmon-56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536.scope.
Nov 29 02:11:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac4fa26d955d41ad2663b6dc8a8d9c1ed2406bd30f1dd572a5efdb74eed176/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac4fa26d955d41ad2663b6dc8a8d9c1ed2406bd30f1dd572a5efdb74eed176/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac4fa26d955d41ad2663b6dc8a8d9c1ed2406bd30f1dd572a5efdb74eed176/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:40 np0005539563 podman[88645]: 2025-11-29 07:11:40.680613192 +0000 UTC m=+0.105819976 container init 56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536 (image=quay.io/ceph/ceph:v18, name=great_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:40 np0005539563 podman[88645]: 2025-11-29 07:11:40.687232814 +0000 UTC m=+0.112439528 container start 56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536 (image=quay.io/ceph/ceph:v18, name=great_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:40 np0005539563 podman[88645]: 2025-11-29 07:11:40.690241212 +0000 UTC m=+0.115448026 container attach 56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536 (image=quay.io/ceph/ceph:v18, name=great_benz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:40 np0005539563 podman[88645]: 2025-11-29 07:11:40.600397702 +0000 UTC m=+0.025604436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:40 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:40 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:11:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:41 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:41 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:11:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v100: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:42 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:42 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:11:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:11:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:43 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:11:43 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.rotard(active, since 2m)
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:11:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v101: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Nov 29 02:11:44 np0005539563 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:11:44 np0005539563 ceph-mon[74338]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:44 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.jjnjed on compute-1
Nov 29 02:11:44 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.jjnjed on compute-1
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:11:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:11:44 np0005539563 great_benz[88660]: 
Nov 29 02:11:44 np0005539563 great_benz[88660]: [global]
Nov 29 02:11:44 np0005539563 great_benz[88660]: 	fsid = 38a37ed2-442a-5e0d-a69a-881fdd186450
Nov 29 02:11:44 np0005539563 great_benz[88660]: 	mon_host = 192.168.122.100
Nov 29 02:11:44 np0005539563 systemd[1]: libpod-56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536.scope: Deactivated successfully.
Nov 29 02:11:44 np0005539563 podman[88645]: 2025-11-29 07:11:44.528498548 +0000 UTC m=+3.953705272 container died 56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536 (image=quay.io/ceph/ceph:v18, name=great_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b4ac4fa26d955d41ad2663b6dc8a8d9c1ed2406bd30f1dd572a5efdb74eed176-merged.mount: Deactivated successfully.
Nov 29 02:11:44 np0005539563 podman[88645]: 2025-11-29 07:11:44.569282816 +0000 UTC m=+3.994489520 container remove 56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536 (image=quay.io/ceph/ceph:v18, name=great_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:11:44 np0005539563 systemd[1]: libpod-conmon-56513799b3a264f95d3aa3f329e9a4aebd57d91a7e81667a3e075c56474c6536.scope: Deactivated successfully.
Nov 29 02:11:44 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2330088063; not ready for session (expect reconnect)
Nov 29 02:11:44 np0005539563 python3[88722]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:45 np0005539563 podman[88723]: 2025-11-29 07:11:44.92405208 +0000 UTC m=+0.038406588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:11:45 np0005539563 podman[88723]: 2025-11-29 07:11:45.409379941 +0000 UTC m=+0.523734449 container create 7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6 (image=quay.io/ceph/ceph:v18, name=festive_poincare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:45 np0005539563 systemd[1]: Started libpod-conmon-7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6.scope.
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: Cluster is now healthy
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.jjnjed", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: Deploying daemon mgr.compute-1.jjnjed on compute-1
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:11:45 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/2616467829' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:11:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df789183ac26423344942b15534208a77b056ef48dc1dc147bbe74b4dcd20834/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df789183ac26423344942b15534208a77b056ef48dc1dc147bbe74b4dcd20834/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df789183ac26423344942b15534208a77b056ef48dc1dc147bbe74b4dcd20834/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:45 np0005539563 podman[88723]: 2025-11-29 07:11:45.479835659 +0000 UTC m=+0.594190207 container init 7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6 (image=quay.io/ceph/ceph:v18, name=festive_poincare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:45 np0005539563 podman[88723]: 2025-11-29 07:11:45.487138408 +0000 UTC m=+0.601492896 container start 7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6 (image=quay.io/ceph/ceph:v18, name=festive_poincare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:11:45 np0005539563 podman[88723]: 2025-11-29 07:11:45.492494597 +0000 UTC m=+0.606849135 container attach 7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6 (image=quay.io/ceph/ceph:v18, name=festive_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:45 np0005539563 ceph-mgr[74636]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 02:11:45 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T07:11:45.800+0000 7f872ae53640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 02:11:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 02:11:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 02:11:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 02:11:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v102: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4133094199' entity='client.admin' 
Nov 29 02:11:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 29 02:11:47 np0005539563 festive_poincare[88739]: set ssl_option
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:11:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 29 02:11:47 np0005539563 systemd[1]: libpod-7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6.scope: Deactivated successfully.
Nov 29 02:11:47 np0005539563 podman[88723]: 2025-11-29 07:11:47.194609115 +0000 UTC m=+2.308963603 container died 7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6 (image=quay.io/ceph/ceph:v18, name=festive_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev a9cf5752-7b12-4131-b04b-d71bcfa2a623 (Updating mgr deployment (+2 -> 3))
Nov 29 02:11:47 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event a9cf5752-7b12-4131-b04b-d71bcfa2a623 (Updating mgr deployment (+2 -> 3)) in 10 seconds
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:11:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-df789183ac26423344942b15534208a77b056ef48dc1dc147bbe74b4dcd20834-merged.mount: Deactivated successfully.
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/4133094199' entity='client.admin' 
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 podman[88723]: 2025-11-29 07:11:47.237248326 +0000 UTC m=+2.351602814 container remove 7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6 (image=quay.io/ceph/ceph:v18, name=festive_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:47 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 3da6de1f-bb60-426f-a0b8-b99064baae2a (Updating crash deployment (+1 -> 3))
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:11:47 np0005539563 systemd[1]: libpod-conmon-7eccb1308adeecefbd32df2fe6584db2acafbbc204cfd59788ca013b071342e6.scope: Deactivated successfully.
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:47 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 29 02:11:47 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 29 02:11:47 np0005539563 python3[88800]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:47 np0005539563 podman[88801]: 2025-11-29 07:11:47.624023836 +0000 UTC m=+0.040889715 container create be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47 (image=quay.io/ceph/ceph:v18, name=crazy_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:11:47 np0005539563 systemd[1]: Started libpod-conmon-be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47.scope.
Nov 29 02:11:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da33af4c0c09747cd0d5ea3eb248fff7d10c14dbd5f80519758854b8bca6252/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da33af4c0c09747cd0d5ea3eb248fff7d10c14dbd5f80519758854b8bca6252/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da33af4c0c09747cd0d5ea3eb248fff7d10c14dbd5f80519758854b8bca6252/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:47 np0005539563 podman[88801]: 2025-11-29 07:11:47.687529551 +0000 UTC m=+0.104395440 container init be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47 (image=quay.io/ceph/ceph:v18, name=crazy_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:47 np0005539563 podman[88801]: 2025-11-29 07:11:47.692089193 +0000 UTC m=+0.108955072 container start be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47 (image=quay.io/ceph/ceph:v18, name=crazy_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 02:11:47 np0005539563 podman[88801]: 2025-11-29 07:11:47.694686304 +0000 UTC m=+0.111552213 container attach be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47 (image=quay.io/ceph/ceph:v18, name=crazy_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:47 np0005539563 podman[88801]: 2025-11-29 07:11:47.607186462 +0000 UTC m=+0.024052361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:48 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 5 completed events
Nov 29 02:11:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:11:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:11:48 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:48 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:11:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v103: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v104: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: Deploying daemon crash.compute-2 on compute-2
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:50 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:11:50 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:11:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:11:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:51 np0005539563 crazy_allen[88816]: Scheduled rgw.rgw update...
Nov 29 02:11:51 np0005539563 crazy_allen[88816]: Scheduled ingress.rgw.default update...
Nov 29 02:11:51 np0005539563 systemd[1]: libpod-be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47.scope: Deactivated successfully.
Nov 29 02:11:51 np0005539563 podman[88801]: 2025-11-29 07:11:51.168570685 +0000 UTC m=+3.585436574 container died be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47 (image=quay.io/ceph/ceph:v18, name=crazy_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:11:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5da33af4c0c09747cd0d5ea3eb248fff7d10c14dbd5f80519758854b8bca6252-merged.mount: Deactivated successfully.
Nov 29 02:11:51 np0005539563 podman[88801]: 2025-11-29 07:11:51.210548008 +0000 UTC m=+3.627413897 container remove be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47 (image=quay.io/ceph/ceph:v18, name=crazy_allen, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:51 np0005539563 systemd[1]: libpod-conmon-be8f1fcb539b86215760185b15d6629941fc39f7f54703c1a31800fa860f2f47.scope: Deactivated successfully.
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 3da6de1f-bb60-426f-a0b8-b99064baae2a (Updating crash deployment (+1 -> 3))
Nov 29 02:11:52 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 3da6de1f-bb60-426f-a0b8-b99064baae2a (Updating crash deployment (+1 -> 3)) in 5 seconds
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:52 np0005539563 python3[88931]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:11:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:11:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v105: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:52 np0005539563 python3[89077]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400311.9906778-37450-94676089418168/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.786149849 +0000 UTC m=+0.040147905 container create 5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:52 np0005539563 systemd[1]: Started libpod-conmon-5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2.scope.
Nov 29 02:11:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.858994556 +0000 UTC m=+0.112992632 container init 5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.769847389 +0000 UTC m=+0.023845465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.86698132 +0000 UTC m=+0.120979386 container start 5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.870887036 +0000 UTC m=+0.124885132 container attach 5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:11:52 np0005539563 cranky_elbakyan[89183]: 167 167
Nov 29 02:11:52 np0005539563 systemd[1]: libpod-5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2.scope: Deactivated successfully.
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.872706546 +0000 UTC m=+0.126704612 container died 5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:11:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-af1a3d99fc95c826fcc6796757e9bb25f848566ea2f39e823b3942c5d2854f8d-merged.mount: Deactivated successfully.
Nov 29 02:11:52 np0005539563 podman[89167]: 2025-11-29 07:11:52.903043404 +0000 UTC m=+0.157041460 container remove 5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:52 np0005539563 systemd[1]: libpod-conmon-5fb0cb093b3a5115bda21f895f6dd9342d12ba1f6b5bd056eda8c1e07c8a01e2.scope: Deactivated successfully.
Nov 29 02:11:53 np0005539563 python3[89221]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:53 np0005539563 podman[89232]: 2025-11-29 07:11:53.043022532 +0000 UTC m=+0.039002123 container create 903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:53 np0005539563 systemd[1]: Started libpod-conmon-903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422.scope.
Nov 29 02:11:53 np0005539563 podman[89246]: 2025-11-29 07:11:53.098637324 +0000 UTC m=+0.040045482 container create 06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859 (image=quay.io/ceph/ceph:v18, name=thirsty_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3541a4c563261717d28d718d84532f40203950fc188d0a127b22e4167490e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3541a4c563261717d28d718d84532f40203950fc188d0a127b22e4167490e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3541a4c563261717d28d718d84532f40203950fc188d0a127b22e4167490e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3541a4c563261717d28d718d84532f40203950fc188d0a127b22e4167490e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3541a4c563261717d28d718d84532f40203950fc188d0a127b22e4167490e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 systemd[1]: Started libpod-conmon-06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859.scope.
Nov 29 02:11:53 np0005539563 podman[89232]: 2025-11-29 07:11:53.027963366 +0000 UTC m=+0.023942977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:53 np0005539563 podman[89232]: 2025-11-29 07:11:53.125839858 +0000 UTC m=+0.121819459 container init 903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:53 np0005539563 podman[89232]: 2025-11-29 07:11:53.131537482 +0000 UTC m=+0.127517073 container start 903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:11:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:53 np0005539563 podman[89232]: 2025-11-29 07:11:53.136009883 +0000 UTC m=+0.131989474 container attach 903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b36b8fe704b14ddcf7faa511816dc160d40da9e99580beacd5a694865b532d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b36b8fe704b14ddcf7faa511816dc160d40da9e99580beacd5a694865b532d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b36b8fe704b14ddcf7faa511816dc160d40da9e99580beacd5a694865b532d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539563 podman[89246]: 2025-11-29 07:11:53.147819402 +0000 UTC m=+0.089227570 container init 06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859 (image=quay.io/ceph/ceph:v18, name=thirsty_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:53 np0005539563 podman[89246]: 2025-11-29 07:11:53.153877865 +0000 UTC m=+0.095286023 container start 06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859 (image=quay.io/ceph/ceph:v18, name=thirsty_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:11:53 np0005539563 podman[89246]: 2025-11-29 07:11:53.157047621 +0000 UTC m=+0.098455809 container attach 06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859 (image=quay.io/ceph/ceph:v18, name=thirsty_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:11:53 np0005539563 podman[89246]: 2025-11-29 07:11:53.084111192 +0000 UTC m=+0.025519370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:53 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 29 02:11:53 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:11:53 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14259 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:11:53 np0005539563 ceph-mgr[74636]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:11:53 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0[74334]: 2025-11-29T07:11:53.719+0000 7fce38e3c640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e2 new map
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:11:53.720209+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 29 02:11:53 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:53 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:11:53 np0005539563 eloquent_margulis[89261]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:11:53 np0005539563 eloquent_margulis[89261]: --> relative data size: 1.0
Nov 29 02:11:53 np0005539563 eloquent_margulis[89261]: --> All data devices are unavailable
Nov 29 02:11:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:53 np0005539563 ceph-mgr[74636]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 02:11:53 np0005539563 systemd[1]: libpod-903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422.scope: Deactivated successfully.
Nov 29 02:11:53 np0005539563 podman[89232]: 2025-11-29 07:11:53.991009152 +0000 UTC m=+0.986988743 container died 903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:54 np0005539563 systemd[1]: libpod-06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539563 podman[89246]: 2025-11-29 07:11:54.007621661 +0000 UTC m=+0.949029819 container died 06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859 (image=quay.io/ceph/ceph:v18, name=thirsty_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4f3541a4c563261717d28d718d84532f40203950fc188d0a127b22e4167490e8-merged.mount: Deactivated successfully.
Nov 29 02:11:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-21b36b8fe704b14ddcf7faa511816dc160d40da9e99580beacd5a694865b532d-merged.mount: Deactivated successfully.
Nov 29 02:11:54 np0005539563 podman[89232]: 2025-11-29 07:11:54.062577194 +0000 UTC m=+1.058556785 container remove 903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:11:54 np0005539563 podman[89246]: 2025-11-29 07:11:54.06837489 +0000 UTC m=+1.009783048 container remove 06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859 (image=quay.io/ceph/ceph:v18, name=thirsty_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:11:54 np0005539563 systemd[1]: libpod-conmon-903b326676ddee53799627d6b5d384b0efb6e8d8dc4839feb4dc3769937db422.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539563 systemd[1]: libpod-conmon-06755704305c407028eb2a0273b4f45589bb33d3cc7cdec7a2d6f0c7bb2ec859.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v107: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:54 np0005539563 python3[89398]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:54 np0005539563 podman[89451]: 2025-11-29 07:11:54.465754097 +0000 UTC m=+0.041231985 container create 4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba (image=quay.io/ceph/ceph:v18, name=distracted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:11:54 np0005539563 systemd[1]: Started libpod-conmon-4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba.scope.
Nov 29 02:11:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb2133ca9a5130d6a7db4f872888828d09b190bdef40f0d5666f99d1aa886e11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb2133ca9a5130d6a7db4f872888828d09b190bdef40f0d5666f99d1aa886e11/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb2133ca9a5130d6a7db4f872888828d09b190bdef40f0d5666f99d1aa886e11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539563 podman[89451]: 2025-11-29 07:11:54.540069543 +0000 UTC m=+0.115547451 container init 4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba (image=quay.io/ceph/ceph:v18, name=distracted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:54 np0005539563 podman[89451]: 2025-11-29 07:11:54.449708173 +0000 UTC m=+0.025186081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:54 np0005539563 podman[89451]: 2025-11-29 07:11:54.545927381 +0000 UTC m=+0.121405269 container start 4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba (image=quay.io/ceph/ceph:v18, name=distracted_haslett, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:54 np0005539563 podman[89451]: 2025-11-29 07:11:54.549582969 +0000 UTC m=+0.125060877 container attach 4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba (image=quay.io/ceph/ceph:v18, name=distracted_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.679931868 +0000 UTC m=+0.048729456 container create 63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:11:54 np0005539563 systemd[1]: Started libpod-conmon-63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6.scope.
Nov 29 02:11:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.749244649 +0000 UTC m=+0.118042257 container init 63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.654210133 +0000 UTC m=+0.023007741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.754822519 +0000 UTC m=+0.123620107 container start 63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:54 np0005539563 lucid_ishizaka[89528]: 167 167
Nov 29 02:11:54 np0005539563 systemd[1]: libpod-63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.75853944 +0000 UTC m=+0.127337048 container attach 63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.759102275 +0000 UTC m=+0.127899863 container died 63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-70235f43bbe713aae29d1c498a2a2c592667e06b9389e35b454890364092f290-merged.mount: Deactivated successfully.
Nov 29 02:11:54 np0005539563 podman[89511]: 2025-11-29 07:11:54.795505688 +0000 UTC m=+0.164303276 container remove 63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:54 np0005539563 systemd[1]: libpod-conmon-63fb371d039e0a6112beab97f9f9bbe676e0c1c2e0fbe71ba65b690ead5770e6.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539563 podman[89572]: 2025-11-29 07:11:54.93561279 +0000 UTC m=+0.038320066 container create 8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 02:11:54 np0005539563 systemd[1]: Started libpod-conmon-8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f.scope.
Nov 29 02:11:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16393c179b79ee8c7ae56e654c470e29dd1c9a2d9a4a6de10df2f51edad4152a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16393c179b79ee8c7ae56e654c470e29dd1c9a2d9a4a6de10df2f51edad4152a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16393c179b79ee8c7ae56e654c470e29dd1c9a2d9a4a6de10df2f51edad4152a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16393c179b79ee8c7ae56e654c470e29dd1c9a2d9a4a6de10df2f51edad4152a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:55 np0005539563 podman[89572]: 2025-11-29 07:11:55.009848713 +0000 UTC m=+0.112556009 container init 8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:55 np0005539563 podman[89572]: 2025-11-29 07:11:54.919603337 +0000 UTC m=+0.022310653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:55 np0005539563 podman[89572]: 2025-11-29 07:11:55.016089172 +0000 UTC m=+0.118796448 container start 8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:55 np0005539563 podman[89572]: 2025-11-29 07:11:55.022187397 +0000 UTC m=+0.124894673 container attach 8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:55 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:11:55 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:55 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:11:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:55 np0005539563 distracted_haslett[89467]: Scheduled mds.cephfs update...
Nov 29 02:11:55 np0005539563 systemd[1]: libpod-4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba.scope: Deactivated successfully.
Nov 29 02:11:55 np0005539563 podman[89451]: 2025-11-29 07:11:55.104790446 +0000 UTC m=+0.680268334 container died 4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba (image=quay.io/ceph/ceph:v18, name=distracted_haslett, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-eb2133ca9a5130d6a7db4f872888828d09b190bdef40f0d5666f99d1aa886e11-merged.mount: Deactivated successfully.
Nov 29 02:11:55 np0005539563 podman[89451]: 2025-11-29 07:11:55.144160649 +0000 UTC m=+0.719638537 container remove 4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba (image=quay.io/ceph/ceph:v18, name=distracted_haslett, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:11:55 np0005539563 systemd[1]: libpod-conmon-4bae3189063a46cf9ddcc15ffeefd62ebdce9b60372c91df289a7755aee1e0ba.scope: Deactivated successfully.
Nov 29 02:11:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"} v 0) v1
Nov 29 02:11:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 02:11:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 02:11:55 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 6 completed events
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]: {
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:    "0": [
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:        {
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "devices": [
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "/dev/loop3"
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            ],
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "lv_name": "ceph_lv0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "lv_size": "7511998464",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "name": "ceph_lv0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "tags": {
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.cluster_name": "ceph",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.crush_device_class": "",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.encrypted": "0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.osd_id": "0",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.type": "block",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:                "ceph.vdo": "0"
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            },
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "type": "block",
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:            "vg_name": "ceph_vg0"
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:        }
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]:    ]
Nov 29 02:11:55 np0005539563 nostalgic_fermat[89590]: }
Nov 29 02:11:55 np0005539563 systemd[1]: libpod-8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f.scope: Deactivated successfully.
Nov 29 02:11:55 np0005539563 podman[89572]: 2025-11-29 07:11:55.792185392 +0000 UTC m=+0.894892668 container died 8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:11:55 np0005539563 python3[89690]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]': finished
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:11:56 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.102:0/4274206403' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]: dispatch
Nov 29 02:11:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-16393c179b79ee8c7ae56e654c470e29dd1c9a2d9a4a6de10df2f51edad4152a-merged.mount: Deactivated successfully.
Nov 29 02:11:56 np0005539563 podman[89572]: 2025-11-29 07:11:56.31375857 +0000 UTC m=+1.416465846 container remove 8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:11:56 np0005539563 python3[89774]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400315.6082947-37480-162289628819278/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=b1c127dd74be8d747654d0d3f00b29a32faa6866 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 02:11:56 np0005539563 systemd[1]: libpod-conmon-8cb539dca56fda5e2857e6e4e50c44fc51fc877b174c0f39718d7f1fa22e8f5f.scope: Deactivated successfully.
Nov 29 02:11:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v109: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 02:11:56 np0005539563 python3[89925]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:56 np0005539563 podman[89954]: 2025-11-29 07:11:56.873806388 +0000 UTC m=+0.036975149 container create 3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87 (image=quay.io/ceph/ceph:v18, name=elastic_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:56 np0005539563 systemd[1]: Started libpod-conmon-3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87.scope.
Nov 29 02:11:56 np0005539563 podman[89978]: 2025-11-29 07:11:56.91351807 +0000 UTC m=+0.035846939 container create 3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:11:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f00ad71705d6619b0c8dd61cd850c6500543ebe7ea41d6f20fed669566c3289/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f00ad71705d6619b0c8dd61cd850c6500543ebe7ea41d6f20fed669566c3289/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:56 np0005539563 systemd[1]: Started libpod-conmon-3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e.scope.
Nov 29 02:11:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:56 np0005539563 podman[89954]: 2025-11-29 07:11:56.951231797 +0000 UTC m=+0.114400558 container init 3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87 (image=quay.io/ceph/ceph:v18, name=elastic_carver, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:56 np0005539563 podman[89954]: 2025-11-29 07:11:56.859321867 +0000 UTC m=+0.022490658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:56 np0005539563 podman[89954]: 2025-11-29 07:11:56.960125798 +0000 UTC m=+0.123294559 container start 3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87 (image=quay.io/ceph/ceph:v18, name=elastic_carver, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:56 np0005539563 podman[89978]: 2025-11-29 07:11:56.96208978 +0000 UTC m=+0.084418669 container init 3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:11:56 np0005539563 podman[89954]: 2025-11-29 07:11:56.966306215 +0000 UTC m=+0.129475056 container attach 3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87 (image=quay.io/ceph/ceph:v18, name=elastic_carver, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:56 np0005539563 podman[89978]: 2025-11-29 07:11:56.967145477 +0000 UTC m=+0.089474356 container start 3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:56 np0005539563 podman[89978]: 2025-11-29 07:11:56.970519438 +0000 UTC m=+0.092848307 container attach 3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:56 np0005539563 elegant_jang[90002]: 167 167
Nov 29 02:11:56 np0005539563 systemd[1]: libpod-3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e.scope: Deactivated successfully.
Nov 29 02:11:56 np0005539563 conmon[90002]: conmon 3633190dc5c18695eec8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e.scope/container/memory.events
Nov 29 02:11:56 np0005539563 podman[89978]: 2025-11-29 07:11:56.972691247 +0000 UTC m=+0.095020116 container died 3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:56 np0005539563 podman[89978]: 2025-11-29 07:11:56.897950859 +0000 UTC m=+0.020279748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d9f9c31380393cd9f0ce42b1a29cf6b56c44a2d00da5ad08e6ef635ac04f1eaa-merged.mount: Deactivated successfully.
Nov 29 02:11:57 np0005539563 podman[89978]: 2025-11-29 07:11:57.00913566 +0000 UTC m=+0.131464529 container remove 3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:11:57 np0005539563 systemd[1]: libpod-conmon-3633190dc5c18695eec85ca703f970fe8fedaa127ececc2434c490a08bc6294e.scope: Deactivated successfully.
Nov 29 02:11:57 np0005539563 podman[90028]: 2025-11-29 07:11:57.151210776 +0000 UTC m=+0.037925865 container create 3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:11:57 np0005539563 systemd[1]: Started libpod-conmon-3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e.scope.
Nov 29 02:11:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e6e2f4162734ade1bb6b5c1cbdaaddf06a72d0e146cbdd6785184dac4dd307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e6e2f4162734ade1bb6b5c1cbdaaddf06a72d0e146cbdd6785184dac4dd307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e6e2f4162734ade1bb6b5c1cbdaaddf06a72d0e146cbdd6785184dac4dd307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539563 podman[90028]: 2025-11-29 07:11:57.134482424 +0000 UTC m=+0.021197533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:11:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e6e2f4162734ade1bb6b5c1cbdaaddf06a72d0e146cbdd6785184dac4dd307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539563 podman[90028]: 2025-11-29 07:11:57.240484766 +0000 UTC m=+0.127199885 container init 3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:11:57 np0005539563 podman[90028]: 2025-11-29 07:11:57.24621575 +0000 UTC m=+0.132930839 container start 3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:11:57 np0005539563 podman[90028]: 2025-11-29 07:11:57.249390926 +0000 UTC m=+0.136106015 container attach 3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:11:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 02:11:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:11:57 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebea8b7f-6a60-41f3-b580-d449bc0d4887"}]': finished
Nov 29 02:11:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]: {
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:        "osd_id": 0,
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:        "type": "bluestore"
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]:    }
Nov 29 02:11:58 np0005539563 quirky_jennings[90045]: }
Nov 29 02:11:58 np0005539563 systemd[1]: libpod-3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e.scope: Deactivated successfully.
Nov 29 02:11:58 np0005539563 conmon[90045]: conmon 3e7c2438ba6692ba6562 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e.scope/container/memory.events
Nov 29 02:11:58 np0005539563 podman[90028]: 2025-11-29 07:11:58.065574957 +0000 UTC m=+0.952290056 container died 3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-76e6e2f4162734ade1bb6b5c1cbdaaddf06a72d0e146cbdd6785184dac4dd307-merged.mount: Deactivated successfully.
Nov 29 02:11:58 np0005539563 podman[90028]: 2025-11-29 07:11:58.116128012 +0000 UTC m=+1.002843101 container remove 3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:58 np0005539563 systemd[1]: libpod-conmon-3e7c2438ba6692ba656246e59e1547d36fd53f713ed82a56f70e3bc7e8c06d2e.scope: Deactivated successfully.
Nov 29 02:11:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:11:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v110: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:11:58 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 02:11:58 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 02:11:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:11:58 np0005539563 systemd[1]: libpod-3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87.scope: Deactivated successfully.
Nov 29 02:11:58 np0005539563 podman[89954]: 2025-11-29 07:11:58.552672745 +0000 UTC m=+1.715841526 container died 3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87 (image=quay.io/ceph/ceph:v18, name=elastic_carver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 02:11:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2f00ad71705d6619b0c8dd61cd850c6500543ebe7ea41d6f20fed669566c3289-merged.mount: Deactivated successfully.
Nov 29 02:11:58 np0005539563 podman[89954]: 2025-11-29 07:11:58.598186654 +0000 UTC m=+1.761355415 container remove 3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87 (image=quay.io/ceph/ceph:v18, name=elastic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:58 np0005539563 systemd[1]: libpod-conmon-3cd91b5d31c0db4dc19fa37e2f6eff750fdd8d0bb80e46d08e73696fe2676e87.scope: Deactivated successfully.
Nov 29 02:11:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:11:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:11:59 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:11:59 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3119064189' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:11:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:11:59 np0005539563 python3[90137]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:59 np0005539563 podman[90139]: 2025-11-29 07:11:59.381485018 +0000 UTC m=+0.037040601 container create 86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352 (image=quay.io/ceph/ceph:v18, name=nifty_shtern, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:11:59 np0005539563 systemd[1]: Started libpod-conmon-86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352.scope.
Nov 29 02:11:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:11:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c289e300aaf2c7987f1341bad5d0a198672020d091e1f4323e240e64417cc441/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c289e300aaf2c7987f1341bad5d0a198672020d091e1f4323e240e64417cc441/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539563 podman[90139]: 2025-11-29 07:11:59.452129934 +0000 UTC m=+0.107685537 container init 86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352 (image=quay.io/ceph/ceph:v18, name=nifty_shtern, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:11:59 np0005539563 podman[90139]: 2025-11-29 07:11:59.457944221 +0000 UTC m=+0.113499794 container start 86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352 (image=quay.io/ceph/ceph:v18, name=nifty_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:11:59 np0005539563 podman[90139]: 2025-11-29 07:11:59.365331531 +0000 UTC m=+0.020887144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:59 np0005539563 podman[90139]: 2025-11-29 07:11:59.461211299 +0000 UTC m=+0.116766902 container attach 86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352 (image=quay.io/ceph/ceph:v18, name=nifty_shtern, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:12:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:12:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376407688' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:12:00 np0005539563 nifty_shtern[90155]: 
Nov 29 02:12:00 np0005539563 nifty_shtern[90155]: {"fsid":"38a37ed2-442a-5e0d-a69a-881fdd186450","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":15,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":34,"num_osds":3,"num_up_osds":2,"osd_up_since":1764400253,"num_in_osds":3,"osd_in_since":1764400315,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56061952,"bytes_avail":14967934976,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-11-29T07:11:54.362494+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 29 02:12:00 np0005539563 systemd[1]: libpod-86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352.scope: Deactivated successfully.
Nov 29 02:12:00 np0005539563 podman[90139]: 2025-11-29 07:12:00.084036632 +0000 UTC m=+0.739592235 container died 86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352 (image=quay.io/ceph/ceph:v18, name=nifty_shtern, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:12:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c289e300aaf2c7987f1341bad5d0a198672020d091e1f4323e240e64417cc441-merged.mount: Deactivated successfully.
Nov 29 02:12:00 np0005539563 podman[90139]: 2025-11-29 07:12:00.121216705 +0000 UTC m=+0.776772288 container remove 86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352 (image=quay.io/ceph/ceph:v18, name=nifty_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:12:00 np0005539563 systemd[1]: libpod-conmon-86fe30ad6518a6b88c78bd4934f438ad244576761d2065708795ea2da3013352.scope: Deactivated successfully.
Nov 29 02:12:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v111: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:00 np0005539563 python3[90218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:00 np0005539563 podman[90219]: 2025-11-29 07:12:00.492855358 +0000 UTC m=+0.046645071 container create dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2 (image=quay.io/ceph/ceph:v18, name=hopeful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:12:00 np0005539563 systemd[1]: Started libpod-conmon-dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2.scope.
Nov 29 02:12:00 np0005539563 podman[90219]: 2025-11-29 07:12:00.472853898 +0000 UTC m=+0.026643601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db49dcb5c124f26d61d8cf14f7c15e3be47d43486caf430466335cf6c99f54b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db49dcb5c124f26d61d8cf14f7c15e3be47d43486caf430466335cf6c99f54b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:00 np0005539563 podman[90219]: 2025-11-29 07:12:00.585108538 +0000 UTC m=+0.138898231 container init dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2 (image=quay.io/ceph/ceph:v18, name=hopeful_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:00 np0005539563 podman[90219]: 2025-11-29 07:12:00.590713589 +0000 UTC m=+0.144503262 container start dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2 (image=quay.io/ceph/ceph:v18, name=hopeful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:00 np0005539563 podman[90219]: 2025-11-29 07:12:00.594771858 +0000 UTC m=+0.148561541 container attach dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2 (image=quay.io/ceph/ceph:v18, name=hopeful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:12:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3363120177' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:12:01 np0005539563 hopeful_blackburn[90234]: 
Nov 29 02:12:01 np0005539563 hopeful_blackburn[90234]: {"epoch":3,"fsid":"38a37ed2-442a-5e0d-a69a-881fdd186450","modified":"2025-11-29T07:11:38.796217Z","created":"2025-11-29T07:08:11.879965Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 29 02:12:01 np0005539563 hopeful_blackburn[90234]: dumped monmap epoch 3
Nov 29 02:12:01 np0005539563 systemd[1]: libpod-dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2.scope: Deactivated successfully.
Nov 29 02:12:01 np0005539563 podman[90219]: 2025-11-29 07:12:01.210939631 +0000 UTC m=+0.764729314 container died dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2 (image=quay.io/ceph/ceph:v18, name=hopeful_blackburn, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:12:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7db49dcb5c124f26d61d8cf14f7c15e3be47d43486caf430466335cf6c99f54b-merged.mount: Deactivated successfully.
Nov 29 02:12:01 np0005539563 podman[90219]: 2025-11-29 07:12:01.319048499 +0000 UTC m=+0.872838222 container remove dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2 (image=quay.io/ceph/ceph:v18, name=hopeful_blackburn, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:01 np0005539563 systemd[1]: libpod-conmon-dae2404195d27d0c59e1bd28b21149e25ff2f680a9ed5fe8622ade5abc1c82a2.scope: Deactivated successfully.
Nov 29 02:12:01 np0005539563 python3[90296]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:01 np0005539563 podman[90297]: 2025-11-29 07:12:01.962405045 +0000 UTC m=+0.042885438 container create ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98 (image=quay.io/ceph/ceph:v18, name=bold_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:01 np0005539563 systemd[1]: Started libpod-conmon-ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98.scope.
Nov 29 02:12:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00a567432c8486e4adf3b71a8166ef1a8e84a03a975bc749faf2931cd26f04d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00a567432c8486e4adf3b71a8166ef1a8e84a03a975bc749faf2931cd26f04d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:02 np0005539563 podman[90297]: 2025-11-29 07:12:02.032160908 +0000 UTC m=+0.112641301 container init ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98 (image=quay.io/ceph/ceph:v18, name=bold_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:12:02 np0005539563 podman[90297]: 2025-11-29 07:12:02.037828061 +0000 UTC m=+0.118308454 container start ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98 (image=quay.io/ceph/ceph:v18, name=bold_euclid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:02 np0005539563 podman[90297]: 2025-11-29 07:12:01.944860772 +0000 UTC m=+0.025341185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:02 np0005539563 podman[90297]: 2025-11-29 07:12:02.041297574 +0000 UTC m=+0.121777967 container attach ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98 (image=quay.io/ceph/ceph:v18, name=bold_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v112: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 02:12:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4263620903' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:12:02 np0005539563 bold_euclid[90312]: [client.openstack]
Nov 29 02:12:02 np0005539563 bold_euclid[90312]: #011key = AQC1myppAAAAABAAOniUNx/sXGx0vIXKyfUNbA==
Nov 29 02:12:02 np0005539563 bold_euclid[90312]: #011caps mgr = "allow *"
Nov 29 02:12:02 np0005539563 bold_euclid[90312]: #011caps mon = "profile rbd"
Nov 29 02:12:02 np0005539563 bold_euclid[90312]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 29 02:12:02 np0005539563 systemd[1]: libpod-ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98.scope: Deactivated successfully.
Nov 29 02:12:02 np0005539563 podman[90297]: 2025-11-29 07:12:02.651208448 +0000 UTC m=+0.731688841 container died ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98 (image=quay.io/ceph/ceph:v18, name=bold_euclid, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:12:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e00a567432c8486e4adf3b71a8166ef1a8e84a03a975bc749faf2931cd26f04d-merged.mount: Deactivated successfully.
Nov 29 02:12:02 np0005539563 podman[90297]: 2025-11-29 07:12:02.698275909 +0000 UTC m=+0.778756312 container remove ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98 (image=quay.io/ceph/ceph:v18, name=bold_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:12:02 np0005539563 systemd[1]: libpod-conmon-ead1eef53c691231d9cba5b764c8e8c723a06e80210f90bd136205da03f87c98.scope: Deactivated successfully.
Nov 29 02:12:03 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/4263620903' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:12:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v113: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:04 np0005539563 ansible-async_wrapper.py[90497]: Invoked with j930033796422 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400324.0277262-37552-96770943591670/AnsiballZ_command.py _
Nov 29 02:12:04 np0005539563 ansible-async_wrapper.py[90500]: Starting module and watcher
Nov 29 02:12:04 np0005539563 ansible-async_wrapper.py[90500]: Start watching 90501 (30)
Nov 29 02:12:04 np0005539563 ansible-async_wrapper.py[90501]: Start module (90501)
Nov 29 02:12:04 np0005539563 ansible-async_wrapper.py[90497]: Return async_wrapper task started.
Nov 29 02:12:04 np0005539563 python3[90502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:04 np0005539563 podman[90503]: 2025-11-29 07:12:04.745421739 +0000 UTC m=+0.060716401 container create 85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6 (image=quay.io/ceph/ceph:v18, name=friendly_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:04 np0005539563 systemd[1]: Started libpod-conmon-85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6.scope.
Nov 29 02:12:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c1a2bd6044a4cf61dcdc78ce01568de68c6da5d95d55a25df309a0570a6c18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c1a2bd6044a4cf61dcdc78ce01568de68c6da5d95d55a25df309a0570a6c18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:04 np0005539563 podman[90503]: 2025-11-29 07:12:04.722221952 +0000 UTC m=+0.037516624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:04 np0005539563 podman[90503]: 2025-11-29 07:12:04.822581581 +0000 UTC m=+0.137876253 container init 85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6 (image=quay.io/ceph/ceph:v18, name=friendly_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:04 np0005539563 podman[90503]: 2025-11-29 07:12:04.832865839 +0000 UTC m=+0.148160491 container start 85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6 (image=quay.io/ceph/ceph:v18, name=friendly_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:12:04 np0005539563 podman[90503]: 2025-11-29 07:12:04.837239546 +0000 UTC m=+0.152534198 container attach 85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6 (image=quay.io/ceph/ceph:v18, name=friendly_fermi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:05 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:12:05 np0005539563 friendly_fermi[90519]: 
Nov 29 02:12:05 np0005539563 friendly_fermi[90519]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 02:12:05 np0005539563 systemd[1]: libpod-85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6.scope: Deactivated successfully.
Nov 29 02:12:05 np0005539563 podman[90503]: 2025-11-29 07:12:05.421688403 +0000 UTC m=+0.736983055 container died 85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6 (image=quay.io/ceph/ceph:v18, name=friendly_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:12:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-32c1a2bd6044a4cf61dcdc78ce01568de68c6da5d95d55a25df309a0570a6c18-merged.mount: Deactivated successfully.
Nov 29 02:12:05 np0005539563 podman[90503]: 2025-11-29 07:12:05.465386943 +0000 UTC m=+0.780681625 container remove 85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6 (image=quay.io/ceph/ceph:v18, name=friendly_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:12:05 np0005539563 systemd[1]: libpod-conmon-85bdeef5d1df578c0ff788c094fac284137668efadae5af307c929774b72c5f6.scope: Deactivated successfully.
Nov 29 02:12:05 np0005539563 ansible-async_wrapper.py[90501]: Module complete (90501)
Nov 29 02:12:05 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.jjnjed 192.168.122.101:0/4101010653; not ready for session (expect reconnect)
Nov 29 02:12:05 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.jjnjed started
Nov 29 02:12:05 np0005539563 python3[90603]: ansible-ansible.legacy.async_status Invoked with jid=j930033796422.90497 mode=status _async_dir=/root/.ansible_async
Nov 29 02:12:06 np0005539563 python3[90652]: ansible-ansible.legacy.async_status Invoked with jid=j930033796422.90497 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:12:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v114: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:06 np0005539563 python3[90678]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:06 np0005539563 podman[90679]: 2025-11-29 07:12:06.799429112 +0000 UTC m=+0.038430648 container create f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e (image=quay.io/ceph/ceph:v18, name=elegant_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:06 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.jjnjed 192.168.122.101:0/4101010653; not ready for session (expect reconnect)
Nov 29 02:12:06 np0005539563 systemd[1]: Started libpod-conmon-f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e.scope.
Nov 29 02:12:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54bc3c3288ad90aa1b2beb9660a8591bb6f1595c973c97183909923609bd161/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54bc3c3288ad90aa1b2beb9660a8591bb6f1595c973c97183909923609bd161/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:06 np0005539563 podman[90679]: 2025-11-29 07:12:06.85825638 +0000 UTC m=+0.097257916 container init f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e (image=quay.io/ceph/ceph:v18, name=elegant_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:06 np0005539563 podman[90679]: 2025-11-29 07:12:06.864011236 +0000 UTC m=+0.103012772 container start f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e (image=quay.io/ceph/ceph:v18, name=elegant_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:06 np0005539563 podman[90679]: 2025-11-29 07:12:06.867374686 +0000 UTC m=+0.106376252 container attach f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e (image=quay.io/ceph/ceph:v18, name=elegant_chandrasekhar, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:06 np0005539563 podman[90679]: 2025-11-29 07:12:06.78343272 +0000 UTC m=+0.022434266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:06 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.vyxqrz 192.168.122.102:0/1037819506; not ready for session (expect reconnect)
Nov 29 02:12:07 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:12:07 np0005539563 elegant_chandrasekhar[90694]: 
Nov 29 02:12:07 np0005539563 elegant_chandrasekhar[90694]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 02:12:07 np0005539563 systemd[1]: libpod-f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e.scope: Deactivated successfully.
Nov 29 02:12:07 np0005539563 podman[90679]: 2025-11-29 07:12:07.436961951 +0000 UTC m=+0.675963517 container died f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e (image=quay.io/ceph/ceph:v18, name=elegant_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:12:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.vyxqrz started
Nov 29 02:12:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.rotard(active, since 2m), standbys: compute-1.jjnjed
Nov 29 02:12:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.jjnjed", "id": "compute-1.jjnjed"} v 0) v1
Nov 29 02:12:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr metadata", "who": "compute-1.jjnjed", "id": "compute-1.jjnjed"}]: dispatch
Nov 29 02:12:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d54bc3c3288ad90aa1b2beb9660a8591bb6f1595c973c97183909923609bd161-merged.mount: Deactivated successfully.
Nov 29 02:12:07 np0005539563 podman[90679]: 2025-11-29 07:12:07.827517864 +0000 UTC m=+1.066519410 container remove f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e (image=quay.io/ceph/ceph:v18, name=elegant_chandrasekhar, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:07 np0005539563 systemd[1]: libpod-conmon-f84a40995072e326f5d2a506b275a88801efb0e5ed6ee67dd1b81204ddb5d88e.scope: Deactivated successfully.
Nov 29 02:12:07 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.vyxqrz 192.168.122.102:0/1037819506; not ready for session (expect reconnect)
Nov 29 02:12:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v115: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:08 np0005539563 python3[90755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:08 np0005539563 podman[90756]: 2025-11-29 07:12:08.836557681 +0000 UTC m=+0.036026773 container create 76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4 (image=quay.io/ceph/ceph:v18, name=infallible_booth, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:08 np0005539563 systemd[1]: Started libpod-conmon-76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4.scope.
Nov 29 02:12:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cec7c84815dac27407921bc09dc6477ad6c4a0ed826b4b39c82347609131802/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cec7c84815dac27407921bc09dc6477ad6c4a0ed826b4b39c82347609131802/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:08 np0005539563 podman[90756]: 2025-11-29 07:12:08.912806349 +0000 UTC m=+0.112275461 container init 76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4 (image=quay.io/ceph/ceph:v18, name=infallible_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:08 np0005539563 podman[90756]: 2025-11-29 07:12:08.821716541 +0000 UTC m=+0.021185653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:08 np0005539563 podman[90756]: 2025-11-29 07:12:08.919224033 +0000 UTC m=+0.118693135 container start 76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4 (image=quay.io/ceph/ceph:v18, name=infallible_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 2m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:12:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.vyxqrz", "id": "compute-2.vyxqrz"} v 0) v1
Nov 29 02:12:08 np0005539563 podman[90756]: 2025-11-29 07:12:08.922943633 +0000 UTC m=+0.122412725 container attach 76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4 (image=quay.io/ceph/ceph:v18, name=infallible_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr metadata", "who": "compute-2.vyxqrz", "id": "compute-2.vyxqrz"}]: dispatch
Nov 29 02:12:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 02:12:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:12:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:09 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 29 02:12:09 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Nov 29 02:12:09 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:12:09 np0005539563 infallible_booth[90772]: 
Nov 29 02:12:09 np0005539563 infallible_booth[90772]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 29 02:12:09 np0005539563 systemd[1]: libpod-76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4.scope: Deactivated successfully.
Nov 29 02:12:09 np0005539563 podman[90756]: 2025-11-29 07:12:09.523303489 +0000 UTC m=+0.722772621 container died 76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4 (image=quay.io/ceph/ceph:v18, name=infallible_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:09 np0005539563 ansible-async_wrapper.py[90500]: Done in kid B.
Nov 29 02:12:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7cec7c84815dac27407921bc09dc6477ad6c4a0ed826b4b39c82347609131802-merged.mount: Deactivated successfully.
Nov 29 02:12:09 np0005539563 podman[90756]: 2025-11-29 07:12:09.571982174 +0000 UTC m=+0.771451276 container remove 76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4 (image=quay.io/ceph/ceph:v18, name=infallible_booth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:09 np0005539563 systemd[1]: libpod-conmon-76347e0d6bfb49892ba8df0594d77f19a7b360e57963b8c831ab2c94f45a80a4.scope: Deactivated successfully.
Nov 29 02:12:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:12:10 np0005539563 ceph-mon[74338]: Deploying daemon osd.2 on compute-2
Nov 29 02:12:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v116: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:10 np0005539563 python3[90834]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:10 np0005539563 podman[90835]: 2025-11-29 07:12:10.526237801 +0000 UTC m=+0.035814878 container create 886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e (image=quay.io/ceph/ceph:v18, name=suspicious_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:10 np0005539563 systemd[1]: Started libpod-conmon-886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e.scope.
Nov 29 02:12:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ba1b8f274f00a7f197eeb418245aabc58da130edbc70357db22d98048950bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ba1b8f274f00a7f197eeb418245aabc58da130edbc70357db22d98048950bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:10 np0005539563 podman[90835]: 2025-11-29 07:12:10.510258601 +0000 UTC m=+0.019835678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:10 np0005539563 podman[90835]: 2025-11-29 07:12:10.607158326 +0000 UTC m=+0.116735433 container init 886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e (image=quay.io/ceph/ceph:v18, name=suspicious_booth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:12:10 np0005539563 podman[90835]: 2025-11-29 07:12:10.613485516 +0000 UTC m=+0.123062563 container start 886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e (image=quay.io/ceph/ceph:v18, name=suspicious_booth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:12:10 np0005539563 podman[90835]: 2025-11-29 07:12:10.616513918 +0000 UTC m=+0.126091015 container attach 886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e (image=quay.io/ceph/ceph:v18, name=suspicious_booth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:11 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:12:11 np0005539563 suspicious_booth[90850]: 
Nov 29 02:12:11 np0005539563 suspicious_booth[90850]: [{"container_id": "14251f2dd995", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.64%", "created": "2025-11-29T07:09:46.778895Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T07:09:46.851617Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:10:49.119265Z", "memory_usage": 11649679, "ports": [], "service_name": "crash", "started": "2025-11-29T07:09:46.295417Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@crash.compute-0", "version": "18.2.7"}, {"container_id": "9ac4e9705f99", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.38%", "created": "2025-11-29T07:10:27.936248Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-11-29T07:10:28.005973Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T07:10:48.578123Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2025-11-29T07:10:27.806962Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@crash.compute-1", "version": "18.2.7"}, {"daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2025-11-29T07:11:52.238503Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "crash", "status": 2, "status_desc": "starting"}, {"container_id": "2a2429c70809", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "37.41%", "created": "2025-11-29T07:08:19.076458Z", "daemon_id": "compute-0.rotard", "daemon_name": "mgr.compute-0.rotard", "daemon_type": "mgr", "events": ["2025-11-29T07:09:51.331861Z daemon:mgr.compute-0.rotard [INFO] \"Reconfigured mgr.compute-0.rotard on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:10:49.119202Z", "memory_usage": 546832384, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T07:08:18.853601Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@mgr.compute-0.rotard", "version": "18.2.7"}, {"daemon_id": "compute-1.jjnjed", "daemon_name": "mgr.compute-1.jjnjed", "daemon_type": "mgr", "events": ["2025-11-29T07:11:47.198102Z daemon:mgr.compute-1.jjnjed [INFO] \"Deployed mgr.compute-1.jjnjed on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2.vyxqrz", "daemon_name": "mgr.compute-2.vyxqrz", "daemon_type": "mgr", "events": ["2025-11-29T07:11:44.277973Z daemon:mgr.compute-2.vyxqrz [INFO] \"Deployed mgr.compute-2.vyxqrz on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"container_id": "ed38017505ea", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.33%", "created": "2025-11-29T07:08:13.913319Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T07:09:50.720319Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:10:49.119117Z", "memory_request": 2147483648, "memory_usage": 33344716, "ports": [], "service_name": "mon", "started": "2025-11-29T07:08:16.753633Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@mon.compute-0", "version": "18.2.7"}, {"daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-11-29T07:11:37.650031Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2025-11-29T07:11:32.219974Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"container_id": "95d4b7aa7fd5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "8.26%", "created": "2025-11-29T07:10:42.732505Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-29T07:10:42.792617Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:10:49.119325Z", "memory_request": 4294967296, "memory_usage": 34372321, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:10:42.578476Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@osd.0", "version": "18.2.7"}, {"container_id": "96efbbbc4edb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "6.93%", "created": "2025-11-29T07:10:44.853667Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T07:10:44.903971Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T07:10:48.578238Z", "memory_request": 5502926848, "memory_usage": 26203914, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:10:44.761699Z", 
"status": 1, "status_desc": "running", "systemd_unit": "ceph-38a37ed2-442a-5e0d-a69a-881fdd186450@osd.1", "version": "18.2.7"}]
Nov 29 02:12:11 np0005539563 systemd[1]: libpod-886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e.scope: Deactivated successfully.
Nov 29 02:12:11 np0005539563 podman[90875]: 2025-11-29 07:12:11.201267543 +0000 UTC m=+0.029135697 container died 886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e (image=quay.io/ceph/ceph:v18, name=suspicious_booth, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:12:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a3ba1b8f274f00a7f197eeb418245aabc58da130edbc70357db22d98048950bd-merged.mount: Deactivated successfully.
Nov 29 02:12:11 np0005539563 podman[90875]: 2025-11-29 07:12:11.253338259 +0000 UTC m=+0.081206353 container remove 886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e (image=quay.io/ceph/ceph:v18, name=suspicious_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:11 np0005539563 systemd[1]: libpod-conmon-886f856f7e76fcae09c1fddb7b2080785daff1429c26772047732ffe234de58e.scope: Deactivated successfully.
Nov 29 02:12:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Nov 29 02:12:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Nov 29 02:12:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v117: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:12:12
Nov 29 02:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 29 02:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:12:13 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 29 02:12:13 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:12:13 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:12:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:12:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v118: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:14 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:14 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 7b36c275-2ec9-40f2-9552-20c7fef323e3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:12:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:15 np0005539563 python3[90915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:15 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:15 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev d82fa343-4a65-4ee4-b35b-7f76fc50b75c (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:12:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:15 np0005539563 podman[90916]: 2025-11-29 07:12:15.67010578 +0000 UTC m=+0.035205711 container create 305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9 (image=quay.io/ceph/ceph:v18, name=boring_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:15 np0005539563 systemd[1]: Started libpod-conmon-305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9.scope.
Nov 29 02:12:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425835435e859fac8d792583216ec727f57f6ec56e0fbf5e25ec1b907a832f61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425835435e859fac8d792583216ec727f57f6ec56e0fbf5e25ec1b907a832f61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:15 np0005539563 podman[90916]: 2025-11-29 07:12:15.737265333 +0000 UTC m=+0.102365314 container init 305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9 (image=quay.io/ceph/ceph:v18, name=boring_sutherland, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:15 np0005539563 podman[90916]: 2025-11-29 07:12:15.746817201 +0000 UTC m=+0.111917092 container start 305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9 (image=quay.io/ceph/ceph:v18, name=boring_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:15 np0005539563 podman[90916]: 2025-11-29 07:12:15.751671133 +0000 UTC m=+0.116771114 container attach 305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9 (image=quay.io/ceph/ceph:v18, name=boring_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:15 np0005539563 podman[90916]: 2025-11-29 07:12:15.655049694 +0000 UTC m=+0.020149605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295273094' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:12:16 np0005539563 boring_sutherland[90931]: 
Nov 29 02:12:16 np0005539563 boring_sutherland[90931]: {"fsid":"38a37ed2-442a-5e0d-a69a-881fdd186450","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":32,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":2,"osd_up_since":1764400253,"num_in_osds":3,"osd_in_since":1764400315,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56086528,"bytes_avail":14967910400,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-11-29T07:11:54.362494+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 29 02:12:16 np0005539563 systemd[1]: libpod-305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9.scope: Deactivated successfully.
Nov 29 02:12:16 np0005539563 podman[90916]: 2025-11-29 07:12:16.343034895 +0000 UTC m=+0.708134786 container died 305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9 (image=quay.io/ceph/ceph:v18, name=boring_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:12:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-425835435e859fac8d792583216ec727f57f6ec56e0fbf5e25ec1b907a832f61-merged.mount: Deactivated successfully.
Nov 29 02:12:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v121: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539563 podman[90916]: 2025-11-29 07:12:16.392905621 +0000 UTC m=+0.758005512 container remove 305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9 (image=quay.io/ceph/ceph:v18, name=boring_sutherland, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:12:16 np0005539563 systemd[1]: libpod-conmon-305054dacd839696c5a478ae550286bb4468053cdba99e065a86f04a252652b9.scope: Deactivated successfully.
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:16 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:16 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev ce3f1899-f494-40af-adf8-16cdf69777a8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 37 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=37 pruub=10.929883957s) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active pruub 103.670471191s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=12.956590652s) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 105.697166443s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=12.956590652s) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 105.697166443s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 37 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=37 pruub=10.929883957s) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown pruub 103.670471191s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:17 np0005539563 python3[90995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:17 np0005539563 podman[90996]: 2025-11-29 07:12:17.400433037 +0000 UTC m=+0.038810379 container create 2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466 (image=quay.io/ceph/ceph:v18, name=wizardly_jepsen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:17 np0005539563 systemd[1]: Started libpod-conmon-2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466.scope.
Nov 29 02:12:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:17 np0005539563 podman[90996]: 2025-11-29 07:12:17.382212116 +0000 UTC m=+0.020589438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9fe00bfde87cebea4409b44dcbe3e02a2bb0fcc100431bd8064cb8bca72324/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9fe00bfde87cebea4409b44dcbe3e02a2bb0fcc100431bd8064cb8bca72324/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:17 np0005539563 podman[90996]: 2025-11-29 07:12:17.500144529 +0000 UTC m=+0.138521881 container init 2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466 (image=quay.io/ceph/ceph:v18, name=wizardly_jepsen, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:17 np0005539563 podman[90996]: 2025-11-29 07:12:17.512468211 +0000 UTC m=+0.150845523 container start 2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466 (image=quay.io/ceph/ceph:v18, name=wizardly_jepsen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:12:17 np0005539563 podman[90996]: 2025-11-29 07:12:17.516077268 +0000 UTC m=+0.154454570 container attach 2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466 (image=quay.io/ceph/ceph:v18, name=wizardly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:17 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:17 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 0e66acb3-8e9d-476a-bed5-8427feafdc5d (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.9( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.a( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.4( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.2( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1a( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.18( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.14( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.12( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.11( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.d( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.3( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1e( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.7( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.6( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.5( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.b( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.8( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1f( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.c( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.f( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.e( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.10( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.13( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.15( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.16( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.17( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.19( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1b( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1c( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1d( empty local-lis/les=16/17 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:17 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=16/16 les/c/f=17/17/0 sis=37) [0] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3034131701' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:12:18 np0005539563 wizardly_jepsen[91011]: 
Nov 29 02:12:18 np0005539563 wizardly_jepsen[91011]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502926848","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Nov 29 02:12:18 np0005539563 systemd[1]: libpod-2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466.scope: Deactivated successfully.
Nov 29 02:12:18 np0005539563 podman[91036]: 2025-11-29 07:12:18.106819044 +0000 UTC m=+0.026301040 container died 2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466 (image=quay.io/ceph/ceph:v18, name=wizardly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:12:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8f9fe00bfde87cebea4409b44dcbe3e02a2bb0fcc100431bd8064cb8bca72324-merged.mount: Deactivated successfully.
Nov 29 02:12:18 np0005539563 podman[91036]: 2025-11-29 07:12:18.154359208 +0000 UTC m=+0.073841174 container remove 2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466 (image=quay.io/ceph/ceph:v18, name=wizardly_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:12:18 np0005539563 systemd[1]: libpod-conmon-2182d579ddb9ed53453ef313e0d8bd70dc1f095595d54a2355f7b87c6926c466.scope: Deactivated successfully.
Nov 29 02:12:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 02:12:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v124: 100 pgs: 2 peering, 62 unknown, 36 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e39 create-or-move crush item name 'osd.2' initial_weight 0.0068 at location {host=compute-2,root=default}
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 0455ea0d-fc6f-40b4-ba3e-f8451ef10178 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 7b36c275-2ec9-40f2-9552-20c7fef323e3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 7b36c275-2ec9-40f2-9552-20c7fef323e3 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev d82fa343-4a65-4ee4-b35b-7f76fc50b75c (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event d82fa343-4a65-4ee4-b35b-7f76fc50b75c (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev ce3f1899-f494-40af-adf8-16cdf69777a8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event ce3f1899-f494-40af-adf8-16cdf69777a8 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 0e66acb3-8e9d-476a-bed5-8427feafdc5d (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 0e66acb3-8e9d-476a-bed5-8427feafdc5d (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 0455ea0d-fc6f-40b4-ba3e-f8451ef10178 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 02:12:18 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 0455ea0d-fc6f-40b4-ba3e-f8451ef10178 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:18 np0005539563 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:12:19 np0005539563 python3[91077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:19 np0005539563 podman[91078]: 2025-11-29 07:12:19.19321507 +0000 UTC m=+0.046031433 container create a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418 (image=quay.io/ceph/ceph:v18, name=determined_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:12:19 np0005539563 systemd[1]: Started libpod-conmon-a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418.scope.
Nov 29 02:12:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592ecad44bbce3e5ca58d94f643cd5fe85590e763644b28471079c2c737e458c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592ecad44bbce3e5ca58d94f643cd5fe85590e763644b28471079c2c737e458c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:19 np0005539563 podman[91078]: 2025-11-29 07:12:19.174198606 +0000 UTC m=+0.027015009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:19 np0005539563 podman[91078]: 2025-11-29 07:12:19.394451752 +0000 UTC m=+0.247268165 container init a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418 (image=quay.io/ceph/ceph:v18, name=determined_lamport, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:12:19 np0005539563 podman[91078]: 2025-11-29 07:12:19.401175463 +0000 UTC m=+0.253991866 container start a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418 (image=quay.io/ceph/ceph:v18, name=determined_lamport, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:19 np0005539563 podman[91078]: 2025-11-29 07:12:19.405395898 +0000 UTC m=+0.258212321 container attach a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418 (image=quay.io/ceph/ceph:v18, name=determined_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:19 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:19 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: from='osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:19 np0005539563 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:12:19 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208717030' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 02:12:20 np0005539563 determined_lamport[91094]: mimic
Nov 29 02:12:20 np0005539563 systemd[1]: libpod-a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418.scope: Deactivated successfully.
Nov 29 02:12:20 np0005539563 podman[91078]: 2025-11-29 07:12:20.027017017 +0000 UTC m=+0.879833380 container died a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418 (image=quay.io/ceph/ceph:v18, name=determined_lamport, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:12:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-592ecad44bbce3e5ca58d94f643cd5fe85590e763644b28471079c2c737e458c-merged.mount: Deactivated successfully.
Nov 29 02:12:20 np0005539563 podman[91078]: 2025-11-29 07:12:20.069662748 +0000 UTC m=+0.922479111 container remove a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418 (image=quay.io/ceph/ceph:v18, name=determined_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:12:20 np0005539563 systemd[1]: libpod-conmon-a84bf9471ae77d0ec82d58fa80b45595fc47343ed591be311a063370eba03418.scope: Deactivated successfully.
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.634828568s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.767044067s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.634816170s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.767044067s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.634828568s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.767044067s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639621735s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.771934509s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639621735s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.771934509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639678955s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772056580s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639678955s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772056580s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639586449s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772079468s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639586449s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772079468s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.634816170s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.767044067s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639539719s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772102356s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639539719s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772102356s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639501572s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772163391s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639501572s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772163391s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.1b( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.051496506s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.184288025s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.1b( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.051496506s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.184288025s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639395714s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772239685s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639395714s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772239685s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.10( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.051410675s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.184371948s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.10( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.051410675s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.184371948s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639180183s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772277832s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638951302s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772178650s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.15( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.051583290s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.184318542s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.15( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.051583290s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.184318542s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638951302s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772178650s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.c( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.050890923s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.184280396s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.c( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.050890923s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.184280396s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638898849s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772323608s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638898849s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772323608s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.0( empty local-lis/les=37/38 n=0 ec=16/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638762474s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772285461s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.0( empty local-lis/les=37/38 n=0 ec=16/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638762474s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772285461s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638756752s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772369385s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638756752s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772369385s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.046937943s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.180610657s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.046937943s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.180610657s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638522148s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772430420s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.639180183s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772277832s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638376236s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772415161s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638376236s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772415161s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 39 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=11.584877014s) [0] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active pruub 107.718963623s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638299942s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772445679s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638299942s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772445679s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=13.609316826s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 109.743553162s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.638522148s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772430420s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637904167s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772453308s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637904167s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772453308s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.a( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.050097466s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.184722900s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.a( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.050097466s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.184722900s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637804985s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772460938s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637804985s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772460938s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637731552s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772514343s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637731552s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772514343s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637668610s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772552490s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637668610s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772552490s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.d( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.049636841s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 active pruub 109.184638977s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[2.d( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=40 pruub=13.049636841s) [] r=-1 lpr=40 pi=[27,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.184638977s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637395859s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772598267s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637397766s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772644043s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637397766s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772644043s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637395859s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772598267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637223244s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772598267s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637223244s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772598267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637232780s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772689819s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637232780s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772689819s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637256622s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772743225s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637256622s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772743225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637210846s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772766113s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637210846s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772766113s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637235641s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772857666s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637137413s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772834778s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637137413s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772834778s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=13.609316826s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 109.743553162s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636939049s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772819519s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636939049s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772819519s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636898041s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772903442s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636898041s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772903442s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636833191s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772979736s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636833191s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772979736s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636766434s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 109.772956848s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.637235641s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772857666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=13.636766434s) [] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.772956848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=40 pruub=11.581470490s) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown pruub 107.718963623s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=40 pruub=11.581470490s) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.718963623s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.6( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.6( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.8( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.8( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.9( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.9( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.10( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.10( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.11( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.11( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.12( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.13( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.13( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.12( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.16( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.16( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.17( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.17( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.18( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.18( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.19( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.19( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:20 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 40 pg[5.1f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=40) [] r=-1 lpr=40 pi=[20,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v127: 146 pgs: 2 peering, 108 unknown, 36 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:20 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:20 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 02:12:21 np0005539563 python3[91338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:21 np0005539563 podman[91339]: 2025-11-29 07:12:21.203881034 +0000 UTC m=+0.035274183 container create 6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f (image=quay.io/ceph/ceph:v18, name=exciting_turing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:12:21 np0005539563 systemd[1]: Started libpod-conmon-6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f.scope.
Nov 29 02:12:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1949630116291465152fe0d36b005ec8754d1df0cfcda3839c06745dd0a68c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db1949630116291465152fe0d36b005ec8754d1df0cfcda3839c06745dd0a68c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:21 np0005539563 podman[91339]: 2025-11-29 07:12:21.27892679 +0000 UTC m=+0.110319949 container init 6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f (image=quay.io/ceph/ceph:v18, name=exciting_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:21 np0005539563 podman[91339]: 2025-11-29 07:12:21.284746027 +0000 UTC m=+0.116139176 container start 6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f (image=quay.io/ceph/ceph:v18, name=exciting_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:21 np0005539563 podman[91339]: 2025-11-29 07:12:21.189026163 +0000 UTC m=+0.020419332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:21 np0005539563 podman[91339]: 2025-11-29 07:12:21.287647536 +0000 UTC m=+0.119040695 container attach 6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f (image=quay.io/ceph/ceph:v18, name=exciting_turing, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:21 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 11 completed events
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e41 e41: 3 total, 2 up, 3 in
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 2 up, 3 in
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:21 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.f( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.c( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.4( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.6( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=39/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.9( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.b( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 41 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:21 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479633516' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 02:12:22 np0005539563 exciting_turing[91354]: 
Nov 29 02:12:22 np0005539563 systemd[1]: libpod-6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f.scope: Deactivated successfully.
Nov 29 02:12:22 np0005539563 exciting_turing[91354]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":8}}
Nov 29 02:12:22 np0005539563 podman[91339]: 2025-11-29 07:12:22.178338209 +0000 UTC m=+1.009731368 container died 6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f (image=quay.io/ceph/ceph:v18, name=exciting_turing, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-db1949630116291465152fe0d36b005ec8754d1df0cfcda3839c06745dd0a68c-merged.mount: Deactivated successfully.
Nov 29 02:12:22 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 29 02:12:22 np0005539563 podman[91339]: 2025-11-29 07:12:22.289938621 +0000 UTC m=+1.121331770 container remove 6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f (image=quay.io/ceph/ceph:v18, name=exciting_turing, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:22 np0005539563 systemd[1]: libpod-conmon-6df49e0930bd9b750b7dc9d0c8666a7108c3c4d5531d58c253f9ebe20c1f346f.scope: Deactivated successfully.
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 2 peering, 139 unknown, 36 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:22 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:22 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:23 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 02:12:23 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 02:12:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 02:12:23 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:23 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e42 e42: 3 total, 2 up, 3 in
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 2 up, 3 in
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:24 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 2 peering, 93 unknown, 82 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:24 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:24 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:25 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:25 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 2 peering, 93 unknown, 82 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:26 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:26 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:27 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 02:12:27 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:27 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 2 peering, 93 unknown, 82 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:12:28 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:28 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:28 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:28 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:29 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:29 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:29 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:29 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:29 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 02:12:29 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 02:12:29 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:29 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:30 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 29 02:12:30 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 29 02:12:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 1 peering, 62 unknown, 114 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:12:30 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:30 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/38a37ed2-442a-5e0d-a69a-881fdd186450/config/ceph.conf
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:31 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 22007887-0ff5-4ca8-95ce-4e3fe3779854 does not exist
Nov 29 02:12:31 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7e0df8a6-925c-48a1-a2d1-aea92f0ed4f1 does not exist
Nov 29 02:12:31 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9a43ba5d-d189-408c-81ff-1a4ea7618369 does not exist
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:31 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:31 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.230197042 +0000 UTC m=+0.040482174 container create 9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_matsumoto, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:12:32 np0005539563 systemd[1]: Started libpod-conmon-9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424.scope.
Nov 29 02:12:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.301064894 +0000 UTC m=+0.111350056 container init 9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_matsumoto, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.306770008 +0000 UTC m=+0.117055140 container start 9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_matsumoto, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.211813945 +0000 UTC m=+0.022099097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.310081928 +0000 UTC m=+0.120367060 container attach 9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_matsumoto, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:12:32 np0005539563 stoic_matsumoto[92392]: 167 167
Nov 29 02:12:32 np0005539563 systemd[1]: libpod-9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424.scope: Deactivated successfully.
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.312341509 +0000 UTC m=+0.122626641 container died 9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_matsumoto, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:12:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f6dd4f41a952f4978e96f530db64d58a8f9769d4607ad04c64c7780062d101d3-merged.mount: Deactivated successfully.
Nov 29 02:12:32 np0005539563 podman[92376]: 2025-11-29 07:12:32.344914898 +0000 UTC m=+0.155200030 container remove 9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_matsumoto, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:12:32 np0005539563 systemd[1]: libpod-conmon-9bcf081071c5f6a55f4bec791863a30b79d2a9dad3d668d544075a1b605ad424.scope: Deactivated successfully.
Nov 29 02:12:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 1 peering, 62 unknown, 114 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:12:32 np0005539563 podman[92415]: 2025-11-29 07:12:32.476074139 +0000 UTC m=+0.036716333 container create e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:32 np0005539563 systemd[1]: Started libpod-conmon-e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b.scope.
Nov 29 02:12:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458a84d5380a259fdb574fa5f24150ecfbc552849c646b5575811b7d7cc71b2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458a84d5380a259fdb574fa5f24150ecfbc552849c646b5575811b7d7cc71b2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458a84d5380a259fdb574fa5f24150ecfbc552849c646b5575811b7d7cc71b2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458a84d5380a259fdb574fa5f24150ecfbc552849c646b5575811b7d7cc71b2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/458a84d5380a259fdb574fa5f24150ecfbc552849c646b5575811b7d7cc71b2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539563 podman[92415]: 2025-11-29 07:12:32.545180264 +0000 UTC m=+0.105822478 container init e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:12:32 np0005539563 podman[92415]: 2025-11-29 07:12:32.552913723 +0000 UTC m=+0.113555917 container start e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:32 np0005539563 podman[92415]: 2025-11-29 07:12:32.555967345 +0000 UTC m=+0.116609539 container attach e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:32 np0005539563 podman[92415]: 2025-11-29 07:12:32.46170413 +0000 UTC m=+0.022346344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:32 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:32 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:33 np0005539563 zen_meitner[92432]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:12:33 np0005539563 zen_meitner[92432]: --> relative data size: 1.0
Nov 29 02:12:33 np0005539563 zen_meitner[92432]: --> All data devices are unavailable
Nov 29 02:12:33 np0005539563 systemd[1]: libpod-e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b.scope: Deactivated successfully.
Nov 29 02:12:33 np0005539563 podman[92415]: 2025-11-29 07:12:33.325121277 +0000 UTC m=+0.885763481 container died e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:12:33 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 02:12:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-458a84d5380a259fdb574fa5f24150ecfbc552849c646b5575811b7d7cc71b2c-merged.mount: Deactivated successfully.
Nov 29 02:12:33 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 02:12:33 np0005539563 podman[92415]: 2025-11-29 07:12:33.389405093 +0000 UTC m=+0.950047287 container remove e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_meitner, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:12:33 np0005539563 systemd[1]: libpod-conmon-e05d6d57f393d2aa4d3485e57e039e0979abf05e1aa0473c1551df23269a7a6b.scope: Deactivated successfully.
Nov 29 02:12:33 np0005539563 podman[92601]: 2025-11-29 07:12:33.942537053 +0000 UTC m=+0.038403748 container create 2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lederberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:12:33 np0005539563 systemd[1]: Started libpod-conmon-2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342.scope.
Nov 29 02:12:33 np0005539563 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1730612232; not ready for session (expect reconnect)
Nov 29 02:12:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:33 np0005539563 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:12:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:34 np0005539563 podman[92601]: 2025-11-29 07:12:33.92651844 +0000 UTC m=+0.022385155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:34 np0005539563 podman[92601]: 2025-11-29 07:12:34.061217966 +0000 UTC m=+0.157084701 container init 2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:34 np0005539563 podman[92601]: 2025-11-29 07:12:34.068866713 +0000 UTC m=+0.164733408 container start 2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:34 np0005539563 podman[92601]: 2025-11-29 07:12:34.07355604 +0000 UTC m=+0.169422795 container attach 2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:34 np0005539563 sleepy_lederberg[92618]: 167 167
Nov 29 02:12:34 np0005539563 systemd[1]: libpod-2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342.scope: Deactivated successfully.
Nov 29 02:12:34 np0005539563 podman[92601]: 2025-11-29 07:12:34.074928916 +0000 UTC m=+0.170795631 container died 2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-81d95e7ae1a9a82b213135ac50ff6eb6785c29f4cb7264e79837d1e17f008187-merged.mount: Deactivated successfully.
Nov 29 02:12:34 np0005539563 podman[92601]: 2025-11-29 07:12:34.115747909 +0000 UTC m=+0.211614604 container remove 2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lederberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:34 np0005539563 systemd[1]: libpod-conmon-2735ef437680922d661a99f1ea4378e3f151ea86d980089b1f2068fdbe887342.scope: Deactivated successfully.
Nov 29 02:12:34 np0005539563 podman[92641]: 2025-11-29 07:12:34.270613719 +0000 UTC m=+0.041031968 container create ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:34 np0005539563 systemd[1]: Started libpod-conmon-ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a.scope.
Nov 29 02:12:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdda6487dc8a47e73802bb6ac7e5c10cc2db77d0943386ba50c165a84508debc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdda6487dc8a47e73802bb6ac7e5c10cc2db77d0943386ba50c165a84508debc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdda6487dc8a47e73802bb6ac7e5c10cc2db77d0943386ba50c165a84508debc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdda6487dc8a47e73802bb6ac7e5c10cc2db77d0943386ba50c165a84508debc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539563 podman[92641]: 2025-11-29 07:12:34.250411754 +0000 UTC m=+0.020830023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:34 np0005539563 podman[92641]: 2025-11-29 07:12:34.355612064 +0000 UTC m=+0.126030363 container init ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jones, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:34 np0005539563 podman[92641]: 2025-11-29 07:12:34.362593262 +0000 UTC m=+0.133011531 container start ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 02:12:34 np0005539563 podman[92641]: 2025-11-29 07:12:34.369318443 +0000 UTC m=+0.139736722 container attach ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jones, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:12:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 1 peering, 62 unknown, 114 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232] boot
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.1b( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.1b( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.12( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.12( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.15( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.15( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.10( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.10( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.c( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.0( empty local-lis/les=37/38 n=0 ec=16/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.6( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.6( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.0( empty local-lis/les=37/38 n=0 ec=16/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.c( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.18( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.18( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:12:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.19( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.19( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.d( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.d( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.9( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.9( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.8( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.8( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.16( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.16( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.13( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.13( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.10( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.10( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.11( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.a( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[2.a( empty local-lis/les=27/30 n=0 ec=17/14 lis/c=27/27 les/c/f=30/30/0 sis=43) [2] r=-1 lpr=43 pi=[27,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/16 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.17( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.17( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.1d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 43 pg[5.11( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=43) [2] r=-1 lpr=43 pi=[20,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:35 np0005539563 trusting_jones[92657]: {
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:    "0": [
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:        {
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "devices": [
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "/dev/loop3"
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            ],
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "lv_name": "ceph_lv0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "lv_size": "7511998464",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "name": "ceph_lv0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "tags": {
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.cluster_name": "ceph",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.crush_device_class": "",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.encrypted": "0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.osd_id": "0",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.type": "block",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:                "ceph.vdo": "0"
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            },
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "type": "block",
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:            "vg_name": "ceph_vg0"
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:        }
Nov 29 02:12:35 np0005539563 trusting_jones[92657]:    ]
Nov 29 02:12:35 np0005539563 trusting_jones[92657]: }
Nov 29 02:12:35 np0005539563 systemd[1]: libpod-ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a.scope: Deactivated successfully.
Nov 29 02:12:35 np0005539563 podman[92641]: 2025-11-29 07:12:35.196865202 +0000 UTC m=+0.967283451 container died ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cdda6487dc8a47e73802bb6ac7e5c10cc2db77d0943386ba50c165a84508debc-merged.mount: Deactivated successfully.
Nov 29 02:12:35 np0005539563 podman[92641]: 2025-11-29 07:12:35.249541504 +0000 UTC m=+1.019959753 container remove ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:12:35 np0005539563 systemd[1]: libpod-conmon-ac3dc4f048f0f784bc3bbdc22e8ee57d8f42f3eb588269cb4620af3d6c8b4b9a.scope: Deactivated successfully.
Nov 29 02:12:35 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Nov 29 02:12:35 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Nov 29 02:12:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.78660717 +0000 UTC m=+0.036774173 container create 9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:35 np0005539563 systemd[1]: Started libpod-conmon-9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba.scope.
Nov 29 02:12:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.85364501 +0000 UTC m=+0.103812033 container init 9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.860483355 +0000 UTC m=+0.110650358 container start 9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.863797674 +0000 UTC m=+0.113964687 container attach 9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:12:35 np0005539563 ecstatic_mahavira[92834]: 167 167
Nov 29 02:12:35 np0005539563 systemd[1]: libpod-9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba.scope: Deactivated successfully.
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.864986746 +0000 UTC m=+0.115153749 container died 9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mahavira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.771178764 +0000 UTC m=+0.021345797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6ff82f0ce504c43e0aaa1499234ba9f6cba893a400822a7182669b8e6e6618bd-merged.mount: Deactivated successfully.
Nov 29 02:12:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 02:12:35 np0005539563 podman[92818]: 2025-11-29 07:12:35.898263314 +0000 UTC m=+0.148430317 container remove 9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mahavira, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:12:35 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 02:12:35 np0005539563 systemd[1]: libpod-conmon-9aa37f3e1715d231bfed7c258a089b250215d86ea4d7b638432965872347faba.scope: Deactivated successfully.
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.025095108 +0000 UTC m=+0.033014202 container create c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:12:36 np0005539563 systemd[1]: Started libpod-conmon-c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4.scope.
Nov 29 02:12:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0075f2a3e3d8106638fbed263dbaf1fda059efe39a26e9ac3e62ec3b3c6ec7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0075f2a3e3d8106638fbed263dbaf1fda059efe39a26e9ac3e62ec3b3c6ec7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0075f2a3e3d8106638fbed263dbaf1fda059efe39a26e9ac3e62ec3b3c6ec7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0075f2a3e3d8106638fbed263dbaf1fda059efe39a26e9ac3e62ec3b3c6ec7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.085139799 +0000 UTC m=+0.093058923 container init c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.090758451 +0000 UTC m=+0.098677555 container start c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.093831093 +0000 UTC m=+0.101750227 container attach c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.010453543 +0000 UTC m=+0.018372647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 71 peering, 106 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:12:36 np0005539563 ceph-mon[74338]: OSD bench result of 4342.898351 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:12:36 np0005539563 ceph-mon[74338]: osd.2 [v2:192.168.122.102:6800/1730612232,v1:192.168.122.102:6801/1730612232] boot
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]: {
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:        "osd_id": 0,
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:        "type": "bluestore"
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]:    }
Nov 29 02:12:36 np0005539563 priceless_lichterman[92876]: }
Nov 29 02:12:36 np0005539563 systemd[1]: libpod-c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4.scope: Deactivated successfully.
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.907078735 +0000 UTC m=+0.914997839 container died c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:12:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d0075f2a3e3d8106638fbed263dbaf1fda059efe39a26e9ac3e62ec3b3c6ec7b-merged.mount: Deactivated successfully.
Nov 29 02:12:36 np0005539563 podman[92858]: 2025-11-29 07:12:36.969528552 +0000 UTC m=+0.977447656 container remove c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lichterman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:36 np0005539563 systemd[1]: libpod-conmon-c8daa41ee9a00d526bb9c5d4430938282d3115e0bf61f9c63f1cb9b4e8a91ad4.scope: Deactivated successfully.
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 4e86282e-b2e5-499c-b5a3-9ba9b447629a (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:37 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.tfmigt on compute-2
Nov 29 02:12:37 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.tfmigt on compute-2
Nov 29 02:12:37 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 29 02:12:37 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tfmigt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 71 peering, 106 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:12:38 np0005539563 ceph-mon[74338]: Deploying daemon rgw.rgw.compute-2.tfmigt on compute-2
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:39 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.mdhebv on compute-1
Nov 29 02:12:39 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.mdhebv on compute-1
Nov 29 02:12:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.1e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.10( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.17( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.6( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.19( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.2( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.290027618s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773544312s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.289968491s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773544312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.4( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mdhebv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: Deploying daemon rgw.rgw.compute-1.mdhebv on compute-1
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.288048744s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773544312s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.288016319s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773544312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287675858s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773513794s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287601471s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773513794s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287450790s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773391724s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287414551s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773391724s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287420273s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773406982s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287370682s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773406982s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287084579s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773193359s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304711342s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790847778s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.287050247s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773193359s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304677010s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790847778s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286701202s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772972107s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286664963s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772972107s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286520958s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772880554s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286494255s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772880554s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304424286s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790832520s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304392815s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790832520s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286504745s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.773017883s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304172516s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790710449s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286471367s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.773017883s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304147720s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790710449s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286225319s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772850037s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.303925514s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790695190s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.303904533s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790695190s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.286080360s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772850037s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.304020882s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790832520s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.303991318s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790832520s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.b( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.285267830s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772781372s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.285235405s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772781372s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.302971840s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790603638s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.302947044s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790603638s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.285033226s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772743225s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.285004616s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772743225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.302446365s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.790603638s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284555435s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772743225s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.302412033s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.790603638s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284525871s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772743225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[5.a( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284281731s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772773743s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284257889s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772773743s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284183502s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772712708s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284211159s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772750854s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284188271s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772750854s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284146309s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772712708s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284084320s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772689819s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.284065247s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772689819s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283999443s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772705078s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.301077843s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.789825439s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283950806s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772705078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283802986s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772590637s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.301051140s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.789825439s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283781052s) [2] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772590637s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283578873s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772590637s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283596992s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 125.772628784s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283553123s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772590637s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=9.283569336s) [1] r=-1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 125.772628784s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.300476074s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 129.789840698s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:12:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=13.300440788s) [2] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.789840698s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:41 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.fvilij on compute-0
Nov 29 02:12:41 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.fvilij on compute-0
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=37/16 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fvilij", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:41 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:12:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.234836008 +0000 UTC m=+0.045978302 container create 22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ganguly, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:42 np0005539563 systemd[1]: Started libpod-conmon-22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9.scope.
Nov 29 02:12:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.282279179 +0000 UTC m=+0.093421473 container init 22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.287954202 +0000 UTC m=+0.099096496 container start 22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.291715473 +0000 UTC m=+0.102857847 container attach 22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:42 np0005539563 zealous_ganguly[93072]: 167 167
Nov 29 02:12:42 np0005539563 systemd[1]: libpod-22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9.scope: Deactivated successfully.
Nov 29 02:12:42 np0005539563 conmon[93072]: conmon 22e2437efef4ce5adb18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9.scope/container/memory.events
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.294627362 +0000 UTC m=+0.105769646 container died 22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.210157712 +0000 UTC m=+0.021300026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-55e04794d9a7eaa7985706cbbc77f680ee527c5e410caf5d18de7cbbc4ff4575-merged.mount: Deactivated successfully.
Nov 29 02:12:42 np0005539563 podman[93056]: 2025-11-29 07:12:42.332511575 +0000 UTC m=+0.143653849 container remove 22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ganguly, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:12:42 np0005539563 systemd[1]: libpod-conmon-22e2437efef4ce5adb18003de0d5d15bdd0180e8912ee17109477d480fbfacb9.scope: Deactivated successfully.
Nov 29 02:12:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v144: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:12:42 np0005539563 systemd[1]: Reloading.
Nov 29 02:12:42 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:42 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 02:12:42 np0005539563 systemd[1]: Reloading.
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 02:12:42 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:42 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: Deploying daemon rgw.rgw.compute-0.fvilij on compute-0
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:12:42 np0005539563 systemd[1]: Starting Ceph rgw.rgw.compute-0.fvilij for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:12:43 np0005539563 podman[93216]: 2025-11-29 07:12:43.137681228 +0000 UTC m=+0.041000837 container create 5aa876443345b70aae20a48faf25f1fd54ecb0cd78fad5cad8e3a7bb6bf2ef84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-0-fvilij, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc787a79fd6f41c75902992f80e29ad60fddd3d8b6b829b53ce2ae81abafed3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc787a79fd6f41c75902992f80e29ad60fddd3d8b6b829b53ce2ae81abafed3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc787a79fd6f41c75902992f80e29ad60fddd3d8b6b829b53ce2ae81abafed3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc787a79fd6f41c75902992f80e29ad60fddd3d8b6b829b53ce2ae81abafed3c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.fvilij supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:43 np0005539563 podman[93216]: 2025-11-29 07:12:43.119834707 +0000 UTC m=+0.023154336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:43 np0005539563 podman[93216]: 2025-11-29 07:12:43.228901401 +0000 UTC m=+0.132221030 container init 5aa876443345b70aae20a48faf25f1fd54ecb0cd78fad5cad8e3a7bb6bf2ef84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-0-fvilij, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:43 np0005539563 podman[93216]: 2025-11-29 07:12:43.234015039 +0000 UTC m=+0.137334648 container start 5aa876443345b70aae20a48faf25f1fd54ecb0cd78fad5cad8e3a7bb6bf2ef84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-0-fvilij, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:12:43 np0005539563 bash[93216]: 5aa876443345b70aae20a48faf25f1fd54ecb0cd78fad5cad8e3a7bb6bf2ef84
Nov 29 02:12:43 np0005539563 systemd[1]: Started Ceph rgw.rgw.compute-0.fvilij for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:12:43 np0005539563 radosgw[93236]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:43 np0005539563 radosgw[93236]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 02:12:43 np0005539563 radosgw[93236]: framework: beast
Nov 29 02:12:43 np0005539563 radosgw[93236]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 02:12:43 np0005539563 radosgw[93236]: init_numa not setting numa affinity
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 4e86282e-b2e5-499c-b5a3-9ba9b447629a (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 4e86282e-b2e5-499c-b5a3-9ba9b447629a (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 06cd08a7-b541-4107-904c-0219ce7f1970 (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.fwjrvc on compute-2
Nov 29 02:12:43 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.fwjrvc on compute-2
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 02:12:43 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.fwjrvc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:12:43 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v147: 179 pgs: 1 creating+peering, 178 active+clean; 451 KiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: Deploying daemon mds.cephfs.compute-2.fwjrvc on compute-2
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.102:0/4058279052' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.101:0/2435465121' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:44 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:45 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.msknqt on compute-0
Nov 29 02:12:45 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.msknqt on compute-0
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.msknqt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/3487140368' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:45 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.119916399 +0000 UTC m=+0.039420875 container create 9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_liskov, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:46 np0005539563 systemd[1]: Started libpod-conmon-9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a.scope.
Nov 29 02:12:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.195299094 +0000 UTC m=+0.114803620 container init 9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_liskov, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.103579518 +0000 UTC m=+0.023084024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.201708027 +0000 UTC m=+0.121212513 container start 9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.205283723 +0000 UTC m=+0.124788229 container attach 9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:12:46 np0005539563 focused_liskov[93476]: 167 167
Nov 29 02:12:46 np0005539563 systemd[1]: libpod-9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a.scope: Deactivated successfully.
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.211316587 +0000 UTC m=+0.130821063 container died 9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:12:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-efe41523eefff861a864c691faa2bd7e63734a08268d68c2fb88999b8799e12f-merged.mount: Deactivated successfully.
Nov 29 02:12:46 np0005539563 podman[93460]: 2025-11-29 07:12:46.248556412 +0000 UTC m=+0.168060898 container remove 9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_liskov, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:46 np0005539563 systemd[1]: libpod-conmon-9d47e8aa594efc31d80bc2dfc87e39b53a26efeae49f2d793ae9429ae181966a.scope: Deactivated successfully.
Nov 29 02:12:46 np0005539563 systemd[1]: Reloading.
Nov 29 02:12:46 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:46 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v150: 180 pgs: 1 unknown, 1 creating+peering, 178 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 new map
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 print_map
    e3
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:11:53.720139+0000
    modified  2025-11-29T07:11:53.720209+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0

    Standby daemons:

    [mds.cephfs.compute-2.fwjrvc{-1:24133} state up:standby seq 1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] up:boot
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] as mds.0
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.fwjrvc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.fwjrvc"} v 0) v1
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.fwjrvc"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e4 new map
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e4 print_map
    e4
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  4
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:11:53.720139+0000
    modified  2025-11-29T07:12:46.514987+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24133}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    [mds.cephfs.compute-2.fwjrvc{0:24133} state up:creating seq 1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:creating}
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.fwjrvc is now active in filesystem cephfs as rank 0
Nov 29 02:12:46 np0005539563 systemd[1]: Reloading.
Nov 29 02:12:46 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:46 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 systemd[1]: Starting Ceph mds.cephfs.compute-0.msknqt for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: Deploying daemon mds.cephfs.compute-0.msknqt on compute-0
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: daemon mds.cephfs.compute-2.fwjrvc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: Cluster is now healthy
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: daemon mds.cephfs.compute-2.fwjrvc is now active in filesystem cephfs as rank 0
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:46 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 12 completed events
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:12:47 np0005539563 podman[93619]: 2025-11-29 07:12:47.035532055 +0000 UTC m=+0.044961465 container create 2f671e4ae1fc23b38a39d9d2af98be84141473d85e5b3dbed4b63982310c9558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-0-msknqt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f83c3f086fc67aa422189a7dc4b42e6597e9fe5f43d1a667f4362b1e6d8eee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f83c3f086fc67aa422189a7dc4b42e6597e9fe5f43d1a667f4362b1e6d8eee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f83c3f086fc67aa422189a7dc4b42e6597e9fe5f43d1a667f4362b1e6d8eee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f83c3f086fc67aa422189a7dc4b42e6597e9fe5f43d1a667f4362b1e6d8eee/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.msknqt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:47 np0005539563 podman[93619]: 2025-11-29 07:12:47.102180404 +0000 UTC m=+0.111609854 container init 2f671e4ae1fc23b38a39d9d2af98be84141473d85e5b3dbed4b63982310c9558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-0-msknqt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 29 02:12:47 np0005539563 podman[93619]: 2025-11-29 07:12:47.010640993 +0000 UTC m=+0.020070433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:12:47 np0005539563 podman[93619]: 2025-11-29 07:12:47.108182016 +0000 UTC m=+0.117611416 container start 2f671e4ae1fc23b38a39d9d2af98be84141473d85e5b3dbed4b63982310c9558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-0-msknqt, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:47 np0005539563 bash[93619]: 2f671e4ae1fc23b38a39d9d2af98be84141473d85e5b3dbed4b63982310c9558
Nov 29 02:12:47 np0005539563 systemd[1]: Started Ceph mds.cephfs.compute-0.msknqt for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: main not setting numa affinity
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: pidfile_write: ignore empty --pid-file
Nov 29 02:12:47 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mds-cephfs-compute-0-msknqt[93634]: starting mds.cephfs.compute-0.msknqt at 
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Updating MDS map to version 4 from mon.0
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:12:47 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e5 new map
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e5 print_map
    e5
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:11:53.720139+0000
    modified  2025-11-29T07:12:47.524985+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24133}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 2 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]

    Standby daemons:

    [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Updating MDS map to version 5 from mon.0
Nov 29 02:12:47 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Monitors have assigned me to become a standby.
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] up:active
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] up:boot
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 1 up:standby
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.msknqt"} v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.msknqt"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e5 all = 0
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e6 new map
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e6 print_map
    e6
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:11:53.720139+0000
    modified  2025-11-29T07:12:47.524985+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24133}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    [mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 2 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]

    Standby daemons:

    [mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 1 up:standby
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.oeerwd on compute-1
Nov 29 02:12:47 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.oeerwd on compute-1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oeerwd", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.101:0/1219183491' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.102:0/2934709007' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:47 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:12:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v153: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 12 op/s
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: Deploying daemon mds.cephfs.compute-1.oeerwd on compute-1
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: from='client.? 192.168.122.100:0/368267298' entity='client.rgw.rgw.compute-0.fvilij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mdhebv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:48 np0005539563 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.tfmigt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:12:49 np0005539563 radosgw[93236]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:12:49 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-rgw-rgw-compute-0-fvilij[93232]: 2025-11-29T07:12:49.028+0000 7efe95d92940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:12:49 np0005539563 radosgw[93236]: framework: beast
Nov 29 02:12:49 np0005539563 radosgw[93236]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 02:12:49 np0005539563 radosgw[93236]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 radosgw[93236]: starting handler: beast
Nov 29 02:12:49 np0005539563 radosgw[93236]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 radosgw[93236]: mgrc service_daemon_register rgw.14367 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.fvilij,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=566f71d6-80f0-4888-8471-3c4b61b17fae,zone_name=default,zonegroup_id=52a2d801-fd4c-4d81-9622-166900f04f3d,zonegroup_name=default}
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 02:12:49 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Nov 29 02:12:49 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 06cd08a7-b541-4107-904c-0219ce7f1970 (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 02:12:49 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 06cd08a7-b541-4107-904c-0219ce7f1970 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev cb47908a-26a0-4fd0-8190-9388f6d92807 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.aoijdn on compute-0
Nov 29 02:12:49 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.aoijdn on compute-0
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v155: 181 pgs: 181 active+clean; 454 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.5 KiB/s wr, 14 op/s
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e7 new map
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:50.793764+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] up:boot
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] up:active
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.oeerwd"} v 0) v1
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.oeerwd"}]: dispatch
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e7 all = 0
Nov 29 02:12:50 np0005539563 ceph-mon[74338]: Deploying daemon haproxy.rgw.default.compute-0.aoijdn on compute-0
Nov 29 02:12:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e8 new map
Nov 29 02:12:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:50.793764+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:51 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Updating MDS map to version 8 from mon.0
Nov 29 02:12:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] up:standby
Nov 29 02:12:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:12:52 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 13 completed events
Nov 29 02:12:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:12:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:52 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 06717272-1d11-4069-b7e0-7d8a78961504 (Global Recovery Event) in 10 seconds
Nov 29 02:12:52 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 02:12:52 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 02:12:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v156: 181 pgs: 181 active+clean; 454 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 2.7 KiB/s wr, 11 op/s
Nov 29 02:12:53 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 02:12:53 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 02:12:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:53 np0005539563 podman[94340]: 2025-11-29 07:12:53.83523948 +0000 UTC m=+3.360182849 container create e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45 (image=quay.io/ceph/haproxy:2.3, name=keen_moser)
Nov 29 02:12:53 np0005539563 systemd[1]: Started libpod-conmon-e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45.scope.
Nov 29 02:12:53 np0005539563 podman[94340]: 2025-11-29 07:12:53.819555943 +0000 UTC m=+3.344499332 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 02:12:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:12:53 np0005539563 podman[94340]: 2025-11-29 07:12:53.929397798 +0000 UTC m=+3.454341427 container init e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45 (image=quay.io/ceph/haproxy:2.3, name=keen_moser)
Nov 29 02:12:53 np0005539563 podman[94340]: 2025-11-29 07:12:53.937962961 +0000 UTC m=+3.462906320 container start e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45 (image=quay.io/ceph/haproxy:2.3, name=keen_moser)
Nov 29 02:12:53 np0005539563 podman[94340]: 2025-11-29 07:12:53.941494006 +0000 UTC m=+3.466437375 container attach e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45 (image=quay.io/ceph/haproxy:2.3, name=keen_moser)
Nov 29 02:12:53 np0005539563 systemd[1]: libpod-e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45.scope: Deactivated successfully.
Nov 29 02:12:53 np0005539563 keen_moser[94457]: 0 0
Nov 29 02:12:53 np0005539563 conmon[94457]: conmon e9842561dab638f4f9f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45.scope/container/memory.events
Nov 29 02:12:53 np0005539563 podman[94340]: 2025-11-29 07:12:53.946362229 +0000 UTC m=+3.471305608 container died e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45 (image=quay.io/ceph/haproxy:2.3, name=keen_moser)
Nov 29 02:12:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ad5196c1dd70bb49d4be531a6ed1c39e06e1274e1685b2d0e0f6257fe2319410-merged.mount: Deactivated successfully.
Nov 29 02:12:54 np0005539563 podman[94340]: 2025-11-29 07:12:54.010965454 +0000 UTC m=+3.535908823 container remove e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45 (image=quay.io/ceph/haproxy:2.3, name=keen_moser)
Nov 29 02:12:54 np0005539563 systemd[1]: libpod-conmon-e9842561dab638f4f9f5ea5a2adc5af7a970f2f9a4bf5b995c633f1fc09c1e45.scope: Deactivated successfully.
Nov 29 02:12:54 np0005539563 systemd[1]: Reloading.
Nov 29 02:12:54 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:54 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:12:54 np0005539563 systemd[1]: Reloading.
Nov 29 02:12:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v157: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 215 KiB/s rd, 6.3 KiB/s wr, 394 op/s
Nov 29 02:12:54 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:54 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:54 np0005539563 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.aoijdn for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:12:54 np0005539563 podman[94597]: 2025-11-29 07:12:54.773158421 +0000 UTC m=+0.034437486 container create fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 new map
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T07:11:53.720139+0000#012modified#0112025-11-29T07:12:50.793764+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24133}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.fwjrvc{0:24133} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1349691830,v1:192.168.122.102:6805/1349691830] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.msknqt{-1:14382} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/956920877,v1:192.168.122.100:6807/956920877] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oeerwd{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1767230500,v1:192.168.122.101:6805/1767230500] up:standby
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:12:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef843d38ac93faa33bdc2776120221df060f4e190815e2097929d1ea02fa758c/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:54 np0005539563 podman[94597]: 2025-11-29 07:12:54.830866108 +0000 UTC m=+0.092145173 container init fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:12:54 np0005539563 podman[94597]: 2025-11-29 07:12:54.835431303 +0000 UTC m=+0.096710368 container start fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:12:54 np0005539563 bash[94597]: fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9
Nov 29 02:12:54 np0005539563 podman[94597]: 2025-11-29 07:12:54.757237808 +0000 UTC m=+0.018516893 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 02:12:54 np0005539563 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.aoijdn for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:12:54 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn[94612]: [NOTICE] 332/071254 (2) : New worker #1 (4) forked
Nov 29 02:12:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:12:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:12:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:54.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:12:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:54 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.goeiuk on compute-2
Nov 29 02:12:54 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.goeiuk on compute-2
Nov 29 02:12:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:55 np0005539563 ceph-mon[74338]: Deploying daemon haproxy.rgw.default.compute-2.goeiuk on compute-2
Nov 29 02:12:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 29 02:12:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 29 02:12:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v158: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 191 KiB/s rd, 5.6 KiB/s wr, 349 op/s
Nov 29 02:12:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:12:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:12:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:56.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:12:57 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 14 completed events
Nov 29 02:12:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:12:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:57 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 02:12:57 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 02:12:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:12:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v159: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 162 KiB/s rd, 4.1 KiB/s wr, 294 op/s
Nov 29 02:12:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:12:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:12:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:12:58.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:12:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 02:13:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 02:13:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v160: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 3.5 KiB/s wr, 254 op/s
Nov 29 02:13:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:00.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:01 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 02:13:01 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 02:13:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.gecapa on compute-2
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.gecapa on compute-2
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v161: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 KiB/s rd, 2.7 KiB/s wr, 244 op/s
Nov 29 02:13:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:02.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:03 np0005539563 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:03 np0005539563 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:03 np0005539563 ceph-mon[74338]: Deploying daemon keepalived.rgw.default.compute-2.gecapa on compute-2
Nov 29 02:13:03 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 02:13:03 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 02:13:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:03.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v162: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 KiB/s rd, 2.7 KiB/s wr, 244 op/s
Nov 29 02:13:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v163: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:07 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 29 02:13:07 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 29 02:13:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:07.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v164: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:08 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 02:13:08 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 02:13:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:08.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:13:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:13:09 np0005539563 python3[94651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:09 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:09 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:09 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:09 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.uxbosd on compute-0
Nov 29 02:13:09 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.uxbosd on compute-0
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.074352193 +0000 UTC m=+0.048425966 container create a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7 (image=quay.io/ceph/ceph:v18, name=dreamy_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:09 np0005539563 systemd[1]: Started libpod-conmon-a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7.scope.
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.055349406 +0000 UTC m=+0.029423199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6542efe51c0449cedfeee94744d39c1b8b88f5de0f664db56eb585b4145253/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6542efe51c0449cedfeee94744d39c1b8b88f5de0f664db56eb585b4145253/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.175662725 +0000 UTC m=+0.149736528 container init a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7 (image=quay.io/ceph/ceph:v18, name=dreamy_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.185823082 +0000 UTC m=+0.159896865 container start a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7 (image=quay.io/ceph/ceph:v18, name=dreamy_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.196715698 +0000 UTC m=+0.170789481 container attach a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7 (image=quay.io/ceph/ceph:v18, name=dreamy_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:09 np0005539563 dreamy_roentgen[94715]: could not fetch user info: no user info saved
Nov 29 02:13:09 np0005539563 systemd[1]: libpod-a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7.scope: Deactivated successfully.
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.46404628 +0000 UTC m=+0.438120103 container died a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7 (image=quay.io/ceph/ceph:v18, name=dreamy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:13:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ec6542efe51c0449cedfeee94744d39c1b8b88f5de0f664db56eb585b4145253-merged.mount: Deactivated successfully.
Nov 29 02:13:09 np0005539563 podman[94652]: 2025-11-29 07:13:09.516650799 +0000 UTC m=+0.490724592 container remove a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7 (image=quay.io/ceph/ceph:v18, name=dreamy_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:09 np0005539563 systemd[1]: libpod-conmon-a0578c241338e0dd2e4b740bc774216c24bfd2db44167a8e0378b567094183d7.scope: Deactivated successfully.
Nov 29 02:13:09 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 29 02:13:09 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 29 02:13:09 np0005539563 python3[94944]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 38a37ed2-442a-5e0d-a69a-881fdd186450 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:09 np0005539563 podman[94945]: 2025-11-29 07:13:09.871359755 +0000 UTC m=+0.039695659 container create f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0 (image=quay.io/ceph/ceph:v18, name=infallible_rubin, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:13:09 np0005539563 systemd[1]: Started libpod-conmon-f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0.scope.
Nov 29 02:13:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4810c9df5e791896ab99ebed1bf9902815bb887aa9528b7e9aa22344e87e44d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4810c9df5e791896ab99ebed1bf9902815bb887aa9528b7e9aa22344e87e44d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:09 np0005539563 podman[94945]: 2025-11-29 07:13:09.931646254 +0000 UTC m=+0.099982188 container init f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0 (image=quay.io/ceph/ceph:v18, name=infallible_rubin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:13:09 np0005539563 podman[94945]: 2025-11-29 07:13:09.942088338 +0000 UTC m=+0.110424242 container start f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0 (image=quay.io/ceph/ceph:v18, name=infallible_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:09 np0005539563 podman[94945]: 2025-11-29 07:13:09.852874594 +0000 UTC m=+0.021210518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:09 np0005539563 podman[94945]: 2025-11-29 07:13:09.950610549 +0000 UTC m=+0.118946453 container attach f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0 (image=quay.io/ceph/ceph:v18, name=infallible_rubin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:13:09 np0005539563 ceph-mon[74338]: Deploying daemon keepalived.rgw.default.compute-0.uxbosd on compute-0
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]: {
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "user_id": "openstack",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "display_name": "openstack",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "email": "",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "suspended": 0,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "max_buckets": 1000,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "subusers": [],
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "keys": [
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        {
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:            "user": "openstack",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:            "access_key": "38QIHMQJU95SC8W6OO6J",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:            "secret_key": "gPGVcBuxGZZsJ5vSrN8l7JKYBRy6h1kARBmFZJ5w"
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        }
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    ],
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "swift_keys": [],
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "caps": [],
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "op_mask": "read, write, delete",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "default_placement": "",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "default_storage_class": "",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "placement_tags": [],
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "bucket_quota": {
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "enabled": false,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "check_on_raw": false,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "max_size": -1,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "max_size_kb": 0,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "max_objects": -1
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    },
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "user_quota": {
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "enabled": false,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "check_on_raw": false,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "max_size": -1,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "max_size_kb": 0,
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:        "max_objects": -1
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    },
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "temp_url_keys": [],
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "type": "rgw",
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]:    "mfa_ids": []
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]: }
Nov 29 02:13:10 np0005539563 infallible_rubin[94961]: 
Nov 29 02:13:10 np0005539563 systemd[1]: libpod-f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0.scope: Deactivated successfully.
Nov 29 02:13:10 np0005539563 podman[94945]: 2025-11-29 07:13:10.275208948 +0000 UTC m=+0.443544852 container died f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0 (image=quay.io/ceph/ceph:v18, name=infallible_rubin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b4810c9df5e791896ab99ebed1bf9902815bb887aa9528b7e9aa22344e87e44d-merged.mount: Deactivated successfully.
Nov 29 02:13:10 np0005539563 podman[94945]: 2025-11-29 07:13:10.367443864 +0000 UTC m=+0.535779768 container remove f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0 (image=quay.io/ceph/ceph:v18, name=infallible_rubin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:13:10 np0005539563 systemd[1]: libpod-conmon-f044c620f80e8fad0b6dda6699b04263f0aa7365179383acc78bea9cab6e94f0.scope: Deactivated successfully.
Nov 29 02:13:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v165: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:10.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v166: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 02:13:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 02:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:13:12
Nov 29 02:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', 'default.rgw.meta', 'backups']
Nov 29 02:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:13:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:12.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:12 np0005539563 podman[94905]: 2025-11-29 07:13:12.902708571 +0000 UTC m=+3.310897821 container create 6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_ardinghelli, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vcs-type=git)
Nov 29 02:13:12 np0005539563 podman[94905]: 2025-11-29 07:13:12.887518448 +0000 UTC m=+3.295707718 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 02:13:12 np0005539563 systemd[1]: Started libpod-conmon-6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109.scope.
Nov 29 02:13:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:12 np0005539563 podman[94905]: 2025-11-29 07:13:12.971610452 +0000 UTC m=+3.379799722 container init 6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_ardinghelli, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 02:13:12 np0005539563 podman[94905]: 2025-11-29 07:13:12.97665894 +0000 UTC m=+3.384848190 container start 6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_ardinghelli, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 02:13:12 np0005539563 wizardly_ardinghelli[95142]: 0 0
Nov 29 02:13:12 np0005539563 podman[94905]: 2025-11-29 07:13:12.979668681 +0000 UTC m=+3.387857941 container attach 6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_ardinghelli, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc.)
Nov 29 02:13:12 np0005539563 systemd[1]: libpod-6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109.scope: Deactivated successfully.
Nov 29 02:13:12 np0005539563 podman[94905]: 2025-11-29 07:13:12.980645458 +0000 UTC m=+3.388834718 container died 6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_ardinghelli, name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 02:13:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-25b276f44fd4a5ab67790a2e0d4d0901fe26d9dc8d5a28bf5b3d3004a97afb01-merged.mount: Deactivated successfully.
Nov 29 02:13:13 np0005539563 podman[94905]: 2025-11-29 07:13:13.050403733 +0000 UTC m=+3.458592983 container remove 6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109 (image=quay.io/ceph/keepalived:2.2.4, name=wizardly_ardinghelli, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, release=1793, distribution-scope=public)
Nov 29 02:13:13 np0005539563 systemd[1]: libpod-conmon-6a23f22f018cc2ce035ff6721537a4c77dcfb18f029fc0c96aadc2a96c95a109.scope: Deactivated successfully.
Nov 29 02:13:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:13:13 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:13:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:13:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:13:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:13:13 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:13:13 np0005539563 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.uxbosd for 38a37ed2-442a-5e0d-a69a-881fdd186450...
Nov 29 02:13:13 np0005539563 podman[95288]: 2025-11-29 07:13:13.831055902 +0000 UTC m=+0.047093591 container create 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, release=1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 02:13:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e4e271f50a8f31d40fdb036ec7702480ccb46936935e9f7da74b3f89166307/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:13 np0005539563 podman[95288]: 2025-11-29 07:13:13.804311285 +0000 UTC m=+0.020348994 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 02:13:13 np0005539563 podman[95288]: 2025-11-29 07:13:13.929819955 +0000 UTC m=+0.145857664 container init 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vcs-type=git)
Nov 29 02:13:13 np0005539563 podman[95288]: 2025-11-29 07:13:13.936486187 +0000 UTC m=+0.152523866 container start 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, release=1793, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:13:13 np0005539563 bash[95288]: 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e
Nov 29 02:13:13 np0005539563 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.uxbosd for 38a37ed2-442a-5e0d-a69a-881fdd186450.
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: Starting VRRP child process, pid=4
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: Startup complete
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: (VI_0) Entering BACKUP STATE (init)
Nov 29 02:13:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:13:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:13:13 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:13 2025: VRRP_Script(check_backend) succeeded
Nov 29 02:13:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev cb47908a-26a0-4fd0-8190-9388f6d92807 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 02:13:14 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event cb47908a-26a0-4fd0-8190-9388f6d92807 (Updating ingress.rgw.default deployment (+4 -> 4)) in 24 seconds
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v167: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 29 02:13:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:14.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:15 np0005539563 podman[95585]: 2025-11-29 07:13:15.235660942 +0000 UTC m=+0.121958504 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:13:15 np0005539563 podman[95585]: 2025-11-29 07:13:15.334243291 +0000 UTC m=+0.220540833 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:13:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:15 np0005539563 podman[95739]: 2025-11-29 07:13:15.891689835 +0000 UTC m=+0.072040598 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:13:15 np0005539563 podman[95739]: 2025-11-29 07:13:15.928024591 +0000 UTC m=+0.108375344 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:13:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v168: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 29 02:13:16 np0005539563 podman[95806]: 2025-11-29 07:13:16.587094758 +0000 UTC m=+0.049229019 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:13:16 np0005539563 podman[95806]: 2025-11-29 07:13:16.602132506 +0000 UTC m=+0.064266727 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, com.redhat.component=keepalived-container, architecture=x86_64, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ab8f2b89-b5d8-43e4-b210-9811b09edf42 does not exist
Nov 29 02:13:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 48ba0dbe-70b6-43e5-a109-45f86bcbe7be does not exist
Nov 29 02:13:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 30a747ca-c443-4834-9db4-5bb25e8452a3 does not exist
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:16.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:13:17 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 15 completed events
Nov 29 02:13:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:13:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:17 np0005539563 podman[95978]: 2025-11-29 07:13:17.285307466 +0000 UTC m=+0.042258859 container create c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:17 np0005539563 systemd[1]: Started libpod-conmon-c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2.scope.
Nov 29 02:13:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:17 np0005539563 podman[95978]: 2025-11-29 07:13:17.265639392 +0000 UTC m=+0.022590805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:17 np0005539563 podman[95978]: 2025-11-29 07:13:17.481223879 +0000 UTC m=+0.238175282 container init c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:17 np0005539563 podman[95978]: 2025-11-29 07:13:17.493481952 +0000 UTC m=+0.250433375 container start c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:17 np0005539563 agitated_tharp[95995]: 167 167
Nov 29 02:13:17 np0005539563 systemd[1]: libpod-c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2.scope: Deactivated successfully.
Nov 29 02:13:17 np0005539563 podman[95978]: 2025-11-29 07:13:17.50152271 +0000 UTC m=+0.258474113 container attach c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:17 np0005539563 podman[95978]: 2025-11-29 07:13:17.501974502 +0000 UTC m=+0.258925885 container died c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:17 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 29 02:13:17 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 29 02:13:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2914dc0c5ad8cfd0c528e4ed8e5de163dcc510b78ee590cf4e65b363058a160d-merged.mount: Deactivated successfully.
Nov 29 02:13:17 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd[95305]: Sat Nov 29 07:13:17 2025: (VI_0) Entering MASTER STATE
Nov 29 02:13:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v169: 181 pgs: 181 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 29 02:13:18 np0005539563 podman[95978]: 2025-11-29 07:13:18.434832836 +0000 UTC m=+1.191784249 container remove c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:18 np0005539563 systemd[1]: libpod-conmon-c3a61c7a6a5e049508366bfa8bc90370119f1515e3325c7d99de7d7593140dd2.scope: Deactivated successfully.
Nov 29 02:13:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 02:13:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 02:13:18 np0005539563 podman[96019]: 2025-11-29 07:13:18.611371522 +0000 UTC m=+0.065257273 container create b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:18 np0005539563 systemd[1]: Started libpod-conmon-b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c.scope.
Nov 29 02:13:18 np0005539563 podman[96019]: 2025-11-29 07:13:18.573105663 +0000 UTC m=+0.026991444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d285f932781104640e9eb2fa445e2ab1f3d6fb5f38268dcb140c20b3e5fb6be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d285f932781104640e9eb2fa445e2ab1f3d6fb5f38268dcb140c20b3e5fb6be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d285f932781104640e9eb2fa445e2ab1f3d6fb5f38268dcb140c20b3e5fb6be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d285f932781104640e9eb2fa445e2ab1f3d6fb5f38268dcb140c20b3e5fb6be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d285f932781104640e9eb2fa445e2ab1f3d6fb5f38268dcb140c20b3e5fb6be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539563 podman[96019]: 2025-11-29 07:13:18.722024709 +0000 UTC m=+0.175910460 container init b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:18 np0005539563 podman[96019]: 2025-11-29 07:13:18.728788173 +0000 UTC m=+0.182673924 container start b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:18 np0005539563 podman[96019]: 2025-11-29 07:13:18.743342778 +0000 UTC m=+0.197228529 container attach b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:13:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:18.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 1)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:13:18 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 1)
Nov 29 02:13:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:13:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 02:13:19 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 0e42fc7b-1c78-4b8b-bd1a-5e5d5861a021 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:19 np0005539563 jolly_cartwright[96035]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:13:19 np0005539563 jolly_cartwright[96035]: --> relative data size: 1.0
Nov 29 02:13:19 np0005539563 jolly_cartwright[96035]: --> All data devices are unavailable
Nov 29 02:13:19 np0005539563 systemd[1]: libpod-b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c.scope: Deactivated successfully.
Nov 29 02:13:19 np0005539563 podman[96019]: 2025-11-29 07:13:19.600111954 +0000 UTC m=+1.053997695 container died b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4d285f932781104640e9eb2fa445e2ab1f3d6fb5f38268dcb140c20b3e5fb6be-merged.mount: Deactivated successfully.
Nov 29 02:13:19 np0005539563 podman[96019]: 2025-11-29 07:13:19.707512503 +0000 UTC m=+1.161398254 container remove b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:19 np0005539563 systemd[1]: libpod-conmon-b9d299c3008086e7446518e913d4f6474f52dbef0f005ae41c41a9a37a75245c.scope: Deactivated successfully.
Nov 29 02:13:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:13:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.304305207 +0000 UTC m=+0.073979422 container create fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 02:13:20 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev f8a4fbb1-1e7d-4b6f-9285-5a50e1b1cc68 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:20 np0005539563 systemd[1]: Started libpod-conmon-fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831.scope.
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.25256345 +0000 UTC m=+0.022237675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.389006147 +0000 UTC m=+0.158680382 container init fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:13:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v172: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1 op/s
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.398959907 +0000 UTC m=+0.168634152 container start fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:20 np0005539563 romantic_gould[96220]: 167 167
Nov 29 02:13:20 np0005539563 systemd[1]: libpod-fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831.scope: Deactivated successfully.
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.410577444 +0000 UTC m=+0.180251659 container attach fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.410976564 +0000 UTC m=+0.180650779 container died fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:13:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2fa24269fd4c75ef5efa7a08723bdbc2a4078ba28b21a588d3d9d78ca8eab8fc-merged.mount: Deactivated successfully.
Nov 29 02:13:20 np0005539563 podman[96203]: 2025-11-29 07:13:20.522327859 +0000 UTC m=+0.292002074 container remove fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gould, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:13:20 np0005539563 systemd[1]: libpod-conmon-fee93a8b3caf638e5c1eff49dc4ef13577fecd7b42e7151c92a9bfd5e656f831.scope: Deactivated successfully.
Nov 29 02:13:20 np0005539563 podman[96244]: 2025-11-29 07:13:20.667509763 +0000 UTC m=+0.039578686 container create 5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:20 np0005539563 systemd[1]: Started libpod-conmon-5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2.scope.
Nov 29 02:13:20 np0005539563 podman[96244]: 2025-11-29 07:13:20.649850304 +0000 UTC m=+0.021919247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b18c8b0244775b04106c879e24181c165cf8a838bcd0cfd866cd3a37991781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b18c8b0244775b04106c879e24181c165cf8a838bcd0cfd866cd3a37991781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b18c8b0244775b04106c879e24181c165cf8a838bcd0cfd866cd3a37991781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b18c8b0244775b04106c879e24181c165cf8a838bcd0cfd866cd3a37991781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:20 np0005539563 podman[96244]: 2025-11-29 07:13:20.768982421 +0000 UTC m=+0.141051374 container init 5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:13:20 np0005539563 podman[96244]: 2025-11-29 07:13:20.776834743 +0000 UTC m=+0.148903666 container start 5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:13:20 np0005539563 podman[96244]: 2025-11-29 07:13:20.800180998 +0000 UTC m=+0.172249921 container attach 5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:13:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 02:13:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 56 pg[8.0( v 46'8 (0'0,46'8] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.255973816s) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 lcod 46'7 mlcod 46'7 active pruub 165.501037598s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 56 pg[9.0( v 53'1137 (0'0,53'1137] local-lis/les=47/48 n=177 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=10.272585869s) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 53'1136 mlcod 53'1136 active pruub 167.518096924s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 56 pg[8.0( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.255973816s) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 lcod 46'7 mlcod 0'0 unknown pruub 165.501037598s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:21 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 23a2afbb-8f04-4087-a533-e740d2ac16cc (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:21 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 56 pg[9.0( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=10.272585869s) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 53'1136 mlcod 0'0 unknown pruub 167.518096924s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]: {
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:    "0": [
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:        {
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "devices": [
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "/dev/loop3"
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            ],
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "lv_name": "ceph_lv0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "lv_size": "7511998464",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "name": "ceph_lv0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "tags": {
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.cluster_name": "ceph",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.crush_device_class": "",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.encrypted": "0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.osd_id": "0",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.type": "block",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:                "ceph.vdo": "0"
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            },
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "type": "block",
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:            "vg_name": "ceph_vg0"
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:        }
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]:    ]
Nov 29 02:13:21 np0005539563 heuristic_khayyam[96260]: }
Nov 29 02:13:21 np0005539563 systemd[1]: libpod-5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2.scope: Deactivated successfully.
Nov 29 02:13:21 np0005539563 podman[96244]: 2025-11-29 07:13:21.615708244 +0000 UTC m=+0.987777167 container died 5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:13:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-64b18c8b0244775b04106c879e24181c165cf8a838bcd0cfd866cd3a37991781-merged.mount: Deactivated successfully.
Nov 29 02:13:21 np0005539563 podman[96244]: 2025-11-29 07:13:21.723708818 +0000 UTC m=+1.095777741 container remove 5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:13:21 np0005539563 systemd[1]: libpod-conmon-5060cd2d32e720c6d3f6ea2108b842f039914c4d870b6c1f69943330d33fe6a2.scope: Deactivated successfully.
Nov 29 02:13:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v174: 243 pgs: 62 unknown, 1 active+clean+scrubbing, 180 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.403960569 +0000 UTC m=+0.060237557 container create 13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 02:13:22 np0005539563 systemd[1]: Started libpod-conmon-13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5.scope.
Nov 29 02:13:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.371005063 +0000 UTC m=+0.027282081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.17( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.18( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.19( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.16( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.16( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.17( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.11( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.10( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.12( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.13( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1c( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1d( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1c( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1d( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1f( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1e( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1a( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.4( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.5( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.4( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.5( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1b( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.6( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.7( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.14( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.2( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.3( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.f( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.15( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.e( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.c( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.d( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.d( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.c( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.14( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.15( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.9( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.8( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.a( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.b( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.7( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.6( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1a( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.18( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.19( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1b( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1e( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1f( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.12( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.13( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.10( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.11( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.b( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.a( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.8( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.9( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.e( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.f( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.3( v 53'1137 lc 0'0 (0'0,53'1137] local-lis/les=47/48 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1( v 46'8 (0'0,46'8] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.2( v 46'8 lc 0'0 (0'0,46'8] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] update: starting ev 5685a345-eb81-4c96-81d9-47c11e898072 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 0e42fc7b-1c78-4b8b-bd1a-5e5d5861a021 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 0e42fc7b-1c78-4b8b-bd1a-5e5d5861a021 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev f8a4fbb1-1e7d-4b6f-9285-5a50e1b1cc68 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event f8a4fbb1-1e7d-4b6f-9285-5a50e1b1cc68 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 23a2afbb-8f04-4087-a533-e740d2ac16cc (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 23a2afbb-8f04-4087-a533-e740d2ac16cc (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] complete: finished ev 5685a345-eb81-4c96-81d9-47c11e898072 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 02:13:22 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 5685a345-eb81-4c96-81d9-47c11e898072 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.483197901 +0000 UTC m=+0.139474919 container init 13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.490136351 +0000 UTC m=+0.146413349 container start 13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:22 np0005539563 loving_brattain[96440]: 167 167
Nov 29 02:13:22 np0005539563 systemd[1]: libpod-13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5.scope: Deactivated successfully.
Nov 29 02:13:22 np0005539563 conmon[96440]: conmon 13397156cf2a7829c22d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5.scope/container/memory.events
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.16( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.17( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.10( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1c( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1c( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.12( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.18( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1e( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1d( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.4( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.5( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1b( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.14( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.7( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.2( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.3( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.0( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 46'7 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.e( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.4( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.c( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.c( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.14( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.15( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.8( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.b( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.6( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1a( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.19( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1f( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.d( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.13( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:13:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.530536978 +0000 UTC m=+0.186813986 container attach 13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.530996011 +0000 UTC m=+0.187272999 container died 13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.11( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.a( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.f( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.1( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.9( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[9.0( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=53'1137 lcod 53'1136 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 57 pg[8.2( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [0] r=0 lpr=56 pi=[45,56)/1 crt=46'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-30d4070a09b4fe30afa7e120c9ae8686461a6745c51707871f9948255813579b-merged.mount: Deactivated successfully.
Nov 29 02:13:22 np0005539563 podman[96423]: 2025-11-29 07:13:22.637821293 +0000 UTC m=+0.294098281 container remove 13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:13:22 np0005539563 systemd[1]: libpod-conmon-13397156cf2a7829c22db2c499c09a1e5c29d086ccc702eb511984e1e39398b5.scope: Deactivated successfully.
Nov 29 02:13:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:22.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:22 np0005539563 podman[96465]: 2025-11-29 07:13:22.795533168 +0000 UTC m=+0.022423801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:22 np0005539563 podman[96465]: 2025-11-29 07:13:22.935317975 +0000 UTC m=+0.162208578 container create 6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:13:22 np0005539563 systemd[1]: Started libpod-conmon-6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968.scope.
Nov 29 02:13:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d44a58a3760f4e114057f325da42bccec7dfd6d08be14439d3f049fa1026be71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d44a58a3760f4e114057f325da42bccec7dfd6d08be14439d3f049fa1026be71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d44a58a3760f4e114057f325da42bccec7dfd6d08be14439d3f049fa1026be71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d44a58a3760f4e114057f325da42bccec7dfd6d08be14439d3f049fa1026be71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539563 podman[96465]: 2025-11-29 07:13:23.045303663 +0000 UTC m=+0.272194266 container init 6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:23 np0005539563 podman[96465]: 2025-11-29 07:13:23.052508679 +0000 UTC m=+0.279399282 container start 6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:13:23 np0005539563 podman[96465]: 2025-11-29 07:13:23.055667245 +0000 UTC m=+0.282557878 container attach 6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:13:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 02:13:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:13:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 02:13:23 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]: {
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:        "osd_id": 0,
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:        "type": "bluestore"
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]:    }
Nov 29 02:13:23 np0005539563 condescending_mendel[96481]: }
Nov 29 02:13:23 np0005539563 systemd[1]: libpod-6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968.scope: Deactivated successfully.
Nov 29 02:13:23 np0005539563 podman[96465]: 2025-11-29 07:13:23.977590812 +0000 UTC m=+1.204481445 container died 6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendel, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:13:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:13:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:23.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:13:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d44a58a3760f4e114057f325da42bccec7dfd6d08be14439d3f049fa1026be71-merged.mount: Deactivated successfully.
Nov 29 02:13:24 np0005539563 podman[96465]: 2025-11-29 07:13:24.047253734 +0000 UTC m=+1.274144327 container remove 6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:24 np0005539563 systemd[1]: libpod-conmon-6a8aa97fe3dfa64244ae9cedb49f5a5e0bd0dbe632f70b02abe89570d43f7968.scope: Deactivated successfully.
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a77bcb56-00dc-4aa9-9e26-01b8eee27c1c does not exist
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9472ccb6-e5ca-4bd6-90ce-4bf22caf2616 does not exist
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2b4bed6b-dcf8-4eb7-accb-216328cab889 does not exist
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v177: 274 pgs: 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:13:24 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:13:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:13:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.041497756 +0000 UTC m=+0.052588400 container create 55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:25 np0005539563 systemd[1]: Started libpod-conmon-55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4.scope.
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.018317445 +0000 UTC m=+0.029408109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.128099279 +0000 UTC m=+0.139189963 container init 55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.135929631 +0000 UTC m=+0.147020275 container start 55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_northcutt, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 02:13:25 np0005539563 festive_northcutt[96696]: 167 167
Nov 29 02:13:25 np0005539563 systemd[1]: libpod-55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.142726406 +0000 UTC m=+0.153817070 container attach 55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.143273181 +0000 UTC m=+0.154363825 container died 55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.18( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.362129211s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308715820s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.17( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361767769s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308425903s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.16( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361447334s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308166504s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.17( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361698151s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308425903s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.18( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361982346s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308715820s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.16( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361387253s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308166504s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.10( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361406326s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308456421s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.10( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361378670s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308456421s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.12( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361429214s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308639526s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.1c( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361434937s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308685303s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.1c( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361413956s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308685303s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.12( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361388206s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308639526s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.1b( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361593246s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309036255s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.1b( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361572266s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309036255s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.5( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361458778s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.308944702s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.4( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361715317s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309310913s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.5( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361424446s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.308944702s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.4( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361680031s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309310913s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.14( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361598015s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309326172s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.14( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361577988s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309326172s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.3( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361392975s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309371948s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.3( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361368179s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309371948s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[11.0( v 53'2 (0'0,53'2] local-lis/les=51/52 n=2 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=10.662649155s) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 53'1 mlcod 53'1 active pruub 171.610778809s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.d( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361454964s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309799194s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.d( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361433029s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309799194s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.c( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361153603s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309539795s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.c( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361125946s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309539795s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.15( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361045837s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309555054s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.15( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361021996s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309555054s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.8( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361006737s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309585571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.8( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360981941s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309585571s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.b( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.361001015s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309631348s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.b( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360980034s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309631348s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.19( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360617638s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309692383s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.19( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360592842s) [1] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309692383s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.1f( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360605240s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309722900s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.1f( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360427856s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309722900s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.11( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386370659s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.335815430s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.11( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386347771s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.335815430s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.a( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386425018s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.335937500s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.a( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386381149s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.335937500s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.9( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386464119s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.336120605s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.9( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386442184s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.336120605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.f( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386437416s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.336151123s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.f( v 46'8 (0'0,46'8] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386414528s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.336151123s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.2( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386503220s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.336395264s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.2( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.386480331s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.336395264s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.6( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.360663414s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 active pruub 174.309661865s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[8.6( v 46'8 (0'0,46'8] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=59 pruub=13.358672142s) [2] r=-1 lpr=59 pi=[56,59)/1 crt=46'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.309661865s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[11.0( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=10.662649155s) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 53'1 mlcod 0'0 unknown pruub 171.610778809s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3b7fba127f58d427063ff6eed85880b23d331b0bcde423f2b29f72bbd1d951b8-merged.mount: Deactivated successfully.
Nov 29 02:13:25 np0005539563 podman[96679]: 2025-11-29 07:13:25.203700933 +0000 UTC m=+0.214791577 container remove 55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:25 np0005539563 systemd[1]: libpod-conmon-55f71a144591a0982f6774cbcd47f62b828af24b3452bcee17da0b9cfb17e1e4.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.rotard (monmap changed)...
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.rotard (monmap changed)...
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rotard", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.770010318 +0000 UTC m=+0.041044576 container create e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:13:25 np0005539563 systemd[1]: Started libpod-conmon-e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735.scope.
Nov 29 02:13:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.84077359 +0000 UTC m=+0.111807868 container init e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cannon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.750801086 +0000 UTC m=+0.021835364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.846835625 +0000 UTC m=+0.117869883 container start e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:25 np0005539563 beautiful_cannon[96849]: 167 167
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.851727258 +0000 UTC m=+0.122761516 container attach e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cannon, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:25 np0005539563 systemd[1]: libpod-e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.852086787 +0000 UTC m=+0.123121045 container died e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:13:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9007a30820b87076e132dd1462d0ede88fdfeed9970b179280502a6178835923-merged.mount: Deactivated successfully.
Nov 29 02:13:25 np0005539563 podman[96833]: 2025-11-29 07:13:25.893956765 +0000 UTC m=+0.164991023 container remove e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cannon, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:13:25 np0005539563 systemd[1]: libpod-conmon-e534c52c440fbd49c473f955c2bf3e2a520c7f371e4f278335cf7678ea956735.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:13:25 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:13:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 02:13:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 02:13:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1b( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.14( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.13( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.11( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1f( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1e( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1d( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.18( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.6( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.15( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.7( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.4( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.17( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.3( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.d( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.e( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.f( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.16( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.8( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.b( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.5( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.19( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1a( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1c( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.10( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.9( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.12( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.a( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.c( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1( v 53'2 (0'0,53'2] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.2( v 53'2 lc 0'0 (0'0,53'2] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.13( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.8( v 53'96 (0'0,53'96] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.18( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.5( v 53'96 (0'0,53'96] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.1b( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.15( v 58'99 lc 53'78 (0'0,58'99] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=58'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.19( v 53'96 (0'0,53'96] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1b( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.13( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.14( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1f( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.11( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.14( v 58'99 lc 53'86 (0'0,58'99] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=58'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1e( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1d( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.18( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[10.2( v 53'96 (0'0,53'96] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=53'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.15( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.6( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.7( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.0( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 53'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.3( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.d( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.e( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.16( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.5( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.8( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.4( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.b( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.19( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1c( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.10( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1a( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.a( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.12( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.f( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.9( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.17( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.c( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.1( v 53'2 (0'0,53'2] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 60 pg[11.2( v 53'2 (0'0,53'2] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=53'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 31 unknown, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.500111042 +0000 UTC m=+0.113848913 container create 90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.407388673 +0000 UTC m=+0.021126574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:26 np0005539563 systemd[1]: Started libpod-conmon-90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580.scope.
Nov 29 02:13:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.672685391 +0000 UTC m=+0.286423282 container init 90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.6789165 +0000 UTC m=+0.292654381 container start 90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:26 np0005539563 elegant_hopper[97001]: 167 167
Nov 29 02:13:26 np0005539563 systemd[1]: libpod-90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580.scope: Deactivated successfully.
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.685340615 +0000 UTC m=+0.299078516 container attach 90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.685695905 +0000 UTC m=+0.299433786 container died 90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-41cad6226fec25fc590ab664937c92c1827c78e5c77699ee7f55061a273e46fc-merged.mount: Deactivated successfully.
Nov 29 02:13:26 np0005539563 podman[96985]: 2025-11-29 07:13:26.839538355 +0000 UTC m=+0.453276236 container remove 90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:13:26 np0005539563 systemd[1]: libpod-conmon-90ed4f660e57a5ca0159261492f522fdbffb691fcd1ab9a3029a3be5d1c97580.scope: Deactivated successfully.
Nov 29 02:13:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: Reconfiguring mgr.compute-0.rotard (monmap changed)...
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: Reconfiguring daemon mgr.compute-0.rotard on compute-0
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:13:27 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 19 completed events
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:27 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 29 02:13:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:27 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Nov 29 02:13:27 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Nov 29 02:13:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:28 np0005539563 podman[97136]: 2025-11-29 07:13:28.04301798 +0000 UTC m=+0.021602748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 02:13:28 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 02:13:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:28.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:29 np0005539563 podman[97136]: 2025-11-29 07:13:29.851044041 +0000 UTC m=+1.829628779 container create 60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:29 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: Reconfiguring osd.0 (monmap changed)...
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: Reconfiguring daemon osd.0 on compute-0
Nov 29 02:13:29 np0005539563 systemd[1]: Started libpod-conmon-60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b.scope.
Nov 29 02:13:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 02:13:29 np0005539563 podman[97136]: 2025-11-29 07:13:29.974471454 +0000 UTC m=+1.953056212 container init 60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:13:29 np0005539563 podman[97136]: 2025-11-29 07:13:29.982636005 +0000 UTC m=+1.961220763 container start 60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:13:29 np0005539563 nervous_cray[97152]: 167 167
Nov 29 02:13:29 np0005539563 systemd[1]: libpod-60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b.scope: Deactivated successfully.
Nov 29 02:13:29 np0005539563 podman[97136]: 2025-11-29 07:13:29.997186921 +0000 UTC m=+1.975771669 container attach 60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:29 np0005539563 podman[97136]: 2025-11-29 07:13:29.998173768 +0000 UTC m=+1.976758526 container died 60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:30.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.14( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951343536s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.993530273s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.13( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951285362s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.993515015s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.13( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951218605s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.993515015s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1e( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951294899s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.993591309s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1e( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951241493s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.993591309s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1d( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950996399s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.993637085s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1b( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950812340s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.993484497s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1b( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950780869s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.993484497s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.7( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951155663s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.993911743s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1d( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950934410s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.993637085s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.7( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951123238s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.993911743s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.4( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951456070s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994506836s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.14( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951247215s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.993530273s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.4( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951419830s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994506836s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.3( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951364517s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994567871s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.3( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951292992s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994567871s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.17( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951548576s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994918823s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.f( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951152802s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994659424s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.f( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951125145s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994659424s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.16( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950822830s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994628906s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.8( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950770378s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994689941s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.8( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950731277s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994689941s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.16( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950699806s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994628906s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.17( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.951510429s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994918823s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.5( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950483322s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994659424s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.5( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950453758s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994659424s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1a( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950511932s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994796753s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1c( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950371742s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994750977s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1c( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950347900s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994750977s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.19( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950096130s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994750977s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.12( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950151443s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994827271s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.12( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950126648s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3ffd3b47d813ca865bf1f267457eda2ead9b4facd2e2f4fe7820d3f29bb4fdf0-merged.mount: Deactivated successfully.
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.19( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950062752s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994750977s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.a( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.949748993s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994796753s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.a( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.949723244s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994796753s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1a( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.950485229s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994796753s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1( v 53'2 (0'0,53'2] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.949737549s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994949341s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.1( v 53'2 (0'0,53'2] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.949667931s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994949341s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.e( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.949235916s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 active pruub 177.994613647s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 61 pg[11.e( v 53'2 (0'0,53'2] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.949193001s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=53'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.994613647s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:30 np0005539563 podman[97136]: 2025-11-29 07:13:30.325318836 +0000 UTC m=+2.303903584 container remove 60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:13:30 np0005539563 systemd[1]: libpod-conmon-60baaa004c7ff4e99f556d3cd8b7f186e38386733f790036f0c118eb4f4c038b.scope: Deactivated successfully.
Nov 29 02:13:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 446 B/s, 1 objects/s recovering
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:30 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:13:30 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:30 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:13:30 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 02:13:30 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 02:13:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:31 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 29 02:13:31 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:31 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Nov 29 02:13:31 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Nov 29 02:13:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:13:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:32.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:32 np0005539563 systemd-logind[785]: New session 34 of user zuul.
Nov 29 02:13:32 np0005539563 systemd[1]: Started Session 34 of User zuul.
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.262563705s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.308349609s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.262503624s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.308349609s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.262731552s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.308731079s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.262685776s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.308731079s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.262838364s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.308898926s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.262813568s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.308898926s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.263061523s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.309494019s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.263035774s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.309494019s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.263175964s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.309890747s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.289628983s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.336441040s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.263090134s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.309890747s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.289607048s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.336441040s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.289044380s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.336090088s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.289010048s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.336090088s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.289238930s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.336380005s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:32 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 62 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=14.289209366s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.336380005s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:32 np0005539563 ceph-mgr[74636]: [progress INFO root] Completed event 70593715-0f08-4adf-9a76-d87d9b9b7cc6 (Global Recovery Event) in 10 seconds
Nov 29 02:13:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 430 B/s, 1 objects/s recovering
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:32 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:13:32 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:32 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:13:32 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:13:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: Reconfiguring osd.1 (monmap changed)...
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: Reconfiguring daemon osd.1 on compute-1
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:33 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 63 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 02:13:33 np0005539563 python3.9[97334]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:33 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:13:33 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:33 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:13:33 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:13:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:34.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 64 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=63) [2]/[0] async=[2] r=0 lpr=63 pi=[56,63)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:34 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.vyxqrz (monmap changed)...
Nov 29 02:13:34 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.vyxqrz (monmap changed)...
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:34 np0005539563 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:13:34 np0005539563 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 02:13:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.360445023s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024047852s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.17( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.360339165s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024047852s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645030975s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.308792114s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.352898598s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.016693115s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.644975662s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.308792114s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.360528946s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024368286s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.360453606s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024368286s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.13( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.352776527s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.016693115s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645044327s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.309097290s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645019531s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.309097290s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645343781s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.309494019s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645320892s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.309494019s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359998703s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024353027s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.f( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359943390s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024353027s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645104408s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 182.309570312s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.645071983s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.309570312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359377861s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024063110s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=5 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359339714s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024063110s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359275818s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024108887s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.b( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359209061s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024108887s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359439850s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024398804s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:34 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 65 pg[9.3( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=65 pruub=15.359385490s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024398804s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:34.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:35 np0005539563 python3.9[97599]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: Reconfiguring mgr.compute-2.vyxqrz (monmap changed)...
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.vyxqrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: Reconfiguring daemon mgr.compute-2.vyxqrz on compute-2
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 02:13:35 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=66 pruub=14.333894730s) [2] async=[2] r=-1 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 186.024108887s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.7( v 53'1137 (0'0,53'1137] local-lis/les=63/64 n=6 ec=56/47 lis/c=63/56 les/c/f=64/57/0 sis=66 pruub=14.333831787s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.024108887s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:35 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 66 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 4 unknown, 7 peering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 219 B/s, 8 objects/s recovering
Nov 29 02:13:36 np0005539563 podman[97782]: 2025-11-29 07:13:36.445371594 +0000 UTC m=+0.078056551 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:13:36 np0005539563 podman[97782]: 2025-11-29 07:13:36.542106102 +0000 UTC m=+0.174791059 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 02:13:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:36.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 02:13:36 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 02:13:36 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 67 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:36 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 67 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:36 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 67 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:36 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 67 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:37 np0005539563 podman[97931]: 2025-11-29 07:13:37.224172092 +0000 UTC m=+0.071014510 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:13:37 np0005539563 podman[97931]: 2025-11-29 07:13:37.234099312 +0000 UTC m=+0.080941730 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:37 np0005539563 ceph-mgr[74636]: [progress INFO root] Writing back 20 completed events
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:37 np0005539563 podman[97997]: 2025-11-29 07:13:37.451729165 +0000 UTC m=+0.055384066 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 02:13:37 np0005539563 podman[97997]: 2025-11-29 07:13:37.467150154 +0000 UTC m=+0.070805055 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, version=2.2.4, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793)
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 02:13:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:13:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 4 active+remapped, 7 peering, 294 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 350 B/s, 13 objects/s recovering
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:38.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 02:13:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866867065s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 188.736999512s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866984367s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 188.737182617s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866802216s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 188.737030029s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.15( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866736412s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.736999512s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866828918s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 188.737091064s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.d( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866881371s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.737182617s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.1d( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=5 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866686821s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.737030029s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 68 pg[9.5( v 53'1137 (0'0,53'1137] local-lis/les=66/67 n=6 ec=56/47 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=13.866707802s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.737091064s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 02:13:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:40.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 02:13:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 906 B/s wr, 81 op/s; 73 B/s, 1 objects/s recovering
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:13:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:40 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 336b7e47-41cd-4c6e-bfdb-37e730f46db8 does not exist
Nov 29 02:13:40 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0616d5a5-1761-4a36-9be1-7f4739cd79ec does not exist
Nov 29 02:13:40 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 361471fa-c993-4783-97a4-7be75cc6348b does not exist
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:13:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 02:13:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 02:13:41 np0005539563 podman[98316]: 2025-11-29 07:13:41.581254535 +0000 UTC m=+0.043094472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:42.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 682 B/s wr, 61 op/s; 54 B/s, 0 objects/s recovering
Nov 29 02:13:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:13:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:44.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 547 B/s wr, 49 op/s; 44 B/s, 0 objects/s recovering
Nov 29 02:13:44 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Nov 29 02:13:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:13:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:44.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:13:45 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 02:13:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:46.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 0 objects/s recovering
Nov 29 02:13:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:46.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:47 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:13:47 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.651384830s, txc = 0x561be1fc1b00
Nov 29 02:13:47 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.651416779s
Nov 29 02:13:47 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.651417255s
Nov 29 02:13:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Nov 29 02:13:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 02:13:47 np0005539563 podman[98316]: 2025-11-29 07:13:47.185029187 +0000 UTC m=+5.646869104 container create f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:13:47 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.716608524s, txc = 0x561be1f85800
Nov 29 02:13:47 np0005539563 systemd[1]: Started libpod-conmon-f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b.scope.
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:13:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 02:13:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 02:13:47 np0005539563 podman[98316]: 2025-11-29 07:13:47.325954645 +0000 UTC m=+5.787794592 container init f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:47 np0005539563 podman[98316]: 2025-11-29 07:13:47.333261704 +0000 UTC m=+5.795101621 container start f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ptolemy, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:47 np0005539563 podman[98316]: 2025-11-29 07:13:47.33861741 +0000 UTC m=+5.800457357 container attach f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:47 np0005539563 mystifying_ptolemy[98335]: 167 167
Nov 29 02:13:47 np0005539563 systemd[1]: libpod-f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b.scope: Deactivated successfully.
Nov 29 02:13:47 np0005539563 podman[98316]: 2025-11-29 07:13:47.340875881 +0000 UTC m=+5.802715798 container died f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ptolemy, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-25e7857c2cfd22351207684ae81a61e9da36b18d931fa6d0d2067a3f238b4040-merged.mount: Deactivated successfully.
Nov 29 02:13:47 np0005539563 podman[98316]: 2025-11-29 07:13:47.409398512 +0000 UTC m=+5.871238429 container remove f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:13:47 np0005539563 systemd[1]: libpod-conmon-f31f9d83f8a2276f49bd95eda256fbc06bc7a1b3ba4b4dd8420c4dc7b20d4e5b.scope: Deactivated successfully.
Nov 29 02:13:47 np0005539563 podman[98361]: 2025-11-29 07:13:47.561848885 +0000 UTC m=+0.035637270 container create ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_spence, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:13:47 np0005539563 systemd[1]: Started libpod-conmon-ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48.scope.
Nov 29 02:13:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6aa5d0e9702a85241f825b367d720e52dd1e1745879b29dc137e3bfc6126592/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6aa5d0e9702a85241f825b367d720e52dd1e1745879b29dc137e3bfc6126592/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6aa5d0e9702a85241f825b367d720e52dd1e1745879b29dc137e3bfc6126592/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6aa5d0e9702a85241f825b367d720e52dd1e1745879b29dc137e3bfc6126592/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6aa5d0e9702a85241f825b367d720e52dd1e1745879b29dc137e3bfc6126592/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:47 np0005539563 podman[98361]: 2025-11-29 07:13:47.632085463 +0000 UTC m=+0.105873858 container init ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_spence, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 02:13:47 np0005539563 podman[98361]: 2025-11-29 07:13:47.641453807 +0000 UTC m=+0.115242192 container start ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_spence, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:13:47 np0005539563 podman[98361]: 2025-11-29 07:13:47.547230637 +0000 UTC m=+0.021019042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:47 np0005539563 podman[98361]: 2025-11-29 07:13:47.659985031 +0000 UTC m=+0.133773486 container attach ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:13:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:48.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:13:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 72 B/s, 2 objects/s recovering
Nov 29 02:13:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 02:13:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:13:48 np0005539563 serene_spence[98377]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:13:48 np0005539563 serene_spence[98377]: --> relative data size: 1.0
Nov 29 02:13:48 np0005539563 serene_spence[98377]: --> All data devices are unavailable
Nov 29 02:13:48 np0005539563 systemd[1]: libpod-ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48.scope: Deactivated successfully.
Nov 29 02:13:48 np0005539563 podman[98361]: 2025-11-29 07:13:48.592375182 +0000 UTC m=+1.066163567 container died ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d6aa5d0e9702a85241f825b367d720e52dd1e1745879b29dc137e3bfc6126592-merged.mount: Deactivated successfully.
Nov 29 02:13:48 np0005539563 podman[98361]: 2025-11-29 07:13:48.658923709 +0000 UTC m=+1.132712114 container remove ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:48 np0005539563 systemd[1]: libpod-conmon-ff88deb6dd9501209e0df87cb937d3cb46b85352eb4fb39d824227df792cbf48.scope: Deactivated successfully.
Nov 29 02:13:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.261192322s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 198.309616089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.261013031s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.309616089s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.261371613s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 198.310104370s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.261309624s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.310104370s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.287708282s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 198.336624146s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.287683487s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.336624146s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.259447098s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 198.308898926s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 70 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.259366035s) [1] r=-1 lpr=70 pi=[56,70)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.308898926s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.314239593 +0000 UTC m=+0.043269766 container create d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 02:13:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:13:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:13:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 02:13:49 np0005539563 systemd[1]: Started libpod-conmon-d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035.scope.
Nov 29 02:13:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 71 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:13:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.295242706 +0000 UTC m=+0.024272899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.398152872 +0000 UTC m=+0.127183075 container init d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.405723648 +0000 UTC m=+0.134753821 container start d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.409936302 +0000 UTC m=+0.138966465 container attach d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:13:49 np0005539563 laughing_faraday[98595]: 167 167
Nov 29 02:13:49 np0005539563 systemd[1]: libpod-d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035.scope: Deactivated successfully.
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.414319132 +0000 UTC m=+0.143349305 container died d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3aa6b8160e0c77178599daffc4e511ffe1ced738ab890b102f51f0ed76b8ec49-merged.mount: Deactivated successfully.
Nov 29 02:13:49 np0005539563 podman[98555]: 2025-11-29 07:13:49.462927312 +0000 UTC m=+0.191957485 container remove d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:13:49 np0005539563 systemd[1]: libpod-conmon-d1dd975ce5cf8ee4e27dd5b1142293b26a1e6563729e5fa78e135f9ba0139035.scope: Deactivated successfully.
Nov 29 02:13:49 np0005539563 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 02:13:49 np0005539563 systemd[1]: session-34.scope: Consumed 8.902s CPU time.
Nov 29 02:13:49 np0005539563 systemd-logind[785]: Session 34 logged out. Waiting for processes to exit.
Nov 29 02:13:49 np0005539563 systemd-logind[785]: Removed session 34.
Nov 29 02:13:49 np0005539563 podman[98618]: 2025-11-29 07:13:49.62404391 +0000 UTC m=+0.048625363 container create 92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:13:49 np0005539563 systemd[1]: Started libpod-conmon-92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8.scope.
Nov 29 02:13:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920ffb8158f404f0a8400e888a65326842e3bf31bceba618faa99e6db9b49f29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920ffb8158f404f0a8400e888a65326842e3bf31bceba618faa99e6db9b49f29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920ffb8158f404f0a8400e888a65326842e3bf31bceba618faa99e6db9b49f29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920ffb8158f404f0a8400e888a65326842e3bf31bceba618faa99e6db9b49f29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:49 np0005539563 podman[98618]: 2025-11-29 07:13:49.605719552 +0000 UTC m=+0.030301025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:49 np0005539563 podman[98618]: 2025-11-29 07:13:49.705541924 +0000 UTC m=+0.130123387 container init 92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:49 np0005539563 podman[98618]: 2025-11-29 07:13:49.712359059 +0000 UTC m=+0.136940512 container start 92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:13:49 np0005539563 podman[98618]: 2025-11-29 07:13:49.715879995 +0000 UTC m=+0.140461468 container attach 92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:50.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 02:13:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:13:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 02:13:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 02:13:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 4 unknown, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Nov 29 02:13:50 np0005539563 angry_clarke[98635]: {
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:    "0": [
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:        {
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "devices": [
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "/dev/loop3"
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            ],
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "lv_name": "ceph_lv0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "lv_size": "7511998464",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "name": "ceph_lv0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "tags": {
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.cluster_name": "ceph",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.crush_device_class": "",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.encrypted": "0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.osd_id": "0",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.type": "block",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:                "ceph.vdo": "0"
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            },
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "type": "block",
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:            "vg_name": "ceph_vg0"
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:        }
Nov 29 02:13:50 np0005539563 angry_clarke[98635]:    ]
Nov 29 02:13:50 np0005539563 angry_clarke[98635]: }
Nov 29 02:13:50 np0005539563 systemd[1]: libpod-92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8.scope: Deactivated successfully.
Nov 29 02:13:50 np0005539563 podman[98618]: 2025-11-29 07:13:50.482722118 +0000 UTC m=+0.907303571 container died 92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:13:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-920ffb8158f404f0a8400e888a65326842e3bf31bceba618faa99e6db9b49f29-merged.mount: Deactivated successfully.
Nov 29 02:13:50 np0005539563 podman[98618]: 2025-11-29 07:13:50.541003261 +0000 UTC m=+0.965584714 container remove 92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:13:50 np0005539563 systemd[1]: libpod-conmon-92115ddf7a6cd51e14720c2784c19b1dc19f8f8aa2113970ae5024c1f5c77ee8.scope: Deactivated successfully.
Nov 29 02:13:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.164354757 +0000 UTC m=+0.088250399 container create 471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 72 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.099250207 +0000 UTC m=+0.023145869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 72 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 72 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 72 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[56,71)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:13:51 np0005539563 systemd[1]: Started libpod-conmon-471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4.scope.
Nov 29 02:13:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.263266154 +0000 UTC m=+0.187161796 container init 471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.269957615 +0000 UTC m=+0.193853257 container start 471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:51 np0005539563 youthful_hellman[98813]: 167 167
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.275524666 +0000 UTC m=+0.199420338 container attach 471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:51 np0005539563 systemd[1]: libpod-471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4.scope: Deactivated successfully.
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.276873454 +0000 UTC m=+0.200769116 container died 471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:13:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-64405f6765acf66a304a3905ad773dd2d815f83d3ffd2e5e55281ef98e011ee3-merged.mount: Deactivated successfully.
Nov 29 02:13:51 np0005539563 podman[98797]: 2025-11-29 07:13:51.32641602 +0000 UTC m=+0.250311662 container remove 471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:13:51 np0005539563 systemd[1]: libpod-conmon-471ccee78bb144d7a367bd0109879c608519e329a10185fd88bac6f21c7f7fa4.scope: Deactivated successfully.
Nov 29 02:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 02:13:51 np0005539563 podman[98837]: 2025-11-29 07:13:51.48213743 +0000 UTC m=+0.040073179 container create 31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 02:13:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 73 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.676594734s) [1] async=[1] r=-1 lpr=73 pi=[56,73)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 202.990432739s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 73 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.676445961s) [1] r=-1 lpr=73 pi=[56,73)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.990432739s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 73 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.683156013s) [1] async=[1] r=-1 lpr=73 pi=[56,73)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 202.997238159s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 73 pg[9.e( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=73 pruub=15.682942390s) [1] r=-1 lpr=73 pi=[56,73)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.997238159s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:51 np0005539563 systemd[1]: Started libpod-conmon-31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3.scope.
Nov 29 02:13:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:13:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4343723afb835f9d024d53cd7ecae9185c8be3a367fa6384b6ae7629aea95e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4343723afb835f9d024d53cd7ecae9185c8be3a367fa6384b6ae7629aea95e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4343723afb835f9d024d53cd7ecae9185c8be3a367fa6384b6ae7629aea95e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4343723afb835f9d024d53cd7ecae9185c8be3a367fa6384b6ae7629aea95e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:51 np0005539563 podman[98837]: 2025-11-29 07:13:51.465694723 +0000 UTC m=+0.023630502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:51 np0005539563 podman[98837]: 2025-11-29 07:13:51.56347509 +0000 UTC m=+0.121410849 container init 31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:51 np0005539563 podman[98837]: 2025-11-29 07:13:51.57011946 +0000 UTC m=+0.128055209 container start 31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:13:51 np0005539563 podman[98837]: 2025-11-29 07:13:51.574615932 +0000 UTC m=+0.132551781 container attach 31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:52.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 02:13:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 4 unknown, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]: {
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:        "osd_id": 0,
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:        "type": "bluestore"
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]:    }
Nov 29 02:13:52 np0005539563 stupefied_nobel[98854]: }
Nov 29 02:13:52 np0005539563 systemd[1]: libpod-31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3.scope: Deactivated successfully.
Nov 29 02:13:52 np0005539563 podman[98837]: 2025-11-29 07:13:52.476142325 +0000 UTC m=+1.034078074 container died 31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:13:52 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 02:13:52 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 02:13:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:54.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 02:13:54 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 02:13:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:13:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:56.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:13:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 02:13:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 580 B/s wr, 52 op/s; 15 B/s, 1 objects/s recovering
Nov 29 02:13:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:56.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d4343723afb835f9d024d53cd7ecae9185c8be3a367fa6384b6ae7629aea95e2-merged.mount: Deactivated successfully.
Nov 29 02:13:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 02:13:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 02:13:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:13:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:13:58.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:13:58 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 74 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74 pruub=9.084896088s) [1] async=[1] r=-1 lpr=74 pi=[56,74)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 202.997619629s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:58 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 74 pg[9.16( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=5 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74 pruub=9.084759712s) [1] r=-1 lpr=74 pi=[56,74)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.997619629s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:58 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 74 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74 pruub=9.084315300s) [1] async=[1] r=-1 lpr=74 pi=[56,74)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 202.997253418s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:13:58 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 74 pg[9.6( v 53'1137 (0'0,53'1137] local-lis/les=71/72 n=6 ec=56/47 lis/c=71/56 les/c/f=72/57/0 sis=74 pruub=9.084237099s) [1] r=-1 lpr=74 pi=[56,74)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.997253418s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:13:58 np0005539563 podman[98837]: 2025-11-29 07:13:58.16624463 +0000 UTC m=+6.724180379 container remove 31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:13:58 np0005539563 systemd[1]: libpod-conmon-31c34f2e50e808c092852ffa89ea2f88617fd2ff933a50c24fb60f3ead7d14a3.scope: Deactivated successfully.
Nov 29 02:13:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 46 op/s; 13 B/s, 1 objects/s recovering
Nov 29 02:13:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:13:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:13:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:13:58.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:13:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:13:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 02:14:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:14:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 46a37ed5-03fa-49e1-9e4e-e9a4e48668ae does not exist
Nov 29 02:14:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 87156956-f791-4736-b87f-62c1895bde03 does not exist
Nov 29 02:14:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c6abf474-c302-4916-891f-ee4a89032492 does not exist
Nov 29 02:14:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 459 B/s wr, 41 op/s; 12 B/s, 1 objects/s recovering
Nov 29 02:14:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:00.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 02:14:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:14:01 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 29 02:14:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 02:14:01 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 29 02:14:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:14:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:04.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 02:14:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 02:14:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:14:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:14:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 02:14:05 np0005539563 systemd-logind[785]: New session 35 of user zuul.
Nov 29 02:14:05 np0005539563 systemd[1]: Started Session 35 of User zuul.
Nov 29 02:14:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:14:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 02:14:05 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 02:14:05 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 76 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=12.844593048s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 214.315322876s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:05 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 76 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=12.844522476s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.315322876s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:05 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 76 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=12.866133690s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 214.337081909s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:05 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 76 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=76 pruub=12.866068840s) [2] r=-1 lpr=76 pi=[56,76)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.337081909s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 02:14:06 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 77 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=0 lpr=77 pi=[56,77)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:06 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 77 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=0 lpr=77 pi=[56,77)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:06 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 77 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=0 lpr=77 pi=[56,77)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:06 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 77 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] r=0 lpr=77 pi=[56,77)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:06.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:14:06 np0005539563 python3.9[99150]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:14:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 02:14:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:14:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:06.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 02:14:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:14:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 02:14:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 78 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=78 pruub=11.432009697s) [2] r=-1 lpr=78 pi=[56,78)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 214.282211304s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 78 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=78 pruub=11.431934357s) [2] r=-1 lpr=78 pi=[56,78)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.282211304s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 78 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=78 pruub=11.463809967s) [2] r=-1 lpr=78 pi=[56,78)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 214.315185547s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 78 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=78 pruub=11.463729858s) [2] r=-1 lpr=78 pi=[56,78)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.315185547s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 78 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=77/78 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[56,77)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 78 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=77/78 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=77) [2]/[0] async=[2] r=0 lpr=77 pi=[56,77)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:14:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 02:14:07 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 02:14:07 np0005539563 python3.9[99324]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:14:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 02:14:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:08.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 02:14:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=77/78 n=5 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.883929253s) [2] async=[2] r=-1 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 218.857528687s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.18( v 53'1137 (0'0,53'1137] local-lis/les=77/78 n=5 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.883790970s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.857528687s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=77/78 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.884093285s) [2] async=[2] r=-1 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 218.858169556s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 79 pg[9.8( v 53'1137 (0'0,53'1137] local-lis/les=77/78 n=6 ec=56/47 lis/c=77/56 les/c/f=78/57/0 sis=79 pruub=14.883928299s) [2] r=-1 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.858169556s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 02:14:08 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 02:14:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Nov 29 02:14:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 02:14:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:14:08 np0005539563 python3.9[99481]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:08.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 02:14:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:14:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:14:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 02:14:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 02:14:09 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 80 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80 pruub=9.316864014s) [1] r=-1 lpr=80 pi=[56,80)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 214.315170288s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:09 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 80 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80 pruub=9.316442490s) [1] r=-1 lpr=80 pi=[56,80)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 214.315185547s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:09 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 80 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80 pruub=9.316366196s) [1] r=-1 lpr=80 pi=[56,80)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.315170288s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:09 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 80 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=80 pruub=9.315837860s) [1] r=-1 lpr=80 pi=[56,80)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.315185547s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:09 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 80 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:09 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 80 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[56,79)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:10.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:10 np0005539563 python3.9[99634]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:14:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 02:14:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:14:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 02:14:10 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=5 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81 pruub=14.986483574s) [2] async=[2] r=-1 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 221.011657715s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=5 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81 pruub=14.986371040s) [2] r=-1 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.011657715s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81 pruub=14.984715462s) [2] async=[2] r=-1 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 221.011734009s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:10 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 81 pg[9.9( v 53'1137 (0'0,53'1137] local-lis/les=79/80 n=6 ec=56/47 lis/c=79/56 les/c/f=80/57/0 sis=81 pruub=14.984664917s) [2] r=-1 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.011734009s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Nov 29 02:14:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 02:14:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 02:14:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 02:14:11 np0005539563 python3.9[99789]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:14:11 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 02:14:11 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 02:14:11 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 82 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] async=[1] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:11 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 82 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=81) [1]/[0] async=[1] r=0 lpr=81 pi=[56,81)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:11 np0005539563 python3.9[99941]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:14:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:12.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 02:14:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 02:14:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 02:14:12 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 83 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.196557999s) [1] async=[1] r=-1 lpr=83 pi=[56,83)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 223.260101318s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:12 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 83 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=6 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.200024605s) [1] async=[1] r=-1 lpr=83 pi=[56,83)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 223.263839722s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:12 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 83 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=5 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.196057320s) [1] r=-1 lpr=83 pi=[56,83)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.260101318s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:12 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 83 pg[9.a( v 53'1137 (0'0,53'1137] local-lis/les=81/82 n=6 ec=56/47 lis/c=81/56 les/c/f=82/57/0 sis=83 pruub=15.199804306s) [1] r=-1 lpr=83 pi=[56,83)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.263839722s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 29 02:14:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 29 02:14:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:14:12
Nov 29 02:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Nov 29 02:14:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:12 np0005539563 python3.9[100092]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:14:13 np0005539563 network[100109]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:14:13 np0005539563 network[100110]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:14:13 np0005539563 network[100111]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:14:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 02:14:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 02:14:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 02:14:13 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 02:14:13 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 02:14:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:14.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 487 B/s wr, 44 op/s; 52 B/s, 3 objects/s recovering
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 02:14:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:14:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:14:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:16.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 34 op/s; 106 B/s, 4 objects/s recovering
Nov 29 02:14:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 02:14:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:14:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 02:14:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:17 np0005539563 python3.9[100423]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:14:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:14:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 02:14:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 02:14:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:14:17 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 02:14:17 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 02:14:17 np0005539563 python3.9[100573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:14:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:14:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s; 91 B/s, 4 objects/s recovering
Nov 29 02:14:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 02:14:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:14:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 02:14:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 02:14:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 02:14:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:14:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 02:14:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 02:14:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:14:19 np0005539563 python3.9[100728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:14:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:20.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 02:14:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:14:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 02:14:20 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 02:14:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:20.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:21 np0005539563 python3.9[100887]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:14:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 02:14:22 np0005539563 python3.9[100971]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:14:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:22.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:14:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:14:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:22.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:23 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 02:14:23 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 02:14:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:24.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:24.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 02:14:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 02:14:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:26.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:26.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:28 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 02:14:28 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 02:14:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:28.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 02:14:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 02:14:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 02:14:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 02:14:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:30.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 02:14:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 02:14:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:32.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 2 unknown, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:33 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 02:14:33 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 02:14:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:34.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 458 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 22 op/s; 82 B/s, 3 objects/s recovering
Nov 29 02:14:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 02:14:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:14:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 02:14:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:34.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:14:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 02:14:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 02:14:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:14:35 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 02:14:35 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 02:14:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:36.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 458 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 26 op/s; 101 B/s, 3 objects/s recovering
Nov 29 02:14:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:14:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:14:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 02:14:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:14:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:14:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 02:14:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 02:14:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:38.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:14:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 458 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s; 89 B/s, 3 objects/s recovering
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 02:14:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 02:14:38 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 94 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=94 pruub=12.049140930s) [1] r=-1 lpr=94 pi=[56,94)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 246.316604614s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:38 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 94 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=94 pruub=12.048787117s) [1] r=-1 lpr=94 pi=[56,94)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 246.316604614s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:38 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 02:14:38 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 02:14:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:38.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:14:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:14:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 02:14:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 02:14:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 02:14:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 95 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=95) [1]/[0] r=0 lpr=95 pi=[56,95)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:39 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 95 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=95) [1]/[0] r=0 lpr=95 pi=[56,95)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:40.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 2 active+remapped, 1 unknown, 302 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 40 B/s, 2 objects/s recovering
Nov 29 02:14:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 02:14:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 02:14:40 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 02:14:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 02:14:40 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 02:14:40 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 96 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=95/96 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=95) [1]/[0] async=[1] r=0 lpr=95 pi=[56,95)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:40.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 02:14:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 02:14:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 02:14:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 97 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=95/96 n=6 ec=56/47 lis/c=95/56 les/c/f=96/57/0 sis=97 pruub=15.420734406s) [1] async=[1] r=-1 lpr=97 pi=[56,97)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 252.360183716s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:41 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 97 pg[9.10( v 53'1137 (0'0,53'1137] local-lis/les=95/96 n=6 ec=56/47 lis/c=95/56 les/c/f=96/57/0 sis=97 pruub=15.420147896s) [1] r=-1 lpr=97 pi=[56,97)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 252.360183716s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 02:14:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 02:14:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 02:14:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:42.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 2 active+remapped, 1 unknown, 302 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 55 B/s, 3 objects/s recovering
Nov 29 02:14:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:14:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:42.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:44.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 458 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 0 B/s wr, 18 op/s; 44 B/s, 2 objects/s recovering
Nov 29 02:14:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 02:14:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:14:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:44.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 02:14:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:14:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 02:14:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 02:14:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:14:45 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 99 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=99 pruub=12.548916817s) [1] r=-1 lpr=99 pi=[56,99)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 254.316589355s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:45 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 99 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=99 pruub=12.548853874s) [1] r=-1 lpr=99 pi=[56,99)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.316589355s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 02:14:46 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 100 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=0 lpr=100 pi=[56,100)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:46 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 100 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] r=0 lpr=100 pi=[56,100)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:46.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:14:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 458 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 193 B/s wr, 17 op/s; 0 B/s, 0 objects/s recovering
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 02:14:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:14:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 02:14:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 02:14:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:46.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 02:14:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:14:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 02:14:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 02:14:47 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 101 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=101 pruub=11.045699120s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 254.316726685s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:47 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 101 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=101 pruub=11.045097351s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.316726685s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:47 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 101 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=100/101 n=6 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[56,100)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:14:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:48.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 1 active+remapped, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 170 B/s wr, 15 op/s; 18 B/s, 1 objects/s recovering
Nov 29 02:14:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 02:14:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:14:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 02:14:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:14:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 02:14:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 02:14:48 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 102 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=100/101 n=6 ec=56/47 lis/c=100/56 les/c/f=101/57/0 sis=102 pruub=14.730236053s) [1] async=[1] r=-1 lpr=102 pi=[56,102)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 259.290496826s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:48 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 102 pg[9.11( v 53'1137 (0'0,53'1137] local-lis/les=100/101 n=6 ec=56/47 lis/c=100/56 les/c/f=101/57/0 sis=102 pruub=14.729927063s) [1] r=-1 lpr=102 pi=[56,102)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.290496826s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:48 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 102 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=102) [1]/[0] r=0 lpr=102 pi=[56,102)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:48 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 102 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=56/57 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=102) [1]/[0] r=0 lpr=102 pi=[56,102)/1 crt=53'1137 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:48.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:14:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:14:49 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 02:14:49 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 02:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 02:14:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:50.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 02:14:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 1 peering, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Nov 29 02:14:50 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 02:14:50 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 02:14:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 02:14:50 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 103 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=102/103 n=5 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[56,102)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:14:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:50.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 02:14:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 02:14:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 02:14:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 104 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=102/103 n=5 ec=56/47 lis/c=102/56 les/c/f=103/57/0 sis=104 pruub=15.322547913s) [1] async=[1] r=-1 lpr=104 pi=[56,104)/1 crt=53'1137 lcod 0'0 mlcod 0'0 active pruub 262.401611328s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:14:51 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 104 pg[9.12( v 53'1137 (0'0,53'1137] local-lis/les=102/103 n=5 ec=56/47 lis/c=102/56 les/c/f=103/57/0 sis=104 pruub=15.322480202s) [1] r=-1 lpr=104 pi=[56,104)/1 crt=53'1137 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 262.401611328s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:14:51 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 29 02:14:51 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 29 02:14:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:52.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 02:14:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 02:14:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 02:14:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 1 peering, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:14:52 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 29 02:14:52 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 29 02:14:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:52.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:53 np0005539563 systemd[75963]: Created slice User Background Tasks Slice.
Nov 29 02:14:53 np0005539563 systemd[75963]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 02:14:53 np0005539563 systemd[75963]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 02:14:53 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 02:14:53 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 02:14:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:54.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 19 B/s, 0 objects/s recovering
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 02:14:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:14:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:55 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 02:14:55 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 02:14:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:56.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:14:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 02:14:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 02:14:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:14:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 02:14:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 02:14:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:56.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 02:14:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:14:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 02:14:57 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 02:14:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:14:57 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 29 02:14:57 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 29 02:14:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:14:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:14:58.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:14:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 02:14:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:14:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 02:14:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 02:14:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 02:14:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 02:14:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:14:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:14:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:14:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:14:58.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:14:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:14:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 02:14:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:14:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 02:14:59 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 02:15:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:00 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:15:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 02:15:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 02:15:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 02:15:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 29 02:15:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 29 02:15:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:00.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 02:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 02:15:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 02:15:01 np0005539563 podman[101406]: 2025-11-29 07:15:01.482933946 +0000 UTC m=+0.088778876 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:15:01 np0005539563 podman[101406]: 2025-11-29 07:15:01.868150651 +0000 UTC m=+0.473995601 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 02:15:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:02 np0005539563 podman[101563]: 2025-11-29 07:15:02.679177908 +0000 UTC m=+0.054013866 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:15:02 np0005539563 podman[101563]: 2025-11-29 07:15:02.686162779 +0000 UTC m=+0.060998647 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:15:02 np0005539563 podman[101627]: 2025-11-29 07:15:02.954790377 +0000 UTC m=+0.117360166 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, release=1793, name=keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4)
Nov 29 02:15:02 np0005539563 podman[101627]: 2025-11-29 07:15:02.97025231 +0000 UTC m=+0.132822099 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, vcs-type=git, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Nov 29 02:15:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:02.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.161597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503161869, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7499, "num_deletes": 251, "total_data_size": 9579182, "memory_usage": 9764880, "flush_reason": "Manual Compaction"}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503277612, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7790471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 7635, "table_properties": {"data_size": 7762610, "index_size": 18274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 79321, "raw_average_key_size": 23, "raw_value_size": 7696925, "raw_average_value_size": 2273, "num_data_blocks": 809, "num_entries": 3385, "num_filter_entries": 3385, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400096, "oldest_key_time": 1764400096, "file_creation_time": 1764400503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 116140 microseconds, and 20246 cpu microseconds.
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.277823) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7790471 bytes OK
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.277897) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.280146) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.280193) EVENT_LOG_v1 {"time_micros": 1764400503280183, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.280243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9546286, prev total WAL file size 9586149, number of live WAL files 2.
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.283615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7607KB) 13(53KB) 8(1944B)]
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503283791, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7847252, "oldest_snapshot_seqno": -1}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3197 keys, 7802612 bytes, temperature: kUnknown
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503346568, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7802612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7775233, "index_size": 18309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 77226, "raw_average_key_size": 24, "raw_value_size": 7711342, "raw_average_value_size": 2412, "num_data_blocks": 813, "num_entries": 3197, "num_filter_entries": 3197, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764400503, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.347006) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7802612 bytes
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.348604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.8 rd, 124.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3489, records dropped: 292 output_compression: NoCompression
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.348623) EVENT_LOG_v1 {"time_micros": 1764400503348613, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62865, "compaction_time_cpu_micros": 18904, "output_level": 6, "num_output_files": 1, "total_output_size": 7802612, "num_input_records": 3489, "num_output_records": 3197, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503349913, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503349974, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400503350012, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:15:03.283318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 02:15:03 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 10fcef1e-881c-4d69-a63f-032347ca8a83 does not exist
Nov 29 02:15:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 184da67d-6f89-4537-8e73-92d09af6c9d3 does not exist
Nov 29 02:15:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 09dd2116-d7bf-47d4-9487-e429b48de100 does not exist
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:04 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 29 02:15:04 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 29 02:15:04 np0005539563 podman[101932]: 2025-11-29 07:15:04.659388038 +0000 UTC m=+0.022835135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:04 np0005539563 podman[101932]: 2025-11-29 07:15:04.794118669 +0000 UTC m=+0.157565736 container create 94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:04 np0005539563 systemd[1]: Started libpod-conmon-94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80.scope.
Nov 29 02:15:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:15:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:04.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:05 np0005539563 podman[101932]: 2025-11-29 07:15:05.207887193 +0000 UTC m=+0.571334350 container init 94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:05 np0005539563 podman[101932]: 2025-11-29 07:15:05.222316167 +0000 UTC m=+0.585763274 container start 94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:15:05 np0005539563 sweet_almeida[101948]: 167 167
Nov 29 02:15:05 np0005539563 systemd[1]: libpod-94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80.scope: Deactivated successfully.
Nov 29 02:15:05 np0005539563 podman[101932]: 2025-11-29 07:15:05.395688304 +0000 UTC m=+0.759135371 container attach 94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:15:05 np0005539563 podman[101932]: 2025-11-29 07:15:05.396340151 +0000 UTC m=+0.759787228 container died 94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a47e51e3f9cf5d2c3af7c690c053470abbd98536c48001557c62f08606d9ea3d-merged.mount: Deactivated successfully.
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:06.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 02:15:06 np0005539563 podman[101932]: 2025-11-29 07:15:06.515091936 +0000 UTC m=+1.878539023 container remove 94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 02:15:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 02:15:06 np0005539563 systemd[1]: libpod-conmon-94d4285e1bcd100151c74a5cbbd8eae169b25114a5e4e04d186a2bf43191fa80.scope: Deactivated successfully.
Nov 29 02:15:06 np0005539563 podman[101972]: 2025-11-29 07:15:06.751440433 +0000 UTC m=+0.106847269 container create 03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:06 np0005539563 podman[101972]: 2025-11-29 07:15:06.666113312 +0000 UTC m=+0.021520178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:06 np0005539563 systemd[1]: Started libpod-conmon-03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365.scope.
Nov 29 02:15:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e72d814fd2cf74dc97fb5642b789660b94e9d94a28f2ee7c429f558dba17b6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e72d814fd2cf74dc97fb5642b789660b94e9d94a28f2ee7c429f558dba17b6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e72d814fd2cf74dc97fb5642b789660b94e9d94a28f2ee7c429f558dba17b6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e72d814fd2cf74dc97fb5642b789660b94e9d94a28f2ee7c429f558dba17b6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e72d814fd2cf74dc97fb5642b789660b94e9d94a28f2ee7c429f558dba17b6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:06 np0005539563 podman[101972]: 2025-11-29 07:15:06.987719418 +0000 UTC m=+0.343126284 container init 03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:06.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:07 np0005539563 podman[101972]: 2025-11-29 07:15:07.002865432 +0000 UTC m=+0.358272288 container start 03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:07 np0005539563 podman[101972]: 2025-11-29 07:15:07.06062564 +0000 UTC m=+0.416032476 container attach 03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:15:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:15:07 np0005539563 xenodochial_tesla[101989]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:15:07 np0005539563 xenodochial_tesla[101989]: --> relative data size: 1.0
Nov 29 02:15:07 np0005539563 xenodochial_tesla[101989]: --> All data devices are unavailable
Nov 29 02:15:07 np0005539563 systemd[1]: libpod-03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365.scope: Deactivated successfully.
Nov 29 02:15:07 np0005539563 podman[101972]: 2025-11-29 07:15:07.888416316 +0000 UTC m=+1.243823152 container died 03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:15:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9e72d814fd2cf74dc97fb5642b789660b94e9d94a28f2ee7c429f558dba17b6f-merged.mount: Deactivated successfully.
Nov 29 02:15:07 np0005539563 podman[101972]: 2025-11-29 07:15:07.953212516 +0000 UTC m=+1.308619352 container remove 03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:07 np0005539563 systemd[1]: libpod-conmon-03e902e801eced89b73ebcbfef2d93f75f6831fd7a5a2dbbadb045b6d79cd365.scope: Deactivated successfully.
Nov 29 02:15:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 02:15:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 02:15:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:15:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 02:15:08 np0005539563 podman[102160]: 2025-11-29 07:15:08.565041391 +0000 UTC m=+0.037812693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:08 np0005539563 podman[102160]: 2025-11-29 07:15:08.828503989 +0000 UTC m=+0.301275261 container create 512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:15:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 02:15:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 02:15:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 02:15:08 np0005539563 systemd[1]: Started libpod-conmon-512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f.scope.
Nov 29 02:15:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:15:08 np0005539563 podman[102160]: 2025-11-29 07:15:08.950881203 +0000 UTC m=+0.423652495 container init 512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:08 np0005539563 podman[102160]: 2025-11-29 07:15:08.962376457 +0000 UTC m=+0.435147729 container start 512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:15:08 np0005539563 podman[102160]: 2025-11-29 07:15:08.965608205 +0000 UTC m=+0.438379497 container attach 512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:15:08 np0005539563 nostalgic_brown[102177]: 167 167
Nov 29 02:15:08 np0005539563 systemd[1]: libpod-512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f.scope: Deactivated successfully.
Nov 29 02:15:08 np0005539563 podman[102160]: 2025-11-29 07:15:08.96760822 +0000 UTC m=+0.440379492 container died 512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:15:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:08.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-003f58d86323c1550a0242b778bf7fd095cdbd49382c34c8f6978c449d0f83d8-merged.mount: Deactivated successfully.
Nov 29 02:15:09 np0005539563 podman[102160]: 2025-11-29 07:15:09.014622035 +0000 UTC m=+0.487393307 container remove 512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:15:09 np0005539563 systemd[1]: libpod-conmon-512abd1c4b6603a8dc203d9aa1d2ea7bc4bd8325fde138e5b35ca80291eee38f.scope: Deactivated successfully.
Nov 29 02:15:09 np0005539563 podman[102202]: 2025-11-29 07:15:09.156647504 +0000 UTC m=+0.040796044 container create 12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cori, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:09 np0005539563 systemd[1]: Started libpod-conmon-12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310.scope.
Nov 29 02:15:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:15:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bfee60733adb4a0477e8b322c827c32d34a42845c2c7b84a38ea20947c79a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bfee60733adb4a0477e8b322c827c32d34a42845c2c7b84a38ea20947c79a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bfee60733adb4a0477e8b322c827c32d34a42845c2c7b84a38ea20947c79a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bfee60733adb4a0477e8b322c827c32d34a42845c2c7b84a38ea20947c79a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:09 np0005539563 podman[102202]: 2025-11-29 07:15:09.229402512 +0000 UTC m=+0.113551082 container init 12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:09 np0005539563 podman[102202]: 2025-11-29 07:15:09.137727328 +0000 UTC m=+0.021875898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:09 np0005539563 podman[102202]: 2025-11-29 07:15:09.236851146 +0000 UTC m=+0.120999686 container start 12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:15:09 np0005539563 podman[102202]: 2025-11-29 07:15:09.240027243 +0000 UTC m=+0.124175813 container attach 12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:15:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:15:09 np0005539563 loving_cori[102219]: {
Nov 29 02:15:09 np0005539563 loving_cori[102219]:    "0": [
Nov 29 02:15:09 np0005539563 loving_cori[102219]:        {
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "devices": [
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "/dev/loop3"
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            ],
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "lv_name": "ceph_lv0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "lv_size": "7511998464",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "name": "ceph_lv0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "tags": {
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.cluster_name": "ceph",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.crush_device_class": "",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.encrypted": "0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.osd_id": "0",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.type": "block",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:                "ceph.vdo": "0"
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            },
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "type": "block",
Nov 29 02:15:09 np0005539563 loving_cori[102219]:            "vg_name": "ceph_vg0"
Nov 29 02:15:09 np0005539563 loving_cori[102219]:        }
Nov 29 02:15:09 np0005539563 loving_cori[102219]:    ]
Nov 29 02:15:09 np0005539563 loving_cori[102219]: }
Nov 29 02:15:10 np0005539563 systemd[1]: libpod-12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310.scope: Deactivated successfully.
Nov 29 02:15:10 np0005539563 podman[102202]: 2025-11-29 07:15:10.023038635 +0000 UTC m=+0.907187185 container died 12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cori, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-85bfee60733adb4a0477e8b322c827c32d34a42845c2c7b84a38ea20947c79a8-merged.mount: Deactivated successfully.
Nov 29 02:15:10 np0005539563 podman[102202]: 2025-11-29 07:15:10.088725129 +0000 UTC m=+0.972873669 container remove 12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:15:10 np0005539563 systemd[1]: libpod-conmon-12b6dc747ac735312b5ad4fa78792d6daca3c819b968fc6384acdde7232c2310.scope: Deactivated successfully.
Nov 29 02:15:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:15:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:15:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 150 B/s wr, 12 op/s; 32 B/s, 1 objects/s recovering
Nov 29 02:15:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 02:15:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.680323522 +0000 UTC m=+0.043968793 container create 1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:15:10 np0005539563 systemd[1]: Started libpod-conmon-1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061.scope.
Nov 29 02:15:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.661175418 +0000 UTC m=+0.024820719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.879627107 +0000 UTC m=+0.243272398 container init 1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_benz, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.887472971 +0000 UTC m=+0.251118242 container start 1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:10 np0005539563 brave_benz[102398]: 167 167
Nov 29 02:15:10 np0005539563 systemd[1]: libpod-1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061.scope: Deactivated successfully.
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.89145455 +0000 UTC m=+0.255099851 container attach 1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.894485713 +0000 UTC m=+0.258130994 container died 1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_benz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-78e84bd4c3864b4cfa3bf56247a322ad31d3bccbf200e88e6d0fc14c424fe15d-merged.mount: Deactivated successfully.
Nov 29 02:15:10 np0005539563 podman[102382]: 2025-11-29 07:15:10.938661639 +0000 UTC m=+0.302306900 container remove 1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:15:10 np0005539563 systemd[1]: libpod-conmon-1f3afca6bffc9a2cc792d66972deb7541f89bd67fe6bcc9fc6c09152d7d39061.scope: Deactivated successfully.
Nov 29 02:15:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:10.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:15:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 02:15:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 02:15:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:11 np0005539563 podman[102422]: 2025-11-29 07:15:11.116157209 +0000 UTC m=+0.065189643 container create 64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:15:11 np0005539563 podman[102422]: 2025-11-29 07:15:11.076115005 +0000 UTC m=+0.025147469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 02:15:11 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=116) [0] r=0 lpr=116 pi=[81,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:15:11 np0005539563 systemd[1]: Started libpod-conmon-64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439.scope.
Nov 29 02:15:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:15:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30bd867fbfb388b163d5b0a7f7aa5f5e2af56852de57d54c3894dbe60d440ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30bd867fbfb388b163d5b0a7f7aa5f5e2af56852de57d54c3894dbe60d440ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30bd867fbfb388b163d5b0a7f7aa5f5e2af56852de57d54c3894dbe60d440ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30bd867fbfb388b163d5b0a7f7aa5f5e2af56852de57d54c3894dbe60d440ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:11 np0005539563 podman[102422]: 2025-11-29 07:15:11.276353726 +0000 UTC m=+0.225386270 container init 64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:15:11 np0005539563 podman[102422]: 2025-11-29 07:15:11.282888124 +0000 UTC m=+0.231920568 container start 64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hellman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:11 np0005539563 podman[102422]: 2025-11-29 07:15:11.297797202 +0000 UTC m=+0.246829666 container attach 64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hellman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:15:11 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 02:15:11 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 02:15:12 np0005539563 charming_hellman[102439]: {
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:        "osd_id": 0,
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:        "type": "bluestore"
Nov 29 02:15:12 np0005539563 charming_hellman[102439]:    }
Nov 29 02:15:12 np0005539563 charming_hellman[102439]: }
Nov 29 02:15:12 np0005539563 systemd[1]: libpod-64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439.scope: Deactivated successfully.
Nov 29 02:15:12 np0005539563 podman[102422]: 2025-11-29 07:15:12.111211194 +0000 UTC m=+1.060243638 container died 64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:15:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b30bd867fbfb388b163d5b0a7f7aa5f5e2af56852de57d54c3894dbe60d440ee-merged.mount: Deactivated successfully.
Nov 29 02:15:12 np0005539563 podman[102422]: 2025-11-29 07:15:12.161536889 +0000 UTC m=+1.110569353 container remove 64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:12 np0005539563 systemd[1]: libpod-conmon-64210aea608969d4c50e8da2d1ab2af3aa4edee4c6995d1bb9abf8c8df0c7439.scope: Deactivated successfully.
Nov 29 02:15:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:12.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 02:15:12 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[81,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:12 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=81/81 les/c/f=82/82/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[81,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c12d70c3-76af-44fc-8fe5-a8d4205185fc does not exist
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4339bfdc-bcd4-41d5-b7df-6f59f518ea35 does not exist
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 55aacadd-c6ca-4b7c-a1f6-46a775aa358d does not exist
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 02:15:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:15:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:15:12
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Nov 29 02:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:15:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 02:15:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:13.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 02:15:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 02:15:13 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 118 pg[9.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=118) [0] r=0 lpr=118 pi=[83,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:14.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 02:15:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:15:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:15:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:15:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 02:15:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 02:15:14 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 119 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=117/81 les/c/f=118/82/0 sis=119) [0] r=0 lpr=119 pi=[81,119)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:14 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 119 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=117/81 les/c/f=118/82/0 sis=119) [0] r=0 lpr=119 pi=[81,119)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:14 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[83,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:14 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=83/83 les/c/f=84/84/0 sis=119) [0]/[1] r=-1 lpr=119 pi=[83,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:14 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 29 02:15:14 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 29 02:15:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:15.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:15:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 02:15:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:15:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 02:15:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 02:15:15 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 120 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=120) [0] r=0 lpr=120 pi=[65,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:15 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 120 pg[9.19( v 53'1137 (0'0,53'1137] local-lis/les=119/120 n=5 ec=56/47 lis/c=117/81 les/c/f=118/82/0 sis=119) [0] r=0 lpr=119 pi=[81,119)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 02:15:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 121 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=119/83 les/c/f=120/84/0 sis=121) [0] r=0 lpr=121 pi=[83,121)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 121 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=119/83 les/c/f=120/84/0 sis=121) [0] r=0 lpr=121 pi=[83,121)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 02:15:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=121) [0]/[2] r=-1 lpr=121 pi=[65,121)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:16 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=65/65 les/c/f=66/66/0 sis=121) [0]/[2] r=-1 lpr=121 pi=[65,121)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:15:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:15:16 np0005539563 python3.9[102729]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:17.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 02:15:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:15:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 02:15:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 02:15:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:18.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 1 activating, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Nov 29 02:15:18 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 02:15:18 np0005539563 python3.9[103017]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:15:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:19.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:19 np0005539563 python3.9[103170]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:15:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:20.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.361378652521869e-06 of space, bias 1.0, pg target 0.0019084135957565607 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:15:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:15:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:23.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:25.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:26.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 1 activating+remapped, 1 activating, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:27 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 10.258554459s
Nov 29 02:15:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 10.258554459s
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.258703232s, txc = 0x561be257ef00
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 122 pg[9.1a( v 53'1137 (0'0,53'1137] local-lis/les=121/122 n=5 ec=56/47 lis/c=119/83 les/c/f=120/84/0 sis=121) [0] r=0 lpr=121 pi=[83,121)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).paxos(paxos updating c 252..780) accept timeout, calling fresh election
Nov 29 02:15:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:15:27 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(14) init, last seen epoch 14
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 02:15:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.616072655s, txc = 0x561be1f59800
Nov 29 02:15:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.414923668s, txc = 0x561be0e31b00
Nov 29 02:15:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:28 np0005539563 python3.9[103327]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:15:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:15:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 02:15:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 6m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:15:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:15:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 1 activating+remapped, 301 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:29 np0005539563 python3.9[103479]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:15:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 123 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=121/65 les/c/f=122/66/0 sis=123) [0] r=0 lpr=123 pi=[65,123)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:29 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 123 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=121/65 les/c/f=122/66/0 sis=123) [0] r=0 lpr=123 pi=[65,123)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 02:15:29 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 02:15:29 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 02:15:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:15:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:15:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 02:15:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 1 activating+remapped, 301 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 02:15:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 02:15:30 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 124 pg[9.1b( v 53'1137 (0'0,53'1137] local-lis/les=123/124 n=5 ec=56/47 lis/c=121/65 les/c/f=122/66/0 sis=123) [0] r=0 lpr=123 pi=[65,123)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:30 np0005539563 python3.9[103632]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:31.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:31 np0005539563 python3.9[103784]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:15:31 np0005539563 python3.9[103862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:32.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 1 activating+remapped, 301 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:15:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:33.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:33 np0005539563 python3.9[104015]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:15:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:34.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:15:34 np0005539563 python3.9[104170]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 02:15:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 02:15:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:35.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:35 np0005539563 python3.9[104346]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:15:35 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 02:15:35 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 02:15:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 02:15:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:15:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 02:15:35 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 02:15:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:36.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:36 np0005539563 python3.9[104527]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:15:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 02:15:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 02:15:36 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 02:15:36 np0005539563 python3.9[104679]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:15:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:37.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 02:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 02:15:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 02:15:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 02:15:38 np0005539563 python3.9[104831]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:15:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 02:15:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 02:15:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 141 B/s, 3 objects/s recovering
Nov 29 02:15:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 02:15:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:15:40 np0005539563 python3.9[104986]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 02:15:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:41.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:15:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:15:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 02:15:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 02:15:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 02:15:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 02:15:41 np0005539563 python3.9[105138]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:15:41 np0005539563 python3.9[105216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:15:42 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 130 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=130) [0] r=0 lpr=130 pi=[73,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 119 B/s, 2 objects/s recovering
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:42 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 02:15:42 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 02:15:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 02:15:42 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=131) [0] r=0 lpr=131 pi=[96,131)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:42 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:42 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=73/73 les/c/f=74/74/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[73,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:42 np0005539563 python3.9[105369]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:15:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:43.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:43 np0005539563 python3.9[105447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:15:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:15:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 02:15:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 02:15:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 02:15:43 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=132) [0]/[1] r=-1 lpr=132 pi=[96,132)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:43 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 132 pg[9.1f( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=96/96 les/c/f=97/97/0 sis=132) [0]/[1] r=-1 lpr=132 pi=[96,132)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 122 B/s, 1 objects/s recovering
Nov 29 02:15:44 np0005539563 python3.9[105600]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:15:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 02:15:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 02:15:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 02:15:44 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 133 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:44 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 133 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:45.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 02:15:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 02:15:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 02:15:45 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 134 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=132/96 les/c/f=133/97/0 sis=134) [0] r=0 lpr=134 pi=[96,134)/1 luod=0'0 crt=53'1137 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:45 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 134 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=0/0 n=5 ec=56/47 lis/c=132/96 les/c/f=133/97/0 sis=134) [0] r=0 lpr=134 pi=[96,134)/1 crt=53'1137 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:45 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 134 pg[9.1e( v 53'1137 (0'0,53'1137] local-lis/les=133/134 n=5 ec=56/47 lis/c=131/73 les/c/f=132/74/0 sis=133) [0] r=0 lpr=133 pi=[73,133)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 02:15:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 02:15:46 np0005539563 python3.9[105752]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:15:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:47.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 02:15:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 02:15:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 02:15:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 02:15:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:15:48 np0005539563 ceph-osd[84724]: osd.0 pg_epoch: 135 pg[9.1f( v 53'1137 (0'0,53'1137] local-lis/les=134/135 n=5 ec=56/47 lis/c=132/96 les/c/f=133/97/0 sis=134) [0] r=0 lpr=134 pi=[96,134)/1 crt=53'1137 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:15:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:15:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:48 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 29 02:15:48 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 29 02:15:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:49 np0005539563 python3.9[105905]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:15:49 np0005539563 python3.9[106055]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:15:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:50 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 29 02:15:50 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 29 02:15:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:51.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:51 np0005539563 python3.9[106208]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:15:51 np0005539563 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:15:51 np0005539563 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:15:51 np0005539563 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:15:51 np0005539563 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:15:51 np0005539563 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:15:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:52.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:52 np0005539563 python3.9[106370]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:15:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:53.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 1 objects/s recovering
Nov 29 02:15:54 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Nov 29 02:15:54 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Nov 29 02:15:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:55.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:56 np0005539563 python3.9[106573]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:15:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 1 objects/s recovering
Nov 29 02:15:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 02:15:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 02:15:56 np0005539563 python3.9[106728]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:15:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:57.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:57 np0005539563 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 02:15:57 np0005539563 systemd[1]: session-35.scope: Consumed 1min 6.730s CPU time.
Nov 29 02:15:57 np0005539563 systemd-logind[785]: Session 35 logged out. Waiting for processes to exit.
Nov 29 02:15:57 np0005539563 systemd-logind[785]: Removed session 35.
Nov 29 02:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:15:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:15:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:15:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:15:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:15:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:15:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:00.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Nov 29 02:16:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Nov 29 02:16:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:01.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:03.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:04 np0005539563 systemd-logind[785]: New session 36 of user zuul.
Nov 29 02:16:04 np0005539563 systemd[1]: Started Session 36 of User zuul.
Nov 29 02:16:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:05.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:05 np0005539563 python3.9[106912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:06 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 02:16:06 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 02:16:06 np0005539563 python3.9[107069]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:16:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:07 np0005539563 python3.9[107222]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:16:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:08 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 02:16:08 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 02:16:08 np0005539563 python3.9[107307]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:16:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:09.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:11.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:11 np0005539563 python3.9[107461]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:16:12
Nov 29 02:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'backups', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta']
Nov 29 02:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:16:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 02:16:12 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 02:16:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:13.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:16:13 np0005539563 podman[107659]: 2025-11-29 07:16:13.516621322 +0000 UTC m=+0.099689927 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:13 np0005539563 podman[107659]: 2025-11-29 07:16:13.638891846 +0000 UTC m=+0.221960451 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:16:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:16:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:14.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:16:14 np0005539563 podman[107936]: 2025-11-29 07:16:14.371570956 +0000 UTC m=+0.075867836 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:16:14 np0005539563 podman[107936]: 2025-11-29 07:16:14.383275736 +0000 UTC m=+0.087572596 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:16:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:16:14 np0005539563 python3.9[107903]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:16:14 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 02:16:14 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 02:16:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:16:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:15.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539563 python3.9[108167]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:15 np0005539563 podman[108127]: 2025-11-29 07:16:15.51100249 +0000 UTC m=+0.345092890 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, com.redhat.component=keepalived-container, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.expose-services=, release=1793, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 02:16:15 np0005539563 podman[108225]: 2025-11-29 07:16:15.581966321 +0000 UTC m=+0.056603409 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, name=keepalived, release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 02:16:15 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 02:16:15 np0005539563 podman[108127]: 2025-11-29 07:16:15.669509805 +0000 UTC m=+0.503600195 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, version=2.2.4, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived)
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:16:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f8e8ec68-8f39-4833-af06-9234958f6410 does not exist
Nov 29 02:16:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0e568575-1737-46ba-bb2b-aac0f4df1328 does not exist
Nov 29 02:16:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 47c0859d-bb51-4ece-a75b-a3aed399ef24 does not exist
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:16 np0005539563 python3.9[108520]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:17.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.34387618 +0000 UTC m=+0.045751733 container create 97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:16:17 np0005539563 systemd[1]: Started libpod-conmon-97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d.scope.
Nov 29 02:16:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.324308555 +0000 UTC m=+0.026184138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.634501919 +0000 UTC m=+0.336377502 container init 97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_einstein, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.643998598 +0000 UTC m=+0.345874151 container start 97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 29 02:16:17 np0005539563 goofy_einstein[108750]: 167 167
Nov 29 02:16:17 np0005539563 systemd[1]: libpod-97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d.scope: Deactivated successfully.
Nov 29 02:16:17 np0005539563 conmon[108750]: conmon 97fa6b2f45eed325bcde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d.scope/container/memory.events
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.693639476 +0000 UTC m=+0.395515039 container attach 97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_einstein, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.694719045 +0000 UTC m=+0.396594598 container died 97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-574937660faccd04cfd2662f804fd1df0e0d66cfe655fb8bffd15d9118c1d152-merged.mount: Deactivated successfully.
Nov 29 02:16:17 np0005539563 podman[108683]: 2025-11-29 07:16:17.74496271 +0000 UTC m=+0.446838263 container remove 97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:16:17 np0005539563 systemd[1]: libpod-conmon-97fa6b2f45eed325bcde4e2f81abf04e43316fc7ab21807c0551202c0e0b096d.scope: Deactivated successfully.
Nov 29 02:16:17 np0005539563 python3.9[108827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:17 np0005539563 podman[108850]: 2025-11-29 07:16:17.876700213 +0000 UTC m=+0.024023349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:17 np0005539563 podman[108850]: 2025-11-29 07:16:17.986184227 +0000 UTC m=+0.133507343 container create 4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:18 np0005539563 systemd[1]: Started libpod-conmon-4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520.scope.
Nov 29 02:16:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:16:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611b82e6f8abe81d9a4260879c322dca7305894cfe05aee9bb2fb580eeab77a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611b82e6f8abe81d9a4260879c322dca7305894cfe05aee9bb2fb580eeab77a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611b82e6f8abe81d9a4260879c322dca7305894cfe05aee9bb2fb580eeab77a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611b82e6f8abe81d9a4260879c322dca7305894cfe05aee9bb2fb580eeab77a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/611b82e6f8abe81d9a4260879c322dca7305894cfe05aee9bb2fb580eeab77a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539563 podman[108850]: 2025-11-29 07:16:18.162668654 +0000 UTC m=+0.309991860 container init 4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:16:18 np0005539563 podman[108850]: 2025-11-29 07:16:18.170654003 +0000 UTC m=+0.317977129 container start 4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:18 np0005539563 podman[108850]: 2025-11-29 07:16:18.213611637 +0000 UTC m=+0.360934783 container attach 4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:16:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:19 np0005539563 practical_jennings[108871]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:16:19 np0005539563 practical_jennings[108871]: --> relative data size: 1.0
Nov 29 02:16:19 np0005539563 practical_jennings[108871]: --> All data devices are unavailable
Nov 29 02:16:19 np0005539563 systemd[1]: libpod-4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520.scope: Deactivated successfully.
Nov 29 02:16:19 np0005539563 podman[108850]: 2025-11-29 07:16:19.051409372 +0000 UTC m=+1.198732488 container died 4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:16:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:19.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:19 np0005539563 python3.9[109029]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-611b82e6f8abe81d9a4260879c322dca7305894cfe05aee9bb2fb580eeab77a8-merged.mount: Deactivated successfully.
Nov 29 02:16:19 np0005539563 podman[108850]: 2025-11-29 07:16:19.357893495 +0000 UTC m=+1.505216611 container remove 4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:19 np0005539563 systemd[1]: libpod-conmon-4e34cf0f864cd5f24a67b0bc5db27535341b65361286263b9890fe98c1219520.scope: Deactivated successfully.
Nov 29 02:16:19 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 02:16:19 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.043946999 +0000 UTC m=+0.021588842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.170066428 +0000 UTC m=+0.147708251 container create c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:16:20 np0005539563 systemd[1]: Started libpod-conmon-c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7.scope.
Nov 29 02:16:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:16:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:20.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.561595166 +0000 UTC m=+0.539237009 container init c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.570028626 +0000 UTC m=+0.547670439 container start c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:16:20 np0005539563 systemd[1]: libpod-c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7.scope: Deactivated successfully.
Nov 29 02:16:20 np0005539563 interesting_poincare[109214]: 167 167
Nov 29 02:16:20 np0005539563 conmon[109214]: conmon c42736537ea1cddcfd8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7.scope/container/memory.events
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.629065572 +0000 UTC m=+0.606707385 container attach c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.629491714 +0000 UTC m=+0.607133557 container died c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:16:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-63b2c216602a3890ac0c0d6c53d0127b0e58365d35f4e5b8375ed0bc26aa9217-merged.mount: Deactivated successfully.
Nov 29 02:16:20 np0005539563 podman[109197]: 2025-11-29 07:16:20.677690652 +0000 UTC m=+0.655332465 container remove c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:20 np0005539563 systemd[1]: libpod-conmon-c42736537ea1cddcfd8f0c9656f5ace41d5eff5ed345c8c54b0d90d98c2287a7.scope: Deactivated successfully.
Nov 29 02:16:20 np0005539563 podman[109263]: 2025-11-29 07:16:20.828381543 +0000 UTC m=+0.040142349 container create 72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:20 np0005539563 systemd[1]: Started libpod-conmon-72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049.scope.
Nov 29 02:16:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:16:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8098f9a9da25512b494093ca7481ed619c03723dc0d21529fa3c5d51517bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8098f9a9da25512b494093ca7481ed619c03723dc0d21529fa3c5d51517bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8098f9a9da25512b494093ca7481ed619c03723dc0d21529fa3c5d51517bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8098f9a9da25512b494093ca7481ed619c03723dc0d21529fa3c5d51517bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539563 podman[109263]: 2025-11-29 07:16:20.810439962 +0000 UTC m=+0.022200788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:21.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:21 np0005539563 podman[109263]: 2025-11-29 07:16:21.474522795 +0000 UTC m=+0.686283631 container init 72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:21 np0005539563 podman[109263]: 2025-11-29 07:16:21.483704087 +0000 UTC m=+0.695464893 container start 72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:21 np0005539563 python3.9[109410]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:21 np0005539563 podman[109263]: 2025-11-29 07:16:21.990501048 +0000 UTC m=+1.202261884 container attach 72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]: {
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:    "0": [
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:        {
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "devices": [
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "/dev/loop3"
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            ],
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "lv_name": "ceph_lv0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "lv_size": "7511998464",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "name": "ceph_lv0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "tags": {
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.cluster_name": "ceph",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.crush_device_class": "",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.encrypted": "0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.osd_id": "0",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.type": "block",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:                "ceph.vdo": "0"
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            },
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "type": "block",
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:            "vg_name": "ceph_vg0"
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:        }
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]:    ]
Nov 29 02:16:22 np0005539563 fervent_pascal[109280]: }
Nov 29 02:16:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:22 np0005539563 systemd[1]: libpod-72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049.scope: Deactivated successfully.
Nov 29 02:16:22 np0005539563 podman[109263]: 2025-11-29 07:16:22.342990498 +0000 UTC m=+1.554751304 container died 72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:16:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:16:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ef8098f9a9da25512b494093ca7481ed619c03723dc0d21529fa3c5d51517bce-merged.mount: Deactivated successfully.
Nov 29 02:16:22 np0005539563 podman[109263]: 2025-11-29 07:16:22.643068555 +0000 UTC m=+1.854829381 container remove 72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:16:22 np0005539563 systemd[1]: libpod-conmon-72e8efb30f59c800aa103a2fcadacfd3d93f03aeff0c4bc5a0760b6587f78049.scope: Deactivated successfully.
Nov 29 02:16:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:23.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:23 np0005539563 podman[109841]: 2025-11-29 07:16:23.205458797 +0000 UTC m=+0.021518598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:23 np0005539563 python3.9[109867]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:16:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:24 np0005539563 podman[109841]: 2025-11-29 07:16:24.310894902 +0000 UTC m=+1.126954703 container create 6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rhodes, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:24 np0005539563 python3.9[110020]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:16:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:24 np0005539563 systemd[1]: Started libpod-conmon-6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3.scope.
Nov 29 02:16:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:16:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:25.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:25 np0005539563 python3.9[110174]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:25 np0005539563 podman[109841]: 2025-11-29 07:16:25.370852902 +0000 UTC m=+2.186912713 container init 6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rhodes, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:16:25 np0005539563 podman[109841]: 2025-11-29 07:16:25.37842742 +0000 UTC m=+2.194487221 container start 6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:25 np0005539563 kind_rhodes[110177]: 167 167
Nov 29 02:16:25 np0005539563 systemd[1]: libpod-6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3.scope: Deactivated successfully.
Nov 29 02:16:25 np0005539563 podman[109841]: 2025-11-29 07:16:25.56381163 +0000 UTC m=+2.379871451 container attach 6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:16:25 np0005539563 podman[109841]: 2025-11-29 07:16:25.565296171 +0000 UTC m=+2.381355972 container died 6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:16:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bb6d6063a13149bfeb983cc2ecefba74589a08cabeff9d471ca377f04fb7d2f3-merged.mount: Deactivated successfully.
Nov 29 02:16:25 np0005539563 podman[109841]: 2025-11-29 07:16:25.694521895 +0000 UTC m=+2.510581696 container remove 6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rhodes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:16:25 np0005539563 systemd[1]: libpod-conmon-6acfa143d4313553b4ad6570a950f1216d17a2b65d4a68f198fb4de7ee3d9ea3.scope: Deactivated successfully.
Nov 29 02:16:25 np0005539563 podman[110205]: 2025-11-29 07:16:25.83001332 +0000 UTC m=+0.022915627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:26 np0005539563 podman[110205]: 2025-11-29 07:16:26.304273161 +0000 UTC m=+0.497175438 container create 35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:16:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:26.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:26 np0005539563 systemd[1]: Started libpod-conmon-35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9.scope.
Nov 29 02:16:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:16:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b05ed13c44619696bdf08e5d49a2835c74d67552a8c17c5b8fb658e7dadf70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b05ed13c44619696bdf08e5d49a2835c74d67552a8c17c5b8fb658e7dadf70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b05ed13c44619696bdf08e5d49a2835c74d67552a8c17c5b8fb658e7dadf70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b05ed13c44619696bdf08e5d49a2835c74d67552a8c17c5b8fb658e7dadf70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:26 np0005539563 podman[110205]: 2025-11-29 07:16:26.49140997 +0000 UTC m=+0.684312267 container init 35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:26 np0005539563 podman[110205]: 2025-11-29 07:16:26.500139119 +0000 UTC m=+0.693041396 container start 35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:26 np0005539563 podman[110205]: 2025-11-29 07:16:26.508924809 +0000 UTC m=+0.701827186 container attach 35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:27.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]: {
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:        "osd_id": 0,
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:        "type": "bluestore"
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]:    }
Nov 29 02:16:27 np0005539563 vigilant_torvalds[110222]: }
Nov 29 02:16:27 np0005539563 systemd[1]: libpod-35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9.scope: Deactivated successfully.
Nov 29 02:16:27 np0005539563 podman[110205]: 2025-11-29 07:16:27.409445809 +0000 UTC m=+1.602348086 container died 35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c6b05ed13c44619696bdf08e5d49a2835c74d67552a8c17c5b8fb658e7dadf70-merged.mount: Deactivated successfully.
Nov 29 02:16:27 np0005539563 python3.9[110393]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:16:27 np0005539563 podman[110205]: 2025-11-29 07:16:27.653795172 +0000 UTC m=+1.846697449 container remove 35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_torvalds, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:27 np0005539563 systemd[1]: libpod-conmon-35846b9cc062c72a8d6eceb863588d25a4b8fa133140061e7625783c04e2a9c9.scope: Deactivated successfully.
Nov 29 02:16:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:16:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:28.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:16:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev afd4ac1f-0eba-4abc-8af6-4c19446432f7 does not exist
Nov 29 02:16:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1c46c980-c097-4d3c-85ba-0e5b0a716eff does not exist
Nov 29 02:16:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 047ea953-66f5-48b3-bc8e-83a157feaf42 does not exist
Nov 29 02:16:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:29.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:16:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:30 np0005539563 python3.9[110614]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:16:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:31.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:31 np0005539563 python3.9[110768]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 02:16:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:33.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:33 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 02:16:33 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 02:16:33 np0005539563 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 02:16:33 np0005539563 systemd[1]: session-36.scope: Consumed 18.585s CPU time.
Nov 29 02:16:33 np0005539563 systemd-logind[785]: Session 36 logged out. Waiting for processes to exit.
Nov 29 02:16:33 np0005539563 systemd-logind[785]: Removed session 36.
Nov 29 02:16:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:34 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 29 02:16:34 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 29 02:16:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:35.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:36.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:37.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:38.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:39.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:16:39 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 29 02:16:39 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 29 02:16:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:41.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:43.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:44.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:44 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 29 02:16:44 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 29 02:16:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:45.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:46.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:47.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:49.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:50 np0005539563 systemd-logind[785]: New session 37 of user zuul.
Nov 29 02:16:50 np0005539563 systemd[1]: Started Session 37 of User zuul.
Nov 29 02:16:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:51 np0005539563 python3.9[111006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:16:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:52.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:52 np0005539563 python3.9[111161]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:16:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:16:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:16:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:54 np0005539563 python3.9[111355]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:55 np0005539563 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 02:16:55 np0005539563 systemd[1]: session-37.scope: Consumed 2.168s CPU time.
Nov 29 02:16:55 np0005539563 systemd-logind[785]: Session 37 logged out. Waiting for processes to exit.
Nov 29 02:16:55 np0005539563 systemd-logind[785]: Removed session 37.
Nov 29 02:16:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:55.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:55 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 29 02:16:55 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 29 02:16:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:16:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:16:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:57.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:16:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:16:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:16:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:16:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:16:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:00.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:01.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:02.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:02 np0005539563 systemd-logind[785]: New session 38 of user zuul.
Nov 29 02:17:02 np0005539563 systemd[1]: Started Session 38 of User zuul.
Nov 29 02:17:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:03.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:04 np0005539563 python3.9[111589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:17:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:04.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:05.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:05 np0005539563 python3.9[111743]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:17:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:06.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:06 np0005539563 python3.9[111900]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:17:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:07.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:07 np0005539563 python3.9[111984]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:17:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:10.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:10 np0005539563 python3.9[112139]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:17:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:12 np0005539563 python3.9[112335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:17:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:17:12
Nov 29 02:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'images', 'default.rgw.log']
Nov 29 02:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:17:13 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 02:17:13 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 02:17:13 np0005539563 python3.9[112487]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:17:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:14 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Nov 29 02:17:14 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Nov 29 02:17:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:14.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:15 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 02:17:15 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 02:17:15 np0005539563 python3.9[112652]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:16 np0005539563 python3.9[112780]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:17:16 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 02:17:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:16.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:16 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 02:17:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:17 np0005539563 python3.9[112933]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:18 np0005539563 python3.9[113012]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:18.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:19 np0005539563 python3.9[113164]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:20 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 02:17:20 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 02:17:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:20.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:20 np0005539563 python3.9[113317]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:21 np0005539563 python3.9[113469]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:22 np0005539563 python3.9[113622]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:17:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:17:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:23.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:23 np0005539563 python3.9[113774]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:17:24 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 02:17:24 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 02:17:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:24.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:26 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 29 02:17:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:26.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:27 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 29 02:17:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:27.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:28 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 29 02:17:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:28.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:29.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:31.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:31 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:17:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:33.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:34.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:35 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:17:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:36.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:38.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:39.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:17:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:40.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 21.2726 seconds
Nov 29 02:17:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:17:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:17:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 17.064270020s
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 17.064270020s
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1ee6202640' had timed out after 15.000000954s
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f1ee8206640' had timed out after 15.000000954s
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.065263748s, txc = 0x561be1f8f500
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 29 02:17:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:17:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:17:41 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(17) init, last seen epoch 17, mid-election, bumping
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.363512039s, txc = 0x561be1ebc600
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 15.435464859s, txc = 0x561be1f63500
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.466964722s, txc = 0x561be1ecc900
Nov 29 02:17:41 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.454018593s, txc = 0x561be1f50c00
Nov 29 02:17:42 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 02:17:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:42.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 3 active+clean+scrubbing, 302 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:42 np0005539563 python3.9[114119]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:43 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 02:17:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:43 np0005539563 python3.9[114273]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:17:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 8m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:17:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 3 active+clean+scrubbing, 302 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:44 np0005539563 python3.9[114426]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:17:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e3d30011-9533-475a-8f6e-7bb25967d61b does not exist
Nov 29 02:17:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f426c1d2-8b30-4b9f-9e0c-199c4d3a41e0 does not exist
Nov 29 02:17:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 702b9178-1a85-4118-9d28-332fbd48477c does not exist
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:17:45 np0005539563 python3.9[114578]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:45 np0005539563 podman[114745]: 2025-11-29 07:17:45.883688767 +0000 UTC m=+0.022533948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:17:46 np0005539563 podman[114745]: 2025-11-29 07:17:46.033635368 +0000 UTC m=+0.172480529 container create 46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:17:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 29 02:17:46 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 29 02:17:46 np0005539563 systemd[1]: Started libpod-conmon-46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd.scope.
Nov 29 02:17:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:17:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:46 np0005539563 python3.9[114887]: ansible-service_facts Invoked
Nov 29 02:17:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:17:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:17:46 np0005539563 network[114909]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:17:46 np0005539563 network[114910]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:17:46 np0005539563 network[114911]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:17:46 np0005539563 podman[114745]: 2025-11-29 07:17:46.67837556 +0000 UTC m=+0.817220741 container init 46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:17:46 np0005539563 podman[114745]: 2025-11-29 07:17:46.68531586 +0000 UTC m=+0.824161011 container start 46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:17:46 np0005539563 jolly_lichterman[114890]: 167 167
Nov 29 02:17:46 np0005539563 podman[114745]: 2025-11-29 07:17:46.870260378 +0000 UTC m=+1.009105539 container attach 46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:17:46 np0005539563 podman[114745]: 2025-11-29 07:17:46.87068416 +0000 UTC m=+1.009529321 container died 46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:17:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:47 np0005539563 systemd[1]: libpod-46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd.scope: Deactivated successfully.
Nov 29 02:17:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 29 02:17:47 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 29 02:17:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-76eb8e9ca8aa2ff015cb649084b78c8dbf9931184c86692c5b1aebb9e4db49cd-merged.mount: Deactivated successfully.
Nov 29 02:17:48 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 29 02:17:48 np0005539563 ceph-osd[84724]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 29 02:17:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:48.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:48 np0005539563 podman[114745]: 2025-11-29 07:17:48.899175316 +0000 UTC m=+3.038020477 container remove 46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lichterman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:17:48 np0005539563 systemd[1]: libpod-conmon-46910d43a3fc913b7ac930ef1d940fb389b50985d8857b7f3454cb1b8df27ccd.scope: Deactivated successfully.
Nov 29 02:17:49 np0005539563 podman[114984]: 2025-11-29 07:17:49.087666681 +0000 UTC m=+0.067763554 container create 0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:17:49 np0005539563 podman[114984]: 2025-11-29 07:17:49.043060791 +0000 UTC m=+0.023157684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:17:49 np0005539563 systemd[1]: Started libpod-conmon-0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae.scope.
Nov 29 02:17:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:17:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebc5194f155aa8969b82978ce8939a982f14b64d64d89dc2ddba5dc2d2a693b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebc5194f155aa8969b82978ce8939a982f14b64d64d89dc2ddba5dc2d2a693b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebc5194f155aa8969b82978ce8939a982f14b64d64d89dc2ddba5dc2d2a693b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebc5194f155aa8969b82978ce8939a982f14b64d64d89dc2ddba5dc2d2a693b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebc5194f155aa8969b82978ce8939a982f14b64d64d89dc2ddba5dc2d2a693b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:49 np0005539563 podman[114984]: 2025-11-29 07:17:49.189411244 +0000 UTC m=+0.169508137 container init 0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:17:49 np0005539563 podman[114984]: 2025-11-29 07:17:49.197048852 +0000 UTC m=+0.177145725 container start 0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:17:49 np0005539563 podman[114984]: 2025-11-29 07:17:49.201587577 +0000 UTC m=+0.181684450 container attach 0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:17:50 np0005539563 beautiful_pare[115005]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:17:50 np0005539563 beautiful_pare[115005]: --> relative data size: 1.0
Nov 29 02:17:50 np0005539563 beautiful_pare[115005]: --> All data devices are unavailable
Nov 29 02:17:50 np0005539563 systemd[1]: libpod-0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae.scope: Deactivated successfully.
Nov 29 02:17:50 np0005539563 podman[114984]: 2025-11-29 07:17:50.085339706 +0000 UTC m=+1.065436579 container died 0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:17:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cebc5194f155aa8969b82978ce8939a982f14b64d64d89dc2ddba5dc2d2a693b-merged.mount: Deactivated successfully.
Nov 29 02:17:50 np0005539563 podman[114984]: 2025-11-29 07:17:50.155794164 +0000 UTC m=+1.135891037 container remove 0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:17:50 np0005539563 systemd[1]: libpod-conmon-0fc2961e32f26333903678cac3677ed9e9b1baddffaf8e7d8cb1865b61573cae.scope: Deactivated successfully.
Nov 29 02:17:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:50.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.705472107 +0000 UTC m=+0.036946602 container create 34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:17:50 np0005539563 systemd[1]: Started libpod-conmon-34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc.scope.
Nov 29 02:17:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.775645926 +0000 UTC m=+0.107120441 container init 34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.782470202 +0000 UTC m=+0.113944697 container start 34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_galois, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.690658181 +0000 UTC m=+0.022132696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.786362439 +0000 UTC m=+0.117836954 container attach 34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:17:50 np0005539563 quizzical_galois[115269]: 167 167
Nov 29 02:17:50 np0005539563 systemd[1]: libpod-34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc.scope: Deactivated successfully.
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.787757957 +0000 UTC m=+0.119232452 container died 34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_galois, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:17:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-35ff2a61e37282367945464011af40169bcda4bf3bea47b9222f08a5b057b608-merged.mount: Deactivated successfully.
Nov 29 02:17:50 np0005539563 podman[115253]: 2025-11-29 07:17:50.821935011 +0000 UTC m=+0.153409496 container remove 34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_galois, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:17:50 np0005539563 systemd[1]: libpod-conmon-34386ff60e69c8fdc15aa14e8f4986118c13355e005359b69c6354aa4baa3ebc.scope: Deactivated successfully.
Nov 29 02:17:50 np0005539563 podman[115293]: 2025-11-29 07:17:50.963112443 +0000 UTC m=+0.039559173 container create 3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:17:51 np0005539563 systemd[1]: Started libpod-conmon-3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0.scope.
Nov 29 02:17:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:17:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4ed74728d728a54a000e532802a205a3ef7abd1bd4652c7b61f275fc89a4c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4ed74728d728a54a000e532802a205a3ef7abd1bd4652c7b61f275fc89a4c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4ed74728d728a54a000e532802a205a3ef7abd1bd4652c7b61f275fc89a4c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4ed74728d728a54a000e532802a205a3ef7abd1bd4652c7b61f275fc89a4c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:51 np0005539563 podman[115293]: 2025-11-29 07:17:51.037587129 +0000 UTC m=+0.114033899 container init 3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:17:51 np0005539563 podman[115293]: 2025-11-29 07:17:50.943961569 +0000 UTC m=+0.020408329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:17:51 np0005539563 podman[115293]: 2025-11-29 07:17:51.043661815 +0000 UTC m=+0.120108555 container start 3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:17:51 np0005539563 podman[115293]: 2025-11-29 07:17:51.047279715 +0000 UTC m=+0.123726475 container attach 3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:17:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:51.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]: {
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:    "0": [
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:        {
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "devices": [
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "/dev/loop3"
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            ],
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "lv_name": "ceph_lv0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "lv_size": "7511998464",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "name": "ceph_lv0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "tags": {
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.cluster_name": "ceph",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.crush_device_class": "",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.encrypted": "0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.osd_id": "0",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.type": "block",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:                "ceph.vdo": "0"
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            },
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "type": "block",
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:            "vg_name": "ceph_vg0"
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:        }
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]:    ]
Nov 29 02:17:51 np0005539563 interesting_cartwright[115310]: }
Nov 29 02:17:51 np0005539563 systemd[1]: libpod-3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0.scope: Deactivated successfully.
Nov 29 02:17:51 np0005539563 podman[115293]: 2025-11-29 07:17:51.836537749 +0000 UTC m=+0.912984499 container died 3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:17:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9a4ed74728d728a54a000e532802a205a3ef7abd1bd4652c7b61f275fc89a4c1-merged.mount: Deactivated successfully.
Nov 29 02:17:52 np0005539563 podman[115293]: 2025-11-29 07:17:52.227536113 +0000 UTC m=+1.303982853 container remove 3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:17:52 np0005539563 systemd[1]: libpod-conmon-3d1ce10ab264f2e69abfec3b33a9456b1a47e2a6003b6dc569d77f590e9999d0.scope: Deactivated successfully.
Nov 29 02:17:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:52.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539563 python3.9[115750]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.851997962 +0000 UTC m=+0.087702220 container create 2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.784934987 +0000 UTC m=+0.020639265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:17:52 np0005539563 systemd[1]: Started libpod-conmon-2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49.scope.
Nov 29 02:17:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.924932456 +0000 UTC m=+0.160636724 container init 2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.932477043 +0000 UTC m=+0.168181311 container start 2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:17:52 np0005539563 ecstatic_thompson[115807]: 167 167
Nov 29 02:17:52 np0005539563 systemd[1]: libpod-2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49.scope: Deactivated successfully.
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.93713211 +0000 UTC m=+0.172836398 container attach 2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.937899051 +0000 UTC m=+0.173603309 container died 2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:17:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-263bb54ef3d2d8c549f4e687c6915fb569a39d56c6e63b12b5e3cebd7dcbf0b3-merged.mount: Deactivated successfully.
Nov 29 02:17:52 np0005539563 podman[115791]: 2025-11-29 07:17:52.97663661 +0000 UTC m=+0.212340868 container remove 2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:17:52 np0005539563 systemd[1]: libpod-conmon-2a88fbcf3284df461ff4ba0f94daeeb5afa77a4a7e0a65bd13771acca76c8b49.scope: Deactivated successfully.
Nov 29 02:17:53 np0005539563 podman[115832]: 2025-11-29 07:17:53.126778907 +0000 UTC m=+0.040538640 container create 8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:17:53 np0005539563 systemd[1]: Started libpod-conmon-8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4.scope.
Nov 29 02:17:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:53.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:17:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26748d746533d01876bc0b6a2d72591704e6f4cb8b16b856b46dc228d1a82cb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26748d746533d01876bc0b6a2d72591704e6f4cb8b16b856b46dc228d1a82cb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26748d746533d01876bc0b6a2d72591704e6f4cb8b16b856b46dc228d1a82cb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26748d746533d01876bc0b6a2d72591704e6f4cb8b16b856b46dc228d1a82cb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:53 np0005539563 podman[115832]: 2025-11-29 07:17:53.106893433 +0000 UTC m=+0.020653176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:17:53 np0005539563 podman[115832]: 2025-11-29 07:17:53.207684359 +0000 UTC m=+0.121444102 container init 8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:17:53 np0005539563 podman[115832]: 2025-11-29 07:17:53.216250434 +0000 UTC m=+0.130010157 container start 8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:17:53 np0005539563 podman[115832]: 2025-11-29 07:17:53.219939514 +0000 UTC m=+0.133699257 container attach 8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]: {
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:        "osd_id": 0,
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:        "type": "bluestore"
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]:    }
Nov 29 02:17:54 np0005539563 dazzling_ramanujan[115848]: }
Nov 29 02:17:54 np0005539563 systemd[1]: libpod-8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4.scope: Deactivated successfully.
Nov 29 02:17:54 np0005539563 podman[115832]: 2025-11-29 07:17:54.083634486 +0000 UTC m=+0.997394229 container died 8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:17:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-26748d746533d01876bc0b6a2d72591704e6f4cb8b16b856b46dc228d1a82cb7-merged.mount: Deactivated successfully.
Nov 29 02:17:54 np0005539563 podman[115832]: 2025-11-29 07:17:54.609751333 +0000 UTC m=+1.523511066 container remove 8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ramanujan, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:17:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:17:54 np0005539563 systemd[1]: libpod-conmon-8df3a267f537cd1a3e3c0499bf72d83f2f952a4edf39e2a122252995b988f9d4.scope: Deactivated successfully.
Nov 29 02:17:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:17:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev dab5c322-e144-427e-ba2d-521de763cf03 does not exist
Nov 29 02:17:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4d82484c-28a9-43ea-8588-3fd8e72caaa7 does not exist
Nov 29 02:17:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 29717c84-c930-4a5d-952e-eea2ba13b98d does not exist
Nov 29 02:17:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:17:55 np0005539563 python3.9[116083]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:17:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:56.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:17:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:17:57 np0005539563 python3.9[116286]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:57 np0005539563 python3.9[116364]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:17:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:58 np0005539563 python3.9[116517]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:17:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:17:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:59.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:59 np0005539563 python3.9[116595]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:00.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:01 np0005539563 python3.9[116748]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:01.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:03 np0005539563 python3.9[116901]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:18:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:04 np0005539563 python3.9[116985]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:18:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:05 np0005539563 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 02:18:05 np0005539563 systemd[1]: session-38.scope: Consumed 23.215s CPU time.
Nov 29 02:18:05 np0005539563 systemd-logind[785]: Session 38 logged out. Waiting for processes to exit.
Nov 29 02:18:05 np0005539563 systemd-logind[785]: Removed session 38.
Nov 29 02:18:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:08.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:09.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:10 np0005539563 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 02:18:10 np0005539563 systemd[1]: session-18.scope: Consumed 1min 32.482s CPU time.
Nov 29 02:18:10 np0005539563 systemd-logind[785]: Session 18 logged out. Waiting for processes to exit.
Nov 29 02:18:10 np0005539563 systemd-logind[785]: Removed session 18.
Nov 29 02:18:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:11 np0005539563 systemd-logind[785]: New session 39 of user zuul.
Nov 29 02:18:11 np0005539563 systemd[1]: Started Session 39 of User zuul.
Nov 29 02:18:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:11 np0005539563 python3.9[117171]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:12.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:18:12
Nov 29 02:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Nov 29 02:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:18:12 np0005539563 python3.9[117324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:18:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:13.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:13 np0005539563 python3.9[117402]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:13 np0005539563 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 02:18:13 np0005539563 systemd[1]: session-39.scope: Consumed 1.458s CPU time.
Nov 29 02:18:13 np0005539563 systemd-logind[785]: Session 39 logged out. Waiting for processes to exit.
Nov 29 02:18:13 np0005539563 systemd-logind[785]: Removed session 39.
Nov 29 02:18:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:18:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2136 writes, 9414 keys, 2135 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
Cumulative WAL: 2136 writes, 2135 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2136 writes, 9414 keys, 2135 commit groups, 1.0 writes per commit group, ingest: 12.39 MB, 0.02 MB/s
Interval WAL: 2136 writes, 2135 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     61.8      0.12              0.02         2    0.061       0      0       0.0       0.0
  L6      1/0    7.44 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0    119.0    118.4      0.06              0.02         1    0.063    3489    292       0.0       0.0
 Sum      1/0    7.44 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     40.7     81.1      0.18              0.04         3    0.061    3489    292       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     41.8     83.1      0.18              0.04         2    0.090    3489    292       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    119.0    118.4      0.06              0.02         1    0.063    3489    292       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     64.0      0.12              0.02         1    0.116       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.007, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.01 GB write, 0.02 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds
Interval compaction: 0.01 GB write, 0.02 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 308.00 MB usage: 486.52 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000101 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(30,428.52 KB,0.135868%) FilterBlock(4,17.48 KB,0.0055437%) IndexBlock(4,40.52 KB,0.0128461%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:18:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:18:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:18.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:18:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539563 systemd-logind[785]: New session 40 of user zuul.
Nov 29 02:18:20 np0005539563 systemd[1]: Started Session 40 of User zuul.
Nov 29 02:18:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:21 np0005539563 python3.9[117634]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:18:22 np0005539563 python3.9[117790]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:18:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:18:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:18:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:23 np0005539563 python3.9[117966]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:23 np0005539563 python3.9[118044]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.2vmic_mk recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:24 np0005539563 python3.9[118197]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:25.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:25 np0005539563 python3.9[118275]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.80jxbugs recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:26 np0005539563 python3.9[118427]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:18:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:26 np0005539563 python3.9[118580]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:27 np0005539563 python3.9[118658]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:18:27 np0005539563 python3.9[118810]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:28 np0005539563 python3.9[118888]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:18:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:29 np0005539563 python3.9[119041]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:29.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:29 np0005539563 python3.9[119193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:30 np0005539563 python3.9[119271]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:30.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:31.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:31 np0005539563 python3.9[119424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:31 np0005539563 python3.9[119502]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:32.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:33 np0005539563 python3.9[119655]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:18:33 np0005539563 systemd[1]: Reloading.
Nov 29 02:18:33 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:18:33 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:18:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:33.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:34.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:35.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:37.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:38 np0005539563 python3.9[119897]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:38 np0005539563 python3.9[119975]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:39.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:39 np0005539563 python3.9[120128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:40 np0005539563 python3.9[120206]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:40.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:40 np0005539563 python3.9[120359]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:18:40 np0005539563 systemd[1]: Reloading.
Nov 29 02:18:41 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:18:41 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:18:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:41 np0005539563 systemd[1]: Starting Create netns directory...
Nov 29 02:18:41 np0005539563 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:18:41 np0005539563 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:18:41 np0005539563 systemd[1]: Finished Create netns directory.
Nov 29 02:18:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:42 np0005539563 python3.9[120550]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:18:42 np0005539563 network[120567]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:18:42 np0005539563 network[120568]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:18:42 np0005539563 network[120569]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:18:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:42.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:43.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:47 np0005539563 python3.9[120834]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:48.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:49 np0005539563 python3.9[120913]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:49 np0005539563 python3.9[121065]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:50.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:50 np0005539563 python3.9[121218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:51 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:18:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:51 np0005539563 python3.9[121296]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:52.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:52 np0005539563 python3.9[121448]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:18:52 np0005539563 systemd[1]: Starting Time & Date Service...
Nov 29 02:18:52 np0005539563 systemd[1]: Started Time & Date Service.
Nov 29 02:18:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:53.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:53 np0005539563 python3.9[121605]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:54 np0005539563 python3.9[121757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:55 np0005539563 python3.9[121836]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:56 np0005539563 python3.9[122119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:56.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:56 np0005539563 python3.9[122246]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.845g28m_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:18:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:57.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:18:57 np0005539563 python3.9[122400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:18:58 np0005539563 python3.9[122478]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:18:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:18:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:18:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:18:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:18:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:58.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:18:59 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2ac25278-3dc4-4d2f-8395-a47b21d7498e does not exist
Nov 29 02:18:59 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f5eacad1-c9d7-4dc6-938d-9e04f3961d44 does not exist
Nov 29 02:18:59 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9719971d-2c60-431e-9ab8-8c12789237f7 does not exist
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:18:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:59.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:59 np0005539563 python3.9[122663]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:18:59 np0005539563 podman[122796]: 2025-11-29 07:18:59.570987013 +0000 UTC m=+0.020463765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:00 np0005539563 python3[122937]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:19:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:00.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:01 np0005539563 python3.9[123090]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:01 np0005539563 python3.9[123168]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:02 np0005539563 python3.9[123321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:03 np0005539563 python3.9[123400]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:04 np0005539563 python3.9[123552]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:04.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:19:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:19:04 np0005539563 python3.9[123630]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:04 np0005539563 podman[122796]: 2025-11-29 07:19:04.873431739 +0000 UTC m=+5.322908421 container create 5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wozniak, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 02:19:04 np0005539563 systemd[1]: Started libpod-conmon-5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097.scope.
Nov 29 02:19:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:19:04 np0005539563 podman[122796]: 2025-11-29 07:19:04.978379104 +0000 UTC m=+5.427855806 container init 5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:04 np0005539563 podman[122796]: 2025-11-29 07:19:04.986299971 +0000 UTC m=+5.435776653 container start 5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:04 np0005539563 podman[122796]: 2025-11-29 07:19:04.990191609 +0000 UTC m=+5.439668321 container attach 5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:04 np0005539563 vibrant_wozniak[123657]: 167 167
Nov 29 02:19:04 np0005539563 systemd[1]: libpod-5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097.scope: Deactivated successfully.
Nov 29 02:19:04 np0005539563 podman[122796]: 2025-11-29 07:19:04.993071879 +0000 UTC m=+5.442548561 container died 5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f82bcabbc8f429dbf7f40f8d8b8f517938a34b07721714faa1569d4e57ce24a2-merged.mount: Deactivated successfully.
Nov 29 02:19:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:05 np0005539563 python3.9[123803]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:06 np0005539563 python3.9[123881]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:06 np0005539563 podman[122796]: 2025-11-29 07:19:06.236614897 +0000 UTC m=+6.686091579 container remove 5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:06 np0005539563 systemd[1]: libpod-conmon-5110dce403f7f544bb1e1b2e0ae02727852233ea0d14f3924304221d59429097.scope: Deactivated successfully.
Nov 29 02:19:06 np0005539563 podman[123913]: 2025-11-29 07:19:06.399432226 +0000 UTC m=+0.038885653 container create 29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:06 np0005539563 systemd[1]: Started libpod-conmon-29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077.scope.
Nov 29 02:19:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:19:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580cf868d0859b4df861b6ee06e2db032a6dea58fa3f3958bce52e45ef9f14c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580cf868d0859b4df861b6ee06e2db032a6dea58fa3f3958bce52e45ef9f14c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580cf868d0859b4df861b6ee06e2db032a6dea58fa3f3958bce52e45ef9f14c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580cf868d0859b4df861b6ee06e2db032a6dea58fa3f3958bce52e45ef9f14c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/580cf868d0859b4df861b6ee06e2db032a6dea58fa3f3958bce52e45ef9f14c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:06 np0005539563 podman[123913]: 2025-11-29 07:19:06.383186528 +0000 UTC m=+0.022639975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:06 np0005539563 podman[123913]: 2025-11-29 07:19:06.492122252 +0000 UTC m=+0.131575699 container init 29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamport, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:19:06 np0005539563 podman[123913]: 2025-11-29 07:19:06.50290822 +0000 UTC m=+0.142361657 container start 29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:06 np0005539563 podman[123913]: 2025-11-29 07:19:06.506672723 +0000 UTC m=+0.146126150 container attach 29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamport, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:19:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:06.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:07 np0005539563 python3.9[124062]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.303314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747303428, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2482, "num_deletes": 251, "total_data_size": 4107348, "memory_usage": 4166064, "flush_reason": "Manual Compaction"}
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 02:19:07 np0005539563 gifted_lamport[123929]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:19:07 np0005539563 gifted_lamport[123929]: --> relative data size: 1.0
Nov 29 02:19:07 np0005539563 gifted_lamport[123929]: --> All data devices are unavailable
Nov 29 02:19:07 np0005539563 systemd[1]: libpod-29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077.scope: Deactivated successfully.
Nov 29 02:19:07 np0005539563 podman[123913]: 2025-11-29 07:19:07.336438802 +0000 UTC m=+0.975892249 container died 29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747370121, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3969666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7636, "largest_seqno": 10117, "table_properties": {"data_size": 3958546, "index_size": 6910, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26739, "raw_average_key_size": 21, "raw_value_size": 3934543, "raw_average_value_size": 3167, "num_data_blocks": 306, "num_entries": 1242, "num_filter_entries": 1242, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400503, "oldest_key_time": 1764400503, "file_creation_time": 1764400747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 66828 microseconds, and 8141 cpu microseconds.
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.370195) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3969666 bytes OK
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.370221) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.391534) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.391649) EVENT_LOG_v1 {"time_micros": 1764400747391625, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.391692) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4096599, prev total WAL file size 4096599, number of live WAL files 2.
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.393765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3876KB)], [20(7619KB)]
Nov 29 02:19:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400747393891, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11772278, "oldest_snapshot_seqno": -1}
Nov 29 02:19:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-580cf868d0859b4df861b6ee06e2db032a6dea58fa3f3958bce52e45ef9f14c3-merged.mount: Deactivated successfully.
Nov 29 02:19:07 np0005539563 python3.9[124150]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:08.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:10 np0005539563 python3.9[124314]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:10.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3913 keys, 10208709 bytes, temperature: kUnknown
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400750698397, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 10208709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10175838, "index_size": 22009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94595, "raw_average_key_size": 24, "raw_value_size": 10098529, "raw_average_value_size": 2580, "num_data_blocks": 963, "num_entries": 3913, "num_filter_entries": 3913, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764400747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.699333) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10208709 bytes
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.921236) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 3.6 rd, 3.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(5.5) write-amplify(2.6) OK, records in: 4439, records dropped: 526 output_compression: NoCompression
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.921303) EVENT_LOG_v1 {"time_micros": 1764400750921279, "job": 6, "event": "compaction_finished", "compaction_time_micros": 3305169, "compaction_time_cpu_micros": 26701, "output_level": 6, "num_output_files": 1, "total_output_size": 10208709, "num_input_records": 4439, "num_output_records": 3913, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400750923150, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400750925779, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:07.393532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.925861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.925872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.925875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.925878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:10.925881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:11 np0005539563 python3.9[124471]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539563 python3.9[124623]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:12 np0005539563 podman[123913]: 2025-11-29 07:19:12.425215766 +0000 UTC m=+6.064669243 container remove 29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:19:12 np0005539563 systemd[1]: libpod-conmon-29d84af8de4cd2f536b8c98c82cc40e2c183d63f89226110644a6b16425f0077.scope: Deactivated successfully.
Nov 29 02:19:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:19:12
Nov 29 02:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 29 02:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:19:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:12.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:13.04593451 +0000 UTC m=+0.096124641 container create 2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_newton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:12.971022635 +0000 UTC m=+0.021212786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:19:13 np0005539563 systemd[1]: Started libpod-conmon-2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1.scope.
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:19:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:19:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:13.288488829 +0000 UTC m=+0.338678980 container init 2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:13.29542106 +0000 UTC m=+0.345611191 container start 2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:19:13 np0005539563 condescending_newton[124788]: 167 167
Nov 29 02:19:13 np0005539563 systemd[1]: libpod-2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1.scope: Deactivated successfully.
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:13.441120718 +0000 UTC m=+0.491310889 container attach 2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_newton, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:13.441962681 +0000 UTC m=+0.492152852 container died 2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_newton, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:19:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0e4112b2c3b41754474ab41bd43a4612cfbefa012186595768b9c80165f93643-merged.mount: Deactivated successfully.
Nov 29 02:19:13 np0005539563 podman[124766]: 2025-11-29 07:19:13.665295339 +0000 UTC m=+0.715485480 container remove 2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_newton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:13 np0005539563 systemd[1]: libpod-conmon-2670303b3d73e2ad23fbd6f7f4d85bb6173f072b0ec4ce5243d5a6496232e2d1.scope: Deactivated successfully.
Nov 29 02:19:13 np0005539563 python3.9[124951]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:13 np0005539563 podman[124959]: 2025-11-29 07:19:13.807285723 +0000 UTC m=+0.026569093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:14 np0005539563 podman[124959]: 2025-11-29 07:19:14.452261118 +0000 UTC m=+0.671544468 container create 92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:19:14 np0005539563 systemd[1]: Started libpod-conmon-92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1.scope.
Nov 29 02:19:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:19:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79be74bce46c3e0b05d1fdff66da04445726191341a98562e19b2b0330bd893/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79be74bce46c3e0b05d1fdff66da04445726191341a98562e19b2b0330bd893/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79be74bce46c3e0b05d1fdff66da04445726191341a98562e19b2b0330bd893/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79be74bce46c3e0b05d1fdff66da04445726191341a98562e19b2b0330bd893/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:14 np0005539563 podman[124959]: 2025-11-29 07:19:14.563634138 +0000 UTC m=+0.782917468 container init 92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_einstein, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:14 np0005539563 podman[124959]: 2025-11-29 07:19:14.571507786 +0000 UTC m=+0.790791096 container start 92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_einstein, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:19:14 np0005539563 podman[124959]: 2025-11-29 07:19:14.576072692 +0000 UTC m=+0.795356032 container attach 92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_einstein, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:19:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:14.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:15 np0005539563 python3.9[125133]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:19:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:15.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]: {
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:    "0": [
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:        {
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "devices": [
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "/dev/loop3"
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            ],
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "lv_name": "ceph_lv0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "lv_size": "7511998464",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "name": "ceph_lv0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "tags": {
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.cluster_name": "ceph",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.crush_device_class": "",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.encrypted": "0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.osd_id": "0",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.type": "block",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:                "ceph.vdo": "0"
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            },
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "type": "block",
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:            "vg_name": "ceph_vg0"
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:        }
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]:    ]
Nov 29 02:19:15 np0005539563 recursing_einstein[125069]: }
Nov 29 02:19:15 np0005539563 systemd[1]: libpod-92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1.scope: Deactivated successfully.
Nov 29 02:19:15 np0005539563 podman[124959]: 2025-11-29 07:19:15.358567837 +0000 UTC m=+1.577851147 container died 92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_einstein, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e79be74bce46c3e0b05d1fdff66da04445726191341a98562e19b2b0330bd893-merged.mount: Deactivated successfully.
Nov 29 02:19:15 np0005539563 python3.9[125301]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:19:15 np0005539563 podman[124959]: 2025-11-29 07:19:15.727624954 +0000 UTC m=+1.946908274 container remove 92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:15 np0005539563 systemd[1]: libpod-conmon-92fa41de5305fbe9adac3fd3e67519b3161bcfa6b126782ce4f6a579097a18d1.scope: Deactivated successfully.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.275968123 +0000 UTC m=+0.036319532 container create 9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_matsumoto, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.305979) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756306040, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 306, "num_deletes": 250, "total_data_size": 141463, "memory_usage": 148328, "flush_reason": "Manual Compaction"}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 02:19:16 np0005539563 systemd[1]: Started libpod-conmon-9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3.scope.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756309632, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 140562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10118, "largest_seqno": 10423, "table_properties": {"data_size": 138546, "index_size": 244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5246, "raw_average_key_size": 18, "raw_value_size": 134612, "raw_average_value_size": 485, "num_data_blocks": 11, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400750, "oldest_key_time": 1764400750, "file_creation_time": 1764400756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 3676 microseconds, and 974 cpu microseconds.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.309667) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 140562 bytes OK
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.309681) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.312229) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.312244) EVENT_LOG_v1 {"time_micros": 1764400756312239, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.312259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 139275, prev total WAL file size 139275, number of live WAL files 2.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.312676) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(137KB)], [23(9969KB)]
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756312934, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10349271, "oldest_snapshot_seqno": -1}
Nov 29 02:19:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.260933408 +0000 UTC m=+0.021284827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3682 keys, 7523784 bytes, temperature: kUnknown
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756397065, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7523784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7496081, "index_size": 17417, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 90337, "raw_average_key_size": 24, "raw_value_size": 7426329, "raw_average_value_size": 2016, "num_data_blocks": 761, "num_entries": 3682, "num_filter_entries": 3682, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764400756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.397274) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7523784 bytes
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.399831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.0 rd, 89.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(127.2) write-amplify(53.5) OK, records in: 4190, records dropped: 508 output_compression: NoCompression
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.399853) EVENT_LOG_v1 {"time_micros": 1764400756399842, "job": 8, "event": "compaction_finished", "compaction_time_micros": 84124, "compaction_time_cpu_micros": 17335, "output_level": 6, "num_output_files": 1, "total_output_size": 7523784, "num_input_records": 4190, "num_output_records": 3682, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756399971, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400756401631, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.312475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.401718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.401723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.401821833 +0000 UTC m=+0.162173262 container init 9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.401725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.401727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:19:16.401754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.408792356 +0000 UTC m=+0.169143755 container start 9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_matsumoto, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.412490087 +0000 UTC m=+0.172841516 container attach 9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_matsumoto, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:19:16 np0005539563 jolly_matsumoto[125484]: 167 167
Nov 29 02:19:16 np0005539563 systemd[1]: libpod-9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3.scope: Deactivated successfully.
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.413613769 +0000 UTC m=+0.173965178 container died 9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_matsumoto, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:19:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bcccc8f52c0d1d169da731d65c74f72e387a64bd166546559319ca0afd09573a-merged.mount: Deactivated successfully.
Nov 29 02:19:16 np0005539563 podman[125468]: 2025-11-29 07:19:16.447097962 +0000 UTC m=+0.207449361 container remove 9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:19:16 np0005539563 systemd[1]: libpod-conmon-9627b3e64d6888289215c40f9af7961c8b313223539d3264da37e1e3af4a7eb3.scope: Deactivated successfully.
Nov 29 02:19:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:16 np0005539563 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 02:19:16 np0005539563 systemd[1]: session-40.scope: Consumed 29.038s CPU time.
Nov 29 02:19:16 np0005539563 systemd-logind[785]: Session 40 logged out. Waiting for processes to exit.
Nov 29 02:19:16 np0005539563 podman[125509]: 2025-11-29 07:19:16.601692174 +0000 UTC m=+0.043611733 container create 83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:19:16 np0005539563 systemd-logind[785]: Removed session 40.
Nov 29 02:19:16 np0005539563 systemd[1]: Started libpod-conmon-83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56.scope.
Nov 29 02:19:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:16.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:19:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c229c58f07e0fb94f5160d2efb1cb0c7a6da6a40b1ad16bf8795cb3d63543370/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c229c58f07e0fb94f5160d2efb1cb0c7a6da6a40b1ad16bf8795cb3d63543370/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:16 np0005539563 podman[125509]: 2025-11-29 07:19:16.580551481 +0000 UTC m=+0.022471070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c229c58f07e0fb94f5160d2efb1cb0c7a6da6a40b1ad16bf8795cb3d63543370/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c229c58f07e0fb94f5160d2efb1cb0c7a6da6a40b1ad16bf8795cb3d63543370/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:16 np0005539563 podman[125509]: 2025-11-29 07:19:16.689853585 +0000 UTC m=+0.131773144 container init 83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chatelet, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:19:16 np0005539563 podman[125509]: 2025-11-29 07:19:16.69692311 +0000 UTC m=+0.138842679 container start 83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:19:16 np0005539563 podman[125509]: 2025-11-29 07:19:16.700757716 +0000 UTC m=+0.142677295 container attach 83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chatelet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:19:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]: {
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:        "osd_id": 0,
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:        "type": "bluestore"
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]:    }
Nov 29 02:19:17 np0005539563 loving_chatelet[125527]: }
Nov 29 02:19:17 np0005539563 systemd[1]: libpod-83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56.scope: Deactivated successfully.
Nov 29 02:19:17 np0005539563 podman[125509]: 2025-11-29 07:19:17.605289457 +0000 UTC m=+1.047209016 container died 83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:19:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c229c58f07e0fb94f5160d2efb1cb0c7a6da6a40b1ad16bf8795cb3d63543370-merged.mount: Deactivated successfully.
Nov 29 02:19:18 np0005539563 podman[125509]: 2025-11-29 07:19:18.152860244 +0000 UTC m=+1.594779803 container remove 83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 02:19:18 np0005539563 systemd[1]: libpod-conmon-83061ff65bd9a0dbd6c55a58d6108caed03d74a59f42a96f0b4ff377686eec56.scope: Deactivated successfully.
Nov 29 02:19:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:19:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:19:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5d3e93a7-96dd-404b-9ba0-a5caa10d9784 does not exist
Nov 29 02:19:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1f6d7d06-bc08-4bcc-bfab-99a2ef415f1e does not exist
Nov 29 02:19:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a31f9687-9477-4365-98ab-362824852f03 does not exist
Nov 29 02:19:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:18.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:19:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:20.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:19:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:22.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539563 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:19:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:24.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:26 np0005539563 systemd-logind[785]: New session 41 of user zuul.
Nov 29 02:19:26 np0005539563 systemd[1]: Started Session 41 of User zuul.
Nov 29 02:19:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:26.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:27 np0005539563 python3.9[125822]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:19:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:28 np0005539563 python3.9[125974]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:29.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:29 np0005539563 python3.9[126129]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 02:19:30 np0005539563 python3.9[126281]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.j1q76u6b follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:19:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:30.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:30 np0005539563 python3.9[126407]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.j1q76u6b mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400769.70846-107-113802773217028/.source.j1q76u6b _original_basename=.e3cf63pj follow=False checksum=425f0dec1542497d25012cf56c23eb3f3f1d2c45 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539563 python3.9[126559]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:32.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:33 np0005539563 python3.9[126712]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQsLXbFhjUoBaTkhKZlhlr4wo49zgbzeJBequh3eUPlExtzdjrm/R47hkAJGagw+KhipRZ6XygyvP7g0rFG4kdUV8ZbW7HpIhvM2LCuDhFHJGta5IbLQDOAA3QuuNA4DyzfWhW146Q2aOja0AoRZOxjBRKO37fhEgGVJO/UZQHoJZFXHQPBPhZ27Wtt4Jfhz0G/t7WgxqsHTg9pnZL3PKV8yC/Ety9V+G9Hjrbwv8GblAazAMvnYcN6Hhh0mKKJ41E1++cy2nN9Lr6iU9KXS4BN73PkapyN75SJK4/2HEELgi7XCGQtXkdc+cnS1nYdtqW5aUS8fONsji8bdoy4AvRQrTsNWbXNcQXBesHoKNiBaUZjzaW0LhwQ2HTD36wG2FW/thgjrlU0AY8aqut/tcB7sjUacgNn8XfqibZb07x75HvbixT1G+V9ax63HLyfAiLCZquwpnl7CuyQvBAe+UNPLU4Kegtn+KKw2+3BoNkkAKkAoDdKd5fQKWFavTllfU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEesPYkFXAKa2jD/XHieFXe2/NLZG5BPNBvLebxF7i4V#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3fAbGbewc62wcP/ANYyTDYdWflUi4LqSZ2pYXEDgbyEIKVn6IU7ulNV9i7b7SvxrtzT5K34kYv1WsU3bRd5RM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDc04fosxiJMz9URZzfwgW2kqQvT/wRjkGRSpo8InnYlU+RAljr+QL8e1C8DPu41m+HGkgDmV4uDikwXF3b0w/6D0/P6iPUsexRy4OkOFgOqlzl7+pNzQ1p5SMgMoaKslyPA1DEUc0bxHjIpTHyjq/X8YamvXJO4KLpZ42Ii0c6RyWcejiRw4wZQWh2s6egN8in6cEVODGcWVseYKhFaPjdUDBtuQy4LaGwosJIkR1OCy9coVbEdcv2vOxdpLby9ssC7nEDAKg2X+0rmcdpImSt43KnAXiuMegm5A7FvAas99jVOYawKyostqRzEOId/1TnbBGDEabjKYlPEOLSFiMsBWLwTkN5loBfqwpLWlheJWPYP90mvfiENFN4W+ut6nx4zBVHQYvGts86HDkcSVipUVxaYaWf37c/GMXcee85lI//k2lNWe0yYOJGU7P1jyU+ug0Cn1MeQghj1V8Gcnax0b58J+Ttp4a7UnYek2q2w2h6nbIbZT5m+yw/KYeNtE8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEgIAlZsupHHlO1a9ydDFIdgMGgwYqu0xx1PBhB1cRGz#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZLPbvNXmCCAW6hZosm19hA5j7Lbr0PZCizVLJXvz0y88L5bXrAQVln7SscOXMnvFy6P8Fn/54/gijC9Rd2rDs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9IXwkB2kbuJv6AXS7YRKSa74/LXNdMPGOs9WAzsnePFq78YtNX+JkgkhS6H4PtKZr7d8zGldcUVTXsG54r7DHIiEhjiunXArwm7nxPCcvRVmU6kntuiJbAOObaZlgrdlGcNsB0gEt5E4YWVNxiiRnsA60PvQbLyfN0/+99rmyMLcT4z9DL+dZj8kNH54PFTeXByeUArORk1qkPj734Ru+RP82qH26PyeJz2HlCsq7qPKepCgiVDKLbjXnLqt58qEzzVFKx3gfIhpvZ8PiUoFSS6UJlk/70XVp+og+tU/Dv952UWQMOHkfsIfqvdJgcy2hYuLbI03ZOF/NRU1FEUEPIhfU7kM2KzkqoDLyu+ntXGTBE6vWBuqrH+KUMqrAGGXZPnoTS8zb3H1izaYqN48vVE10jDHjkhWEEIuwN5AVGsCBjpRkQ+rZ+gDb/z4loN29WMX/KmqYAy+qsu7X8gFojfnlrv4DYVd1lxYZPnqS8bCkeBF8txjMVUD5EpNVGVU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOpx0/R+UH9iWt0hByjYOi11MmeoOEV/RM05Qq0CkR6T#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLcAFq3gx5S+bCbh1b0B1Plh9X3nnDc+14hmd4HK59tBD1jd/VrvEVcg/jrioqZJxPOiBK8QMTq5htAcmQbIjnM=#012 create=True mode=0644 path=/tmp/ansible.j1q76u6b state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:34 np0005539563 python3.9[126864]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.j1q76u6b' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:35 np0005539563 python3.9[127019]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.j1q76u6b state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:35.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:35 np0005539563 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 02:19:35 np0005539563 systemd[1]: session-41.scope: Consumed 4.784s CPU time.
Nov 29 02:19:35 np0005539563 systemd-logind[785]: Session 41 logged out. Waiting for processes to exit.
Nov 29 02:19:35 np0005539563 systemd-logind[785]: Removed session 41.
Nov 29 02:19:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:37.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539563 systemd-logind[785]: New session 42 of user zuul.
Nov 29 02:19:40 np0005539563 systemd[1]: Started Session 42 of User zuul.
Nov 29 02:19:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:42 np0005539563 python3.9[127250]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:43 np0005539563 python3.9[127407]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:19:44 np0005539563 python3.9[127561]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:19:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:44.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:45.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:45 np0005539563 python3.9[127715]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:46 np0005539563 python3.9[127868]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:46.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:47.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:47 np0005539563 python3.9[128021]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:19:47 np0005539563 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 02:19:47 np0005539563 systemd[1]: session-42.scope: Consumed 3.831s CPU time.
Nov 29 02:19:47 np0005539563 systemd-logind[785]: Session 42 logged out. Waiting for processes to exit.
Nov 29 02:19:47 np0005539563 systemd-logind[785]: Removed session 42.
Nov 29 02:19:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:48.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:50.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:51.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:53 np0005539563 systemd-logind[785]: New session 43 of user zuul.
Nov 29 02:19:53 np0005539563 systemd[1]: Started Session 43 of User zuul.
Nov 29 02:19:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:19:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:53.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:19:54 np0005539563 python3.9[128203]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:54.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:55.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:55 np0005539563 python3.9[128360]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:19:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:56.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:57 np0005539563 python3.9[128446]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:19:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:57.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:58.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:19:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:59.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:59 np0005539563 python3.9[128647]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:20:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:20:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:00.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:01 np0005539563 python3.9[128799]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:20:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:01.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:02 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:20:02 np0005539563 python3.9[128949]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:02.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:02 np0005539563 python3.9[129100]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:03.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:03 np0005539563 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 02:20:03 np0005539563 systemd[1]: session-43.scope: Consumed 6.284s CPU time.
Nov 29 02:20:03 np0005539563 systemd-logind[785]: Session 43 logged out. Waiting for processes to exit.
Nov 29 02:20:03 np0005539563 systemd-logind[785]: Removed session 43.
Nov 29 02:20:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:04.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:05.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:06.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:07.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:09.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:09 np0005539563 systemd-logind[785]: New session 44 of user zuul.
Nov 29 02:20:09 np0005539563 systemd[1]: Started Session 44 of User zuul.
Nov 29 02:20:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:10 np0005539563 python3.9[129281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:20:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:11.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:12 np0005539563 python3.9[129438]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:20:12
Nov 29 02:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Nov 29 02:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:20:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:20:13 np0005539563 python3.9[129591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:14 np0005539563 python3.9[129743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:14.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:15 np0005539563 python3.9[129867]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400813.505381-162-141680059814810/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=514930d8011b30519a709f76d8787e43ca9fd8f9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:15 np0005539563 python3.9[130019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:16 np0005539563 python3.9[130142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400815.24753-162-97307034697191/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cd8ab8ed4fdf501d1b4ce95ba4f398e005279fa9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:16.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:17 np0005539563 python3.9[130295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:17 np0005539563 python3.9[130468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400816.5662956-162-12827551650420/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=50a7cdb62031defeb33dea025d9a604431d396fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:18 np0005539563 python3.9[130620]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:20:19 np0005539563 python3.9[130873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:19 np0005539563 python3.9[131162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:20 np0005539563 python3.9[131299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400819.4839857-341-66834682504364/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5a2df77dcf71068ab391e737dbb07d53f1359e9b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:20:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:21 np0005539563 python3.9[131452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:21 np0005539563 python3.9[131575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400820.7526088-341-91780377925168/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=901ecafc59da21fac83aa5044424fabd09a6fef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b9cf7e8e-bf13-4864-a827-22b036a49a34 does not exist
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f933d6ee-1652-48a7-b7c4-25460eff8a25 does not exist
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9444b1f0-989a-4a0b-a7c5-5d456e736878 does not exist
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:20:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:20:22 np0005539563 python3.9[131727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:20:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:22 np0005539563 podman[131964]: 2025-11-29 07:20:22.731418566 +0000 UTC m=+0.024579785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:22 np0005539563 podman[131964]: 2025-11-29 07:20:22.870191598 +0000 UTC m=+0.163352777 container create f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:20:22 np0005539563 systemd[1]: Started libpod-conmon-f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2.scope.
Nov 29 02:20:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:20:22 np0005539563 podman[131964]: 2025-11-29 07:20:22.978426725 +0000 UTC m=+0.271587934 container init f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:20:22 np0005539563 podman[131964]: 2025-11-29 07:20:22.985345775 +0000 UTC m=+0.278506954 container start f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:22 np0005539563 hardcore_pare[132009]: 167 167
Nov 29 02:20:22 np0005539563 systemd[1]: libpod-f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2.scope: Deactivated successfully.
Nov 29 02:20:22 np0005539563 podman[131964]: 2025-11-29 07:20:22.991272498 +0000 UTC m=+0.284433717 container attach f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:20:22 np0005539563 podman[131964]: 2025-11-29 07:20:22.994395253 +0000 UTC m=+0.287556462 container died f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:23 np0005539563 python3.9[132006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400821.9430933-341-9685581858741/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=174a9777b18d217cd520748bf0935a5a48c1aad0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-92a23aca5e0a1597c85bc5c4d2d0adcab1a73ba7b10dacb7f4ce3cafd79bd8a3-merged.mount: Deactivated successfully.
Nov 29 02:20:23 np0005539563 podman[131964]: 2025-11-29 07:20:23.120505649 +0000 UTC m=+0.413666838 container remove f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:20:23 np0005539563 systemd[1]: libpod-conmon-f03fd0b35ab0ea2db0541ece07d4eeef589ea84fb442bf52a0d27795dc3ec9f2.scope: Deactivated successfully.
Nov 29 02:20:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:23.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:23 np0005539563 podman[132057]: 2025-11-29 07:20:23.253656288 +0000 UTC m=+0.023832104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:23 np0005539563 podman[132057]: 2025-11-29 07:20:23.452937699 +0000 UTC m=+0.223113485 container create 3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:20:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:23 np0005539563 systemd[1]: Started libpod-conmon-3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076.scope.
Nov 29 02:20:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:20:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873bc5006ec1a9a8192eb920b65e9f7e7b1feb5a9c8b98971d3df878f0d5fe7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873bc5006ec1a9a8192eb920b65e9f7e7b1feb5a9c8b98971d3df878f0d5fe7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873bc5006ec1a9a8192eb920b65e9f7e7b1feb5a9c8b98971d3df878f0d5fe7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873bc5006ec1a9a8192eb920b65e9f7e7b1feb5a9c8b98971d3df878f0d5fe7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873bc5006ec1a9a8192eb920b65e9f7e7b1feb5a9c8b98971d3df878f0d5fe7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:23 np0005539563 python3.9[132203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:23 np0005539563 podman[132057]: 2025-11-29 07:20:23.802134537 +0000 UTC m=+0.572310423 container init 3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:20:23 np0005539563 podman[132057]: 2025-11-29 07:20:23.80990049 +0000 UTC m=+0.580076276 container start 3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:20:23 np0005539563 podman[132057]: 2025-11-29 07:20:23.921187834 +0000 UTC m=+0.691363700 container attach 3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:20:24 np0005539563 python3.9[132357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:24 np0005539563 musing_cray[132199]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:20:24 np0005539563 musing_cray[132199]: --> relative data size: 1.0
Nov 29 02:20:24 np0005539563 musing_cray[132199]: --> All data devices are unavailable
Nov 29 02:20:24 np0005539563 systemd[1]: libpod-3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076.scope: Deactivated successfully.
Nov 29 02:20:24 np0005539563 podman[132057]: 2025-11-29 07:20:24.677013845 +0000 UTC m=+1.447189631 container died 3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-873bc5006ec1a9a8192eb920b65e9f7e7b1feb5a9c8b98971d3df878f0d5fe7f-merged.mount: Deactivated successfully.
Nov 29 02:20:24 np0005539563 podman[132057]: 2025-11-29 07:20:24.74240384 +0000 UTC m=+1.512579626 container remove 3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:24 np0005539563 systemd[1]: libpod-conmon-3b30f5cbc220215447b775b2fa5ce90519b1d490ab91856683d9103de1801076.scope: Deactivated successfully.
Nov 29 02:20:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:25 np0005539563 python3.9[132607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.342540019 +0000 UTC m=+0.037622463 container create c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:20:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:25.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:25 np0005539563 systemd[1]: Started libpod-conmon-c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712.scope.
Nov 29 02:20:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.32546124 +0000 UTC m=+0.020543704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.434209845 +0000 UTC m=+0.129292359 container init c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.441544075 +0000 UTC m=+0.136626519 container start c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.448494237 +0000 UTC m=+0.143576681 container attach c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:25 np0005539563 agitated_bhabha[132782]: 167 167
Nov 29 02:20:25 np0005539563 systemd[1]: libpod-c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712.scope: Deactivated successfully.
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.449635548 +0000 UTC m=+0.144717992 container died c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:20:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b14f926ca16460b6f9f0b0ad5624317dfd2c1e5f87af6be739f97cff3136d9b1-merged.mount: Deactivated successfully.
Nov 29 02:20:25 np0005539563 podman[132728]: 2025-11-29 07:20:25.496127144 +0000 UTC m=+0.191209588 container remove c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:20:25 np0005539563 systemd[1]: libpod-conmon-c2471a5eb1cd97769561f6e89e671308fa33cd7b93417602a31a7939acb9f712.scope: Deactivated successfully.
Nov 29 02:20:25 np0005539563 podman[132837]: 2025-11-29 07:20:25.644968878 +0000 UTC m=+0.045289503 container create 3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:20:25 np0005539563 systemd[1]: Started libpod-conmon-3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff.scope.
Nov 29 02:20:25 np0005539563 python3.9[132829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400824.6871214-508-191257418041048/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=18ae7c9a41642d4c932eb402155228d490364d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:25 np0005539563 podman[132837]: 2025-11-29 07:20:25.620010453 +0000 UTC m=+0.020331098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:20:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b574cd3a465b20c5e56a90d5a7f2adbb81c34df3fd92718480b5ce0601de5ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b574cd3a465b20c5e56a90d5a7f2adbb81c34df3fd92718480b5ce0601de5ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b574cd3a465b20c5e56a90d5a7f2adbb81c34df3fd92718480b5ce0601de5ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b574cd3a465b20c5e56a90d5a7f2adbb81c34df3fd92718480b5ce0601de5ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:25 np0005539563 podman[132837]: 2025-11-29 07:20:25.735498242 +0000 UTC m=+0.135818887 container init 3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:20:25 np0005539563 podman[132837]: 2025-11-29 07:20:25.74523629 +0000 UTC m=+0.145556915 container start 3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:25 np0005539563 podman[132837]: 2025-11-29 07:20:25.749216629 +0000 UTC m=+0.149537274 container attach 3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:20:26 np0005539563 python3.9[133009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]: {
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:    "0": [
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:        {
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "devices": [
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "/dev/loop3"
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            ],
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "lv_name": "ceph_lv0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "lv_size": "7511998464",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "name": "ceph_lv0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "tags": {
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.cluster_name": "ceph",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.crush_device_class": "",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.encrypted": "0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.osd_id": "0",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.type": "block",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:                "ceph.vdo": "0"
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            },
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "type": "block",
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:            "vg_name": "ceph_vg0"
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:        }
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]:    ]
Nov 29 02:20:26 np0005539563 vibrant_archimedes[132853]: }
Nov 29 02:20:26 np0005539563 systemd[1]: libpod-3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff.scope: Deactivated successfully.
Nov 29 02:20:26 np0005539563 podman[132837]: 2025-11-29 07:20:26.557901981 +0000 UTC m=+0.958222626 container died 3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:20:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0b574cd3a465b20c5e56a90d5a7f2adbb81c34df3fd92718480b5ce0601de5ad-merged.mount: Deactivated successfully.
Nov 29 02:20:26 np0005539563 podman[132837]: 2025-11-29 07:20:26.610981028 +0000 UTC m=+1.011301653 container remove 3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:20:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:26 np0005539563 systemd[1]: libpod-conmon-3b7d1f34f374be75194e4811247348b47dd00ce1e151f2a9bc01df5f03a05bff.scope: Deactivated successfully.
Nov 29 02:20:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:26.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:26 np0005539563 python3.9[133198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400825.8675458-508-122746140691494/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=901ecafc59da21fac83aa5044424fabd09a6fef2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.123272617 +0000 UTC m=+0.041543632 container create b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:20:27 np0005539563 systemd[1]: Started libpod-conmon-b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e.scope.
Nov 29 02:20:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.10081923 +0000 UTC m=+0.019090255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.203163168 +0000 UTC m=+0.121434233 container init b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.211122387 +0000 UTC m=+0.129393402 container start b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.214844659 +0000 UTC m=+0.133115724 container attach b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:20:27 np0005539563 infallible_poitras[133355]: 167 167
Nov 29 02:20:27 np0005539563 systemd[1]: libpod-b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e.scope: Deactivated successfully.
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.216359641 +0000 UTC m=+0.134630656 container died b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-94cce13c595a9336e38bd3e508b95c2d6c6ae2e8c874fdb0ef0836c3a46b22c4-merged.mount: Deactivated successfully.
Nov 29 02:20:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:27.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:27 np0005539563 podman[133312]: 2025-11-29 07:20:27.570333584 +0000 UTC m=+0.488604599 container remove b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_poitras, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:20:27 np0005539563 python3.9[133471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:27 np0005539563 systemd[1]: libpod-conmon-b5b15d5071a74781b17aab00f0fc9ce4dcad478015359b4f29fab1902d68c78e.scope: Deactivated successfully.
Nov 29 02:20:27 np0005539563 podman[133506]: 2025-11-29 07:20:27.729598475 +0000 UTC m=+0.038293322 container create f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:27 np0005539563 systemd[1]: Started libpod-conmon-f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78.scope.
Nov 29 02:20:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:20:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc8cc8bc873fd442b4a012c05690f5f1bb50aabc90cfd5c42f6dfa09e6b589a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc8cc8bc873fd442b4a012c05690f5f1bb50aabc90cfd5c42f6dfa09e6b589a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc8cc8bc873fd442b4a012c05690f5f1bb50aabc90cfd5c42f6dfa09e6b589a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc8cc8bc873fd442b4a012c05690f5f1bb50aabc90cfd5c42f6dfa09e6b589a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:27 np0005539563 podman[133506]: 2025-11-29 07:20:27.714161662 +0000 UTC m=+0.022856529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:27 np0005539563 podman[133506]: 2025-11-29 07:20:27.81869525 +0000 UTC m=+0.127390127 container init f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:27 np0005539563 podman[133506]: 2025-11-29 07:20:27.825507397 +0000 UTC m=+0.134202234 container start f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:20:27 np0005539563 podman[133506]: 2025-11-29 07:20:27.82853138 +0000 UTC m=+0.137226227 container attach f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:20:28 np0005539563 python3.9[133623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400827.1447618-508-221774821097357/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=5d294839fac8306f44c4d2845c89aac9a8cf0dd6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]: {
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:        "osd_id": 0,
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:        "type": "bluestore"
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]:    }
Nov 29 02:20:28 np0005539563 boring_brahmagupta[133566]: }
Nov 29 02:20:28 np0005539563 systemd[1]: libpod-f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78.scope: Deactivated successfully.
Nov 29 02:20:28 np0005539563 conmon[133566]: conmon f42d3e703402e1377a97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78.scope/container/memory.events
Nov 29 02:20:28 np0005539563 podman[133506]: 2025-11-29 07:20:28.657655963 +0000 UTC m=+0.966350820 container died f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:20:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3bc8cc8bc873fd442b4a012c05690f5f1bb50aabc90cfd5c42f6dfa09e6b589a-merged.mount: Deactivated successfully.
Nov 29 02:20:28 np0005539563 podman[133506]: 2025-11-29 07:20:28.714965336 +0000 UTC m=+1.023660183 container remove f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:20:28 np0005539563 systemd[1]: libpod-conmon-f42d3e703402e1377a97c7db90e82fa0d8d4287db7e9e293795070d2eb53eb78.scope: Deactivated successfully.
Nov 29 02:20:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:20:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:20:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d8a8a98e-7ff8-4db1-9631-41790172446f does not exist
Nov 29 02:20:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7d11cc8d-0a52-4af3-9b3d-342e933e81a9 does not exist
Nov 29 02:20:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 34b04399-86a7-4d49-90b1-b0e0ab167495 does not exist
Nov 29 02:20:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:28.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:29 np0005539563 python3.9[133856]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:29.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:20:30 np0005539563 python3.9[134008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:30 np0005539563 python3.9[134131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400829.518615-684-41496022209196/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:31 np0005539563 python3.9[134284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:31.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:31 np0005539563 python3.9[134436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:32 np0005539563 python3.9[134559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400831.405626-759-224837210944143/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:32.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:33 np0005539563 python3.9[134712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:33.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:33 np0005539563 python3.9[134864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:34 np0005539563 python3.9[134987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400833.2436545-826-227149435279535/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:34.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:35 np0005539563 python3.9[135140]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:35.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:35 np0005539563 python3.9[135292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:36 np0005539563 python3.9[135415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400835.4532993-897-51142549791625/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:36.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:37 np0005539563 python3.9[135568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:37 np0005539563 python3.9[135770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:38 np0005539563 python3.9[135893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400837.267482-964-138721846093526/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:38.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:38 np0005539563 python3.9[136046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:39 np0005539563 python3.9[136198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:40 np0005539563 python3.9[136321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400839.1738207-1036-8206265947776/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e373fa93a0e53fbb089cc79ce53406904f5c7b5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:40.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:41.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:43 np0005539563 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 02:20:43 np0005539563 systemd[1]: session-44.scope: Consumed 22.847s CPU time.
Nov 29 02:20:43 np0005539563 systemd-logind[785]: Session 44 logged out. Waiting for processes to exit.
Nov 29 02:20:43 np0005539563 systemd-logind[785]: Removed session 44.
Nov 29 02:20:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:43.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:20:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8058 writes, 34K keys, 8058 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8058 writes, 1514 syncs, 5.32 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8058 writes, 34K keys, 8058 commit groups, 1.0 writes per commit group, ingest: 21.01 MB, 0.04 MB/s#012Interval WAL: 8058 writes, 1514 syncs, 5.32 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 29 02:20:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:44.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:45.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:46.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:47.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539563 systemd-logind[785]: New session 45 of user zuul.
Nov 29 02:20:48 np0005539563 systemd[1]: Started Session 45 of User zuul.
Nov 29 02:20:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:49.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:49 np0005539563 python3.9[136506]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:50 np0005539563 python3.9[136658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:51 np0005539563 python3.9[136782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400850.061375-67-178814306636506/.source.conf _original_basename=ceph.conf follow=False checksum=c098df1eed8765439af66fe3d0de96ae0e466ab0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:51.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:51 np0005539563 python3.9[136934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:52 np0005539563 python3.9[137057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400851.4903774-67-145229759355944/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=b1c127dd74be8d747654d0d3f00b29a32faa6866 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:53 np0005539563 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 02:20:53 np0005539563 systemd[1]: session-45.scope: Consumed 2.404s CPU time.
Nov 29 02:20:53 np0005539563 systemd-logind[785]: Session 45 logged out. Waiting for processes to exit.
Nov 29 02:20:53 np0005539563 systemd-logind[785]: Removed session 45.
Nov 29 02:20:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:53.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 02:20:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:55.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:57.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:58.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:20:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:59.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:59 np0005539563 systemd-logind[785]: New session 46 of user zuul.
Nov 29 02:20:59 np0005539563 systemd[1]: Started Session 46 of User zuul.
Nov 29 02:21:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:00.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:00 np0005539563 python3.9[137290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:02 np0005539563 python3.9[137446]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:21:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:02 np0005539563 python3.9[137599]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:21:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:02.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:03.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:03 np0005539563 python3.9[137749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:04 np0005539563 python3.9[137901]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:21:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:05.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:06.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:08.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:09 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 02:21:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:09.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:09 np0005539563 python3.9[138060]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:21:10 np0005539563 python3.9[138144]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:10.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:21:12
Nov 29 02:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 29 02:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:21:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:12.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:21:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:13 np0005539563 python3.9[138299]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:21:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:14 np0005539563 python3[138454]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 02:21:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:15 np0005539563 python3.9[138607]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:16.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:16 np0005539563 python3.9[138760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:17 np0005539563 python3.9[138838]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:18 np0005539563 python3.9[139040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:18 np0005539563 python3.9[139118]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4uutkyd3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:18.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:19 np0005539563 python3.9[139271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:19 np0005539563 python3.9[139349]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:20 np0005539563 python3.9[139502]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:21 np0005539563 python3[139655]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:21:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:21:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:22 np0005539563 python3.9[139808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:22.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:23.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:23 np0005539563 python3.9[139933]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400882.2173223-436-129461504079074/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:24 np0005539563 python3.9[140085]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:24.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:25 np0005539563 python3.9[140211]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400883.8004923-481-84260192544757/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:26 np0005539563 python3.9[140363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:27 np0005539563 python3.9[140489]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400885.9833784-526-214515660139255/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:27.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:28 np0005539563 python3.9[140641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:28 np0005539563 python3.9[140766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400887.6084092-571-237180214924981/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:28.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:29 np0005539563 python3.9[141034]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:21:30 np0005539563 python3.9[141176]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400889.2591352-616-177745377611797/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:32.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:34.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:35 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:21:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:35.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:36.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:37.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:38.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:21:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:39.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:40.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:43 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:21:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 18.5456 seconds
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:21:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:21:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:21:46 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(21) init, last seen epoch 21, mid-election, bumping
Nov 29 02:21:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:47 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:21:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:47 np0005539563 python3.9[141388]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:21:48 np0005539563 python3.9[141540]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:21:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:21:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:21:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 12m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:21:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:21:49 np0005539563 python3.9[141696]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:49.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:50 np0005539563 python3.9[141849]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:50.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:51.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:21:51 np0005539563 python3.9[142002]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:21:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:21:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:52 np0005539563 python3.9[142156]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:52.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:53.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:53 np0005539563 python3.9[142312]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:21:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:21:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:21:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:21:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:21:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:21:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:21:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:54.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:55 np0005539563 python3.9[142463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:55.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:56 np0005539563 python3.9[142616]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:56 np0005539563 ovs-vsctl[142617]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 02:21:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:56.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:57 np0005539563 python3.9[142770]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:21:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 62b27f16-0ab6-4210-80b6-c2d743c65ee4 does not exist
Nov 29 02:21:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev dfea9212-8427-4703-914a-7580416f97b3 does not exist
Nov 29 02:21:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 96c62d75-89dd-4eb2-bd29-1f4a1920621f does not exist
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:21:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:58 np0005539563 python3.9[142980]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:58 np0005539563 ovs-vsctl[143075]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 02:21:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:59 np0005539563 podman[143144]: 2025-11-29 07:21:59.103247977 +0000 UTC m=+0.021062156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:21:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:21:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:21:59 np0005539563 python3.9[143283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:00 np0005539563 podman[143144]: 2025-11-29 07:22:00.007298497 +0000 UTC m=+0.925112666 container create 7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:22:00 np0005539563 systemd[1]: Started libpod-conmon-7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db.scope.
Nov 29 02:22:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:00 np0005539563 python3.9[143442]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:01 np0005539563 podman[143144]: 2025-11-29 07:22:01.310198054 +0000 UTC m=+2.228012243 container init 7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:22:01 np0005539563 podman[143144]: 2025-11-29 07:22:01.323004894 +0000 UTC m=+2.240819093 container start 7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:22:01 np0005539563 dreamy_darwin[143387]: 167 167
Nov 29 02:22:01 np0005539563 systemd[1]: libpod-7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db.scope: Deactivated successfully.
Nov 29 02:22:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:01.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:01 np0005539563 python3.9[143603]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:01 np0005539563 podman[143144]: 2025-11-29 07:22:01.909529565 +0000 UTC m=+2.827343724 container attach 7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:22:01 np0005539563 podman[143144]: 2025-11-29 07:22:01.911072378 +0000 UTC m=+2.828886587 container died 7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:22:02 np0005539563 python3.9[143684]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:22:02 np0005539563 python3.9[143837]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:02.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:03 np0005539563 python3.9[143917]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-535d28da962a3bb6314fbf7208a62df28b818bd61911a13744df7f7aaf0e7fc6-merged.mount: Deactivated successfully.
Nov 29 02:22:04 np0005539563 python3.9[144069]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:04.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:04 np0005539563 podman[143144]: 2025-11-29 07:22:04.97137706 +0000 UTC m=+5.889191219 container remove 7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:22:04 np0005539563 systemd[1]: libpod-conmon-7451a9cadc7b77533e14b6bf27b958a3bf4ec2d7c9bce57fa83bf44ca60777db.scope: Deactivated successfully.
Nov 29 02:22:05 np0005539563 python3.9[144224]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:05 np0005539563 podman[144230]: 2025-11-29 07:22:05.113038708 +0000 UTC m=+0.027672277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:05 np0005539563 podman[144230]: 2025-11-29 07:22:05.44868844 +0000 UTC m=+0.363321979 container create e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:22:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:05 np0005539563 python3.9[144321]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:06 np0005539563 systemd[1]: Started libpod-conmon-e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf.scope.
Nov 29 02:22:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527bf28ad4cc7ba32e21a8d60518b577f59f05ef9ef8b9cab4d111cabea2530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527bf28ad4cc7ba32e21a8d60518b577f59f05ef9ef8b9cab4d111cabea2530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527bf28ad4cc7ba32e21a8d60518b577f59f05ef9ef8b9cab4d111cabea2530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527bf28ad4cc7ba32e21a8d60518b577f59f05ef9ef8b9cab4d111cabea2530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f527bf28ad4cc7ba32e21a8d60518b577f59f05ef9ef8b9cab4d111cabea2530/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:06 np0005539563 podman[144230]: 2025-11-29 07:22:06.180253112 +0000 UTC m=+1.094886651 container init e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:06 np0005539563 podman[144230]: 2025-11-29 07:22:06.187402956 +0000 UTC m=+1.102036495 container start e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:22:06 np0005539563 podman[144230]: 2025-11-29 07:22:06.359520625 +0000 UTC m=+1.274154194 container attach e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:22:06 np0005539563 python3.9[144480]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:06.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:07 np0005539563 python3.9[144559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:07 np0005539563 recursing_albattani[144348]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:22:07 np0005539563 recursing_albattani[144348]: --> relative data size: 1.0
Nov 29 02:22:07 np0005539563 recursing_albattani[144348]: --> All data devices are unavailable
Nov 29 02:22:07 np0005539563 systemd[1]: libpod-e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf.scope: Deactivated successfully.
Nov 29 02:22:07 np0005539563 podman[144230]: 2025-11-29 07:22:07.062076933 +0000 UTC m=+1.976710472 container died e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:07.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f527bf28ad4cc7ba32e21a8d60518b577f59f05ef9ef8b9cab4d111cabea2530-merged.mount: Deactivated successfully.
Nov 29 02:22:08 np0005539563 python3.9[144733]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:22:08 np0005539563 systemd[1]: Reloading.
Nov 29 02:22:08 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:08 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:08 np0005539563 podman[144230]: 2025-11-29 07:22:08.202164097 +0000 UTC m=+3.116797656 container remove e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:22:08 np0005539563 systemd[1]: libpod-conmon-e28e1f3752e55916ff03a7519dbb0363a8983255644f9836f951df4885f14fcf.scope: Deactivated successfully.
Nov 29 02:22:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:08 np0005539563 podman[145013]: 2025-11-29 07:22:08.869266658 +0000 UTC m=+0.023842032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:08.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539563 podman[145013]: 2025-11-29 07:22:09.00120876 +0000 UTC m=+0.155784184 container create 153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:22:09 np0005539563 python3.9[145079]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:09 np0005539563 systemd[1]: Started libpod-conmon-153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd.scope.
Nov 29 02:22:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:09.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539563 podman[145013]: 2025-11-29 07:22:09.476276608 +0000 UTC m=+0.630852012 container init 153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_zhukovsky, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:22:09 np0005539563 podman[145013]: 2025-11-29 07:22:09.485745338 +0000 UTC m=+0.640320682 container start 153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:09 np0005539563 dreamy_zhukovsky[145155]: 167 167
Nov 29 02:22:09 np0005539563 systemd[1]: libpod-153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd.scope: Deactivated successfully.
Nov 29 02:22:09 np0005539563 conmon[145155]: conmon 153c7ac828894f7613b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd.scope/container/memory.events
Nov 29 02:22:09 np0005539563 python3.9[145162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:09 np0005539563 podman[145013]: 2025-11-29 07:22:09.667413706 +0000 UTC m=+0.821989050 container attach 153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:22:09 np0005539563 podman[145013]: 2025-11-29 07:22:09.667836908 +0000 UTC m=+0.822412252 container died 153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:22:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-188a6cf40b840b010e7c4dc5d30d556ce07098139da9d4043f45e09738abb3d8-merged.mount: Deactivated successfully.
Nov 29 02:22:10 np0005539563 python3.9[145328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:10 np0005539563 python3.9[145406]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:10 np0005539563 podman[145013]: 2025-11-29 07:22:10.850228316 +0000 UTC m=+2.004803660 container remove 153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:22:10 np0005539563 systemd[1]: libpod-conmon-153c7ac828894f7613b5589ade6c8fa8b4a0198629d2ac941aa32a9558e47bcd.scope: Deactivated successfully.
Nov 29 02:22:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:11 np0005539563 podman[145439]: 2025-11-29 07:22:10.99763362 +0000 UTC m=+0.030114544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:11.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:11 np0005539563 podman[145439]: 2025-11-29 07:22:11.723957749 +0000 UTC m=+0.756438683 container create 7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:22:11 np0005539563 systemd[1]: Started libpod-conmon-7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8.scope.
Nov 29 02:22:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f921df4cc0ecb6a6aad3f221041c7946f09a20ec2d38063903b6104b95f574cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f921df4cc0ecb6a6aad3f221041c7946f09a20ec2d38063903b6104b95f574cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f921df4cc0ecb6a6aad3f221041c7946f09a20ec2d38063903b6104b95f574cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f921df4cc0ecb6a6aad3f221041c7946f09a20ec2d38063903b6104b95f574cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:12 np0005539563 podman[145439]: 2025-11-29 07:22:12.002175233 +0000 UTC m=+1.034656177 container init 7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:22:12 np0005539563 podman[145439]: 2025-11-29 07:22:12.011098916 +0000 UTC m=+1.043579820 container start 7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:22:12 np0005539563 podman[145439]: 2025-11-29 07:22:12.040352904 +0000 UTC m=+1.072833848 container attach 7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:12 np0005539563 python3.9[145580]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:22:12 np0005539563 systemd[1]: Reloading.
Nov 29 02:22:12 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:12 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:12 np0005539563 systemd[1]: Starting Create netns directory...
Nov 29 02:22:12 np0005539563 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:22:12 np0005539563 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:22:12 np0005539563 systemd[1]: Finished Create netns directory.
Nov 29 02:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:22:12
Nov 29 02:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', '.mgr', 'vms', 'backups', 'volumes', 'default.rgw.control']
Nov 29 02:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:22:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]: {
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:    "0": [
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:        {
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "devices": [
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "/dev/loop3"
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            ],
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "lv_name": "ceph_lv0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "lv_size": "7511998464",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "name": "ceph_lv0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "tags": {
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.cluster_name": "ceph",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.crush_device_class": "",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.encrypted": "0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.osd_id": "0",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.type": "block",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:                "ceph.vdo": "0"
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            },
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "type": "block",
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:            "vg_name": "ceph_vg0"
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:        }
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]:    ]
Nov 29 02:22:12 np0005539563 goofy_jemison[145583]: }
Nov 29 02:22:12 np0005539563 systemd[1]: libpod-7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8.scope: Deactivated successfully.
Nov 29 02:22:12 np0005539563 podman[145439]: 2025-11-29 07:22:12.897904126 +0000 UTC m=+1.930385080 container died 7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:22:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f921df4cc0ecb6a6aad3f221041c7946f09a20ec2d38063903b6104b95f574cc-merged.mount: Deactivated successfully.
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:22:13 np0005539563 python3.9[145798]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:13 np0005539563 podman[145439]: 2025-11-29 07:22:13.379303806 +0000 UTC m=+2.411784720 container remove 7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:22:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:13 np0005539563 systemd[1]: libpod-conmon-7bd52c3ce19bb24af834ab3ce84c03f36f746e3525cefd47a57fd6b850a4c6d8.scope: Deactivated successfully.
Nov 29 02:22:13 np0005539563 podman[146087]: 2025-11-29 07:22:13.97355579 +0000 UTC m=+0.034857824 container create 89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:22:14 np0005539563 systemd[1]: Started libpod-conmon-89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d.scope.
Nov 29 02:22:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:14 np0005539563 podman[146087]: 2025-11-29 07:22:14.053373378 +0000 UTC m=+0.114675412 container init 89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:22:14 np0005539563 podman[146087]: 2025-11-29 07:22:13.958766745 +0000 UTC m=+0.020068809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:14 np0005539563 podman[146087]: 2025-11-29 07:22:14.065900831 +0000 UTC m=+0.127202885 container start 89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:14 np0005539563 podman[146087]: 2025-11-29 07:22:14.069333624 +0000 UTC m=+0.130635678 container attach 89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:22:14 np0005539563 cranky_jones[146105]: 167 167
Nov 29 02:22:14 np0005539563 systemd[1]: libpod-89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d.scope: Deactivated successfully.
Nov 29 02:22:14 np0005539563 podman[146087]: 2025-11-29 07:22:14.07249738 +0000 UTC m=+0.133799434 container died 89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:22:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5da4cf18c5b157b29490c8c25e61832b3a76121da69abe676f8b95de9cf088b8-merged.mount: Deactivated successfully.
Nov 29 02:22:14 np0005539563 python3.9[146090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:14 np0005539563 podman[146087]: 2025-11-29 07:22:14.11168943 +0000 UTC m=+0.172991464 container remove 89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:22:14 np0005539563 systemd[1]: libpod-conmon-89dbbf62266b1cbc823b0d07ae9572a2b05238e567cb7d05a73cbbed3566ac8d.scope: Deactivated successfully.
Nov 29 02:22:14 np0005539563 podman[146152]: 2025-11-29 07:22:14.273398595 +0000 UTC m=+0.050384987 container create dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:22:14 np0005539563 systemd[1]: Started libpod-conmon-dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8.scope.
Nov 29 02:22:14 np0005539563 podman[146152]: 2025-11-29 07:22:14.247474526 +0000 UTC m=+0.024460948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75017eb3587a695c89a4a8da614fa211464691bd3b046a2748a077c3cacc4ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75017eb3587a695c89a4a8da614fa211464691bd3b046a2748a077c3cacc4ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75017eb3587a695c89a4a8da614fa211464691bd3b046a2748a077c3cacc4ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75017eb3587a695c89a4a8da614fa211464691bd3b046a2748a077c3cacc4ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539563 podman[146152]: 2025-11-29 07:22:14.366916418 +0000 UTC m=+0.143902780 container init dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:14 np0005539563 podman[146152]: 2025-11-29 07:22:14.376212561 +0000 UTC m=+0.153198933 container start dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:22:14 np0005539563 podman[146152]: 2025-11-29 07:22:14.380272052 +0000 UTC m=+0.157258424 container attach dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:22:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:14 np0005539563 python3.9[146271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400933.680268-1369-35231221708229/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]: {
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:        "osd_id": 0,
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:        "type": "bluestore"
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]:    }
Nov 29 02:22:15 np0005539563 quirky_goodall[146214]: }
Nov 29 02:22:15 np0005539563 systemd[1]: libpod-dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8.scope: Deactivated successfully.
Nov 29 02:22:15 np0005539563 podman[146152]: 2025-11-29 07:22:15.146287393 +0000 UTC m=+0.923273765 container died dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:22:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b75017eb3587a695c89a4a8da614fa211464691bd3b046a2748a077c3cacc4ae-merged.mount: Deactivated successfully.
Nov 29 02:22:15 np0005539563 podman[146152]: 2025-11-29 07:22:15.198422447 +0000 UTC m=+0.975408799 container remove dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:22:15 np0005539563 systemd[1]: libpod-conmon-dd6381cc0fd350a553be1082bb66fb04127ed349720a205007f136df5b3bc2b8.scope: Deactivated successfully.
Nov 29 02:22:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:22:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:22:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cfa452a9-a2d0-40d3-8b3a-e3005107688f does not exist
Nov 29 02:22:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4730efc1-8e5c-486b-a3da-45232df59be9 does not exist
Nov 29 02:22:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 87d19224-098d-4a09-adbe-aa6dc9547672 does not exist
Nov 29 02:22:15 np0005539563 python3.9[146501]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:22:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:16 np0005539563 python3.9[146653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:17 np0005539563 python3.9[146777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400936.2596452-1444-247930970019496/.source.json _original_basename=.fny8qak0 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:17 np0005539563 python3.9[146929]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:19.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:20 np0005539563 python3.9[147407]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 02:22:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:21 np0005539563 python3.9[147560]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:22:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:21.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:22:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:22 np0005539563 python3.9[147712]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:22:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:24 np0005539563 python3[147891]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:22:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:27.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:31.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:31.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:32 np0005539563 podman[147903]: 2025-11-29 07:22:32.102474788 +0000 UTC m=+7.290449171 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:22:32 np0005539563 podman[148024]: 2025-11-29 07:22:32.247519258 +0000 UTC m=+0.026441494 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:22:32 np0005539563 podman[148024]: 2025-11-29 07:22:32.365282663 +0000 UTC m=+0.144204869 container create a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:22:32 np0005539563 python3[147891]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:22:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:33 np0005539563 python3.9[148215]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:33 np0005539563 python3.9[148369]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:34 np0005539563 python3.9[148445]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:35 np0005539563 python3.9[148597]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400954.4542127-1708-198608923740802/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:36 np0005539563 python3.9[148674]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:22:36 np0005539563 systemd[1]: Reloading.
Nov 29 02:22:36 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:36 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:37.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:37 np0005539563 python3.9[148786]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:22:37 np0005539563 systemd[1]: Reloading.
Nov 29 02:22:37 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:37 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:38 np0005539563 systemd[1]: Starting ovn_controller container...
Nov 29 02:22:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:22:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ceedf0331d40ecc34e9ca37bfc947dc239e846ec0203132ffc9ac4b23c1d9c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:38 np0005539563 systemd[1]: Started /usr/bin/podman healthcheck run a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb.
Nov 29 02:22:38 np0005539563 podman[148827]: 2025-11-29 07:22:38.243974123 +0000 UTC m=+0.142014897 container init a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + sudo -E kolla_set_configs
Nov 29 02:22:38 np0005539563 podman[148827]: 2025-11-29 07:22:38.274498317 +0000 UTC m=+0.172539021 container start a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:22:38 np0005539563 edpm-start-podman-container[148827]: ovn_controller
Nov 29 02:22:38 np0005539563 systemd[1]: Created slice User Slice of UID 0.
Nov 29 02:22:38 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 02:22:38 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 02:22:38 np0005539563 systemd[1]: Starting User Manager for UID 0...
Nov 29 02:22:38 np0005539563 podman[148871]: 2025-11-29 07:22:38.358966083 +0000 UTC m=+0.077902148 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 02:22:38 np0005539563 systemd[1]: a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb-3bd53a8fde478424.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:22:38 np0005539563 systemd[1]: a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb-3bd53a8fde478424.service: Failed with result 'exit-code'.
Nov 29 02:22:38 np0005539563 edpm-start-podman-container[148826]: Creating additional drop-in dependency for "ovn_controller" (a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb)
Nov 29 02:22:38 np0005539563 systemd[1]: Reloading.
Nov 29 02:22:38 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:22:38 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:22:38 np0005539563 systemd[148918]: Queued start job for default target Main User Target.
Nov 29 02:22:38 np0005539563 systemd[148918]: Created slice User Application Slice.
Nov 29 02:22:38 np0005539563 systemd[148918]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 02:22:38 np0005539563 systemd[148918]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:22:38 np0005539563 systemd[148918]: Reached target Paths.
Nov 29 02:22:38 np0005539563 systemd[148918]: Reached target Timers.
Nov 29 02:22:38 np0005539563 systemd[148918]: Starting D-Bus User Message Bus Socket...
Nov 29 02:22:38 np0005539563 systemd[148918]: Starting Create User's Volatile Files and Directories...
Nov 29 02:22:38 np0005539563 systemd[148918]: Finished Create User's Volatile Files and Directories.
Nov 29 02:22:38 np0005539563 systemd[148918]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:22:38 np0005539563 systemd[148918]: Reached target Sockets.
Nov 29 02:22:38 np0005539563 systemd[148918]: Reached target Basic System.
Nov 29 02:22:38 np0005539563 systemd[148918]: Reached target Main User Target.
Nov 29 02:22:38 np0005539563 systemd[148918]: Startup finished in 163ms.
Nov 29 02:22:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:38 np0005539563 systemd[1]: Started User Manager for UID 0.
Nov 29 02:22:38 np0005539563 systemd[1]: Started Session c1 of User root.
Nov 29 02:22:38 np0005539563 systemd[1]: Started ovn_controller container.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: INFO:__main__:Validating config file
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: INFO:__main__:Writing out command to execute
Nov 29 02:22:38 np0005539563 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: ++ cat /run_command
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + ARGS=
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + sudo kolla_copy_cacerts
Nov 29 02:22:38 np0005539563 systemd[1]: Started Session c2 of User root.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + [[ ! -n '' ]]
Nov 29 02:22:38 np0005539563 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + . kolla_extend_start
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + umask 0022
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 02:22:38 np0005539563 NetworkManager[48981]: <info>  [1764400958.8352] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 02:22:38 np0005539563 NetworkManager[48981]: <info>  [1764400958.8363] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:22:38 np0005539563 NetworkManager[48981]: <info>  [1764400958.8377] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 02:22:38 np0005539563 NetworkManager[48981]: <info>  [1764400958.8382] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 02:22:38 np0005539563 NetworkManager[48981]: <info>  [1764400958.8385] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:22:38 np0005539563 kernel: br-int: entered promiscuous mode
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 02:22:38 np0005539563 systemd-udevd[149024]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 02:22:38 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:38Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 02:22:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:39.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:39 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:39Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:39 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:39Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:22:39 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:39Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:39 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:39Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:22:39 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:39Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:39 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:39Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:22:39 np0005539563 NetworkManager[48981]: <info>  [1764400959.0192] manager: (ovn-011fdd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 02:22:39 np0005539563 NetworkManager[48981]: <info>  [1764400959.0198] manager: (ovn-45d4c7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 02:22:39 np0005539563 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 02:22:39 np0005539563 NetworkManager[48981]: <info>  [1764400959.0367] device (genev_sys_6081): carrier: link connected
Nov 29 02:22:39 np0005539563 NetworkManager[48981]: <info>  [1764400959.0370] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 02:22:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:39.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:41.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:41.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:41 np0005539563 NetworkManager[48981]: <info>  [1764400961.7817] manager: (ovn-c8abfd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 02:22:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:42 np0005539563 python3.9[149159]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:42 np0005539563 ovs-vsctl[149160]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 02:22:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:43.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:43.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:43 np0005539563 python3.9[149312]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:43 np0005539563 ovs-vsctl[149314]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 02:22:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:45 np0005539563 python3.9[149468]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:45.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:45 np0005539563 ovs-vsctl[149469]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 02:22:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:45.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:45 np0005539563 systemd[1]: session-46.scope: Deactivated successfully.
Nov 29 02:22:45 np0005539563 systemd[1]: session-46.scope: Consumed 56.942s CPU time.
Nov 29 02:22:45 np0005539563 systemd-logind[785]: Session 46 logged out. Waiting for processes to exit.
Nov 29 02:22:45 np0005539563 systemd-logind[785]: Removed session 46.
Nov 29 02:22:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:47.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:48 np0005539563 systemd[1]: Stopping User Manager for UID 0...
Nov 29 02:22:48 np0005539563 systemd[148918]: Activating special unit Exit the Session...
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped target Main User Target.
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped target Basic System.
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped target Paths.
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped target Sockets.
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped target Timers.
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:22:48 np0005539563 systemd[148918]: Closed D-Bus User Message Bus Socket.
Nov 29 02:22:48 np0005539563 systemd[148918]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:22:48 np0005539563 systemd[148918]: Removed slice User Application Slice.
Nov 29 02:22:48 np0005539563 systemd[148918]: Reached target Shutdown.
Nov 29 02:22:48 np0005539563 systemd[148918]: Finished Exit the Session.
Nov 29 02:22:48 np0005539563 systemd[148918]: Reached target Exit the Session.
Nov 29 02:22:48 np0005539563 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 02:22:48 np0005539563 systemd[1]: Stopped User Manager for UID 0.
Nov 29 02:22:48 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 02:22:49 np0005539563 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 02:22:49 np0005539563 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 02:22:49 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 02:22:49 np0005539563 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 02:22:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:49.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:22:49 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:49Z|00025|memory|INFO|16256 kB peak resident set size after 10.9 seconds
Nov 29 02:22:49 np0005539563 ovn_controller[148841]: 2025-11-29T07:22:49Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 29 02:22:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 02:22:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539563 systemd-logind[785]: New session 48 of user zuul.
Nov 29 02:22:51 np0005539563 systemd[1]: Started Session 48 of User zuul.
Nov 29 02:22:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:51.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:52 np0005539563 python3.9[149654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Nov 29 02:22:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:22:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:53.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:22:53 np0005539563 python3.9[149811]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:53.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:54 np0005539563 python3.9[149963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:54 np0005539563 python3.9[150115]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Nov 29 02:22:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:55 np0005539563 python3.9[150268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:55.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:55 np0005539563 python3.9[150420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 162 op/s
Nov 29 02:22:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:22:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:57 np0005539563 python3.9[150572]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:57.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:58 np0005539563 python3.9[150724]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:22:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 162 op/s
Nov 29 02:22:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:22:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:59.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:59 np0005539563 python3.9[150925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 162 op/s
Nov 29 02:23:00 np0005539563 python3.9[151047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400979.3198879-223-260186643234814/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:01.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:01 np0005539563 python3.9[151197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:02 np0005539563 python3.9[151318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400981.0608795-268-139591072653181/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Nov 29 02:23:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:03.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:03 np0005539563 python3.9[151471]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:23:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:03.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:04 np0005539563 python3.9[151555]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:23:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Nov 29 02:23:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:05.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 02:23:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:07.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:08 np0005539563 podman[151711]: 2025-11-29 07:23:08.540395981 +0000 UTC m=+0.089750461 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:23:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:08 np0005539563 python3.9[151710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:23:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:09.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:10 np0005539563 python3.9[151892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:11 np0005539563 python3.9[152014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400989.9980817-379-52238354219867/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:11.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:11.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:11 np0005539563 python3.9[152164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:12 np0005539563 python3.9[152285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400991.1706018-379-98137059211302/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:23:12
Nov 29 02:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 02:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:23:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:13.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:23:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:13.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:13 np0005539563 python3.9[152436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:14 np0005539563 python3.9[152557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400993.5099528-511-33105113875249/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:15 np0005539563 python3.9[152708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:15.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:15 np0005539563 python3.9[152829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400994.62548-511-137610587284911/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:16 np0005539563 python3.9[153091]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:23:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 50dde3d9-4a98-40a9-9685-b04684f25103 does not exist
Nov 29 02:23:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f5fb861a-51ac-48ba-a08a-4c10a89acae3 does not exist
Nov 29 02:23:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 10fc593f-2ebb-4e7a-bb42-6ce3d7f151f0 does not exist
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:23:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:23:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:17.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:17 np0005539563 python3.9[153363]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.518442621 +0000 UTC m=+0.052201276 container create 5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:23:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:17 np0005539563 systemd[1]: Started libpod-conmon-5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09.scope.
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.491078664 +0000 UTC m=+0.024837319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.645421807 +0000 UTC m=+0.179180462 container init 5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.659949024 +0000 UTC m=+0.193707669 container start 5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.664488178 +0000 UTC m=+0.198246813 container attach 5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rhodes, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:23:17 np0005539563 jovial_rhodes[153444]: 167 167
Nov 29 02:23:17 np0005539563 systemd[1]: libpod-5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09.scope: Deactivated successfully.
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.670943824 +0000 UTC m=+0.204702459 container died 5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:23:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-aaf9e7298bcc7cb981bf74dbfdae7d17d21e3ed6acf7b39ab5c5287d23baecda-merged.mount: Deactivated successfully.
Nov 29 02:23:17 np0005539563 podman[153404]: 2025-11-29 07:23:17.710847324 +0000 UTC m=+0.244605959 container remove 5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rhodes, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:23:17 np0005539563 systemd[1]: libpod-conmon-5edf392dba3a69b26444466d2123873df5d1fb5e1c9e0226c4ae3fb631b84b09.scope: Deactivated successfully.
Nov 29 02:23:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:23:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:17 np0005539563 podman[153520]: 2025-11-29 07:23:17.866346168 +0000 UTC m=+0.041929015 container create 21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:23:17 np0005539563 systemd[1]: Started libpod-conmon-21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7.scope.
Nov 29 02:23:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:23:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6544c65774ebb4e99030a864a9f97edb8d929c6a6d35d8ac50ff8f65732834e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6544c65774ebb4e99030a864a9f97edb8d929c6a6d35d8ac50ff8f65732834e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6544c65774ebb4e99030a864a9f97edb8d929c6a6d35d8ac50ff8f65732834e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6544c65774ebb4e99030a864a9f97edb8d929c6a6d35d8ac50ff8f65732834e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6544c65774ebb4e99030a864a9f97edb8d929c6a6d35d8ac50ff8f65732834e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:17 np0005539563 podman[153520]: 2025-11-29 07:23:17.944297807 +0000 UTC m=+0.119880674 container init 21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:23:17 np0005539563 podman[153520]: 2025-11-29 07:23:17.850280999 +0000 UTC m=+0.025863866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:17 np0005539563 podman[153520]: 2025-11-29 07:23:17.954191166 +0000 UTC m=+0.129774013 container start 21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:23:17 np0005539563 podman[153520]: 2025-11-29 07:23:17.957894728 +0000 UTC m=+0.133477605 container attach 21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:23:18 np0005539563 python3.9[153616]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:18 np0005539563 python3.9[153694]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:18 np0005539563 beautiful_pike[153578]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:23:18 np0005539563 beautiful_pike[153578]: --> relative data size: 1.0
Nov 29 02:23:18 np0005539563 beautiful_pike[153578]: --> All data devices are unavailable
Nov 29 02:23:18 np0005539563 systemd[1]: libpod-21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7.scope: Deactivated successfully.
Nov 29 02:23:18 np0005539563 podman[153520]: 2025-11-29 07:23:18.763771997 +0000 UTC m=+0.939354854 container died 21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:23:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6544c65774ebb4e99030a864a9f97edb8d929c6a6d35d8ac50ff8f65732834e6-merged.mount: Deactivated successfully.
Nov 29 02:23:18 np0005539563 podman[153520]: 2025-11-29 07:23:18.851724258 +0000 UTC m=+1.027307105 container remove 21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:23:18 np0005539563 systemd[1]: libpod-conmon-21ddcc9635baf77c359d7254c6065eb2a6bbfd159aac865dfe97b1a4b8ec21b7.scope: Deactivated successfully.
Nov 29 02:23:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:19.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:19 np0005539563 python3.9[154010]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.486401254 +0000 UTC m=+0.094883431 container create 1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.414578604 +0000 UTC m=+0.023060841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:19 np0005539563 systemd[1]: Started libpod-conmon-1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c.scope.
Nov 29 02:23:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:19.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.565211156 +0000 UTC m=+0.173693353 container init 1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.572823653 +0000 UTC m=+0.181305830 container start 1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.576316189 +0000 UTC m=+0.184798386 container attach 1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:23:19 np0005539563 eager_vaughan[154146]: 167 167
Nov 29 02:23:19 np0005539563 systemd[1]: libpod-1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c.scope: Deactivated successfully.
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.579560347 +0000 UTC m=+0.188042534 container died 1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:23:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4a3becb14b087578da2e7653f53218363e9eecfdbb3158eb8ca54bff45e70bac-merged.mount: Deactivated successfully.
Nov 29 02:23:19 np0005539563 podman[154080]: 2025-11-29 07:23:19.614878191 +0000 UTC m=+0.223360368 container remove 1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_vaughan, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 02:23:19 np0005539563 systemd[1]: libpod-conmon-1ef550fb3bc6c5bdf6245dab633beb90a7ff09916948a346208c320224f3510c.scope: Deactivated successfully.
Nov 29 02:23:19 np0005539563 podman[154176]: 2025-11-29 07:23:19.772676259 +0000 UTC m=+0.042288905 container create 556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_buck, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:23:19 np0005539563 python3.9[154153]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:19 np0005539563 systemd[1]: Started libpod-conmon-556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f.scope.
Nov 29 02:23:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:23:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04511dffd344409567279cbe4b80cbf8372a3a65ad4fd0bf5d45e873779abec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04511dffd344409567279cbe4b80cbf8372a3a65ad4fd0bf5d45e873779abec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04511dffd344409567279cbe4b80cbf8372a3a65ad4fd0bf5d45e873779abec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04511dffd344409567279cbe4b80cbf8372a3a65ad4fd0bf5d45e873779abec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:19 np0005539563 podman[154176]: 2025-11-29 07:23:19.753450234 +0000 UTC m=+0.023062910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:19 np0005539563 podman[154176]: 2025-11-29 07:23:19.86243027 +0000 UTC m=+0.132042946 container init 556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_buck, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:23:19 np0005539563 podman[154176]: 2025-11-29 07:23:19.869965945 +0000 UTC m=+0.139578591 container start 556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_buck, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:23:19 np0005539563 podman[154176]: 2025-11-29 07:23:19.873835441 +0000 UTC m=+0.143448117 container attach 556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_buck, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:23:20 np0005539563 interesting_buck[154192]: {
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:    "0": [
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:        {
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "devices": [
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "/dev/loop3"
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            ],
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "lv_name": "ceph_lv0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "lv_size": "7511998464",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "name": "ceph_lv0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "tags": {
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.cluster_name": "ceph",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.crush_device_class": "",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.encrypted": "0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.osd_id": "0",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.type": "block",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:                "ceph.vdo": "0"
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            },
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "type": "block",
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:            "vg_name": "ceph_vg0"
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:        }
Nov 29 02:23:20 np0005539563 interesting_buck[154192]:    ]
Nov 29 02:23:20 np0005539563 interesting_buck[154192]: }
Nov 29 02:23:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:20 np0005539563 python3.9[154348]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:20 np0005539563 systemd[1]: libpod-556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f.scope: Deactivated successfully.
Nov 29 02:23:20 np0005539563 podman[154176]: 2025-11-29 07:23:20.686777863 +0000 UTC m=+0.956390509 container died 556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:23:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d04511dffd344409567279cbe4b80cbf8372a3a65ad4fd0bf5d45e873779abec-merged.mount: Deactivated successfully.
Nov 29 02:23:20 np0005539563 podman[154176]: 2025-11-29 07:23:20.750866603 +0000 UTC m=+1.020479249 container remove 556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_buck, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:20 np0005539563 systemd[1]: libpod-conmon-556c419e98ade1b9d3a4cc9da9236ca3a281fb5c097e124f515f02a322c34e8f.scope: Deactivated successfully.
Nov 29 02:23:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:21.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.354696466 +0000 UTC m=+0.039193861 container create 1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wilson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:21 np0005539563 systemd[1]: Started libpod-conmon-1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93.scope.
Nov 29 02:23:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.337822075 +0000 UTC m=+0.022319500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.438049742 +0000 UTC m=+0.122547157 container init 1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.447481009 +0000 UTC m=+0.131978414 container start 1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wilson, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.451691754 +0000 UTC m=+0.136189149 container attach 1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:23:21 np0005539563 jovial_wilson[154672]: 167 167
Nov 29 02:23:21 np0005539563 systemd[1]: libpod-1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93.scope: Deactivated successfully.
Nov 29 02:23:21 np0005539563 conmon[154672]: conmon 1407139968f2b5b8d10e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93.scope/container/memory.events
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.454795539 +0000 UTC m=+0.139292934 container died 1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:23:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e8cc69a49bcd0281a2f739566bdb9dafdf64ab72000cebc816b155eab0f1abc6-merged.mount: Deactivated successfully.
Nov 29 02:23:21 np0005539563 podman[154658]: 2025-11-29 07:23:21.489237199 +0000 UTC m=+0.173734604 container remove 1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:23:21 np0005539563 python3.9[154657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:21 np0005539563 systemd[1]: libpod-conmon-1407139968f2b5b8d10e381b39f3c22c75707d914c49b0c55d211fd27de72b93.scope: Deactivated successfully.
Nov 29 02:23:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:21.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:21 np0005539563 podman[154702]: 2025-11-29 07:23:21.640622681 +0000 UTC m=+0.044867535 container create fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:23:21 np0005539563 systemd[1]: Started libpod-conmon-fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19.scope.
Nov 29 02:23:21 np0005539563 podman[154702]: 2025-11-29 07:23:21.618843308 +0000 UTC m=+0.023088172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:23:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a3dd5bbb9a409fa30b26012fcb6cfb215fb6b9a3c022eaa9e9c94e4c739c12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a3dd5bbb9a409fa30b26012fcb6cfb215fb6b9a3c022eaa9e9c94e4c739c12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a3dd5bbb9a409fa30b26012fcb6cfb215fb6b9a3c022eaa9e9c94e4c739c12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a3dd5bbb9a409fa30b26012fcb6cfb215fb6b9a3c022eaa9e9c94e4c739c12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:21 np0005539563 podman[154702]: 2025-11-29 07:23:21.740682743 +0000 UTC m=+0.144927617 container init fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:23:21 np0005539563 podman[154702]: 2025-11-29 07:23:21.748828706 +0000 UTC m=+0.153073550 container start fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:23:21 np0005539563 podman[154702]: 2025-11-29 07:23:21.752414063 +0000 UTC m=+0.156658937 container attach fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:23:22 np0005539563 python3.9[154796]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]: {
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:        "osd_id": 0,
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:        "type": "bluestore"
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]:    }
Nov 29 02:23:22 np0005539563 wonderful_bassi[154763]: }
Nov 29 02:23:22 np0005539563 systemd[1]: libpod-fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19.scope: Deactivated successfully.
Nov 29 02:23:22 np0005539563 podman[154702]: 2025-11-29 07:23:22.593881875 +0000 UTC m=+0.998126729 container died fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:23:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e0a3dd5bbb9a409fa30b26012fcb6cfb215fb6b9a3c022eaa9e9c94e4c739c12-merged.mount: Deactivated successfully.
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:22 np0005539563 podman[154702]: 2025-11-29 07:23:22.659200318 +0000 UTC m=+1.063445172 container remove fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bassi, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:23:22 np0005539563 systemd[1]: libpod-conmon-fc4865c3678006e008190536c183d788a5c53e8b9227132cfb3ead5854ee9b19.scope: Deactivated successfully.
Nov 29 02:23:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:23:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:23:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d394f49a-3baa-4936-a292-582d93782c1e does not exist
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1aecb06d-8514-4442-aedf-9a71156a86ed does not exist
Nov 29 02:23:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9a016c02-808d-4e3e-b792-92f056a5c148 does not exist
Nov 29 02:23:22 np0005539563 python3.9[154962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:23.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539563 python3.9[155105]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:23:24 np0005539563 python3.9[155257]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:24 np0005539563 systemd[1]: Reloading.
Nov 29 02:23:24 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:24 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:25.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:25 np0005539563 python3.9[155448]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:25.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:25 np0005539563 python3.9[155526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:26 np0005539563 python3.9[155679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:27 np0005539563 python3.9[155757]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:27.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:28 np0005539563 python3.9[155909]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:28 np0005539563 systemd[1]: Reloading.
Nov 29 02:23:28 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:28 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:28 np0005539563 systemd[1]: Starting Create netns directory...
Nov 29 02:23:28 np0005539563 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:23:28 np0005539563 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:23:28 np0005539563 systemd[1]: Finished Create netns directory.
Nov 29 02:23:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:29.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:29 np0005539563 python3.9[156102]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:30 np0005539563 python3.9[156254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:31.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:31 np0005539563 python3.9[156378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401010.1888733-964-124873562164703/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:31.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:32 np0005539563 python3.9[156530]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:33 np0005539563 python3.9[156683]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:33.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:33 np0005539563 python3.9[156806]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401012.7032156-1039-25928283479376/.source.json _original_basename=.pq473br6 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:34 np0005539563 python3.9[156958]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:35.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:35.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:37.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:37 np0005539563 python3.9[157387]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 02:23:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:37.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:38 np0005539563 python3.9[157539]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:23:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:38 np0005539563 podman[157641]: 2025-11-29 07:23:38.984034961 +0000 UTC m=+0.160335809 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:23:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:39.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:39 np0005539563 python3.9[157765]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:23:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:39.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:41 np0005539563 python3[157945]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:23:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:23:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:41.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:23:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:41.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:43.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:23:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:23:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:51.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.501625) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032501757, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2095, "num_deletes": 252, "total_data_size": 4080040, "memory_usage": 4129840, "flush_reason": "Manual Compaction"}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032551715, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3965993, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10424, "largest_seqno": 12518, "table_properties": {"data_size": 3956358, "index_size": 6129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19062, "raw_average_key_size": 20, "raw_value_size": 3937177, "raw_average_value_size": 4153, "num_data_blocks": 274, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400757, "oldest_key_time": 1764400757, "file_creation_time": 1764401032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 50457 microseconds, and 10714 cpu microseconds.
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.552130) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3965993 bytes OK
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.552258) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.554534) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.554565) EVENT_LOG_v1 {"time_micros": 1764401032554557, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.554584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4071543, prev total WAL file size 4102309, number of live WAL files 2.
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.556412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3873KB)], [26(7347KB)]
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032556513, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 11489777, "oldest_snapshot_seqno": -1}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4105 keys, 9260325 bytes, temperature: kUnknown
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032658054, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 9260325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9229007, "index_size": 19944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 99976, "raw_average_key_size": 24, "raw_value_size": 9150978, "raw_average_value_size": 2229, "num_data_blocks": 863, "num_entries": 4105, "num_filter_entries": 4105, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764401032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.658319) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 9260325 bytes
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.660315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.1 rd, 91.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4630, records dropped: 525 output_compression: NoCompression
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.660336) EVENT_LOG_v1 {"time_micros": 1764401032660326, "job": 10, "event": "compaction_finished", "compaction_time_micros": 101628, "compaction_time_cpu_micros": 23260, "output_level": 6, "num_output_files": 1, "total_output_size": 9260325, "num_input_records": 4630, "num_output_records": 4105, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032661949, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401032663442, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.556237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:23:52.663497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:23:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:53.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539563 podman[157958]: 2025-11-29 07:23:55.246116236 +0000 UTC m=+14.116928865 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:23:55 np0005539563 podman[158092]: 2025-11-29 07:23:55.410450843 +0000 UTC m=+0.050935692 container create 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:23:55 np0005539563 podman[158092]: 2025-11-29 07:23:55.381536269 +0000 UTC m=+0.022021148 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:23:55 np0005539563 python3[157945]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:23:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:55.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:56 np0005539563 python3.9[158280]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:23:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:57 np0005539563 python3.9[158435]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:57.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:57 np0005539563 python3.9[158511]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:23:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:23:58 np0005539563 python3.9[158662]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401037.7468543-1303-232034873033241/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:23:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:59 np0005539563 python3.9[158746]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:24:00 np0005539563 systemd[1]: Reloading.
Nov 29 02:24:00 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:00 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:01.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:02 np0005539563 python3.9[158928]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:02 np0005539563 systemd[1]: Reloading.
Nov 29 02:24:02 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:02 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:02 np0005539563 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 02:24:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f269ae332370af6d08da414e7f2096a82e517eeab10fef95574d1dfdb0a621/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5f269ae332370af6d08da414e7f2096a82e517eeab10fef95574d1dfdb0a621/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:02 np0005539563 systemd[1]: Started /usr/bin/podman healthcheck run 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd.
Nov 29 02:24:02 np0005539563 podman[158969]: 2025-11-29 07:24:02.931877863 +0000 UTC m=+0.111176396 container init 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:24:02 np0005539563 ovn_metadata_agent[158985]: + sudo -E kolla_set_configs
Nov 29 02:24:02 np0005539563 podman[158969]: 2025-11-29 07:24:02.958565097 +0000 UTC m=+0.137863630 container start 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:24:02 np0005539563 edpm-start-podman-container[158969]: ovn_metadata_agent
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Validating config file
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Copying service configuration files
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Writing out command to execute
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 02:24:03 np0005539563 edpm-start-podman-container[158968]: Creating additional drop-in dependency for "ovn_metadata_agent" (5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd)
Nov 29 02:24:03 np0005539563 podman[158992]: 2025-11-29 07:24:03.016933749 +0000 UTC m=+0.048014253 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: ++ cat /run_command
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + CMD=neutron-ovn-metadata-agent
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + ARGS=
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + sudo kolla_copy_cacerts
Nov 29 02:24:03 np0005539563 systemd[1]: Reloading.
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + [[ ! -n '' ]]
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + . kolla_extend_start
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + umask 0022
Nov 29 02:24:03 np0005539563 ovn_metadata_agent[158985]: + exec neutron-ovn-metadata-agent
Nov 29 02:24:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:03 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:03 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:03.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:03 np0005539563 systemd[1]: Started ovn_metadata_agent container.
Nov 29 02:24:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:03.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:03 np0005539563 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 02:24:03 np0005539563 systemd[1]: session-48.scope: Consumed 53.686s CPU time.
Nov 29 02:24:03 np0005539563 systemd-logind[785]: Session 48 logged out. Waiting for processes to exit.
Nov 29 02:24:03 np0005539563 systemd-logind[785]: Removed session 48.
Nov 29 02:24:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.830 158990 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.831 158990 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.831 158990 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.832 158990 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.833 158990 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.834 158990 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.835 158990 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.836 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.837 158990 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.838 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.839 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.840 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.841 158990 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.842 158990 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.843 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.844 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.845 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.846 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.847 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.848 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.849 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.850 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.851 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.852 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.853 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.854 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.855 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.856 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.857 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.858 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.859 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.860 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.861 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.862 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.863 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.864 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.864 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.864 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.864 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.864 158990 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.864 158990 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.873 158990 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.873 158990 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.873 158990 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.873 158990 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.874 158990 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.889 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name cb98fb5a-8fde-4aab-9a19-a76cfc927075 (UUID: cb98fb5a-8fde-4aab-9a19-a76cfc927075) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.918 158990 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.918 158990 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.919 158990 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.919 158990 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.922 158990 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.928 158990 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.934 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'cb98fb5a-8fde-4aab-9a19-a76cfc927075'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], external_ids={}, name=cb98fb5a-8fde-4aab-9a19-a76cfc927075, nb_cfg_timestamp=1764400966851, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.935 158990 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcc96498bb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.936 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.936 158990 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.936 158990 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.937 158990 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.941 158990 DEBUG oslo_service.service [-] Started child 159101 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.944 159101 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-236435'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.945 158990 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpohf6fmua/privsep.sock']#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.969 159101 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.969 159101 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.969 159101 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.973 159101 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.979 159101 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 02:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:04.990 159101 INFO eventlet.wsgi.server [-] (159101) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 29 02:24:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:05 np0005539563 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 02:24:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.590 158990 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.591 158990 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpohf6fmua/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.458 159106 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.462 159106 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.464 159106 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.464 159106 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159106#033[00m
Nov 29 02:24:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:05.593 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[02e420c0-787e-4ad6-a438-68e3d7249ffd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.074 159106 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.074 159106 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.075 159106 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.620 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[04fb5def-c96a-4cb6-a33d-e22caa57d254]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.622 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, column=external_ids, values=({'neutron:ovn-metadata-id': 'f95211b8-b61f-59b5-baca-a919ef3aa5e2'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.645 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.652 158990 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.653 158990 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.653 158990 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.653 158990 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.653 158990 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.653 158990 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.653 158990 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.654 158990 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.655 158990 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.656 158990 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.656 158990 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.656 158990 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.656 158990 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.656 158990 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.656 158990 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.657 158990 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.658 158990 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.659 158990 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.660 158990 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.661 158990 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.662 158990 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.663 158990 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.664 158990 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.665 158990 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.666 158990 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.667 158990 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.668 158990 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.669 158990 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.670 158990 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.671 158990 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.672 158990 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.673 158990 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.674 158990 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.675 158990 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.676 158990 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.677 158990 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.678 158990 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.679 158990 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.680 158990 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.681 158990 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.682 158990 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.682 158990 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.682 158990 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.682 158990 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.682 158990 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.683 158990 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.683 158990 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.683 158990 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.683 158990 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.683 158990 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.683 158990 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.684 158990 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.685 158990 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.686 158990 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.686 158990 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.686 158990 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.686 158990 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.686 158990 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.686 158990 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.687 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.688 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.688 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.688 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.688 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.688 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.688 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.689 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.689 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.689 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.689 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.689 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.689 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.690 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.691 158990 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.692 158990 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.692 158990 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.692 158990 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:24:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:24:06.692 158990 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 02:24:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:07.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:07.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:09.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:09 np0005539563 podman[159113]: 2025-11-29 07:24:09.529994056 +0000 UTC m=+0.086694212 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:24:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:09 np0005539563 systemd-logind[785]: New session 49 of user zuul.
Nov 29 02:24:09 np0005539563 systemd[1]: Started Session 49 of User zuul.
Nov 29 02:24:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:10 np0005539563 python3.9[159293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:12 np0005539563 python3.9[159450]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:24:12
Nov 29 02:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Nov 29 02:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:24:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:13.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:24:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:13 np0005539563 python3.9[159616]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:24:13 np0005539563 systemd[1]: Reloading.
Nov 29 02:24:13 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:13 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:14 np0005539563 python3.9[159803]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:24:15 np0005539563 network[159820]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:24:15 np0005539563 network[159821]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:24:15 np0005539563 network[159822]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:24:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:15.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:17.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:17.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:19.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:19.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:20 np0005539563 python3.9[160136]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:21 np0005539563 python3.9[160290]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:22 np0005539563 python3.9[160443]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:24:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:22 np0005539563 python3.9[160597]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:23 np0005539563 python3.9[160848]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 493914a3-17c3-4247-aad4-5f490fb9a705 does not exist
Nov 29 02:24:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e13ad9f5-f8bf-4d35-b028-6c8d950015eb does not exist
Nov 29 02:24:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0917df89-b0a0-4b2b-ab3e-08b83c68d619 does not exist
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:24:24 np0005539563 python3.9[161084]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.75835926 +0000 UTC m=+0.046428569 container create 772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:24 np0005539563 systemd[1]: Started libpod-conmon-772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff.scope.
Nov 29 02:24:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.737140135 +0000 UTC m=+0.025209454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.85053108 +0000 UTC m=+0.138600459 container init 772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.860926752 +0000 UTC m=+0.148996051 container start 772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.865523817 +0000 UTC m=+0.153593116 container attach 772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:24:24 np0005539563 happy_morse[161270]: 167 167
Nov 29 02:24:24 np0005539563 systemd[1]: libpod-772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff.scope: Deactivated successfully.
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.869687589 +0000 UTC m=+0.157756898 container died 772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-727c18ebad12ed67f7be67ce3c313ec99b3140f1eee9510b2c37faa0e92b9ac4-merged.mount: Deactivated successfully.
Nov 29 02:24:24 np0005539563 podman[161222]: 2025-11-29 07:24:24.914296649 +0000 UTC m=+0.202365948 container remove 772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:24:24 np0005539563 systemd[1]: libpod-conmon-772c13407499b9053aa6390c6c8321bcbfd7eb08a3e1b7ba3fb14690bc51abff.scope: Deactivated successfully.
Nov 29 02:24:25 np0005539563 podman[161368]: 2025-11-29 07:24:25.116570254 +0000 UTC m=+0.058301142 container create a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:24:25 np0005539563 systemd[1]: Started libpod-conmon-a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f.scope.
Nov 29 02:24:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:25.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:25 np0005539563 podman[161368]: 2025-11-29 07:24:25.099285095 +0000 UTC m=+0.041015993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:24:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e1b5d3a927308893078d7ccfb48dcb8540eb6462fb8d6b6632111be1925c378/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e1b5d3a927308893078d7ccfb48dcb8540eb6462fb8d6b6632111be1925c378/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e1b5d3a927308893078d7ccfb48dcb8540eb6462fb8d6b6632111be1925c378/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e1b5d3a927308893078d7ccfb48dcb8540eb6462fb8d6b6632111be1925c378/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e1b5d3a927308893078d7ccfb48dcb8540eb6462fb8d6b6632111be1925c378/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:25 np0005539563 podman[161368]: 2025-11-29 07:24:25.22998826 +0000 UTC m=+0.171719168 container init a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:24:25 np0005539563 podman[161368]: 2025-11-29 07:24:25.24143602 +0000 UTC m=+0.183166898 container start a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:24:25 np0005539563 podman[161368]: 2025-11-29 07:24:25.245387377 +0000 UTC m=+0.187118275 container attach a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:24:25 np0005539563 python3.9[161371]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:24:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:26 np0005539563 interesting_thompson[161386]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:24:26 np0005539563 interesting_thompson[161386]: --> relative data size: 1.0
Nov 29 02:24:26 np0005539563 interesting_thompson[161386]: --> All data devices are unavailable
Nov 29 02:24:26 np0005539563 systemd[1]: libpod-a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f.scope: Deactivated successfully.
Nov 29 02:24:26 np0005539563 podman[161368]: 2025-11-29 07:24:26.081700896 +0000 UTC m=+1.023431774 container died a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:24:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0e1b5d3a927308893078d7ccfb48dcb8540eb6462fb8d6b6632111be1925c378-merged.mount: Deactivated successfully.
Nov 29 02:24:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:27 np0005539563 podman[161368]: 2025-11-29 07:24:27.874561004 +0000 UTC m=+2.816291882 container remove a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:27 np0005539563 systemd[1]: libpod-conmon-a6fed93e4f66137d4c137378039e0a9187ce8612bd8482ffa9ba0bab3136c92f.scope: Deactivated successfully.
Nov 29 02:24:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:28 np0005539563 podman[161584]: 2025-11-29 07:24:28.582888102 +0000 UTC m=+0.039879703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:24:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:29 np0005539563 podman[161584]: 2025-11-29 07:24:29.315223951 +0000 UTC m=+0.772215492 container create a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:24:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:29 np0005539563 systemd[1]: Started libpod-conmon-a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1.scope.
Nov 29 02:24:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:30 np0005539563 podman[161584]: 2025-11-29 07:24:30.155652331 +0000 UTC m=+1.612643862 container init a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:24:30 np0005539563 podman[161584]: 2025-11-29 07:24:30.165871099 +0000 UTC m=+1.622862610 container start a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:24:30 np0005539563 podman[161584]: 2025-11-29 07:24:30.170497184 +0000 UTC m=+1.627488715 container attach a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:30 np0005539563 hardcore_brattain[161602]: 167 167
Nov 29 02:24:30 np0005539563 systemd[1]: libpod-a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1.scope: Deactivated successfully.
Nov 29 02:24:30 np0005539563 podman[161584]: 2025-11-29 07:24:30.176542477 +0000 UTC m=+1.633533998 container died a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:24:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a9cdabcf8171d4b02c394c10a00b5d6f0d7b9c16ddfde42aef637c3b7628e2cd-merged.mount: Deactivated successfully.
Nov 29 02:24:30 np0005539563 podman[161584]: 2025-11-29 07:24:30.439443617 +0000 UTC m=+1.896435128 container remove a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:24:30 np0005539563 systemd[1]: libpod-conmon-a3f02889210bf482a381015433bd09cefb28d8652ce385fc320788ac382355d1.scope: Deactivated successfully.
Nov 29 02:24:30 np0005539563 podman[161631]: 2025-11-29 07:24:30.622876711 +0000 UTC m=+0.038238288 container create c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bassi, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:24:30 np0005539563 systemd[1]: Started libpod-conmon-c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4.scope.
Nov 29 02:24:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47f986a2677cf980c2454dc60901226d890e2043e81378fc374a70767f802e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47f986a2677cf980c2454dc60901226d890e2043e81378fc374a70767f802e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47f986a2677cf980c2454dc60901226d890e2043e81378fc374a70767f802e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47f986a2677cf980c2454dc60901226d890e2043e81378fc374a70767f802e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:30 np0005539563 podman[161631]: 2025-11-29 07:24:30.605027077 +0000 UTC m=+0.020388694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:24:30 np0005539563 podman[161631]: 2025-11-29 07:24:30.710090076 +0000 UTC m=+0.125451723 container init c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:30 np0005539563 podman[161631]: 2025-11-29 07:24:30.718874444 +0000 UTC m=+0.134236031 container start c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:24:30 np0005539563 podman[161631]: 2025-11-29 07:24:30.722674327 +0000 UTC m=+0.138035914 container attach c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:24:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:31.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:31 np0005539563 python3.9[161775]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]: {
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:    "0": [
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:        {
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "devices": [
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "/dev/loop3"
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            ],
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "lv_name": "ceph_lv0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "lv_size": "7511998464",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "name": "ceph_lv0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "tags": {
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.cluster_name": "ceph",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.crush_device_class": "",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.encrypted": "0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.osd_id": "0",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.type": "block",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:                "ceph.vdo": "0"
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            },
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "type": "block",
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:            "vg_name": "ceph_vg0"
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:        }
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]:    ]
Nov 29 02:24:31 np0005539563 eloquent_bassi[161695]: }
Nov 29 02:24:31 np0005539563 systemd[1]: libpod-c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4.scope: Deactivated successfully.
Nov 29 02:24:31 np0005539563 podman[161631]: 2025-11-29 07:24:31.498876926 +0000 UTC m=+0.914238523 container died c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bassi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:24:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:31.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2b47f986a2677cf980c2454dc60901226d890e2043e81378fc374a70767f802e-merged.mount: Deactivated successfully.
Nov 29 02:24:32 np0005539563 python3.9[161942]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:32 np0005539563 podman[161631]: 2025-11-29 07:24:32.448969371 +0000 UTC m=+1.864330958 container remove c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bassi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:24:32 np0005539563 systemd[1]: libpod-conmon-c8b480dd7dc562266fa269990185c97f144c14776e5638383e6caf46725cb4f4.scope: Deactivated successfully.
Nov 29 02:24:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:32 np0005539563 python3.9[162196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.038855837 +0000 UTC m=+0.048130176 container create 4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:24:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:33 np0005539563 systemd[1]: Started libpod-conmon-4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c.scope.
Nov 29 02:24:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.100063767 +0000 UTC m=+0.109338126 container init 4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.110380097 +0000 UTC m=+0.119654436 container start 4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.01722989 +0000 UTC m=+0.026504249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.114268702 +0000 UTC m=+0.123543041 container attach 4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 02:24:33 np0005539563 magical_bartik[162280]: 167 167
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.115727622 +0000 UTC m=+0.125001961 container died 4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:24:33 np0005539563 systemd[1]: libpod-4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c.scope: Deactivated successfully.
Nov 29 02:24:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4ffdc06608e3d7d3707f0fdb87c7f22df58512bcc3038316f3511a306c3c3fea-merged.mount: Deactivated successfully.
Nov 29 02:24:33 np0005539563 podman[162239]: 2025-11-29 07:24:33.161438771 +0000 UTC m=+0.170713110 container remove 4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:24:33 np0005539563 podman[162277]: 2025-11-29 07:24:33.165809919 +0000 UTC m=+0.094528874 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:24:33 np0005539563 systemd[1]: libpod-conmon-4508ebe59fd2de3d29c7c082b2a05737716ad4b5cbaedfa1ccf84940c86edf1c.scope: Deactivated successfully.
Nov 29 02:24:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:33.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:33 np0005539563 podman[162402]: 2025-11-29 07:24:33.305093596 +0000 UTC m=+0.035819682 container create b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:24:33 np0005539563 systemd[1]: Started libpod-conmon-b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004.scope.
Nov 29 02:24:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:24:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e3d238aa710ce3321d5355c21ac6c57d0ee6ed5fd74913236e280d5f2157f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e3d238aa710ce3321d5355c21ac6c57d0ee6ed5fd74913236e280d5f2157f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e3d238aa710ce3321d5355c21ac6c57d0ee6ed5fd74913236e280d5f2157f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e3d238aa710ce3321d5355c21ac6c57d0ee6ed5fd74913236e280d5f2157f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:24:33 np0005539563 podman[162402]: 2025-11-29 07:24:33.385424775 +0000 UTC m=+0.116150861 container init b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:24:33 np0005539563 podman[162402]: 2025-11-29 07:24:33.29009319 +0000 UTC m=+0.020819296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:24:33 np0005539563 podman[162402]: 2025-11-29 07:24:33.397162953 +0000 UTC m=+0.127889039 container start b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:24:33 np0005539563 podman[162402]: 2025-11-29 07:24:33.400591386 +0000 UTC m=+0.131317472 container attach b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:24:33 np0005539563 python3.9[162466]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:33.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:34 np0005539563 python3.9[162621]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:34 np0005539563 elastic_elion[162464]: {
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:        "osd_id": 0,
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:        "type": "bluestore"
Nov 29 02:24:34 np0005539563 elastic_elion[162464]:    }
Nov 29 02:24:34 np0005539563 elastic_elion[162464]: }
Nov 29 02:24:34 np0005539563 systemd[1]: libpod-b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004.scope: Deactivated successfully.
Nov 29 02:24:34 np0005539563 podman[162638]: 2025-11-29 07:24:34.257504243 +0000 UTC m=+0.030680373 container died b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:24:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-87e3d238aa710ce3321d5355c21ac6c57d0ee6ed5fd74913236e280d5f2157f4-merged.mount: Deactivated successfully.
Nov 29 02:24:34 np0005539563 podman[162638]: 2025-11-29 07:24:34.3263186 +0000 UTC m=+0.099494700 container remove b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:24:34 np0005539563 systemd[1]: libpod-conmon-b21ea76a06966b0b5e500d11be76cba6c5f2977eb9b0063c84246a86dd78a004.scope: Deactivated successfully.
Nov 29 02:24:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:24:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:24:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4249d1e5-048f-4eaa-8a3e-dada51b59f57 does not exist
Nov 29 02:24:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 802e05c1-2a39-436b-8236-7260ca332e0d does not exist
Nov 29 02:24:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 25d82c1d-73f1-4719-924d-241704ec8a9d does not exist
Nov 29 02:24:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:34 np0005539563 python3.9[162855]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:35.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:35 np0005539563 python3.9[163007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:24:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:35.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:36 np0005539563 python3.9[163159]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:36 np0005539563 python3.9[163312]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:37.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:37 np0005539563 python3.9[163464]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:37.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:38 np0005539563 python3.9[163616]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:38 np0005539563 python3.9[163768]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:39.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:39 np0005539563 python3.9[163921]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:39.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:39 np0005539563 podman[164095]: 2025-11-29 07:24:39.649765278 +0000 UTC m=+0.080842543 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:24:39 np0005539563 python3.9[164140]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:41.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:41 np0005539563 python3.9[164302]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:43.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:45.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:45.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:47.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:47.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:48 np0005539563 python3.9[164458]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:24:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:49.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:49 np0005539563 python3.9[164611]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:24:49 np0005539563 systemd[1]: Reloading.
Nov 29 02:24:49 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:24:49 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:24:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:49.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:50 np0005539563 python3.9[164799]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:51.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:51 np0005539563 python3.9[164953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:51 np0005539563 python3.9[165106]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:52 np0005539563 python3.9[165259]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:53.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:53 np0005539563 python3.9[165413]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:53.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:53 np0005539563 python3.9[165566]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:54 np0005539563 python3.9[165719]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:24:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:24:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:55.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:56 np0005539563 python3.9[165873]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 02:24:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:57.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:57 np0005539563 python3.9[166027]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:24:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:57.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:24:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:58 np0005539563 python3.9[166186]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:24:58 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:24:58 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:24:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:59.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:24:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:00 np0005539563 python3.9[166397]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:25:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:01 np0005539563 python3.9[166482]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:25:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:01.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:03.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:03 np0005539563 podman[166494]: 2025-11-29 07:25:03.53162439 +0000 UTC m=+0.083941287 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:25:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:25:04.866 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:25:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:25:04.867 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:25:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:25:04.867 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:25:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:05.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:07.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:09.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:09.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:10 np0005539563 podman[166516]: 2025-11-29 07:25:10.568107239 +0000 UTC m=+0.125711490 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 02:25:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:11.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:25:12
Nov 29 02:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images']
Nov 29 02:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:25:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:25:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:15.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:17.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:19.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:21.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:21.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:25:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:23.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:23.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:25.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:25.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:27.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:27.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:29.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:31.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:33.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:33.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:34 np0005539563 podman[166790]: 2025-11-29 07:25:34.52029157 +0000 UTC m=+0.078450009 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:25:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:35.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:35 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:25:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:35.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.4092 seconds
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:25:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:39.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:25:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:25:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:41.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:25:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:25:41 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Nov 29 02:25:41 np0005539563 podman[166993]: 2025-11-29 07:25:41.529803269 +0000 UTC m=+0.090077204 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:25:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:25:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:25:42 np0005539563 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:25:42 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:25:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:43.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:25:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:45.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:25:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:25:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:25:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:25:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:25:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 16m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:25:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:25:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:25:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 459c79ef-c40c-495a-885b-6c2c57159908 does not exist
Nov 29 02:25:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 460d2c76-fc40-491b-9f2f-170c65a84977 does not exist
Nov 29 02:25:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 900ef641-2c0d-4b36-aeb6-b5acbeb2fcfb does not exist
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:25:47 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 02:25:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:47.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:25:47 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:25:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:47.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:47 np0005539563 podman[167166]: 2025-11-29 07:25:47.709344622 +0000 UTC m=+0.038697456 container create d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:47 np0005539563 systemd[1]: Started libpod-conmon-d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc.scope.
Nov 29 02:25:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:25:47 np0005539563 podman[167166]: 2025-11-29 07:25:47.692084622 +0000 UTC m=+0.021437476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:47 np0005539563 podman[167166]: 2025-11-29 07:25:47.943176201 +0000 UTC m=+0.272529035 container init d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:25:47 np0005539563 podman[167166]: 2025-11-29 07:25:47.950174682 +0000 UTC m=+0.279527516 container start d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:25:47 np0005539563 systemd[1]: libpod-d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc.scope: Deactivated successfully.
Nov 29 02:25:47 np0005539563 inspiring_ptolemy[167182]: 167 167
Nov 29 02:25:48 np0005539563 podman[167166]: 2025-11-29 07:25:48.407965533 +0000 UTC m=+0.737318447 container attach d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:25:48 np0005539563 podman[167166]: 2025-11-29 07:25:48.409556607 +0000 UTC m=+0.738909481 container died d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:25:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:49.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:49.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:51.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:51.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:53.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:53.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:25:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:55.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4d5211d659108db77cf7a96884728b05b89af2392b109837b782b871cf23ab43-merged.mount: Deactivated successfully.
Nov 29 02:25:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:25:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:25:57 np0005539563 podman[167166]: 2025-11-29 07:25:57.580556327 +0000 UTC m=+9.909909161 container remove d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:57 np0005539563 systemd[1]: libpod-conmon-d5ca59f7a1dcd12ef3654d0b509836ca39bf78661dc39671efc9cdd95f6b32cc.scope: Deactivated successfully.
Nov 29 02:25:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:57 np0005539563 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:25:57 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:25:57 np0005539563 podman[167217]: 2025-11-29 07:25:57.761957908 +0000 UTC m=+0.064024425 container create 7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:25:57 np0005539563 systemd[1]: Started libpod-conmon-7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4.scope.
Nov 29 02:25:57 np0005539563 podman[167217]: 2025-11-29 07:25:57.719186043 +0000 UTC m=+0.021252580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:57 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 02:25:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:25:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be8b3e73c18752baf754d0c4b6c49654b2289a4b4a009610f7cf4c9f4c58f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be8b3e73c18752baf754d0c4b6c49654b2289a4b4a009610f7cf4c9f4c58f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be8b3e73c18752baf754d0c4b6c49654b2289a4b4a009610f7cf4c9f4c58f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be8b3e73c18752baf754d0c4b6c49654b2289a4b4a009610f7cf4c9f4c58f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be8b3e73c18752baf754d0c4b6c49654b2289a4b4a009610f7cf4c9f4c58f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:57 np0005539563 podman[167217]: 2025-11-29 07:25:57.872162581 +0000 UTC m=+0.174229118 container init 7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:25:57 np0005539563 podman[167217]: 2025-11-29 07:25:57.880491148 +0000 UTC m=+0.182557665 container start 7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:25:57 np0005539563 podman[167217]: 2025-11-29 07:25:57.884209129 +0000 UTC m=+0.186275646 container attach 7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:25:58 np0005539563 eloquent_khayyam[167235]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:25:58 np0005539563 eloquent_khayyam[167235]: --> relative data size: 1.0
Nov 29 02:25:58 np0005539563 eloquent_khayyam[167235]: --> All data devices are unavailable
Nov 29 02:25:58 np0005539563 systemd[1]: libpod-7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4.scope: Deactivated successfully.
Nov 29 02:25:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:58 np0005539563 podman[167252]: 2025-11-29 07:25:58.76119449 +0000 UTC m=+0.021533598 container died 7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:25:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d0be8b3e73c18752baf754d0c4b6c49654b2289a4b4a009610f7cf4c9f4c58f9-merged.mount: Deactivated successfully.
Nov 29 02:25:58 np0005539563 podman[167252]: 2025-11-29 07:25:58.897936205 +0000 UTC m=+0.158275303 container remove 7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:58 np0005539563 systemd[1]: libpod-conmon-7ba0c2b2c65380f86bc887fa9fe3a777be5897189b91f7de1537050e27cbc6c4.scope: Deactivated successfully.
Nov 29 02:25:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.479598001 +0000 UTC m=+0.041551993 container create e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:25:59 np0005539563 systemd[1]: Started libpod-conmon-e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43.scope.
Nov 29 02:25:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.458560068 +0000 UTC m=+0.020514090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.558238424 +0000 UTC m=+0.120192446 container init e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.564084083 +0000 UTC m=+0.126038075 container start e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.568215755 +0000 UTC m=+0.130169747 container attach e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:25:59 np0005539563 nifty_kare[167425]: 167 167
Nov 29 02:25:59 np0005539563 systemd[1]: libpod-e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43.scope: Deactivated successfully.
Nov 29 02:25:59 np0005539563 conmon[167425]: conmon e75a1d9cecd3d100c902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43.scope/container/memory.events
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.571663829 +0000 UTC m=+0.133617821 container died e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:25:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-de37474bc0f4e56f128b8ae4774b7350047735eb1a8ba8a903fb23792e9fa809-merged.mount: Deactivated successfully.
Nov 29 02:25:59 np0005539563 podman[167408]: 2025-11-29 07:25:59.608036479 +0000 UTC m=+0.169990471 container remove e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:25:59 np0005539563 systemd[1]: libpod-conmon-e75a1d9cecd3d100c902f79d4a28e4b8a894b4dde9681b3aedfc6f619431ae43.scope: Deactivated successfully.
Nov 29 02:25:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:25:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:59 np0005539563 podman[167449]: 2025-11-29 07:25:59.809101607 +0000 UTC m=+0.062495903 container create 96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:59 np0005539563 systemd[1]: Started libpod-conmon-96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22.scope.
Nov 29 02:25:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:25:59 np0005539563 podman[167449]: 2025-11-29 07:25:59.782794451 +0000 UTC m=+0.036188807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a97dca068a2613c224c52a9583ec0ac29d370a090833fbd3db9f3c6c2d1be5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a97dca068a2613c224c52a9583ec0ac29d370a090833fbd3db9f3c6c2d1be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a97dca068a2613c224c52a9583ec0ac29d370a090833fbd3db9f3c6c2d1be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a97dca068a2613c224c52a9583ec0ac29d370a090833fbd3db9f3c6c2d1be5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:59 np0005539563 podman[167449]: 2025-11-29 07:25:59.916634947 +0000 UTC m=+0.170029263 container init 96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:25:59 np0005539563 podman[167449]: 2025-11-29 07:25:59.929361503 +0000 UTC m=+0.182755809 container start 96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:25:59 np0005539563 podman[167449]: 2025-11-29 07:25:59.934455722 +0000 UTC m=+0.187850018 container attach 96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:26:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:00 np0005539563 adoring_wu[167466]: {
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:    "0": [
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:        {
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "devices": [
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "/dev/loop3"
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            ],
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "lv_name": "ceph_lv0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "lv_size": "7511998464",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "name": "ceph_lv0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "tags": {
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.cluster_name": "ceph",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.crush_device_class": "",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.encrypted": "0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.osd_id": "0",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.type": "block",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:                "ceph.vdo": "0"
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            },
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "type": "block",
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:            "vg_name": "ceph_vg0"
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:        }
Nov 29 02:26:00 np0005539563 adoring_wu[167466]:    ]
Nov 29 02:26:00 np0005539563 adoring_wu[167466]: }
Nov 29 02:26:00 np0005539563 systemd[1]: libpod-96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22.scope: Deactivated successfully.
Nov 29 02:26:00 np0005539563 podman[167526]: 2025-11-29 07:26:00.715795388 +0000 UTC m=+0.027603903 container died 96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:26:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-96a97dca068a2613c224c52a9583ec0ac29d370a090833fbd3db9f3c6c2d1be5-merged.mount: Deactivated successfully.
Nov 29 02:26:00 np0005539563 podman[167526]: 2025-11-29 07:26:00.822823404 +0000 UTC m=+0.134631879 container remove 96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:26:00 np0005539563 systemd[1]: libpod-conmon-96c7b5b7b042648803646448012d27caaaf0e9c1729bce0c55a41cd25eca9d22.scope: Deactivated successfully.
Nov 29 02:26:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.535485058 +0000 UTC m=+0.048995176 container create f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:26:01 np0005539563 systemd[1]: Started libpod-conmon-f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2.scope.
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.51282111 +0000 UTC m=+0.026331248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.634826525 +0000 UTC m=+0.148336673 container init f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.643029418 +0000 UTC m=+0.156539536 container start f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_burnell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.647129619 +0000 UTC m=+0.160639737 container attach f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_burnell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:26:01 np0005539563 peaceful_burnell[167698]: 167 167
Nov 29 02:26:01 np0005539563 systemd[1]: libpod-f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2.scope: Deactivated successfully.
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.652559407 +0000 UTC m=+0.166069525 container died f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:26:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ab22cdce1781c181130a5c97bf5a0b56ac03ec9b03fa74056140100d2c89073c-merged.mount: Deactivated successfully.
Nov 29 02:26:01 np0005539563 podman[167679]: 2025-11-29 07:26:01.696030422 +0000 UTC m=+0.209540540 container remove f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:26:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:01.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:01 np0005539563 systemd[1]: libpod-conmon-f8396a4d9ee5e231f16c765ed2d5ddd6c7f0b2617b4b4f09221795f91e7681e2.scope: Deactivated successfully.
Nov 29 02:26:01 np0005539563 podman[167721]: 2025-11-29 07:26:01.862665741 +0000 UTC m=+0.045585823 container create e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:26:01 np0005539563 systemd[1]: Started libpod-conmon-e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e.scope.
Nov 29 02:26:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:26:01 np0005539563 podman[167721]: 2025-11-29 07:26:01.844444425 +0000 UTC m=+0.027364527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595aead4342274158a4c9a22e62f89c507c556322b877c3b1dd7c8f9ade2fb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595aead4342274158a4c9a22e62f89c507c556322b877c3b1dd7c8f9ade2fb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595aead4342274158a4c9a22e62f89c507c556322b877c3b1dd7c8f9ade2fb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3595aead4342274158a4c9a22e62f89c507c556322b877c3b1dd7c8f9ade2fb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:01 np0005539563 podman[167721]: 2025-11-29 07:26:01.95953381 +0000 UTC m=+0.142453912 container init e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:26:01 np0005539563 podman[167721]: 2025-11-29 07:26:01.965793941 +0000 UTC m=+0.148714023 container start e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:26:01 np0005539563 podman[167721]: 2025-11-29 07:26:01.968938636 +0000 UTC m=+0.151858798 container attach e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:26:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]: {
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:        "osd_id": 0,
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:        "type": "bluestore"
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]:    }
Nov 29 02:26:02 np0005539563 unruffled_bassi[167737]: }
Nov 29 02:26:02 np0005539563 systemd[1]: libpod-e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e.scope: Deactivated successfully.
Nov 29 02:26:02 np0005539563 podman[167721]: 2025-11-29 07:26:02.898193412 +0000 UTC m=+1.081113514 container died e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:26:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3595aead4342274158a4c9a22e62f89c507c556322b877c3b1dd7c8f9ade2fb7-merged.mount: Deactivated successfully.
Nov 29 02:26:02 np0005539563 podman[167721]: 2025-11-29 07:26:02.965873175 +0000 UTC m=+1.148793257 container remove e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bassi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:26:02 np0005539563 systemd[1]: libpod-conmon-e609e7711977e9dda053970a2f8f9d0e3dcddb13d2ec98424e9c77a86388490e.scope: Deactivated successfully.
Nov 29 02:26:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:26:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:03.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:26:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:26:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:26:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 556185c9-d748-4f68-9280-0ab37edfe2d0 does not exist
Nov 29 02:26:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 60460970-1998-4608-99a7-2692b8f3533b does not exist
Nov 29 02:26:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 52f79685-c05d-490f-a251-df2f25301d00 does not exist
Nov 29 02:26:04 np0005539563 podman[167797]: 2025-11-29 07:26:04.684047322 +0000 UTC m=+0.064185840 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 02:26:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:26:04.867 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:26:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:26:04.867 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:26:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:26:04.868 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:26:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:26:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:26:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:07.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:09.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:11.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:11.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:12 np0005539563 podman[169143]: 2025-11-29 07:26:12.514571744 +0000 UTC m=+0.075885418 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Nov 29 02:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:26:12
Nov 29 02:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log']
Nov 29 02:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:26:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:26:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:26:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:13.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:13.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:15.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:21.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:26:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:25.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:27.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:27.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:29.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:29.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:31.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:31.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:33.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:35.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:35 np0005539563 podman[182480]: 2025-11-29 07:26:35.511533536 +0000 UTC m=+0.071538960 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 02:26:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:37.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:39.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:39.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:41.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:43.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:43 np0005539563 podman[184803]: 2025-11-29 07:26:43.622928919 +0000 UTC m=+0.171095092 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:26:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:26:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:45.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:26:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:47.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:47.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:49.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:51.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:53.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:26:55 np0005539563 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:26:55 np0005539563 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:26:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:55.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:56 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 02:26:56 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 02:26:56 np0005539563 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Nov 29 02:26:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:57.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:57.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:59.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:26:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:59.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:01.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:27:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:27:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:03.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:27:04.869 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:27:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:27:04.870 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:27:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:27:04.870 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:27:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:05.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:05 np0005539563 podman[185105]: 2025-11-29 07:27:05.763929711 +0000 UTC m=+0.077299022 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:27:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:07 np0005539563 podman[185148]: 2025-11-29 07:27:07.259418572 +0000 UTC m=+1.403938429 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:27:07 np0005539563 podman[185148]: 2025-11-29 07:27:07.386052454 +0000 UTC m=+1.530572301 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:27:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:07.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:07.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:27:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:27:08 np0005539563 podman[185323]: 2025-11-29 07:27:08.851008025 +0000 UTC m=+0.883548619 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:27:09 np0005539563 podman[185323]: 2025-11-29 07:27:09.089387071 +0000 UTC m=+1.121927655 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:27:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:09.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:09.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:09 np0005539563 podman[185451]: 2025-11-29 07:27:09.984340494 +0000 UTC m=+0.069847142 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, io.buildah.version=1.28.2, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, architecture=x86_64, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 02:27:10 np0005539563 podman[185486]: 2025-11-29 07:27:10.063961717 +0000 UTC m=+0.057272605 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20)
Nov 29 02:27:10 np0005539563 podman[185451]: 2025-11-29 07:27:10.131110016 +0000 UTC m=+0.216616634 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived)
Nov 29 02:27:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:27:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:11.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:27:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:27:12
Nov 29 02:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.meta', 'images']
Nov 29 02:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:27:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:27:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:13.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:13.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:27:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7d6ced11-2908-4042-8b8d-b7aaca433401 does not exist
Nov 29 02:27:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f15f6850-290e-490e-9d93-645820e641a9 does not exist
Nov 29 02:27:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b9946f49-753b-43d9-8573-223dae44ee61 does not exist
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:27:14 np0005539563 podman[185730]: 2025-11-29 07:27:14.203416341 +0000 UTC m=+0.097566204 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.64348186 +0000 UTC m=+0.028973598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.745986405 +0000 UTC m=+0.131478093 container create acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:14 np0005539563 systemd[1]: Started libpod-conmon-acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef.scope.
Nov 29 02:27:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.85554379 +0000 UTC m=+0.241035438 container init acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.868940489 +0000 UTC m=+0.254432137 container start acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.872450663 +0000 UTC m=+0.257942311 container attach acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:27:14 np0005539563 friendly_galois[185892]: 167 167
Nov 29 02:27:14 np0005539563 systemd[1]: libpod-acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef.scope: Deactivated successfully.
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.880616312 +0000 UTC m=+0.266108010 container died acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:27:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c400731e594238e74e5ddef460c44dfbeade9901b7563d2eac88e791b6b48567-merged.mount: Deactivated successfully.
Nov 29 02:27:14 np0005539563 podman[185873]: 2025-11-29 07:27:14.945421007 +0000 UTC m=+0.330912665 container remove acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:27:14 np0005539563 systemd[1]: libpod-conmon-acab4f1687f7e406f9e9a83e0351154b074e1704a397d76d8d4ca963783787ef.scope: Deactivated successfully.
Nov 29 02:27:15 np0005539563 podman[186052]: 2025-11-29 07:27:15.14521232 +0000 UTC m=+0.049079226 container create 54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:27:15 np0005539563 systemd[1]: Started libpod-conmon-54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058.scope.
Nov 29 02:27:15 np0005539563 podman[186052]: 2025-11-29 07:27:15.117350954 +0000 UTC m=+0.021217870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:27:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fe1469ca004e4b38492d0049226b8a5b0b910992fbfd88248c94a4f3a24082/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fe1469ca004e4b38492d0049226b8a5b0b910992fbfd88248c94a4f3a24082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fe1469ca004e4b38492d0049226b8a5b0b910992fbfd88248c94a4f3a24082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fe1469ca004e4b38492d0049226b8a5b0b910992fbfd88248c94a4f3a24082/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fe1469ca004e4b38492d0049226b8a5b0b910992fbfd88248c94a4f3a24082/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:15 np0005539563 podman[186052]: 2025-11-29 07:27:15.278071399 +0000 UTC m=+0.181938315 container init 54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_spence, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:27:15 np0005539563 podman[186052]: 2025-11-29 07:27:15.28595877 +0000 UTC m=+0.189825666 container start 54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:27:15 np0005539563 podman[186052]: 2025-11-29 07:27:15.291661133 +0000 UTC m=+0.195528049 container attach 54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:27:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:15.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.518914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235519172, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1579, "num_deletes": 252, "total_data_size": 2844205, "memory_usage": 3125680, "flush_reason": "Manual Compaction"}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235615182, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 2788960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12519, "largest_seqno": 14097, "table_properties": {"data_size": 2781711, "index_size": 4256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13968, "raw_average_key_size": 18, "raw_value_size": 2767247, "raw_average_value_size": 3679, "num_data_blocks": 192, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401032, "oldest_key_time": 1764401032, "file_creation_time": 1764401235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 97066 microseconds, and 15712 cpu microseconds.
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.616059) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 2788960 bytes OK
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.616388) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.619957) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.620236) EVENT_LOG_v1 {"time_micros": 1764401235620029, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.620272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2837592, prev total WAL file size 2837592, number of live WAL files 2.
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.623448) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(2723KB)], [29(9043KB)]
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235623639, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12049285, "oldest_snapshot_seqno": -1}
Nov 29 02:27:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4335 keys, 11506570 bytes, temperature: kUnknown
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235841126, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11506570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11471623, "index_size": 22997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 106487, "raw_average_key_size": 24, "raw_value_size": 11387389, "raw_average_value_size": 2626, "num_data_blocks": 979, "num_entries": 4335, "num_filter_entries": 4335, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764401235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.841392) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11506570 bytes
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.843054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.4 rd, 52.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.4) write-amplify(4.1) OK, records in: 4857, records dropped: 522 output_compression: NoCompression
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.843073) EVENT_LOG_v1 {"time_micros": 1764401235843063, "job": 12, "event": "compaction_finished", "compaction_time_micros": 217558, "compaction_time_cpu_micros": 46098, "output_level": 6, "num_output_files": 1, "total_output_size": 11506570, "num_input_records": 4857, "num_output_records": 4335, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235843544, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401235844924, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.623188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.844964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.844971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.844972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.844974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:15 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:27:15.844975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:27:16 np0005539563 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 02:27:16 np0005539563 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 02:27:16 np0005539563 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 02:27:16 np0005539563 systemd[1]: sshd.service: Consumed 2.188s CPU time, read 32.0K from disk, written 0B to disk.
Nov 29 02:27:16 np0005539563 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 02:27:16 np0005539563 systemd[1]: Stopping sshd-keygen.target...
Nov 29 02:27:16 np0005539563 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:27:16 np0005539563 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:27:16 np0005539563 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:27:16 np0005539563 systemd[1]: Reached target sshd-keygen.target.
Nov 29 02:27:16 np0005539563 systemd[1]: Starting OpenSSH server daemon...
Nov 29 02:27:16 np0005539563 systemd[1]: Started OpenSSH server daemon.
Nov 29 02:27:16 np0005539563 goofy_spence[186128]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:27:16 np0005539563 goofy_spence[186128]: --> relative data size: 1.0
Nov 29 02:27:16 np0005539563 goofy_spence[186128]: --> All data devices are unavailable
Nov 29 02:27:16 np0005539563 systemd[1]: libpod-54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539563 podman[186052]: 2025-11-29 07:27:16.539328014 +0000 UTC m=+1.443194900 container died 54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:27:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:17.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-16fe1469ca004e4b38492d0049226b8a5b0b910992fbfd88248c94a4f3a24082-merged.mount: Deactivated successfully.
Nov 29 02:27:17 np0005539563 podman[186052]: 2025-11-29 07:27:17.724648035 +0000 UTC m=+2.628514971 container remove 54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_spence, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:27:17 np0005539563 systemd[1]: libpod-conmon-54df61e77051a4cede8270d6036f22f21b45d663d8c07f6c5516f287c6439058.scope: Deactivated successfully.
Nov 29 02:27:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:17.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:18 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:27:18 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:27:18 np0005539563 podman[186928]: 2025-11-29 07:27:18.35982169 +0000 UTC m=+0.026767228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:18 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:18 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:18 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:18 np0005539563 podman[186928]: 2025-11-29 07:27:18.700092005 +0000 UTC m=+0.367037483 container create 95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:27:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:18 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:27:18 np0005539563 systemd[1]: Started libpod-conmon-95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779.scope.
Nov 29 02:27:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:27:18 np0005539563 auditd[703]: Audit daemon rotating log files
Nov 29 02:27:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:19.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:20 np0005539563 podman[186928]: 2025-11-29 07:27:20.595429746 +0000 UTC m=+2.262375324 container init 95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:27:20 np0005539563 podman[186928]: 2025-11-29 07:27:20.607084128 +0000 UTC m=+2.274029616 container start 95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:27:20 np0005539563 pensive_benz[187014]: 167 167
Nov 29 02:27:20 np0005539563 systemd[1]: libpod-95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779.scope: Deactivated successfully.
Nov 29 02:27:20 np0005539563 podman[186928]: 2025-11-29 07:27:20.6467194 +0000 UTC m=+2.313664918 container attach 95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:27:20 np0005539563 podman[186928]: 2025-11-29 07:27:20.652378512 +0000 UTC m=+2.319324000 container died 95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:27:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0467b3e2955eeee8734c080033b5a4733cd28b17daf3b0fa1940c1c4f623474e-merged.mount: Deactivated successfully.
Nov 29 02:27:21 np0005539563 podman[186928]: 2025-11-29 07:27:21.551063754 +0000 UTC m=+3.218009242 container remove 95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:27:21 np0005539563 systemd[1]: libpod-conmon-95e1350fe16ef38b10dd6fde73adda3577036cdf38df6a50f76b15f70953b779.scope: Deactivated successfully.
Nov 29 02:27:21 np0005539563 podman[189746]: 2025-11-29 07:27:21.744150627 +0000 UTC m=+0.056923186 container create 3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lehmann, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:27:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:21.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:21 np0005539563 systemd[1]: Started libpod-conmon-3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585.scope.
Nov 29 02:27:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:27:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff564b4157981881b905a84de36915e4b802e6a1153c2f17359cfd4d02bf124/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff564b4157981881b905a84de36915e4b802e6a1153c2f17359cfd4d02bf124/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff564b4157981881b905a84de36915e4b802e6a1153c2f17359cfd4d02bf124/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ff564b4157981881b905a84de36915e4b802e6a1153c2f17359cfd4d02bf124/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:21 np0005539563 podman[189746]: 2025-11-29 07:27:21.72188026 +0000 UTC m=+0.034652839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:21 np0005539563 podman[189746]: 2025-11-29 07:27:21.823542674 +0000 UTC m=+0.136315233 container init 3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lehmann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:27:21 np0005539563 podman[189746]: 2025-11-29 07:27:21.833322745 +0000 UTC m=+0.146095284 container start 3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:27:21 np0005539563 podman[189746]: 2025-11-29 07:27:21.837407885 +0000 UTC m=+0.150180464 container attach 3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]: {
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:    "0": [
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:        {
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "devices": [
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "/dev/loop3"
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            ],
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "lv_name": "ceph_lv0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "lv_size": "7511998464",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "name": "ceph_lv0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "tags": {
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.cluster_name": "ceph",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.crush_device_class": "",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.encrypted": "0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.osd_id": "0",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.type": "block",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:                "ceph.vdo": "0"
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            },
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "type": "block",
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:            "vg_name": "ceph_vg0"
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:        }
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]:    ]
Nov 29 02:27:22 np0005539563 serene_lehmann[189881]: }
Nov 29 02:27:22 np0005539563 systemd[1]: libpod-3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585.scope: Deactivated successfully.
Nov 29 02:27:22 np0005539563 podman[189746]: 2025-11-29 07:27:22.702803796 +0000 UTC m=+1.015576385 container died 3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:27:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7ff564b4157981881b905a84de36915e4b802e6a1153c2f17359cfd4d02bf124-merged.mount: Deactivated successfully.
Nov 29 02:27:23 np0005539563 podman[189746]: 2025-11-29 07:27:23.244343013 +0000 UTC m=+1.557115572 container remove 3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lehmann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:27:23 np0005539563 systemd[1]: libpod-conmon-3bdc3465420b05e446115dc38802bd438dc0626c6bbf4f8d26434e80844c9585.scope: Deactivated successfully.
Nov 29 02:27:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:23.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:23 np0005539563 podman[192161]: 2025-11-29 07:27:23.824656848 +0000 UTC m=+0.021291742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:24 np0005539563 podman[192161]: 2025-11-29 07:27:24.207625067 +0000 UTC m=+0.404259961 container create a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:27:24 np0005539563 systemd[1]: Started libpod-conmon-a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c.scope.
Nov 29 02:27:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:27:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:24 np0005539563 podman[192161]: 2025-11-29 07:27:24.769020905 +0000 UTC m=+0.965655869 container init a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:27:24 np0005539563 podman[192161]: 2025-11-29 07:27:24.785123656 +0000 UTC m=+0.981758530 container start a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:27:24 np0005539563 podman[192161]: 2025-11-29 07:27:24.792061182 +0000 UTC m=+0.988696136 container attach a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:27:24 np0005539563 dazzling_franklin[192668]: 167 167
Nov 29 02:27:24 np0005539563 systemd[1]: libpod-a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c.scope: Deactivated successfully.
Nov 29 02:27:24 np0005539563 podman[192161]: 2025-11-29 07:27:24.796226014 +0000 UTC m=+0.992860928 container died a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:27:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e37413e1ab1e02d0215fa653b8637d3944514293b5234a497da5e9c281b2fae1-merged.mount: Deactivated successfully.
Nov 29 02:27:24 np0005539563 podman[192161]: 2025-11-29 07:27:24.903815136 +0000 UTC m=+1.100450020 container remove a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:27:24 np0005539563 systemd[1]: libpod-conmon-a24794c3ab19773b8b78c20cdfb0712bed169fd23091e4f63b58eb7eebaef35c.scope: Deactivated successfully.
Nov 29 02:27:25 np0005539563 podman[193345]: 2025-11-29 07:27:25.11080242 +0000 UTC m=+0.047366020 container create 89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:27:25 np0005539563 systemd[1]: Started libpod-conmon-89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e.scope.
Nov 29 02:27:25 np0005539563 podman[193345]: 2025-11-29 07:27:25.087313421 +0000 UTC m=+0.023877031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:27:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc68d2ebb816c32f38d8c30d3114671357b834938df8c9cdee87feb5ea5c956e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc68d2ebb816c32f38d8c30d3114671357b834938df8c9cdee87feb5ea5c956e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc68d2ebb816c32f38d8c30d3114671357b834938df8c9cdee87feb5ea5c956e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc68d2ebb816c32f38d8c30d3114671357b834938df8c9cdee87feb5ea5c956e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:25 np0005539563 podman[193345]: 2025-11-29 07:27:25.299856955 +0000 UTC m=+0.236420635 container init 89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_varahamihira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:27:25 np0005539563 podman[193345]: 2025-11-29 07:27:25.312132693 +0000 UTC m=+0.248696333 container start 89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:27:25 np0005539563 podman[193345]: 2025-11-29 07:27:25.375528352 +0000 UTC m=+0.312092052 container attach 89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:27:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]: {
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:        "osd_id": 0,
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:        "type": "bluestore"
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]:    }
Nov 29 02:27:26 np0005539563 vigilant_varahamihira[193406]: }
Nov 29 02:27:26 np0005539563 systemd[1]: libpod-89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e.scope: Deactivated successfully.
Nov 29 02:27:26 np0005539563 podman[193345]: 2025-11-29 07:27:26.152075113 +0000 UTC m=+1.088638713 container died 89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:27:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fc68d2ebb816c32f38d8c30d3114671357b834938df8c9cdee87feb5ea5c956e-merged.mount: Deactivated successfully.
Nov 29 02:27:26 np0005539563 podman[193345]: 2025-11-29 07:27:26.220024823 +0000 UTC m=+1.156588423 container remove 89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:27:26 np0005539563 systemd[1]: libpod-conmon-89ffc88fe1f2c4e2ccc406307a8ee84e2d7873f3e64d24e34bce5134e4b5c44e.scope: Deactivated successfully.
Nov 29 02:27:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:27:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:27:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 37e6a489-d26d-481b-b2bb-2b6b25f017b0 does not exist
Nov 29 02:27:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 08cdcbdb-edf9-49a5-ae73-c7148c67c42a does not exist
Nov 29 02:27:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 83fe1a91-4c64-4cf1-a14e-d2220fd53855 does not exist
Nov 29 02:27:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:27 np0005539563 python3.9[195128]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:27 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:27 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:27 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:27:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:27.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:28 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:27:28 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:27:28 np0005539563 systemd[1]: man-db-cache-update.service: Consumed 11.763s CPU time.
Nov 29 02:27:28 np0005539563 systemd[1]: run-ra3586a767cb24f849a78d153278f97a8.service: Deactivated successfully.
Nov 29 02:27:28 np0005539563 python3.9[196124]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:28 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:28 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:28 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:29.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:29 np0005539563 python3.9[196315]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:29 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:29.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:29 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:29 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:30 np0005539563 python3.9[196506]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:30 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:31 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:31 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:31.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:31.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:33 np0005539563 python3.9[196698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:33 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:33 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:33 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:34 np0005539563 python3.9[196889]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:34 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:34 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:34 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:35.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:36 np0005539563 python3.9[197080]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:36 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:36 np0005539563 podman[197082]: 2025-11-29 07:27:36.176157391 +0000 UTC m=+0.115104774 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:27:36 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:36 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:37.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:37.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:37 np0005539563 python3.9[197289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:38 np0005539563 python3.9[197444]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:38 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:38 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:38 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:39.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:39.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:41 np0005539563 python3.9[197660]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:27:41 np0005539563 systemd[1]: Reloading.
Nov 29 02:27:41 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:41 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:41 np0005539563 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 02:27:41 np0005539563 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 02:27:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:41.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:42 np0005539563 python3.9[197880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:43.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:43.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:43 np0005539563 python3.9[198036]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:44 np0005539563 podman[198139]: 2025-11-29 07:27:44.557546047 +0000 UTC m=+0.108582010 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 02:27:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:45.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:45.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:47.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:47 np0005539563 python3.9[198222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:47.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:48 np0005539563 python3.9[198377]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:49 np0005539563 python3.9[198533]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:49.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:50 np0005539563 python3.9[198688]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:51 np0005539563 python3.9[198844]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:51.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:52 np0005539563 python3.9[198999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:53 np0005539563 python3.9[199155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:53.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:53.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:54 np0005539563 python3.9[199310]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:55 np0005539563 python3.9[199466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:27:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:55.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:27:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:55.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:27:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:27:56 np0005539563 python3.9[199621]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:57 np0005539563 python3.9[199777]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:57.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:58 np0005539563 python3.9[199932]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:27:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:27:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:27:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:01.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:03 np0005539563 python3.9[200140]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:03.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:04 np0005539563 python3.9[200292]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:28:04.871 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:28:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:28:04.872 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:28:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:28:04.872 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:28:04 np0005539563 python3.9[200445]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:05.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:05 np0005539563 python3.9[200597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:06 np0005539563 python3.9[200749]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:07 np0005539563 podman[200902]: 2025-11-29 07:28:07.210566418 +0000 UTC m=+0.088582183 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 02:28:07 np0005539563 python3.9[200903]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:07.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:07.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:08 np0005539563 python3.9[201073]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:08 np0005539563 python3.9[201199]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401287.5563436-1627-247931884425772/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:09.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:09 np0005539563 python3.9[201351]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:09.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:10 np0005539563 python3.9[201476]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401289.1441274-1627-27133272520045/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:11 np0005539563 python3.9[201629]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:11.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:11.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:11 np0005539563 python3.9[201754]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401290.7044144-1627-40235118748699/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:28:12
Nov 29 02:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Nov 29 02:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:28:12 np0005539563 python3.9[201906]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:28:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:28:13 np0005539563 python3.9[202032]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401292.1935399-1627-154923850952493/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:13.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:13.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:13 np0005539563 python3.9[202184]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:14 np0005539563 python3.9[202309]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401293.4837196-1627-23248127636023/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:14 np0005539563 podman[202311]: 2025-11-29 07:28:14.781435161 +0000 UTC m=+0.106096233 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:28:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:15 np0005539563 python3.9[202487]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:15.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:15.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:16 np0005539563 python3.9[202612]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401294.8187985-1627-159759973292539/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:16 np0005539563 python3.9[202764]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:28:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3314 writes, 14K keys, 3311 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3314 writes, 3311 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1178 writes, 5012 keys, 1176 commit groups, 1.0 writes per commit group, ingest: 8.35 MB, 0.01 MB/s#012Interval WAL: 1178 writes, 1176 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     52.6      0.34              0.06         6    0.057       0      0       0.0       0.0#012  L6      1/0   10.97 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.5     13.5     11.7      3.77              0.13         5    0.754     21K   2373       0.0       0.0#012 Sum      1/0   10.97 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.5     12.4     15.1      4.11              0.19        11    0.374     21K   2373       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     11.1     12.0      3.93              0.15         8    0.491     18K   2081       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     13.5     11.7      3.77              0.13         5    0.754     21K   2373       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     53.2      0.33              0.06         5    0.067       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.017, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 4.1 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 3.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 308.00 MB usage: 2.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(96,1.79 MB,0.581593%) FilterBlock(12,70.23 KB,0.0222689%) IndexBlock(12,147.89 KB,0.0468911%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:28:17 np0005539563 python3.9[202888]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401296.2690718-1627-9007332116984/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:17.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:18 np0005539563 python3.9[203040]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:18 np0005539563 python3.9[203166]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401297.630211-1627-229403393477281/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:19.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:19.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:20 np0005539563 python3.9[203318]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 02:28:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:21 np0005539563 python3.9[203522]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:28:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:22 np0005539563 python3.9[203674]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:23 np0005539563 python3.9[203827]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:23.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:23.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:24 np0005539563 python3.9[203979]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:24 np0005539563 python3.9[204132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:25.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:25 np0005539563 python3.9[204284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:25.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:26 np0005539563 python3.9[204436]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:27 np0005539563 python3.9[204618]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:27.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:28:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:28:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:28:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:28:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:28:27 np0005539563 python3.9[204870]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:27.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:28 np0005539563 python3.9[205022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:28:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 38b4fb4b-cafc-4da4-937f-823bfad7b9e0 does not exist
Nov 29 02:28:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 96d60dc9-d415-4580-a1b7-31deb43529c0 does not exist
Nov 29 02:28:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cdf5091d-5274-4f9f-8a26-c1fbb08e1eb6 does not exist
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:28:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:28:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:29 np0005539563 podman[205316]: 2025-11-29 07:28:29.144212939 +0000 UTC m=+0.061771956 container create 3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:28:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:28:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:28:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:28:29 np0005539563 python3.9[205296]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:29 np0005539563 systemd[1]: Started libpod-conmon-3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59.scope.
Nov 29 02:28:29 np0005539563 podman[205316]: 2025-11-29 07:28:29.117531894 +0000 UTC m=+0.035090961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:28:29 np0005539563 podman[205316]: 2025-11-29 07:28:29.426308346 +0000 UTC m=+0.343867473 container init 3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:28:29 np0005539563 podman[205316]: 2025-11-29 07:28:29.437362571 +0000 UTC m=+0.354921588 container start 3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:28:29 np0005539563 sad_burnell[205332]: 167 167
Nov 29 02:28:29 np0005539563 systemd[1]: libpod-3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59.scope: Deactivated successfully.
Nov 29 02:28:29 np0005539563 podman[205316]: 2025-11-29 07:28:29.449430415 +0000 UTC m=+0.366989432 container attach 3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:28:29 np0005539563 podman[205316]: 2025-11-29 07:28:29.449667191 +0000 UTC m=+0.367226198 container died 3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 02:28:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d014f786fc8714efdec44416071f3594c5fbf63d89827909f377452b817c07f1-merged.mount: Deactivated successfully.
Nov 29 02:28:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:29.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:29.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:29 np0005539563 python3.9[205497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:30 np0005539563 python3.9[205649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:31 np0005539563 python3.9[205802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:31.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:33 np0005539563 python3.9[205955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:33.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:34 np0005539563 python3.9[206078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401312.9375596-2290-119748864258376/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:34 np0005539563 podman[205316]: 2025-11-29 07:28:34.483564575 +0000 UTC m=+5.401123592 container remove 3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:28:34 np0005539563 systemd[1]: libpod-conmon-3cf6ada08c815be42ed47333ae13e473875e00ffe2d6e395f178a1485d02ff59.scope: Deactivated successfully.
Nov 29 02:28:34 np0005539563 podman[206211]: 2025-11-29 07:28:34.692177202 +0000 UTC m=+0.027921938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:34 np0005539563 python3.9[206253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:35 np0005539563 python3.9[206376]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401314.3697834-2290-197089444614688/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539563 podman[206211]: 2025-11-29 07:28:35.842019924 +0000 UTC m=+1.177764650 container create 92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:28:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:36 np0005539563 python3.9[206528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:36 np0005539563 python3.9[206652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401315.730679-2290-1321433965224/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:37 np0005539563 systemd[1]: Started libpod-conmon-92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da.scope.
Nov 29 02:28:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:28:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9ee7b6d3cba0023de0db53d18dbe4a238000e90934a41044e7fb64d4d6bac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9ee7b6d3cba0023de0db53d18dbe4a238000e90934a41044e7fb64d4d6bac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9ee7b6d3cba0023de0db53d18dbe4a238000e90934a41044e7fb64d4d6bac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9ee7b6d3cba0023de0db53d18dbe4a238000e90934a41044e7fb64d4d6bac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae9ee7b6d3cba0023de0db53d18dbe4a238000e90934a41044e7fb64d4d6bac2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:37 np0005539563 python3.9[206818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:38 np0005539563 podman[206211]: 2025-11-29 07:28:38.129628502 +0000 UTC m=+3.465373258 container init 92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lehmann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:38 np0005539563 podman[206211]: 2025-11-29 07:28:38.141292915 +0000 UTC m=+3.477037631 container start 92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:28:38 np0005539563 python3.9[206943]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401317.1840951-2290-149868443322170/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:38 np0005539563 practical_lehmann[206702]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:28:38 np0005539563 practical_lehmann[206702]: --> relative data size: 1.0
Nov 29 02:28:38 np0005539563 practical_lehmann[206702]: --> All data devices are unavailable
Nov 29 02:28:38 np0005539563 systemd[1]: libpod-92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da.scope: Deactivated successfully.
Nov 29 02:28:39 np0005539563 python3.9[207106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539563 python3.9[207241]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401318.595264-2290-53914408178096/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:40 np0005539563 podman[206211]: 2025-11-29 07:28:40.368799534 +0000 UTC m=+5.704544340 container attach 92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:40 np0005539563 podman[206211]: 2025-11-29 07:28:40.371712432 +0000 UTC m=+5.707457248 container died 92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:28:40 np0005539563 python3.9[207393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:41 np0005539563 python3.9[207517]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401319.9596722-2290-142595118089376/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:41 np0005539563 python3.9[207720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ae9ee7b6d3cba0023de0db53d18dbe4a238000e90934a41044e7fb64d4d6bac2-merged.mount: Deactivated successfully.
Nov 29 02:28:42 np0005539563 python3.9[207843]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401321.3481042-2290-186149382655436/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:43 np0005539563 python3.9[207996]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:43.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:44 np0005539563 python3.9[208119]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401322.7559483-2290-69679846319939/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:45.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:46 np0005539563 python3.9[208283]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:46 np0005539563 podman[206211]: 2025-11-29 07:28:46.523378281 +0000 UTC m=+11.859123007 container remove 92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:28:46 np0005539563 systemd[1]: libpod-conmon-92021e569681c27ff6a3701736dd70c4961dd56792b244f6e0126b742d3290da.scope: Deactivated successfully.
Nov 29 02:28:46 np0005539563 podman[206781]: 2025-11-29 07:28:46.688923515 +0000 UTC m=+9.240833660 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:28:46 np0005539563 podman[208145]: 2025-11-29 07:28:46.693077748 +0000 UTC m=+1.245784967 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:28:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:46 np0005539563 python3.9[208512]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401325.4948263-2290-123200008141701/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.145131713 +0000 UTC m=+0.038848080 container create 02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:28:47 np0005539563 systemd[1]: Started libpod-conmon-02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace.scope.
Nov 29 02:28:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.128357946 +0000 UTC m=+0.022074323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.234947032 +0000 UTC m=+0.128663449 container init 02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_almeida, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.245996783 +0000 UTC m=+0.139713150 container start 02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_almeida, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.252850979 +0000 UTC m=+0.146567376 container attach 02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:47 np0005539563 musing_almeida[208662]: 167 167
Nov 29 02:28:47 np0005539563 systemd[1]: libpod-02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace.scope: Deactivated successfully.
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.25470438 +0000 UTC m=+0.148420767 container died 02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:28:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-19e970ca3df3e0932dd3b0534b4e82d7625ade1aef7bccbff68d9ca5b3ce74e1-merged.mount: Deactivated successfully.
Nov 29 02:28:47 np0005539563 podman[208599]: 2025-11-29 07:28:47.295314437 +0000 UTC m=+0.189030804 container remove 02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:28:47 np0005539563 systemd[1]: libpod-conmon-02f81e588a7500001cdbf28406766286da31e86af7cbde8fe05c7719148d7ace.scope: Deactivated successfully.
Nov 29 02:28:47 np0005539563 podman[208760]: 2025-11-29 07:28:47.45094002 +0000 UTC m=+0.042258083 container create 3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:28:47 np0005539563 systemd[1]: Started libpod-conmon-3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14.scope.
Nov 29 02:28:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a4c9744d7b7c3ec778f5f16e66d2444a661a0f4fbf4dc22d1ab232855d2ca3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a4c9744d7b7c3ec778f5f16e66d2444a661a0f4fbf4dc22d1ab232855d2ca3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a4c9744d7b7c3ec778f5f16e66d2444a661a0f4fbf4dc22d1ab232855d2ca3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a4c9744d7b7c3ec778f5f16e66d2444a661a0f4fbf4dc22d1ab232855d2ca3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:47 np0005539563 podman[208760]: 2025-11-29 07:28:47.431728436 +0000 UTC m=+0.023046539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:47 np0005539563 podman[208760]: 2025-11-29 07:28:47.534348414 +0000 UTC m=+0.125666517 container init 3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:28:47 np0005539563 podman[208760]: 2025-11-29 07:28:47.547928934 +0000 UTC m=+0.139246997 container start 3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:28:47 np0005539563 podman[208760]: 2025-11-29 07:28:47.552465758 +0000 UTC m=+0.143783871 container attach 3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:28:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:47.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:47 np0005539563 python3.9[208778]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:47.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:48 np0005539563 python3.9[208908]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401327.1289527-2290-181772711460724/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]: {
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:    "0": [
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:        {
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "devices": [
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "/dev/loop3"
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            ],
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "lv_name": "ceph_lv0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "lv_size": "7511998464",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "name": "ceph_lv0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "tags": {
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.cluster_name": "ceph",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.crush_device_class": "",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.encrypted": "0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.osd_id": "0",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.type": "block",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:                "ceph.vdo": "0"
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            },
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "type": "block",
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:            "vg_name": "ceph_vg0"
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:        }
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]:    ]
Nov 29 02:28:48 np0005539563 relaxed_wilson[208781]: }
Nov 29 02:28:48 np0005539563 systemd[1]: libpod-3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14.scope: Deactivated successfully.
Nov 29 02:28:48 np0005539563 podman[208760]: 2025-11-29 07:28:48.389762206 +0000 UTC m=+0.981080349 container died 3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 02:28:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:49.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:49 np0005539563 python3.9[209076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:50 np0005539563 python3.9[209199]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401328.414632-2290-207221966521268/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6a4c9744d7b7c3ec778f5f16e66d2444a661a0f4fbf4dc22d1ab232855d2ca3d-merged.mount: Deactivated successfully.
Nov 29 02:28:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:51 np0005539563 python3.9[209353]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:51.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:52 np0005539563 podman[208760]: 2025-11-29 07:28:52.1197697 +0000 UTC m=+4.711087753 container remove 3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:28:52 np0005539563 systemd[1]: libpod-conmon-3a1fbbd1d87e87ed5003b138cebb082cf35d80f777b56391f1627ec6c83c8e14.scope: Deactivated successfully.
Nov 29 02:28:52 np0005539563 python3.9[209476]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401330.8497698-2290-114720052532575/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:52 np0005539563 podman[209771]: 2025-11-29 07:28:52.728830716 +0000 UTC m=+0.024928040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:53 np0005539563 python3.9[209755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:53.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:53.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:54 np0005539563 python3.9[209907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401332.3046603-2290-103653159443617/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:54 np0005539563 podman[209771]: 2025-11-29 07:28:54.159724029 +0000 UTC m=+1.455821363 container create e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:54 np0005539563 python3.9[210059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:55 np0005539563 python3.9[210184]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401334.2489674-2290-272182897260286/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:28:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:55.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:56 np0005539563 python3.9[210334]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:28:56 np0005539563 systemd[1]: Started libpod-conmon-e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf.scope.
Nov 29 02:28:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:28:56 np0005539563 podman[209771]: 2025-11-29 07:28:56.704492257 +0000 UTC m=+4.000589631 container init e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:28:56 np0005539563 podman[209771]: 2025-11-29 07:28:56.716477604 +0000 UTC m=+4.012574938 container start e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:28:56 np0005539563 magical_ellis[210364]: 167 167
Nov 29 02:28:56 np0005539563 systemd[1]: libpod-e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf.scope: Deactivated successfully.
Nov 29 02:28:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:57 np0005539563 podman[209771]: 2025-11-29 07:28:57.239651327 +0000 UTC m=+4.535748661 container attach e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:28:57 np0005539563 podman[209771]: 2025-11-29 07:28:57.241179638 +0000 UTC m=+4.537277022 container died e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:28:57 np0005539563 python3.9[210508]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 02:28:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:57.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:57.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-73da363038408483e88431f2689222b1d57a89d58aa19e01e31035702f039612-merged.mount: Deactivated successfully.
Nov 29 02:28:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:28:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:28:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:28:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:00 np0005539563 podman[209771]: 2025-11-29 07:29:00.642383139 +0000 UTC m=+7.938480433 container remove e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:29:00 np0005539563 systemd[1]: libpod-conmon-e2b3d64614e7642d15319c2ba7e1d7d85ed0662f0092641ceb9e275513594cdf.scope: Deactivated successfully.
Nov 29 02:29:00 np0005539563 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 02:29:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:00 np0005539563 podman[210523]: 2025-11-29 07:29:00.864976628 +0000 UTC m=+0.029910267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:01 np0005539563 podman[210523]: 2025-11-29 07:29:01.202982743 +0000 UTC m=+0.367916412 container create 4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:29:01 np0005539563 systemd[1]: Started libpod-conmon-4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37.scope.
Nov 29 02:29:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:29:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dbc4ddfa2fbbc3496d09bdefdb17a626466b937025eafdaa10b282c61a741d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dbc4ddfa2fbbc3496d09bdefdb17a626466b937025eafdaa10b282c61a741d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dbc4ddfa2fbbc3496d09bdefdb17a626466b937025eafdaa10b282c61a741d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75dbc4ddfa2fbbc3496d09bdefdb17a626466b937025eafdaa10b282c61a741d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:01 np0005539563 podman[210523]: 2025-11-29 07:29:01.421241854 +0000 UTC m=+0.586175543 container init 4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:29:01 np0005539563 podman[210523]: 2025-11-29 07:29:01.430761883 +0000 UTC m=+0.595695522 container start 4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:29:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:01.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:01.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:01 np0005539563 podman[210523]: 2025-11-29 07:29:01.969779368 +0000 UTC m=+1.134713007 container attach 4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:29:02 np0005539563 priceless_bose[210540]: {
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:        "osd_id": 0,
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:        "type": "bluestore"
Nov 29 02:29:02 np0005539563 priceless_bose[210540]:    }
Nov 29 02:29:02 np0005539563 priceless_bose[210540]: }
Nov 29 02:29:02 np0005539563 systemd[1]: libpod-4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37.scope: Deactivated successfully.
Nov 29 02:29:02 np0005539563 podman[210523]: 2025-11-29 07:29:02.255662603 +0000 UTC m=+1.420596242 container died 4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:29:02 np0005539563 python3.9[210751]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-75dbc4ddfa2fbbc3496d09bdefdb17a626466b937025eafdaa10b282c61a741d-merged.mount: Deactivated successfully.
Nov 29 02:29:02 np0005539563 podman[210523]: 2025-11-29 07:29:02.504281441 +0000 UTC m=+1.669215080 container remove 4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:29:02 np0005539563 systemd[1]: libpod-conmon-4f686d28931d7aceb10ebac454da74839471c06c8ff48ec06e2c818a68468d37.scope: Deactivated successfully.
Nov 29 02:29:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:29:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:29:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:29:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:29:02 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c60d2758-e26e-487d-80ad-f12f43c5145f does not exist
Nov 29 02:29:02 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e77f4830-4827-4bce-95d0-38828606bfb2 does not exist
Nov 29 02:29:02 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e5c3964a-7fab-4994-a952-080f1e263345 does not exist
Nov 29 02:29:03 np0005539563 python3.9[210929]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:03 np0005539563 python3.9[211131]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:03.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:29:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:29:04 np0005539563 python3.9[211283]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:29:04.872 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:29:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:29:04.873 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:29:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:29:04.873 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:29:05 np0005539563 python3.9[211436]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:05 np0005539563 python3.9[211588]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:06 np0005539563 python3.9[211740]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:07 np0005539563 python3.9[211893]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:07.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:08 np0005539563 python3.9[212045]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:08 np0005539563 python3.9[212197]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:09 np0005539563 python3.9[212350]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:09 np0005539563 systemd[1]: Reloading.
Nov 29 02:29:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:09 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:09 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:10 np0005539563 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 02:29:10 np0005539563 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 02:29:10 np0005539563 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 02:29:10 np0005539563 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 02:29:10 np0005539563 systemd[1]: Starting libvirt logging daemon...
Nov 29 02:29:10 np0005539563 systemd[1]: Started libvirt logging daemon.
Nov 29 02:29:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:11 np0005539563 python3.9[212544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:11 np0005539563 systemd[1]: Reloading.
Nov 29 02:29:11 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:11 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:11.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:11 np0005539563 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 02:29:11 np0005539563 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 02:29:11 np0005539563 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 02:29:11 np0005539563 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 02:29:11 np0005539563 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 02:29:11 np0005539563 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 02:29:11 np0005539563 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:29:11 np0005539563 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:29:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:12 np0005539563 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 02:29:12 np0005539563 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 02:29:12 np0005539563 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:29:12 np0005539563 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 02:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:29:12
Nov 29 02:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'vms']
Nov 29 02:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:29:12 np0005539563 python3.9[212763]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:12 np0005539563 systemd[1]: Reloading.
Nov 29 02:29:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:12 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:12 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:29:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:29:13 np0005539563 setroubleshoot[212654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a1dea0ed-b876-4b33-9c7f-42f282b0f770
Nov 29 02:29:13 np0005539563 setroubleshoot[212654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 02:29:13 np0005539563 setroubleshoot[212654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a1dea0ed-b876-4b33-9c7f-42f282b0f770
Nov 29 02:29:13 np0005539563 setroubleshoot[212654]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 02:29:13 np0005539563 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 02:29:13 np0005539563 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 02:29:13 np0005539563 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 02:29:13 np0005539563 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 02:29:13 np0005539563 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:29:13 np0005539563 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:29:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:13.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:13.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:14 np0005539563 python3.9[212982]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:14 np0005539563 systemd[1]: Reloading.
Nov 29 02:29:14 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:14 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:15 np0005539563 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 02:29:15 np0005539563 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 02:29:15 np0005539563 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 02:29:15 np0005539563 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 02:29:15 np0005539563 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 02:29:15 np0005539563 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 02:29:15 np0005539563 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 02:29:15 np0005539563 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 02:29:15 np0005539563 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 02:29:15 np0005539563 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 02:29:15 np0005539563 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:29:15 np0005539563 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:29:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:15.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:16 np0005539563 python3.9[213199]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:29:16 np0005539563 systemd[1]: Reloading.
Nov 29 02:29:16 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:16 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:16 np0005539563 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 02:29:16 np0005539563 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 02:29:16 np0005539563 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 02:29:16 np0005539563 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 02:29:16 np0005539563 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 02:29:16 np0005539563 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 02:29:16 np0005539563 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:29:16 np0005539563 systemd[1]: Started libvirt secret daemon.
Nov 29 02:29:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:16 np0005539563 podman[213237]: 2025-11-29 07:29:16.87415588 +0000 UTC m=+0.097346465 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:29:16 np0005539563 podman[213238]: 2025-11-29 07:29:16.902572025 +0000 UTC m=+0.120517597 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 02:29:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:17.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:17.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:17 np0005539563 python3.9[213457]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:18 np0005539563 python3.9[213609]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:29:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:19 np0005539563 python3.9[213762]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:19.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:20 np0005539563 python3.9[213916]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:29:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:21 np0005539563 python3.9[214067]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:29:22 np0005539563 python3.9[214238]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401361.3503745-3364-190899679941057/.source.xml follow=False _original_basename=secret.xml.j2 checksum=3de32f8e861874afb18756e58a543ac33a4e4294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:23 np0005539563 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 02:29:23 np0005539563 python3.9[214391]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 38a37ed2-442a-5e0d-a69a-881fdd186450#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:23 np0005539563 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 02:29:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:23.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:25 np0005539563 python3.9[214554]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:25.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:27.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:27.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:29 np0005539563 python3.9[215019]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:29.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:30 np0005539563 python3.9[215171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:31 np0005539563 python3.9[215295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401369.7164545-3529-16394296478129/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:31.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:32 np0005539563 python3.9[215447]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:33 np0005539563 python3.9[215600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:33.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:33.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:33 np0005539563 python3.9[215678]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:35 np0005539563 python3.9[215831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:35 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 02:29:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:35.205229) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:29:35 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 02:29:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401375205440, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1403, "num_deletes": 503, "total_data_size": 2023849, "memory_usage": 2061800, "flush_reason": "Manual Compaction"}
Nov 29 02:29:35 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 02:29:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:35 np0005539563 python3.9[215909]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.euhv9ovh recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:35.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:37 np0005539563 python3.9[216061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401377237908, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1183243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14098, "largest_seqno": 15500, "table_properties": {"data_size": 1178410, "index_size": 1781, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15117, "raw_average_key_size": 19, "raw_value_size": 1166021, "raw_average_value_size": 1479, "num_data_blocks": 81, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401235, "oldest_key_time": 1764401235, "file_creation_time": 1764401375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 2032717 microseconds, and 10678 cpu microseconds.
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:29:37 np0005539563 python3.9[216140]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:37.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.238057) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1183243 bytes OK
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.238102) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.928620) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.928774) EVENT_LOG_v1 {"time_micros": 1764401377928718, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.928823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2016741, prev total WAL file size 2048514, number of live WAL files 2.
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.930252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1155KB)], [32(10MB)]
Nov 29 02:29:37 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401377930429, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12689813, "oldest_snapshot_seqno": -1}
Nov 29 02:29:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:39.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:39.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539563 python3.9[216293]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4155 keys, 7563474 bytes, temperature: kUnknown
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401380506670, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7563474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7534649, "index_size": 17323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 104015, "raw_average_key_size": 25, "raw_value_size": 7458316, "raw_average_value_size": 1795, "num_data_blocks": 724, "num_entries": 4155, "num_filter_entries": 4155, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764401377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:29:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.507137) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7563474 bytes
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.954480) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 4.9 rd, 2.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(17.1) write-amplify(6.4) OK, records in: 5123, records dropped: 968 output_compression: NoCompression
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.954531) EVENT_LOG_v1 {"time_micros": 1764401380954512, "job": 14, "event": "compaction_finished", "compaction_time_micros": 2576365, "compaction_time_cpu_micros": 43970, "output_level": 6, "num_output_files": 1, "total_output_size": 7563474, "num_input_records": 5123, "num_output_records": 4155, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401380955465, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401380959071, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:37.930045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.959188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.959192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.959193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.959195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:40 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:29:40.959196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:29:41 np0005539563 python3[216447]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:29:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:41.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:42 np0005539563 python3.9[216599]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:42 np0005539563 python3.9[216727]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:43.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:45.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:45.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:47 np0005539563 python3.9[216882]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:47 np0005539563 podman[216932]: 2025-11-29 07:29:47.38747773 +0000 UTC m=+0.071802359 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:29:47 np0005539563 podman[216933]: 2025-11-29 07:29:47.416542832 +0000 UTC m=+0.098034024 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:29:47 np0005539563 python3.9[216998]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:47.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:47.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:48 np0005539563 python3.9[217156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:49 np0005539563 python3.9[217235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:50 np0005539563 python3.9[217387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:50 np0005539563 python3.9[217465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:51.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:51 np0005539563 python3.9[217618]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:29:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:52 np0005539563 python3.9[217743]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401391.2618-3904-87946444946375/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:53 np0005539563 python3.9[217896]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:53.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:53.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:54 np0005539563 python3.9[218048]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:55.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:56 np0005539563 python3.9[218204]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:29:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:57 np0005539563 python3.9[218357]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:29:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:57.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:29:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:57.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:58 np0005539563 python3.9[218510]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:29:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:29:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:30:00 np0005539563 python3.9[218665]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:01 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:30:01 np0005539563 python3.9[218821]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:01.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:02 np0005539563 python3.9[218973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:03 np0005539563 python3.9[219147]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401402.0165694-4120-29049623634917/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:03.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:03.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:04 np0005539563 python3.9[219415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8b7a67c3-e798-4caf-b84c-2acb7ac8492d does not exist
Nov 29 02:30:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b3293878-1f3c-4987-a8d9-927f338ae03a does not exist
Nov 29 02:30:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev afedeb35-7c0f-4625-b16b-bcbea69aaf04 does not exist
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:30:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:30:04 np0005539563 python3.9[219626]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401403.5655859-4165-98941586653899/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:30:04.873 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:30:04.873 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:30:04.873 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:30:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:04 np0005539563 podman[219717]: 2025-11-29 07:30:04.891567569 +0000 UTC m=+0.019258226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:05 np0005539563 podman[219717]: 2025-11-29 07:30:05.130527714 +0000 UTC m=+0.258218391 container create 90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 02:30:05 np0005539563 systemd[1]: Started libpod-conmon-90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d.scope.
Nov 29 02:30:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:30:05 np0005539563 podman[219717]: 2025-11-29 07:30:05.266381408 +0000 UTC m=+0.394072085 container init 90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:30:05 np0005539563 podman[219717]: 2025-11-29 07:30:05.278617242 +0000 UTC m=+0.406307869 container start 90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:30:05 np0005539563 friendly_dhawan[219774]: 167 167
Nov 29 02:30:05 np0005539563 systemd[1]: libpod-90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d.scope: Deactivated successfully.
Nov 29 02:30:05 np0005539563 podman[219717]: 2025-11-29 07:30:05.290684451 +0000 UTC m=+0.418375138 container attach 90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:30:05 np0005539563 podman[219717]: 2025-11-29 07:30:05.291145523 +0000 UTC m=+0.418836160 container died 90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:30:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a9b2b6660cfc3974722aff1e2d18e2e3c2167ac8781822368f97ffa32e74e764-merged.mount: Deactivated successfully.
Nov 29 02:30:05 np0005539563 podman[219717]: 2025-11-29 07:30:05.438872721 +0000 UTC m=+0.566563368 container remove 90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:30:05 np0005539563 systemd[1]: libpod-conmon-90ad6915d9a1bb9f3f108f2a292208b5704dacf1c38cf555f98edac54df1117d.scope: Deactivated successfully.
Nov 29 02:30:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:05 np0005539563 podman[219887]: 2025-11-29 07:30:05.618023545 +0000 UTC m=+0.063541123 container create 45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:30:05 np0005539563 systemd[1]: Started libpod-conmon-45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79.scope.
Nov 29 02:30:05 np0005539563 podman[219887]: 2025-11-29 07:30:05.580999415 +0000 UTC m=+0.026517013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:05 np0005539563 python3.9[219881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:30:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023f06cc9474117cecf0bc9c64fa8f5a71c580462b483de22caed1622c24ae81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023f06cc9474117cecf0bc9c64fa8f5a71c580462b483de22caed1622c24ae81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023f06cc9474117cecf0bc9c64fa8f5a71c580462b483de22caed1622c24ae81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023f06cc9474117cecf0bc9c64fa8f5a71c580462b483de22caed1622c24ae81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023f06cc9474117cecf0bc9c64fa8f5a71c580462b483de22caed1622c24ae81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:05 np0005539563 podman[219887]: 2025-11-29 07:30:05.741968764 +0000 UTC m=+0.187486382 container init 45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:30:05 np0005539563 podman[219887]: 2025-11-29 07:30:05.749983543 +0000 UTC m=+0.195501121 container start 45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:30:05 np0005539563 podman[219887]: 2025-11-29 07:30:05.784310769 +0000 UTC m=+0.229828347 container attach 45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:05.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:06 np0005539563 python3.9[220031]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401405.1371658-4210-141409889711638/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:06 np0005539563 nice_liskov[219904]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:30:06 np0005539563 nice_liskov[219904]: --> relative data size: 1.0
Nov 29 02:30:06 np0005539563 nice_liskov[219904]: --> All data devices are unavailable
Nov 29 02:30:06 np0005539563 systemd[1]: libpod-45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79.scope: Deactivated successfully.
Nov 29 02:30:06 np0005539563 podman[220091]: 2025-11-29 07:30:06.743211242 +0000 UTC m=+0.024162999 container died 45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:30:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-023f06cc9474117cecf0bc9c64fa8f5a71c580462b483de22caed1622c24ae81-merged.mount: Deactivated successfully.
Nov 29 02:30:06 np0005539563 podman[220091]: 2025-11-29 07:30:06.864508949 +0000 UTC m=+0.145460676 container remove 45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_liskov, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:30:06 np0005539563 systemd[1]: libpod-conmon-45c61ed26331f00bee08a4337d6b575b6daf071133fe941242919693c9e03e79.scope: Deactivated successfully.
Nov 29 02:30:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:07 np0005539563 python3.9[220257]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:07 np0005539563 systemd[1]: Reloading.
Nov 29 02:30:07 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:07 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:07 np0005539563 podman[220384]: 2025-11-29 07:30:07.451628496 +0000 UTC m=+0.022810053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:07 np0005539563 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 02:30:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:07.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:08 np0005539563 podman[220384]: 2025-11-29 07:30:08.023195389 +0000 UTC m=+0.594376946 container create f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:30:08 np0005539563 systemd[1]: Started libpod-conmon-f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d.scope.
Nov 29 02:30:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:30:08 np0005539563 python3.9[220558]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:30:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:08 np0005539563 systemd[1]: Reloading.
Nov 29 02:30:08 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:08 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:09 np0005539563 podman[220384]: 2025-11-29 07:30:09.011241918 +0000 UTC m=+1.582423535 container init f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wing, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:30:09 np0005539563 podman[220384]: 2025-11-29 07:30:09.019820381 +0000 UTC m=+1.591001898 container start f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:30:09 np0005539563 sweet_wing[220428]: 167 167
Nov 29 02:30:09 np0005539563 podman[220384]: 2025-11-29 07:30:09.076038884 +0000 UTC m=+1.647220411 container attach f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wing, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:30:09 np0005539563 podman[220384]: 2025-11-29 07:30:09.078172402 +0000 UTC m=+1.649353949 container died f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:30:09 np0005539563 systemd[1]: libpod-f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d.scope: Deactivated successfully.
Nov 29 02:30:09 np0005539563 systemd[1]: Reloading.
Nov 29 02:30:09 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:09 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:09.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c509e1c9247de21cf22babd95c26b3d91591d2a1e14945810015fe022ff2cae5-merged.mount: Deactivated successfully.
Nov 29 02:30:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:10 np0005539563 podman[220384]: 2025-11-29 07:30:10.595145579 +0000 UTC m=+3.166327136 container remove f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:10 np0005539563 systemd[1]: libpod-conmon-f348e29c8c77b1d40768337f9df5817434961cee54db96e883c3ff9a3403873d.scope: Deactivated successfully.
Nov 29 02:30:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:10 np0005539563 podman[220681]: 2025-11-29 07:30:10.807229402 +0000 UTC m=+0.024589512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:11 np0005539563 podman[220681]: 2025-11-29 07:30:11.047468352 +0000 UTC m=+0.264828482 container create f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:30:11 np0005539563 systemd[1]: Started libpod-conmon-f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41.scope.
Nov 29 02:30:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:30:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea047787f678aa83910f5135e4adb17f6d34d26b99eaee551036d4fb8fd4b173/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea047787f678aa83910f5135e4adb17f6d34d26b99eaee551036d4fb8fd4b173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea047787f678aa83910f5135e4adb17f6d34d26b99eaee551036d4fb8fd4b173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea047787f678aa83910f5135e4adb17f6d34d26b99eaee551036d4fb8fd4b173/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:11 np0005539563 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 02:30:11 np0005539563 systemd[1]: session-49.scope: Consumed 3min 39.736s CPU time.
Nov 29 02:30:11 np0005539563 systemd-logind[785]: Session 49 logged out. Waiting for processes to exit.
Nov 29 02:30:11 np0005539563 systemd-logind[785]: Removed session 49.
Nov 29 02:30:11 np0005539563 podman[220681]: 2025-11-29 07:30:11.35444208 +0000 UTC m=+0.571802210 container init f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:30:11 np0005539563 podman[220681]: 2025-11-29 07:30:11.369926103 +0000 UTC m=+0.587286223 container start f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_montalcini, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:30:11 np0005539563 podman[220681]: 2025-11-29 07:30:11.408548046 +0000 UTC m=+0.625908166 container attach f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:30:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:30:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:30:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:11.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]: {
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:    "0": [
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:        {
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "devices": [
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "/dev/loop3"
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            ],
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "lv_name": "ceph_lv0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "lv_size": "7511998464",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "name": "ceph_lv0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "tags": {
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.cluster_name": "ceph",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.crush_device_class": "",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.encrypted": "0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.osd_id": "0",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.type": "block",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:                "ceph.vdo": "0"
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            },
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "type": "block",
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:            "vg_name": "ceph_vg0"
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:        }
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]:    ]
Nov 29 02:30:12 np0005539563 lucid_montalcini[220698]: }
Nov 29 02:30:12 np0005539563 systemd[1]: libpod-f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41.scope: Deactivated successfully.
Nov 29 02:30:12 np0005539563 podman[220681]: 2025-11-29 07:30:12.131667201 +0000 UTC m=+1.349027331 container died f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:30:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ea047787f678aa83910f5135e4adb17f6d34d26b99eaee551036d4fb8fd4b173-merged.mount: Deactivated successfully.
Nov 29 02:30:12 np0005539563 podman[220681]: 2025-11-29 07:30:12.576611633 +0000 UTC m=+1.793971723 container remove f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_montalcini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:30:12 np0005539563 systemd[1]: libpod-conmon-f8b03c8e7b4ad1b305feaba9a297c3c0ea63164cddfd0c74ca9fc7f39bca4e41.scope: Deactivated successfully.
Nov 29 02:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:30:12
Nov 29 02:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control']
Nov 29 02:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:30:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.17583245 +0000 UTC m=+0.046871189 container create 243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaum, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:30:13 np0005539563 systemd[1]: Started libpod-conmon-243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a.scope.
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.155719931 +0000 UTC m=+0.026758690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:30:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:30:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.275835446 +0000 UTC m=+0.146874195 container init 243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaum, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.282251961 +0000 UTC m=+0.153290700 container start 243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:30:13 np0005539563 beautiful_chaum[220877]: 167 167
Nov 29 02:30:13 np0005539563 systemd[1]: libpod-243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a.scope: Deactivated successfully.
Nov 29 02:30:13 np0005539563 conmon[220877]: conmon 243c0a650c02d9d07b64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a.scope/container/memory.events
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.290562228 +0000 UTC m=+0.161601007 container attach 243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.290957448 +0000 UTC m=+0.161996217 container died 243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:30:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b9a634120e9730b30c760fa38b78cfd0d22dded3f6d0aba90504f388bd02bdfc-merged.mount: Deactivated successfully.
Nov 29 02:30:13 np0005539563 podman[220861]: 2025-11-29 07:30:13.334483585 +0000 UTC m=+0.205522334 container remove 243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:30:13 np0005539563 systemd[1]: libpod-conmon-243c0a650c02d9d07b64481ed38cb0e201fbe2ad31f6753e7ace52265bc97b8a.scope: Deactivated successfully.
Nov 29 02:30:13 np0005539563 podman[220900]: 2025-11-29 07:30:13.487058555 +0000 UTC m=+0.036994651 container create 67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:30:13 np0005539563 systemd[1]: Started libpod-conmon-67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9.scope.
Nov 29 02:30:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:30:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e690be20696ccafc0357c2b12c79ac86e382ea6e110b4eca63076e6f7217b045/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e690be20696ccafc0357c2b12c79ac86e382ea6e110b4eca63076e6f7217b045/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e690be20696ccafc0357c2b12c79ac86e382ea6e110b4eca63076e6f7217b045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e690be20696ccafc0357c2b12c79ac86e382ea6e110b4eca63076e6f7217b045/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:13 np0005539563 podman[220900]: 2025-11-29 07:30:13.555002377 +0000 UTC m=+0.104938503 container init 67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:30:13 np0005539563 podman[220900]: 2025-11-29 07:30:13.56242187 +0000 UTC m=+0.112357976 container start 67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:30:13 np0005539563 podman[220900]: 2025-11-29 07:30:13.565823532 +0000 UTC m=+0.115759658 container attach 67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:13 np0005539563 podman[220900]: 2025-11-29 07:30:13.471319595 +0000 UTC m=+0.021255721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:13.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]: {
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:        "osd_id": 0,
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:        "type": "bluestore"
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]:    }
Nov 29 02:30:14 np0005539563 cool_northcutt[220916]: }
Nov 29 02:30:14 np0005539563 systemd[1]: libpod-67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9.scope: Deactivated successfully.
Nov 29 02:30:14 np0005539563 podman[220900]: 2025-11-29 07:30:14.49738631 +0000 UTC m=+1.047322416 container died 67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:30:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e690be20696ccafc0357c2b12c79ac86e382ea6e110b4eca63076e6f7217b045-merged.mount: Deactivated successfully.
Nov 29 02:30:14 np0005539563 podman[220900]: 2025-11-29 07:30:14.5468844 +0000 UTC m=+1.096820506 container remove 67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:30:14 np0005539563 systemd[1]: libpod-conmon-67ba572a20d7923f7f61d934e73f6ce5d7822b52f2f8d5a361f67b46807ccea9.scope: Deactivated successfully.
Nov 29 02:30:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:30:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:30:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9c8e11f1-163f-462b-ab76-cd89294d710c does not exist
Nov 29 02:30:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0a0ac02c-e62f-47f3-92c6-63f62f7b8a7b does not exist
Nov 29 02:30:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8ffd01ca-b854-41f3-997b-13aa8a4279de does not exist
Nov 29 02:30:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:30:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:15.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:16 np0005539563 systemd-logind[785]: New session 50 of user zuul.
Nov 29 02:30:16 np0005539563 systemd[1]: Started Session 50 of User zuul.
Nov 29 02:30:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:17 np0005539563 podman[221155]: 2025-11-29 07:30:17.508647859 +0000 UTC m=+0.064283804 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:30:17 np0005539563 podman[221156]: 2025-11-29 07:30:17.542547223 +0000 UTC m=+0.096305116 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:30:17 np0005539563 python3.9[221154]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:30:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:17.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:17.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:19 np0005539563 python3.9[221352]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:30:19 np0005539563 network[221369]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:30:19 np0005539563 network[221370]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:30:19 np0005539563 network[221371]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:30:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:19.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:19.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:21.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:30:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:23.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:23.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:25.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:26 np0005539563 python3.9[221696]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:30:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:27 np0005539563 python3.9[221781]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:30:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:27.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:29.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:29.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:31.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:31.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:33.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:33.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:34 np0005539563 python3.9[221937]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:30:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:35.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:35 np0005539563 python3.9[222090]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:35.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:37 np0005539563 python3.9[222244]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:30:37 np0005539563 python3.9[222396]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:37.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:38 np0005539563 python3.9[222549]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:39 np0005539563 python3.9[222673]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401438.323622-250-261573235752174/.source.iscsi _original_basename=.5ezhcdu4 follow=False checksum=a72e85b4262a06c7193b4e0eac95db4ca6da1969 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:39.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:40 np0005539563 python3.9[222826]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:41 np0005539563 python3.9[222978]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:41.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:43 np0005539563 python3.9[223181]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:43 np0005539563 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 02:30:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:43.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:30:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8714 writes, 35K keys, 8714 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8714 writes, 1830 syncs, 4.76 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 656 writes, 1009 keys, 656 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s#012Interval WAL: 656 writes, 316 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000135 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000135 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Nov 29 02:30:44 np0005539563 python3.9[223337]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:44 np0005539563 systemd[1]: Reloading.
Nov 29 02:30:45 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:45 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:30:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:45.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:45.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:46 np0005539563 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:30:46 np0005539563 systemd[1]: Starting Open-iSCSI...
Nov 29 02:30:46 np0005539563 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 02:30:46 np0005539563 systemd[1]: Started Open-iSCSI.
Nov 29 02:30:46 np0005539563 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 02:30:46 np0005539563 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 02:30:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:47 np0005539563 podman[223512]: 2025-11-29 07:30:47.910053487 +0000 UTC m=+0.069113135 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:30:47 np0005539563 podman[223540]: 2025-11-29 07:30:47.938962305 +0000 UTC m=+0.097510379 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:30:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:47.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:48 np0005539563 python3.9[223541]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:30:48 np0005539563 network[223598]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:30:48 np0005539563 network[223599]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:30:48 np0005539563 network[223600]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:30:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:49.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:30:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:53.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:30:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:54 np0005539563 python3.9[223875]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:30:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:54 np0005539563 python3.9[224028]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 02:30:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 02:30:55 np0005539563 python3.9[224184]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:30:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:55.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:30:57 np0005539563 python3.9[224307]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401455.3042955-481-274121195569843/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:57.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:58 np0005539563 python3.9[224460]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:59 np0005539563 python3.9[224613]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:30:59 np0005539563 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:30:59 np0005539563 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:30:59 np0005539563 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:30:59 np0005539563 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:30:59 np0005539563 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:30:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:30:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:00 np0005539563 python3.9[224769]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:01 np0005539563 python3.9[224922]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:31:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:02 np0005539563 python3.9[225074]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:31:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:03 np0005539563 python3.9[225277]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:03 np0005539563 python3.9[225400]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401462.8184407-655-5506600057262/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:31:04.874 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:31:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:31:04.875 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:31:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:31:04.875 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:31:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:05 np0005539563 python3.9[225553]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:31:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:05.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:06.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:06 np0005539563 python3.9[225706]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:07 np0005539563 python3.9[225859]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:08 np0005539563 python3.9[226011]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:08 np0005539563 python3.9[226164]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:09 np0005539563 python3.9[226316]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:10.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:10 np0005539563 python3.9[226468]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:11 np0005539563 python3.9[226621]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:11 np0005539563 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 02:31:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:11.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:12.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:31:12
Nov 29 02:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.meta', 'vms']
Nov 29 02:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:31:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:31:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:31:13 np0005539563 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:31:13 np0005539563 python3.9[226775]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:31:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:14.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:14 np0005539563 python3.9[226930]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:31:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:31:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:15.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:15 np0005539563 python3.9[227203]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:17 np0005539563 python3.9[227356]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:18 np0005539563 podman[227408]: 2025-11-29 07:31:18.318487762 +0000 UTC m=+0.084614056 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 02:31:18 np0005539563 podman[227409]: 2025-11-29 07:31:18.337388407 +0000 UTC m=+0.092354407 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 02:31:18 np0005539563 python3.9[227467]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:19 np0005539563 python3.9[227635]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:31:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:31:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:20.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:21 np0005539563 python3.9[227713]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:21.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:22.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:22 np0005539563 python3.9[227866]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:31:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:23 np0005539563 python3.9[228019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:23 np0005539563 python3.9[228147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:23.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:24.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:31:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:24 np0005539563 python3.9[228299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:25.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:26.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1b629036-048c-4a71-9a0f-355a5f28d999 does not exist
Nov 29 02:31:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f8672a06-8d10-414a-a5ef-aba20f6c13c2 does not exist
Nov 29 02:31:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4c5a7ac5-e697-4f0b-a92f-04d4374a6372 does not exist
Nov 29 02:31:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:31:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:31:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:31:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:31:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:31:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:31:27 np0005539563 python3.9[228510]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:27 np0005539563 podman[228698]: 2025-11-29 07:31:27.485114309 +0000 UTC m=+0.018881245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:27 np0005539563 podman[228698]: 2025-11-29 07:31:27.612667485 +0000 UTC m=+0.146434391 container create 53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:31:27 np0005539563 systemd[1]: Started libpod-conmon-53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2.scope.
Nov 29 02:31:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:31:27 np0005539563 podman[228698]: 2025-11-29 07:31:27.940451036 +0000 UTC m=+0.474217972 container init 53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:31:27 np0005539563 podman[228698]: 2025-11-29 07:31:27.951268891 +0000 UTC m=+0.485035797 container start 53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:31:27 np0005539563 elastic_gates[228820]: 167 167
Nov 29 02:31:27 np0005539563 systemd[1]: libpod-53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2.scope: Deactivated successfully.
Nov 29 02:31:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:27.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:28 np0005539563 podman[228698]: 2025-11-29 07:31:28.015178332 +0000 UTC m=+0.548945248 container attach 53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:31:28 np0005539563 podman[228698]: 2025-11-29 07:31:28.016791065 +0000 UTC m=+0.550558061 container died 53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:31:28 np0005539563 python3.9[228817]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:31:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:28.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:28 np0005539563 systemd[1]: Reloading.
Nov 29 02:31:28 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:31:28 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:31:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-42e9f671fce1b03159cfe56b7851b89ac0dc7e577b8fc0c146b1050ffa8ed887-merged.mount: Deactivated successfully.
Nov 29 02:31:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:31:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:31:28 np0005539563 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 02:31:28 np0005539563 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 02:31:28 np0005539563 podman[228698]: 2025-11-29 07:31:28.705548432 +0000 UTC m=+1.239315338 container remove 53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:31:28 np0005539563 systemd[1]: libpod-conmon-53dbe1c73080dae98326c0c0035fcec3a0dfe66a17954bf0dff845cdac3823b2.scope: Deactivated successfully.
Nov 29 02:31:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:29 np0005539563 podman[228928]: 2025-11-29 07:31:28.916414417 +0000 UTC m=+0.026250936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:29 np0005539563 podman[228928]: 2025-11-29 07:31:29.01781438 +0000 UTC m=+0.127650819 container create d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gates, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:31:29 np0005539563 systemd[1]: Started libpod-conmon-d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98.scope.
Nov 29 02:31:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:31:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da149c8a3ccc58232874a0810d89bc139142029b637490838841ad7aefadde2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da149c8a3ccc58232874a0810d89bc139142029b637490838841ad7aefadde2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da149c8a3ccc58232874a0810d89bc139142029b637490838841ad7aefadde2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da149c8a3ccc58232874a0810d89bc139142029b637490838841ad7aefadde2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da149c8a3ccc58232874a0810d89bc139142029b637490838841ad7aefadde2a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:29 np0005539563 podman[228928]: 2025-11-29 07:31:29.164201018 +0000 UTC m=+0.274037497 container init d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:31:29 np0005539563 podman[228928]: 2025-11-29 07:31:29.176810661 +0000 UTC m=+0.286647100 container start d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:31:29 np0005539563 podman[228928]: 2025-11-29 07:31:29.207844527 +0000 UTC m=+0.317681006 container attach d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:31:29 np0005539563 python3.9[229058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:29.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:29 np0005539563 nice_gates[229001]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:31:29 np0005539563 nice_gates[229001]: --> relative data size: 1.0
Nov 29 02:31:29 np0005539563 nice_gates[229001]: --> All data devices are unavailable
Nov 29 02:31:29 np0005539563 python3.9[229136]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:30 np0005539563 systemd[1]: libpod-d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98.scope: Deactivated successfully.
Nov 29 02:31:30 np0005539563 podman[228928]: 2025-11-29 07:31:30.001153243 +0000 UTC m=+1.110989682 container died d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 29 02:31:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:30.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-da149c8a3ccc58232874a0810d89bc139142029b637490838841ad7aefadde2a-merged.mount: Deactivated successfully.
Nov 29 02:31:30 np0005539563 podman[228928]: 2025-11-29 07:31:30.42350785 +0000 UTC m=+1.533344309 container remove d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:31:30 np0005539563 systemd[1]: libpod-conmon-d6069cf1e60a159f32dd56224e3210e41fe6c4f097f2c61843141222838adc98.scope: Deactivated successfully.
Nov 29 02:31:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:30 np0005539563 python3.9[229410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:31 np0005539563 podman[229452]: 2025-11-29 07:31:31.031432994 +0000 UTC m=+0.024497378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:31 np0005539563 podman[229452]: 2025-11-29 07:31:31.161791886 +0000 UTC m=+0.154856250 container create ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:31:31 np0005539563 systemd[1]: Started libpod-conmon-ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac.scope.
Nov 29 02:31:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:31:31 np0005539563 python3.9[229542]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:31 np0005539563 podman[229452]: 2025-11-29 07:31:31.509160411 +0000 UTC m=+0.502224785 container init ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:31:31 np0005539563 podman[229452]: 2025-11-29 07:31:31.515986096 +0000 UTC m=+0.509050460 container start ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:31:31 np0005539563 competent_wu[229545]: 167 167
Nov 29 02:31:31 np0005539563 systemd[1]: libpod-ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac.scope: Deactivated successfully.
Nov 29 02:31:31 np0005539563 podman[229452]: 2025-11-29 07:31:31.593312474 +0000 UTC m=+0.586376828 container attach ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:31:31 np0005539563 podman[229452]: 2025-11-29 07:31:31.593656103 +0000 UTC m=+0.586720467 container died ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:31:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cceece80c3d63421e5120aa5a564ac8a466dcf8b646efc20311c4dcdc55f7ae5-merged.mount: Deactivated successfully.
Nov 29 02:31:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:32.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:32 np0005539563 python3.9[229713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:31:32 np0005539563 systemd[1]: Reloading.
Nov 29 02:31:32 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:31:32 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:31:32 np0005539563 podman[229452]: 2025-11-29 07:31:32.881872932 +0000 UTC m=+1.874937296 container remove ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:31:32 np0005539563 systemd[1]: libpod-conmon-ced41066d3d2760988b650cac887c04d1ca95493f1bb2c4d48e9633822e54dac.scope: Deactivated successfully.
Nov 29 02:31:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:32 np0005539563 systemd[1]: Starting Create netns directory...
Nov 29 02:31:33 np0005539563 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:31:33 np0005539563 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:31:33 np0005539563 systemd[1]: Finished Create netns directory.
Nov 29 02:31:33 np0005539563 podman[229762]: 2025-11-29 07:31:33.028097016 +0000 UTC m=+0.028231610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:33 np0005539563 podman[229762]: 2025-11-29 07:31:33.586970743 +0000 UTC m=+0.587105347 container create 02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:31:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:34 np0005539563 systemd[1]: Started libpod-conmon-02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe.scope.
Nov 29 02:31:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:34.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:31:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4996d88707fea9cd2ca90f2049f37f752aceeba89b3b18be4a4f08472c6672/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4996d88707fea9cd2ca90f2049f37f752aceeba89b3b18be4a4f08472c6672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4996d88707fea9cd2ca90f2049f37f752aceeba89b3b18be4a4f08472c6672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f4996d88707fea9cd2ca90f2049f37f752aceeba89b3b18be4a4f08472c6672/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:34 np0005539563 podman[229762]: 2025-11-29 07:31:34.228563794 +0000 UTC m=+1.228698388 container init 02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:31:34 np0005539563 podman[229762]: 2025-11-29 07:31:34.236434838 +0000 UTC m=+1.236569412 container start 02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cerf, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:31:34 np0005539563 podman[229762]: 2025-11-29 07:31:34.744698367 +0000 UTC m=+1.744832931 container attach 02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 02:31:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]: {
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:    "0": [
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:        {
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "devices": [
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "/dev/loop3"
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            ],
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "lv_name": "ceph_lv0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "lv_size": "7511998464",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "name": "ceph_lv0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "tags": {
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.cluster_name": "ceph",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.crush_device_class": "",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.encrypted": "0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.osd_id": "0",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.type": "block",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:                "ceph.vdo": "0"
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            },
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "type": "block",
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:            "vg_name": "ceph_vg0"
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:        }
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]:    ]
Nov 29 02:31:35 np0005539563 unruffled_cerf[229804]: }
Nov 29 02:31:35 np0005539563 systemd[1]: libpod-02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe.scope: Deactivated successfully.
Nov 29 02:31:35 np0005539563 podman[229762]: 2025-11-29 07:31:35.051755324 +0000 UTC m=+2.051889888 container died 02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cerf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:31:35 np0005539563 python3.9[229953]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:36.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:36 np0005539563 python3.9[230105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4f4996d88707fea9cd2ca90f2049f37f752aceeba89b3b18be4a4f08472c6672-merged.mount: Deactivated successfully.
Nov 29 02:31:36 np0005539563 python3.9[230230]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401495.7473657-1276-271140966961100/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:37 np0005539563 podman[229762]: 2025-11-29 07:31:37.552597471 +0000 UTC m=+4.552732035 container remove 02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:31:37 np0005539563 systemd[1]: libpod-conmon-02f385d0f742a609dda2c73e49f125420e426d1febd93b0bba867e7b07df9cfe.scope: Deactivated successfully.
Nov 29 02:31:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:37.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:38.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:38 np0005539563 python3.9[230482]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:31:38 np0005539563 podman[230523]: 2025-11-29 07:31:38.13131774 +0000 UTC m=+0.020058687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:38 np0005539563 podman[230523]: 2025-11-29 07:31:38.241892412 +0000 UTC m=+0.130633339 container create 8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shaw, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:31:38 np0005539563 systemd[1]: Started libpod-conmon-8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479.scope.
Nov 29 02:31:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:31:38 np0005539563 podman[230523]: 2025-11-29 07:31:38.941981477 +0000 UTC m=+0.830722424 container init 8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shaw, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:31:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:38 np0005539563 podman[230523]: 2025-11-29 07:31:38.953703136 +0000 UTC m=+0.842444093 container start 8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shaw, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:31:38 np0005539563 vigilant_shaw[230563]: 167 167
Nov 29 02:31:38 np0005539563 systemd[1]: libpod-8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479.scope: Deactivated successfully.
Nov 29 02:31:39 np0005539563 podman[230523]: 2025-11-29 07:31:39.080983445 +0000 UTC m=+0.969724412 container attach 8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:31:39 np0005539563 podman[230523]: 2025-11-29 07:31:39.081942601 +0000 UTC m=+0.970683568 container died 8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shaw, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:31:39 np0005539563 python3.9[230705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:31:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:31:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4a2104f837c99d9e52118ef6e8c61965edc0c7db9bc4a69cb794b322e2c07bec-merged.mount: Deactivated successfully.
Nov 29 02:31:39 np0005539563 python3.9[230828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401498.7433972-1351-104535171511871/.source.json _original_basename=.vcxrbd69 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:39 np0005539563 podman[230523]: 2025-11-29 07:31:39.985820799 +0000 UTC m=+1.874561726 container remove 8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:31:39 np0005539563 systemd[1]: libpod-conmon-8cd509895a0f9017ed5a958094a03e2430690a5b84f6aed94d619a29d09d3479.scope: Deactivated successfully.
Nov 29 02:31:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:40.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:40 np0005539563 podman[230865]: 2025-11-29 07:31:40.126049529 +0000 UTC m=+0.021790255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:40 np0005539563 podman[230865]: 2025-11-29 07:31:40.270034312 +0000 UTC m=+0.165775008 container create 5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:31:40 np0005539563 systemd[1]: Started libpod-conmon-5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6.scope.
Nov 29 02:31:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:31:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578c38bbc63b6187cbc884230eccfe48ab26711d170a2d5889dcd50cf78c4845/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578c38bbc63b6187cbc884230eccfe48ab26711d170a2d5889dcd50cf78c4845/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578c38bbc63b6187cbc884230eccfe48ab26711d170a2d5889dcd50cf78c4845/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578c38bbc63b6187cbc884230eccfe48ab26711d170a2d5889dcd50cf78c4845/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:40 np0005539563 podman[230865]: 2025-11-29 07:31:40.413573114 +0000 UTC m=+0.309313830 container init 5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:31:40 np0005539563 podman[230865]: 2025-11-29 07:31:40.420931894 +0000 UTC m=+0.316672590 container start 5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:31:40 np0005539563 podman[230865]: 2025-11-29 07:31:40.534967841 +0000 UTC m=+0.430708567 container attach 5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:31:40 np0005539563 python3.9[231014]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:31:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]: {
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:        "osd_id": 0,
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:        "type": "bluestore"
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]:    }
Nov 29 02:31:41 np0005539563 dreamy_ishizaka[230881]: }
Nov 29 02:31:41 np0005539563 systemd[1]: libpod-5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6.scope: Deactivated successfully.
Nov 29 02:31:41 np0005539563 podman[231059]: 2025-11-29 07:31:41.290384964 +0000 UTC m=+0.021066466 container died 5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:31:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:41.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:42.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-578c38bbc63b6187cbc884230eccfe48ab26711d170a2d5889dcd50cf78c4845-merged.mount: Deactivated successfully.
Nov 29 02:31:42 np0005539563 podman[231059]: 2025-11-29 07:31:42.653271087 +0000 UTC m=+1.383952569 container remove 5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:31:42 np0005539563 systemd[1]: libpod-conmon-5161edd2b12ea2708492f04b938b3857dd375649d6ec89d035103a5ef7c0e8b6.scope: Deactivated successfully.
Nov 29 02:31:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:31:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:31:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:44.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:44 np0005539563 python3.9[231473]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 02:31:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:45.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:46.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:47 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:31:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:47.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:48.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:48 np0005539563 podman[231628]: 2025-11-29 07:31:48.534445918 +0000 UTC m=+0.080595046 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:31:48 np0005539563 podman[231629]: 2025-11-29 07:31:48.588141642 +0000 UTC m=+0.132348857 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:31:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:49.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:50.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:51 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:31:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:52.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:52.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:54.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:55 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:31:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:56.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:56.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:57 np0005539563 python3.9[231626]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:31:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:31:57 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:31:57 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(29) init, last seen epoch 29, mid-election, bumping
Nov 29 02:31:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:58.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:31:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:31:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:31:58 np0005539563 python3.9[231829]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 22m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:31:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:31:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b9377663-e8ea-4c3f-829c-0f48b6f5aa1a does not exist
Nov 29 02:31:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c92810cd-035e-4057-89cc-ac8fd31ebef6 does not exist
Nov 29 02:31:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a61b90a0-118e-4328-b9ba-3a07db98f29d does not exist
Nov 29 02:31:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:59 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:31:59 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:31:59 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:31:59 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:31:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:32:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:00.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:00 np0005539563 python3[232108]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:32:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:02.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:02.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:02 np0005539563 podman[232120]: 2025-11-29 07:32:02.305189023 +0000 UTC m=+1.856669958 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:32:02 np0005539563 podman[232179]: 2025-11-29 07:32:02.451554411 +0000 UTC m=+0.032236489 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:32:02 np0005539563 podman[232179]: 2025-11-29 07:32:02.595328369 +0000 UTC m=+0.176010457 container create 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:02 np0005539563 python3[232108]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:32:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:03 np0005539563 python3.9[232368]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:32:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:04.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:04 np0005539563 python3.9[232522]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:32:04.875 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:32:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:32:04.876 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:32:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:32:04.876 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:32:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:05 np0005539563 python3.9[232599]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:32:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:06.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:06 np0005539563 python3.9[232750]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401525.2106943-1615-174659173904837/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:06 np0005539563 python3.9[232826]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:32:06 np0005539563 systemd[1]: Reloading.
Nov 29 02:32:06 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:06 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:08.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:08 np0005539563 python3.9[232938]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:08 np0005539563 systemd[1]: Reloading.
Nov 29 02:32:08 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:08 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:08 np0005539563 systemd[1]: Starting multipathd container...
Nov 29 02:32:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:32:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84902c48200d2f26d8a099d5d61f813e0c96c45701ded7d6d9f8f40e87ee2742/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84902c48200d2f26d8a099d5d61f813e0c96c45701ded7d6d9f8f40e87ee2742/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:08 np0005539563 systemd[1]: Started /usr/bin/podman healthcheck run 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a.
Nov 29 02:32:08 np0005539563 podman[232980]: 2025-11-29 07:32:08.695039024 +0000 UTC m=+0.232615839 container init 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:32:08 np0005539563 multipathd[232996]: + sudo -E kolla_set_configs
Nov 29 02:32:08 np0005539563 podman[232980]: 2025-11-29 07:32:08.73454509 +0000 UTC m=+0.272121905 container start 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:32:08 np0005539563 podman[232980]: multipathd
Nov 29 02:32:08 np0005539563 systemd[1]: Started multipathd container.
Nov 29 02:32:08 np0005539563 multipathd[232996]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:32:08 np0005539563 multipathd[232996]: INFO:__main__:Validating config file
Nov 29 02:32:08 np0005539563 multipathd[232996]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:32:08 np0005539563 multipathd[232996]: INFO:__main__:Writing out command to execute
Nov 29 02:32:08 np0005539563 multipathd[232996]: ++ cat /run_command
Nov 29 02:32:08 np0005539563 multipathd[232996]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:32:08 np0005539563 multipathd[232996]: + ARGS=
Nov 29 02:32:08 np0005539563 multipathd[232996]: + sudo kolla_copy_cacerts
Nov 29 02:32:08 np0005539563 multipathd[232996]: + [[ ! -n '' ]]
Nov 29 02:32:08 np0005539563 multipathd[232996]: + . kolla_extend_start
Nov 29 02:32:08 np0005539563 multipathd[232996]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:32:08 np0005539563 multipathd[232996]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:32:08 np0005539563 multipathd[232996]: + umask 0022
Nov 29 02:32:08 np0005539563 multipathd[232996]: + exec /usr/sbin/multipathd -d
Nov 29 02:32:08 np0005539563 podman[233004]: 2025-11-29 07:32:08.800429335 +0000 UTC m=+0.058345290 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:32:08 np0005539563 systemd[1]: 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a-36b6e8c5eee007f7.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:32:08 np0005539563 multipathd[232996]: 4496.636562 | --------start up--------
Nov 29 02:32:08 np0005539563 multipathd[232996]: 4496.636579 | read /etc/multipath.conf
Nov 29 02:32:08 np0005539563 systemd[1]: 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a-36b6e8c5eee007f7.service: Failed with result 'exit-code'.
Nov 29 02:32:08 np0005539563 multipathd[232996]: 4496.642519 | path checkers start up
Nov 29 02:32:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:09 np0005539563 python3.9[233187]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:32:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:10.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:12.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:32:12
Nov 29 02:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'volumes', '.rgw.root', 'images', 'default.rgw.log']
Nov 29 02:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:32:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:32:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:32:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:14.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:14.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:14 np0005539563 python3.9[233341]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:32:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:15 np0005539563 python3.9[233510]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:32:15 np0005539563 systemd[1]: Stopping multipathd container...
Nov 29 02:32:15 np0005539563 multipathd[232996]: 4503.238051 | exit (signal)
Nov 29 02:32:15 np0005539563 multipathd[232996]: 4503.238153 | --------shut down-------
Nov 29 02:32:15 np0005539563 systemd[1]: libpod-95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a.scope: Deactivated successfully.
Nov 29 02:32:15 np0005539563 podman[233514]: 2025-11-29 07:32:15.450158656 +0000 UTC m=+0.161346407 container died 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:15 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:32:15 np0005539563 systemd[1]: 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a-36b6e8c5eee007f7.timer: Deactivated successfully.
Nov 29 02:32:15 np0005539563 systemd[1]: Stopped /usr/bin/podman healthcheck run 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a.
Nov 29 02:32:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:16.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a-userdata-shm.mount: Deactivated successfully.
Nov 29 02:32:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-84902c48200d2f26d8a099d5d61f813e0c96c45701ded7d6d9f8f40e87ee2742-merged.mount: Deactivated successfully.
Nov 29 02:32:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:16.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:17 np0005539563 podman[233514]: 2025-11-29 07:32:17.46814215 +0000 UTC m=+2.179329861 container cleanup 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:32:17 np0005539563 podman[233514]: multipathd
Nov 29 02:32:17 np0005539563 podman[233544]: multipathd
Nov 29 02:32:17 np0005539563 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 02:32:17 np0005539563 systemd[1]: Stopped multipathd container.
Nov 29 02:32:17 np0005539563 systemd[1]: Starting multipathd container...
Nov 29 02:32:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:32:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84902c48200d2f26d8a099d5d61f813e0c96c45701ded7d6d9f8f40e87ee2742/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84902c48200d2f26d8a099d5d61f813e0c96c45701ded7d6d9f8f40e87ee2742/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:17 np0005539563 systemd[1]: Started /usr/bin/podman healthcheck run 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a.
Nov 29 02:32:17 np0005539563 podman[233557]: 2025-11-29 07:32:17.660909092 +0000 UTC m=+0.106899754 container init 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:32:17 np0005539563 multipathd[233573]: + sudo -E kolla_set_configs
Nov 29 02:32:17 np0005539563 podman[233557]: 2025-11-29 07:32:17.690617451 +0000 UTC m=+0.136608083 container start 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:32:17 np0005539563 multipathd[233573]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:32:17 np0005539563 multipathd[233573]: INFO:__main__:Validating config file
Nov 29 02:32:17 np0005539563 multipathd[233573]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:32:17 np0005539563 multipathd[233573]: INFO:__main__:Writing out command to execute
Nov 29 02:32:17 np0005539563 multipathd[233573]: ++ cat /run_command
Nov 29 02:32:17 np0005539563 multipathd[233573]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:32:17 np0005539563 multipathd[233573]: + ARGS=
Nov 29 02:32:17 np0005539563 multipathd[233573]: + sudo kolla_copy_cacerts
Nov 29 02:32:17 np0005539563 podman[233557]: multipathd
Nov 29 02:32:17 np0005539563 systemd[1]: Started multipathd container.
Nov 29 02:32:17 np0005539563 multipathd[233573]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:32:17 np0005539563 multipathd[233573]: + [[ ! -n '' ]]
Nov 29 02:32:17 np0005539563 multipathd[233573]: + . kolla_extend_start
Nov 29 02:32:17 np0005539563 multipathd[233573]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:32:17 np0005539563 multipathd[233573]: + umask 0022
Nov 29 02:32:17 np0005539563 multipathd[233573]: + exec /usr/sbin/multipathd -d
Nov 29 02:32:17 np0005539563 multipathd[233573]: 4505.611274 | --------start up--------
Nov 29 02:32:17 np0005539563 multipathd[233573]: 4505.611291 | read /etc/multipath.conf
Nov 29 02:32:17 np0005539563 multipathd[233573]: 4505.617091 | path checkers start up
Nov 29 02:32:17 np0005539563 podman[233580]: 2025-11-29 07:32:17.797660327 +0000 UTC m=+0.095399820 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:32:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:18.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:18 np0005539563 python3.9[233764]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:19 np0005539563 podman[233814]: 2025-11-29 07:32:19.098624165 +0000 UTC m=+0.054837486 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:19 np0005539563 podman[233815]: 2025-11-29 07:32:19.124276414 +0000 UTC m=+0.079126097 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:32:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:19 np0005539563 python3.9[234012]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:32:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:20.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:20 np0005539563 python3.9[234164]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 02:32:20 np0005539563 kernel: Key type psk registered
Nov 29 02:32:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:21 np0005539563 python3.9[234328]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:32:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:22.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:22 np0005539563 python3.9[234451]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401541.3052933-1855-103164758555736/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:32:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:23 np0005539563 python3.9[234604]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:24.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:24 np0005539563 python3.9[234756]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:32:24 np0005539563 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:32:24 np0005539563 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:32:24 np0005539563 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:32:24 np0005539563 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:32:24 np0005539563 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:32:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:25 np0005539563 python3.9[234913]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:32:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:26.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:28.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:30 np0005539563 systemd[1]: Reloading.
Nov 29 02:32:30 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:30 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:30 np0005539563 systemd[1]: Reloading.
Nov 29 02:32:30 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:30 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:31 np0005539563 systemd-logind[785]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 02:32:31 np0005539563 systemd-logind[785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 02:32:31 np0005539563 lvm[235030]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:32:31 np0005539563 lvm[235030]: VG ceph_vg0 finished
Nov 29 02:32:31 np0005539563 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:32:32 np0005539563 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:32:32 np0005539563 systemd[1]: Reloading.
Nov 29 02:32:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:32.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:32.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:32 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:32 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:32 np0005539563 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:32:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:34.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:36.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:38.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:41 np0005539563 python3.9[236425]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:32:41 np0005539563 iscsid[223378]: iscsid shutting down.
Nov 29 02:32:41 np0005539563 systemd[1]: Stopping Open-iSCSI...
Nov 29 02:32:41 np0005539563 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 02:32:41 np0005539563 systemd[1]: Stopped Open-iSCSI.
Nov 29 02:32:41 np0005539563 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:32:41 np0005539563 systemd[1]: Starting Open-iSCSI...
Nov 29 02:32:41 np0005539563 systemd[1]: Started Open-iSCSI.
Nov 29 02:32:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:43 np0005539563 python3.9[236580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:43 np0005539563 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:32:43 np0005539563 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:32:43 np0005539563 systemd[1]: man-db-cache-update.service: Consumed 1.644s CPU time.
Nov 29 02:32:43 np0005539563 systemd[1]: run-r881170f3aab544e4875b3bd572e773bd.service: Deactivated successfully.
Nov 29 02:32:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:44.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:44 np0005539563 python3.9[236737]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:32:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:45 np0005539563 python3.9[236890]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:32:45 np0005539563 systemd[1]: Reloading.
Nov 29 02:32:45 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:32:45 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:32:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:46.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:47 np0005539563 python3.9[237076]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:32:47 np0005539563 network[237093]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:32:47 np0005539563 network[237094]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:32:47 np0005539563 network[237095]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:32:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:32:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:48.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:32:48 np0005539563 podman[237102]: 2025-11-29 07:32:48.313570878 +0000 UTC m=+0.062839424 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:32:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:49 np0005539563 podman[237134]: 2025-11-29 07:32:49.516162033 +0000 UTC m=+0.089783817 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:32:49 np0005539563 podman[237135]: 2025-11-29 07:32:49.541765461 +0000 UTC m=+0.118612973 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:32:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:50 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 02:32:50 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:32:50 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 02:32:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:32:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:52.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:52.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:32:52 np0005539563 python3.9[237437]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:32:53 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:32:53 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:32:54 np0005539563 python3.9[237591]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:54.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:54 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:32:54 np0005539563 python3.9[237744]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 02:32:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:32:55 np0005539563 python3.9[237898]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:32:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:56.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:32:56 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 02:32:56 np0005539563 python3.9[238051]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 8.9 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 02:32:57 np0005539563 python3.9[238205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:58.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:32:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:58 np0005539563 python3.9[238359]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 02:32:59 np0005539563 python3.9[238561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:32:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:33:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:00.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:33:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 29 02:33:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:33:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:33:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:02.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:33:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:33:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:33:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:33:02 np0005539563 python3.9[238847]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:02 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fb596f14-1778-4f69-87e0-77fc35815317 does not exist
Nov 29 02:33:02 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cead863f-1aaf-4cbd-9043-1dbd9b2a5ee1 does not exist
Nov 29 02:33:02 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c433b7ee-b3db-4176-8840-210e4f789f4c does not exist
Nov 29 02:33:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:33:03 np0005539563 python3.9[239024]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:03 np0005539563 podman[239221]: 2025-11-29 07:33:03.716657547 +0000 UTC m=+0.025707912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:04 np0005539563 python3.9[239306]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:04.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:33:04.876 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:33:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:33:04.877 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:33:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:33:04.877 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:33:04 np0005539563 python3.9[239459]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 02:33:05 np0005539563 podman[239221]: 2025-11-29 07:33:05.014046426 +0000 UTC m=+1.323096741 container create e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:33:05 np0005539563 systemd[1]: Started libpod-conmon-e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842.scope.
Nov 29 02:33:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:33:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:05 np0005539563 python3.9[239616]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:06.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:06 np0005539563 podman[239221]: 2025-11-29 07:33:06.479243217 +0000 UTC m=+2.788293582 container init e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:33:06 np0005539563 podman[239221]: 2025-11-29 07:33:06.489799585 +0000 UTC m=+2.798849850 container start e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mestorf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:33:06 np0005539563 lucid_mestorf[239561]: 167 167
Nov 29 02:33:06 np0005539563 systemd[1]: libpod-e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842.scope: Deactivated successfully.
Nov 29 02:33:06 np0005539563 python3.9[239768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 02:33:07 np0005539563 podman[239221]: 2025-11-29 07:33:07.024221156 +0000 UTC m=+3.333271481 container attach e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mestorf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:33:07 np0005539563 podman[239221]: 2025-11-29 07:33:07.024571896 +0000 UTC m=+3.333622181 container died e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:33:07 np0005539563 python3.9[239934]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:33:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:33:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:08.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:08 np0005539563 python3.9[240087]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b7e24ece06b6d25613b742aaeb93d2dd0ac3b70abe03e0649bed3effd4fb0b0b-merged.mount: Deactivated successfully.
Nov 29 02:33:08 np0005539563 python3.9[240239]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 02:33:09 np0005539563 podman[239221]: 2025-11-29 07:33:09.256054275 +0000 UTC m=+5.565104590 container remove e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:33:09 np0005539563 systemd[1]: libpod-conmon-e98a93cd3280564d838e5762896563863b9f8ff25bc70bcdeb37ce22e5873842.scope: Deactivated successfully.
Nov 29 02:33:09 np0005539563 podman[240400]: 2025-11-29 07:33:09.500021712 +0000 UTC m=+0.048432180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:09 np0005539563 python3.9[240394]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:09 np0005539563 podman[240400]: 2025-11-29 07:33:09.791982818 +0000 UTC m=+0.340393206 container create 6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:33:09 np0005539563 systemd[1]: Started libpod-conmon-6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323.scope.
Nov 29 02:33:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:33:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6dc21148c0b65cdcd2ec4e48b0f181924041893a3b2e31b32400d264b9c0c63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6dc21148c0b65cdcd2ec4e48b0f181924041893a3b2e31b32400d264b9c0c63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6dc21148c0b65cdcd2ec4e48b0f181924041893a3b2e31b32400d264b9c0c63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6dc21148c0b65cdcd2ec4e48b0f181924041893a3b2e31b32400d264b9c0c63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6dc21148c0b65cdcd2ec4e48b0f181924041893a3b2e31b32400d264b9c0c63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:10 np0005539563 python3.9[240571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:10 np0005539563 podman[240400]: 2025-11-29 07:33:10.510665399 +0000 UTC m=+1.059075827 container init 6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:33:10 np0005539563 podman[240400]: 2025-11-29 07:33:10.518179464 +0000 UTC m=+1.066589862 container start 6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:33:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 02:33:11 np0005539563 python3.9[240726]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:11 np0005539563 podman[240400]: 2025-11-29 07:33:11.271452928 +0000 UTC m=+1.819863296 container attach 6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:33:11 np0005539563 crazy_zhukovsky[240470]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:33:11 np0005539563 crazy_zhukovsky[240470]: --> relative data size: 1.0
Nov 29 02:33:11 np0005539563 crazy_zhukovsky[240470]: --> All data devices are unavailable
Nov 29 02:33:11 np0005539563 systemd[1]: libpod-6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323.scope: Deactivated successfully.
Nov 29 02:33:11 np0005539563 podman[240400]: 2025-11-29 07:33:11.404127652 +0000 UTC m=+1.952538090 container died 6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:33:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e6dc21148c0b65cdcd2ec4e48b0f181924041893a3b2e31b32400d264b9c0c63-merged.mount: Deactivated successfully.
Nov 29 02:33:11 np0005539563 python3.9[240900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:12.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:12 np0005539563 podman[240400]: 2025-11-29 07:33:12.201221421 +0000 UTC m=+2.749631819 container remove 6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:33:12 np0005539563 systemd[1]: libpod-conmon-6a5401d08f7252a81af24ca9056c21820f649aae7a189cdd4b3b4b7b5365b323.scope: Deactivated successfully.
Nov 29 02:33:12 np0005539563 python3.9[241140]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:33:12
Nov 29 02:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', '.mgr']
Nov 29 02:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:33:12 np0005539563 podman[241242]: 2025-11-29 07:33:12.80383772 +0000 UTC m=+0.030522293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:13 np0005539563 python3.9[241360]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:33:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:33:13 np0005539563 podman[241242]: 2025-11-29 07:33:13.652032729 +0000 UTC m=+0.878717202 container create 93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:33:13 np0005539563 systemd[1]: Started libpod-conmon-93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad.scope.
Nov 29 02:33:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:33:14 np0005539563 podman[241242]: 2025-11-29 07:33:14.028427495 +0000 UTC m=+1.255111988 container init 93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mclaren, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:33:14 np0005539563 podman[241242]: 2025-11-29 07:33:14.036867355 +0000 UTC m=+1.263551828 container start 93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:33:14 np0005539563 upbeat_mclaren[241512]: 167 167
Nov 29 02:33:14 np0005539563 systemd[1]: libpod-93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad.scope: Deactivated successfully.
Nov 29 02:33:14 np0005539563 python3.9[241517]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:33:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:14.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:14.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:14 np0005539563 podman[241242]: 2025-11-29 07:33:14.169328464 +0000 UTC m=+1.396012987 container attach 93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:33:14 np0005539563 podman[241242]: 2025-11-29 07:33:14.169955111 +0000 UTC m=+1.396639634 container died 93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:33:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 02:33:15 np0005539563 python3.9[241683]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:16.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:16.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6516bd17e0c8c48bae6a9adeb7b2bf1416e46cb8f7b193dfd0ad5a3eb7f03f66-merged.mount: Deactivated successfully.
Nov 29 02:33:16 np0005539563 python3.9[241836]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:33:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 02:33:17 np0005539563 python3.9[241989]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:33:18 np0005539563 systemd[1]: Reloading.
Nov 29 02:33:18 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:33:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:18 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:33:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:18.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:18 np0005539563 podman[241242]: 2025-11-29 07:33:18.653256705 +0000 UTC m=+5.879941178 container remove 93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:33:18 np0005539563 systemd[1]: libpod-conmon-93bb3e5cf88f77c3580358781cacf8b97134ec304e2724966d61fe20625372ad.scope: Deactivated successfully.
Nov 29 02:33:18 np0005539563 podman[242024]: 2025-11-29 07:33:18.783312548 +0000 UTC m=+0.396410851 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:33:18 np0005539563 podman[242076]: 2025-11-29 07:33:18.86815948 +0000 UTC m=+0.042514499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 02:33:19 np0005539563 podman[242076]: 2025-11-29 07:33:19.221321043 +0000 UTC m=+0.395676032 container create 854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:33:19 np0005539563 systemd[1]: Started libpod-conmon-854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804.scope.
Nov 29 02:33:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:33:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9d6ac2fcf18059ede7a4a5867619cd0153d60a7d8653e4cb3f1995a1737183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9d6ac2fcf18059ede7a4a5867619cd0153d60a7d8653e4cb3f1995a1737183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9d6ac2fcf18059ede7a4a5867619cd0153d60a7d8653e4cb3f1995a1737183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9d6ac2fcf18059ede7a4a5867619cd0153d60a7d8653e4cb3f1995a1737183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:20.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:20 np0005539563 python3.9[242291]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 02:33:21 np0005539563 podman[242076]: 2025-11-29 07:33:21.028101908 +0000 UTC m=+2.202456947 container init 854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:33:21 np0005539563 podman[242076]: 2025-11-29 07:33:21.045083943 +0000 UTC m=+2.219438952 container start 854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:33:21 np0005539563 python3.9[242467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]: {
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:    "0": [
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:        {
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "devices": [
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "/dev/loop3"
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            ],
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "lv_name": "ceph_lv0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "lv_size": "7511998464",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "name": "ceph_lv0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "tags": {
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.cluster_name": "ceph",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.crush_device_class": "",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.encrypted": "0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.osd_id": "0",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.type": "block",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:                "ceph.vdo": "0"
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            },
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "type": "block",
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:            "vg_name": "ceph_vg0"
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:        }
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]:    ]
Nov 29 02:33:21 np0005539563 gifted_dirac[242116]: }
Nov 29 02:33:21 np0005539563 systemd[1]: libpod-854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804.scope: Deactivated successfully.
Nov 29 02:33:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:22.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:22.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:22 np0005539563 python3.9[242635]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:33:22 np0005539563 python3.9[242789]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 02:33:23 np0005539563 podman[242076]: 2025-11-29 07:33:23.059379204 +0000 UTC m=+4.233734223 container attach 854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_dirac, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:33:23 np0005539563 podman[242076]: 2025-11-29 07:33:23.060890575 +0000 UTC m=+4.235245594 container died 854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_dirac, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:33:23 np0005539563 python3.9[242942]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:24 np0005539563 python3.9[243095]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 02:33:25 np0005539563 python3.9[243249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:25 np0005539563 python3.9[243403]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:33:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:26.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 15 op/s
Nov 29 02:33:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:28.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 02:33:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:30.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:33:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:32.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:32.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:33:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:34.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:33:35 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:33:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:36.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:36.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:33:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:38.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:33:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:33:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:40.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:41 np0005539563 python3.9[243557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:41 np0005539563 python3.9[243766]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:42.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539563 python3.9[243918]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 17.461042404s
Nov 29 02:33:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 17.461042404s
Nov 29 02:33:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.076940536s, txc = 0x561be2238600
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:43 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:33:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:44.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9f9d6ac2fcf18059ede7a4a5867619cd0153d60a7d8653e4cb3f1995a1737183-merged.mount: Deactivated successfully.
Nov 29 02:33:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:33:45 np0005539563 python3.9[244071]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:45 np0005539563 python3.9[244225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:46.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:46 np0005539563 python3.9[244379]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 21.1167 seconds
Nov 29 02:33:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:33:47 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 22.145120621s, txc = 0x561be1f58600
Nov 29 02:33:47 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 22.145002365s, txc = 0x561be0eef800
Nov 29 02:33:47 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:33:47 np0005539563 python3.9[244532]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:33:48 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(33) init, last seen epoch 33, mid-election, bumping
Nov 29 02:33:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:48.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:48.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:48 np0005539563 podman[242119]: 2025-11-29 07:33:48.21966068 +0000 UTC m=+28.545398057 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:33:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:48 np0005539563 python3.9[244690]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:33:49 np0005539563 podman[242076]: 2025-11-29 07:33:49.206292031 +0000 UTC m=+30.380647010 container remove 854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:33:49 np0005539563 podman[242117]: 2025-11-29 07:33:49.215592334 +0000 UTC m=+29.540534149 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:33:49 np0005539563 systemd[1]: libpod-conmon-854894d37680e3da0263c574fee7e14edfc28e74b19bfc19458651efeba60804.scope: Deactivated successfully.
Nov 29 02:33:49 np0005539563 podman[244815]: 2025-11-29 07:33:49.26888928 +0000 UTC m=+0.096483365 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:33:49 np0005539563 python3.9[244854]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:50 np0005539563 podman[245127]: 2025-11-29 07:33:49.951411969 +0000 UTC m=+0.023341527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:50 np0005539563 python3.9[245169]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:33:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:33:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:33:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:33:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 02:33:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:50 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 24m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:33:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:33:50 np0005539563 podman[245127]: 2025-11-29 07:33:50.317632776 +0000 UTC m=+0.389562304 container create 50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:33:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:33:51 np0005539563 systemd[1]: Started libpod-conmon-50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a.scope.
Nov 29 02:33:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:33:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:52.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:52 np0005539563 podman[245127]: 2025-11-29 07:33:52.517968445 +0000 UTC m=+2.589898083 container init 50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jennings, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:33:52 np0005539563 podman[245127]: 2025-11-29 07:33:52.533941881 +0000 UTC m=+2.605871469 container start 50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:33:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:52 np0005539563 fervent_jennings[245197]: 167 167
Nov 29 02:33:52 np0005539563 systemd[1]: libpod-50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a.scope: Deactivated successfully.
Nov 29 02:33:52 np0005539563 podman[245127]: 2025-11-29 07:33:52.85411633 +0000 UTC m=+2.926045958 container attach 50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:33:52 np0005539563 podman[245127]: 2025-11-29 07:33:52.85480989 +0000 UTC m=+2.926739468 container died 50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:33:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:33:53 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 02:33:53 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:33:53 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:33:53 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:53 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:33:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:54.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:33:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:56.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:56.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:56 np0005539563 python3.9[245342]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 02:33:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-965700e66539331a09f91eaec64f3c19022e65687e6c407d9bf974c0eaa97bc8-merged.mount: Deactivated successfully.
Nov 29 02:33:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.3 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 02:33:57 np0005539563 python3.9[245496]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:33:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:33:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:33:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:33:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:58.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:33:58 np0005539563 podman[245127]: 2025-11-29 07:33:58.566491863 +0000 UTC m=+8.638421441 container remove 50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jennings, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:33:58 np0005539563 systemd[1]: libpod-conmon-50a1484ace2d844e695ed352c57fa20871a5494e3bd9ff7d50d6b74b9113bb5a.scope: Deactivated successfully.
Nov 29 02:33:58 np0005539563 podman[245506]: 2025-11-29 07:33:58.787143416 +0000 UTC m=+0.033103054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 02:33:59 np0005539563 podman[245506]: 2025-11-29 07:33:59.268804944 +0000 UTC m=+0.514764602 container create 24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:33:59 np0005539563 systemd[1]: Started libpod-conmon-24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738.scope.
Nov 29 02:33:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:33:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8394c0f549aaf94317bb0e2e3055dc23b4a6d71eec9c0c3ed8e42acdf1fb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8394c0f549aaf94317bb0e2e3055dc23b4a6d71eec9c0c3ed8e42acdf1fb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8394c0f549aaf94317bb0e2e3055dc23b4a6d71eec9c0c3ed8e42acdf1fb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b8394c0f549aaf94317bb0e2e3055dc23b4a6d71eec9c0c3ed8e42acdf1fb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:34:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:00.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:00 np0005539563 python3.9[245702]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:34:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 02:34:01 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:34:01 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:34:01 np0005539563 podman[245506]: 2025-11-29 07:34:01.501083995 +0000 UTC m=+2.747043673 container init 24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:34:01 np0005539563 podman[245506]: 2025-11-29 07:34:01.511040907 +0000 UTC m=+2.757000535 container start 24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:34:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:02.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:02 np0005539563 busy_merkle[245597]: {
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:        "osd_id": 0,
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:        "type": "bluestore"
Nov 29 02:34:02 np0005539563 busy_merkle[245597]:    }
Nov 29 02:34:02 np0005539563 busy_merkle[245597]: }
Nov 29 02:34:02 np0005539563 systemd[1]: libpod-24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738.scope: Deactivated successfully.
Nov 29 02:34:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 02:34:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:04 np0005539563 podman[245506]: 2025-11-29 07:34:04.275796682 +0000 UTC m=+5.521756370 container attach 24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:34:04 np0005539563 podman[245506]: 2025-11-29 07:34:04.277685944 +0000 UTC m=+5.523645622 container died 24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:34:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:34:04.877 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:34:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:34:04.877 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:34:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:34:04.878 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:34:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 02:34:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:06.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:06.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 02:34:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:08.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:08.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 02:34:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-68b8394c0f549aaf94317bb0e2e3055dc23b4a6d71eec9c0c3ed8e42acdf1fb5-merged.mount: Deactivated successfully.
Nov 29 02:34:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:10.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:34:11 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:12.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:12.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:34:12
Nov 29 02:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta']
Nov 29 02:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:34:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:34:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:14.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:14 np0005539563 podman[245506]: 2025-11-29 07:34:14.562278568 +0000 UTC m=+15.808238206 container remove 24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:34:14 np0005539563 systemd[1]: libpod-conmon-24e0e6c277017e2271690e7e8fca6074669aba9e2a5d39be4d1064c2d49c8738.scope: Deactivated successfully.
Nov 29 02:34:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:34:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:34:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:16.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:16.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:34:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:18.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:18 np0005539563 podman[245774]: 2025-11-29 07:34:18.5717236 +0000 UTC m=+0.120665054 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 02:34:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:34:19 np0005539563 podman[245802]: 2025-11-29 07:34:19.501019015 +0000 UTC m=+0.050933011 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 02:34:19 np0005539563 podman[245803]: 2025-11-29 07:34:19.519431308 +0000 UTC m=+0.066606539 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 02:34:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:20.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:20.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:34:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:22.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:22.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:34:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:34:23 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:24.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:24.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:34:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:34:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:26.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:26 np0005539563 systemd-logind[785]: New session 51 of user zuul.
Nov 29 02:34:26 np0005539563 systemd[1]: Started Session 51 of User zuul.
Nov 29 02:34:26 np0005539563 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 02:34:26 np0005539563 systemd-logind[785]: Session 51 logged out. Waiting for processes to exit.
Nov 29 02:34:26 np0005539563 systemd-logind[785]: Removed session 51.
Nov 29 02:34:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:34:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:34:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:34:27 np0005539563 python3.9[246029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:28 np0005539563 python3.9[246150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401666.8997371-3438-92625218939554/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:28.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:28.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:28 np0005539563 python3.9[246300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:34:29 np0005539563 python3.9[246377]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:29 np0005539563 python3.9[246527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:30.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:30.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:30 np0005539563 python3.9[246648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401669.5040677-3438-102611349843800/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:34:31 np0005539563 python3.9[246799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:32 np0005539563 python3.9[246920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401670.7770631-3438-196730986672286/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:32.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:32 np0005539563 python3.9[247070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:34:33 np0005539563 python3.9[247192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401672.170944-3438-227907825612023/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:34 np0005539563 python3.9[247342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:34:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:34 np0005539563 python3.9[247463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401673.5862548-3438-7501297483339/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:34:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:34:35 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:35 np0005539563 python3.9[247616]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:34:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:36.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:36.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:34:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:38.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:38.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:34:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:40.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:42.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:43 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:44.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:44.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:46.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:46.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:47 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:48.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:49 np0005539563 podman[247699]: 2025-11-29 07:34:49.589753188 +0000 UTC m=+0.131953143 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:34:49 np0005539563 podman[247726]: 2025-11-29 07:34:49.707146842 +0000 UTC m=+0.082362559 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Nov 29 02:34:49 np0005539563 podman[247727]: 2025-11-29 07:34:49.717723951 +0000 UTC m=+0.089633758 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 02:34:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:50.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:51 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:54.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:55 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:34:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:56 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:56 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:34:56.506+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:56 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:34:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:57 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:57 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:34:57 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:34:57.506+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:34:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:34:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:34:58 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:58 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:34:58 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:34:58.531+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:59 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:59 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:34:59 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:34:59.494+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:34:59 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:00.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:00 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:00 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:00 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:00.461+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:01 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:01 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:01.451+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:01 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:02.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:02 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:02 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:02 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:02.483+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:03 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:03 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:03 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:03.435+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:03 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:04 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:04 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:04.417+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:04 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:35:04.877 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:35:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:35:04.878 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:35:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:35:04.878 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:35:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:05 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:05 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:05.413+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:05 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:06.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:06 np0005539563 ceph-osd[84724]: osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:06 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-osd-0[84720]: 2025-11-29T07:35:06.372+0000 7f1efb24a640 -1 osd.0 135 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14367.0:1006 9.4 9:2533b45a:::obj_delete_at_hint.0000000076:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e135)
Nov 29 02:35:06 np0005539563 ceph-osd[84724]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:06 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 38.996139526s
Nov 29 02:35:06 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 38.996139526s
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 41.1205 seconds
Nov 29 02:35:06 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 40.228866577s, txc = 0x561be1ece600
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 get_health_metrics reporting 10 slow ops, oldest is log(1 entries from seq 870 at 2025-11-29T07:34:13.024855+0000)
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).paxos(paxos updating c 1005..1747) accept timeout, calling fresh election
Nov 29 02:35:06 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0[74334]: 2025-11-29T07:35:06.424+0000 7fce3b641640 -1 mon.compute-0@0(leader) e3 get_health_metrics reporting 10 slow ops, oldest is log(1 entries from seq 870 at 2025-11-29T07:34:13.024855+0000)
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:35:06 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(36) init, last seen epoch 36
Nov 29 02:35:06 np0005539563 python3.9[247845]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:35:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:07 np0005539563 python3.9[247999]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:07 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:08 np0005539563 python3.9[248151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:35:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:09 np0005539563 python3.9[248275]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764401707.8710136-3759-43009416968414/.source _original_basename=.r3gdkkh4 follow=False checksum=a883feba388bee74e2e9c0317d8cd762bd10f291 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 02:35:10 np0005539563 python3.9[248427]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:10.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:10.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:11 np0005539563 python3.9[248580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:35:11 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:11 np0005539563 python3.9[248701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401710.689121-3837-211870971898371/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:35:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:12.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:12.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:35:12
Nov 29 02:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta']
Nov 29 02:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:35:12 np0005539563 python3.9[248851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:35:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 get_health_metrics reporting 10 slow ops, oldest is log(1 entries from seq 873 at 2025-11-29T07:34:19.028230+0000)
Nov 29 02:35:12 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0[74334]: 2025-11-29T07:35:12.786+0000 7fce3b641640 -1 mon.compute-0@0(leader) e3 get_health_metrics reporting 10 slow ops, oldest is log(1 entries from seq 873 at 2025-11-29T07:34:19.028230+0000)
Nov 29 02:35:12 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.368425846s
Nov 29 02:35:12 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.368425846s
Nov 29 02:35:12 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.503218174s, txc = 0x561be21c2900
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:13 np0005539563 python3.9[248973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401712.1435242-3882-237048871460670/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:35:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:35:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:14.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539563 python3.9[249125]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 02:35:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:15 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:15 np0005539563 python3.9[249278]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:35:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:16.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:16 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.357429504s, txc = 0x561be1f4e600
Nov 29 02:35:16 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.707386017s, txc = 0x561be214b800
Nov 29 02:35:16 np0005539563 python3[249430]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:35:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 get_health_metrics reporting 12 slow ops, oldest is log(1 entries from seq 873 at 2025-11-29T07:34:19.028230+0000)
Nov 29 02:35:17 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0[74334]: 2025-11-29T07:35:17.886+0000 7fce3b641640 -1 mon.compute-0@0(leader) e3 get_health_metrics reporting 12 slow ops, oldest is log(1 entries from seq 873 at 2025-11-29T07:34:19.028230+0000)
Nov 29 02:35:17 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.075459957s, txc = 0x561be1f5ec00
Nov 29 02:35:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:18.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.2936 seconds
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 26m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:35:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:35:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:35:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops. (SLOW_OPS)
Nov 29 02:35:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:35:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 35dbd859-b032-4a2f-ac51-0c1b23ccd5d1 does not exist
Nov 29 02:35:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 59d9fc90-ea5f-4469-89bc-9568703566d5 does not exist
Nov 29 02:35:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2bfeb65e-e9f9-4de7-b7f3-1742e236ddfe does not exist
Nov 29 02:35:19 np0005539563 podman[249518]: 2025-11-29 07:35:19.867654073 +0000 UTC m=+0.069868768 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:35:19 np0005539563 podman[249519]: 2025-11-29 07:35:19.875085016 +0000 UTC m=+0.077554098 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:35:19 np0005539563 podman[249520]: 2025-11-29 07:35:19.905406864 +0000 UTC m=+0.103678941 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:35:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:20.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:20 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:35:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:35:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:22.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:22.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:35:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:35:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:24.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:35:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:26.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:26 np0005539563 ceph-mon[74338]: Health check failed: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops. (SLOW_OPS)
Nov 29 02:35:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:35:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:35:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.415212631s
Nov 29 02:35:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.415212631s
Nov 29 02:35:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.861218929s, txc = 0x561be1fe8300
Nov 29 02:35:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.923403263s, txc = 0x561be2152000
Nov 29 02:35:27 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.873543262s, txc = 0x561be214b800
Nov 29 02:35:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:28.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:35:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:30.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:30.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:35:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:32.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:35:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:34.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:34.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:35 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:36.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:38.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:38.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:39 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:40.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.5445 seconds
Nov 29 02:35:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).paxos(paxos updating c 1005..1751) accept timeout, calling fresh election
Nov 29 02:35:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:35:40 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(38) init, last seen epoch 38
Nov 29 02:35:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:35:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:42.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:42.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 14.598733902s
Nov 29 02:35:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 14.598733902s
Nov 29 02:35:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.875885010s, txc = 0x561be1ecfb00
Nov 29 02:35:42 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.875080109s, txc = 0x561be21c3200
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:43 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:44.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:44.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:35:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:46.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 02:35:47 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:48.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:48.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:50 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 8.138607979s
Nov 29 02:35:50 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 8.138607979s
Nov 29 02:35:50 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 22.719320297s, txc = 0x561be1ebdb00
Nov 29 02:35:50 np0005539563 podman[249731]: 2025-11-29 07:35:50.958608897 +0000 UTC m=+0.512806882 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 02:35:50 np0005539563 podman[249733]: 2025-11-29 07:35:50.989545775 +0000 UTC m=+0.534126150 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:35:50 np0005539563 podman[249732]: 2025-11-29 07:35:50.990580092 +0000 UTC m=+0.538327483 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:35:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 02:35:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:35:51 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(41) init, last seen epoch 41, mid-election, bumping
Nov 29 02:35:51 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 02:35:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 26m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:35:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:52.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:35:52 np0005539563 podman[249446]: 2025-11-29 07:35:52.959116391 +0000 UTC m=+35.952771190 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:35:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 02:35:53 np0005539563 podman[249817]: 2025-11-29 07:35:53.07287991 +0000 UTC m=+0.020129326 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.)
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:35:53 np0005539563 podman[249817]: 2025-11-29 07:35:53.403014427 +0000 UTC m=+0.350263823 container create 1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:35:53 np0005539563 python3[249430]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: Health detail: HEALTH_WARN 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: [WRN] SLOW_OPS: 19 slow ops, oldest one blocked for 53 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: Health detail: HEALTH_WARN 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:53 np0005539563 ceph-mon[74338]: [WRN] SLOW_OPS: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.
Nov 29 02:35:54 np0005539563 python3.9[250006]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:35:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:54.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:54.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:54 np0005539563 ceph-mon[74338]: Health check cleared: SLOW_OPS (was: 23 slow ops, oldest one blocked for 58 sec, daemons [mon.compute-0,mon.compute-1,mon.compute-2] have slow ops.)
Nov 29 02:35:54 np0005539563 ceph-mon[74338]: Cluster is now healthy
Nov 29 02:35:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 02:35:55 np0005539563 python3.9[250161]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 02:35:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:35:55 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 02:35:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:55.983975) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:35:55 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 02:35:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401755984305, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1999, "num_deletes": 251, "total_data_size": 3608220, "memory_usage": 3655328, "flush_reason": "Manual Compaction"}
Nov 29 02:35:55 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 02:35:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:56 np0005539563 python3.9[250313]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756820077, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 3461315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15501, "largest_seqno": 17499, "table_properties": {"data_size": 3452276, "index_size": 5469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20875, "raw_average_key_size": 21, "raw_value_size": 3433360, "raw_average_value_size": 3510, "num_data_blocks": 241, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401375, "oldest_key_time": 1764401375, "file_creation_time": 1764401755, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 836111 microseconds, and 11656 cpu microseconds.
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.820273) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 3461315 bytes OK
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.820324) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.842930) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.842994) EVENT_LOG_v1 {"time_micros": 1764401756842982, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.843024) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3599736, prev total WAL file size 3615901, number of live WAL files 2.
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.844443) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(3380KB)], [35(7386KB)]
Nov 29 02:35:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401756844555, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 11024789, "oldest_snapshot_seqno": -1}
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4611 keys, 8884282 bytes, temperature: kUnknown
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401757018755, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8884282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8851823, "index_size": 19805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115126, "raw_average_key_size": 24, "raw_value_size": 8766797, "raw_average_value_size": 1901, "num_data_blocks": 828, "num_entries": 4611, "num_filter_entries": 4611, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764401756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.019023) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8884282 bytes
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.034528) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.3 rd, 51.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5133, records dropped: 522 output_compression: NoCompression
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.034580) EVENT_LOG_v1 {"time_micros": 1764401757034559, "job": 16, "event": "compaction_finished", "compaction_time_micros": 174262, "compaction_time_cpu_micros": 24615, "output_level": 6, "num_output_files": 1, "total_output_size": 8884282, "num_input_records": 5133, "num_output_records": 4611, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401757035493, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401757036750, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:56.844248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.036830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.036838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.036840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.036842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:35:57.036844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:35:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Nov 29 02:35:57 np0005539563 python3[250466]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:35:57 np0005539563 podman[250501]: 2025-11-29 07:35:57.858877456 +0000 UTC m=+0.021893074 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:35:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:35:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:58.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539563 podman[250501]: 2025-11-29 07:35:58.587304914 +0000 UTC m=+0.750320512 container create 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute)
Nov 29 02:35:58 np0005539563 python3[250466]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 02:35:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 02:35:59 np0005539563 python3.9[250692]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:00 np0005539563 python3.9[250846]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:00.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:01 np0005539563 python3.9[250998]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401760.4822571-4158-21594666106656/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Nov 29 02:36:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:02 np0005539563 python3.9[251124]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:36:02 np0005539563 systemd[1]: Reloading.
Nov 29 02:36:02 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:02 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:03 np0005539563 python3.9[251236]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:36:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Nov 29 02:36:03 np0005539563 systemd[1]: Reloading.
Nov 29 02:36:03 np0005539563 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:03 np0005539563 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:03 np0005539563 systemd[1]: Starting nova_compute container...
Nov 29 02:36:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:36:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:04 np0005539563 podman[251275]: 2025-11-29 07:36:04.161581239 +0000 UTC m=+0.598121942 container init 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true)
Nov 29 02:36:04 np0005539563 podman[251275]: 2025-11-29 07:36:04.1678946 +0000 UTC m=+0.604435283 container start 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + sudo -E kolla_set_configs
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Validating config file
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying service configuration files
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Writing out command to execute
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:04 np0005539563 nova_compute[251290]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:04 np0005539563 nova_compute[251290]: ++ cat /run_command
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + CMD=nova-compute
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + ARGS=
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + sudo kolla_copy_cacerts
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + [[ ! -n '' ]]
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + . kolla_extend_start
Nov 29 02:36:04 np0005539563 nova_compute[251290]: Running command: 'nova-compute'
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + umask 0022
Nov 29 02:36:04 np0005539563 nova_compute[251290]: + exec nova-compute
Nov 29 02:36:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:04.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:04 np0005539563 podman[251275]: nova_compute
Nov 29 02:36:04 np0005539563 systemd[1]: Started nova_compute container.
Nov 29 02:36:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:36:04.878 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:36:04.879 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:36:04.879 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 29 02:36:05 np0005539563 python3.9[251453]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:06.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:06 np0005539563 python3.9[251603]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:06 np0005539563 nova_compute[251290]: 2025-11-29 07:36:06.839 251294 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:06 np0005539563 nova_compute[251290]: 2025-11-29 07:36:06.840 251294 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:06 np0005539563 nova_compute[251290]: 2025-11-29 07:36:06.840 251294 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:06 np0005539563 nova_compute[251290]: 2025-11-29 07:36:06.840 251294 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:36:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:07 np0005539563 nova_compute[251290]: 2025-11-29 07:36:07.012 251294 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:07 np0005539563 nova_compute[251290]: 2025-11-29 07:36:07.027 251294 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:07 np0005539563 nova_compute[251290]: 2025-11-29 07:36:07.028 251294 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.043008) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767043144, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 343, "num_deletes": 255, "total_data_size": 206160, "memory_usage": 214472, "flush_reason": "Manual Compaction"}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 02:36:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767096026, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 204831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17500, "largest_seqno": 17842, "table_properties": {"data_size": 202670, "index_size": 325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4808, "raw_average_key_size": 16, "raw_value_size": 198473, "raw_average_value_size": 677, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401756, "oldest_key_time": 1764401756, "file_creation_time": 1764401767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 53050 microseconds, and 1792 cpu microseconds.
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.096108) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 204831 bytes OK
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.096143) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.130853) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.130940) EVENT_LOG_v1 {"time_micros": 1764401767130929, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.130968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 203828, prev total WAL file size 203828, number of live WAL files 2.
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.131762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(200KB)], [38(8676KB)]
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767131859, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9089113, "oldest_snapshot_seqno": -1}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4386 keys, 8744981 bytes, temperature: kUnknown
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767241982, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8744981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8714108, "index_size": 18795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 111735, "raw_average_key_size": 25, "raw_value_size": 8632980, "raw_average_value_size": 1968, "num_data_blocks": 772, "num_entries": 4386, "num_filter_entries": 4386, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764401767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.242262) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8744981 bytes
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.300184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.5 rd, 79.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.5 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(87.1) write-amplify(42.7) OK, records in: 4904, records dropped: 518 output_compression: NoCompression
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.300230) EVENT_LOG_v1 {"time_micros": 1764401767300215, "job": 18, "event": "compaction_finished", "compaction_time_micros": 110223, "compaction_time_cpu_micros": 24683, "output_level": 6, "num_output_files": 1, "total_output_size": 8744981, "num_input_records": 4904, "num_output_records": 4386, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767300462, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401767302312, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.131582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.302357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.302362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.302364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.302366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:36:07.302368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:36:07 np0005539563 nova_compute[251290]: 2025-11-29 07:36:07.540 251294 INFO nova.virt.driver [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:36:07 np0005539563 python3.9[251758]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:36:07 np0005539563 nova_compute[251290]: 2025-11-29 07:36:07.672 251294 INFO nova.compute.provider_config [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.106 251294 DEBUG oslo_concurrency.lockutils [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.106 251294 DEBUG oslo_concurrency.lockutils [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.107 251294 DEBUG oslo_concurrency.lockutils [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.107 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.107 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.107 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.108 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.108 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.108 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.108 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.108 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.109 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.109 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.109 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.109 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.110 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.110 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.110 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.110 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.111 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.111 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.111 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.111 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.111 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.112 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.112 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.112 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.112 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.112 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.113 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.113 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.113 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.113 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.114 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.114 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.114 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.114 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.115 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.115 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.115 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.115 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.116 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.116 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.116 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.116 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.117 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.117 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.117 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.118 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.118 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.118 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.118 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.118 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.119 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.119 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.119 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.119 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.119 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.120 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.120 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.121 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.122 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.122 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.122 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.122 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.123 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.123 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.123 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.123 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.123 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.123 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.124 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.124 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.124 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.124 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.124 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.125 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.125 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.125 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.125 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.125 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.126 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.126 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.126 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.126 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.126 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.127 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.127 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.127 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.127 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.128 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.128 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.128 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.128 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.129 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.129 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.129 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.129 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.129 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.130 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.130 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.130 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.130 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.130 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.130 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.131 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.131 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.131 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.131 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.131 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.132 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.132 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.132 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.132 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.132 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.133 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.133 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.133 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.133 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.133 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.134 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.134 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.134 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.134 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.134 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.135 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.135 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.135 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.135 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.135 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.135 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.136 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.136 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.136 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.136 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.136 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.137 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.137 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.137 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.137 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.137 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.138 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.138 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.138 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.138 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.138 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.138 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.139 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.139 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.139 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.139 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.139 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.140 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.140 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.140 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.140 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.140 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.141 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.141 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.141 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.141 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.141 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.142 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.142 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.142 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.142 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.142 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.143 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.143 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.143 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.143 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.143 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.144 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.144 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.144 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.144 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.144 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.145 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.145 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.145 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.145 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.145 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.146 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.146 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.146 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.146 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.146 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.147 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.147 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.147 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.147 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.147 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.148 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.148 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.148 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.148 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.148 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.149 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.149 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.149 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.149 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.149 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.150 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.150 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.150 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.150 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.150 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.151 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.151 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.151 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.151 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.151 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.152 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.152 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.152 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.152 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.152 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.153 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.153 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.153 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.153 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.153 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.154 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.154 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.154 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.154 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.154 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.155 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.155 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.155 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.155 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.155 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.155 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.156 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.156 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.156 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.156 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.156 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.157 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.157 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.157 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.157 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.157 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.158 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.158 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.158 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.158 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.158 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.159 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.159 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.159 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.159 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.159 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.159 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.160 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.160 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.160 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.160 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.160 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.161 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.161 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.161 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.161 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.161 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.162 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.162 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.162 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.162 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.162 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.163 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.163 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.163 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.163 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.163 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.163 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.164 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.164 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.164 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.164 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.164 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.165 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.165 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.165 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.165 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.166 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.166 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.166 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.166 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.167 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.167 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.167 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.167 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.168 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.168 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.168 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.168 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.169 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.169 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.169 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.169 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.169 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.170 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.170 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.170 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.171 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.171 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.171 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.172 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.172 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.172 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.173 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.173 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.173 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.173 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.173 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.174 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.174 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.174 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.174 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.175 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.175 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.175 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.175 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.176 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.176 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.176 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.176 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.176 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.177 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.177 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.177 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.177 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.178 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.178 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.178 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.178 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.178 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.179 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.179 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.179 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.179 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.180 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.180 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.180 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.180 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.181 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.181 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.181 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.181 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.181 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.182 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.182 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.182 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.182 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.183 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.183 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.183 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.183 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.184 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.184 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.184 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.184 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.184 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.185 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.185 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.185 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.185 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.185 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.186 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.186 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.186 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.186 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.187 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.187 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.187 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.187 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.187 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.188 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.188 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.188 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.188 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.188 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.189 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.189 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.189 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.189 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.189 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.190 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.190 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.190 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.190 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.190 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.191 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.191 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.191 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.191 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.191 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.192 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.192 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.192 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.192 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.193 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.193 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.193 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.193 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.194 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.194 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.194 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.194 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.194 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.195 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.195 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.195 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.195 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.195 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.195 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.196 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.196 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.196 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.196 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.196 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.197 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.197 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.197 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.197 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.197 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.198 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.198 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.198 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.198 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.198 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.198 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.199 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.199 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.199 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.199 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.199 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.200 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.200 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.200 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.200 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.200 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.201 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.201 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.201 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.201 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.201 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.202 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.202 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.202 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.202 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.202 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.203 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.203 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.203 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.203 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.203 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.204 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.204 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.204 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.204 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.204 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.205 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.205 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.205 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.205 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.206 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.206 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.206 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.206 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.207 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.207 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.207 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.207 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.207 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.208 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.208 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.208 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.208 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.208 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.209 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.209 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.209 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.209 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.209 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.210 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.210 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.210 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.210 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.210 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.211 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.211 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.211 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.211 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.212 251294 WARNING oslo_config.cfg [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:36:08 np0005539563 nova_compute[251290]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:36:08 np0005539563 nova_compute[251290]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:36:08 np0005539563 nova_compute[251290]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:36:08 np0005539563 nova_compute[251290]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.212 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.212 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.212 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.213 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.213 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.213 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.213 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.214 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.214 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.214 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.214 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.215 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.215 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.215 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.216 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.216 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.216 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.216 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.216 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rbd_secret_uuid        = 38a37ed2-442a-5e0d-a69a-881fdd186450 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.217 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.217 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.217 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.217 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.217 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.218 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.218 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.218 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.218 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.218 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.219 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.219 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.219 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.219 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.220 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.220 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.220 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.220 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.220 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.221 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.221 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.221 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.221 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.221 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.222 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.222 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.222 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.222 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.223 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.223 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.223 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.223 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.223 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.224 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.224 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.224 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.224 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.224 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.225 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.225 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.225 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.225 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.225 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.226 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.226 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.226 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.226 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.226 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.227 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.227 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.227 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.228 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.228 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.228 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.228 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.228 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.228 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.229 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.229 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.229 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.230 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.230 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.230 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.230 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.230 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.231 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.231 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.231 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.232 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.232 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.232 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.232 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.232 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.233 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.233 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.233 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.233 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.234 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.234 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.234 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.234 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.235 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.235 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.235 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.235 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.236 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.236 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.236 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.236 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.237 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.237 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.237 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.237 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.237 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.238 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.238 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.238 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.238 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.239 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.239 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.239 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.239 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.239 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.240 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.240 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.240 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.240 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.241 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.241 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.241 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.241 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.242 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.242 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.242 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.242 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.243 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.243 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.243 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.243 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.243 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.244 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.244 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.245 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.245 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.245 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.245 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.246 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.246 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.246 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.246 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.246 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.247 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.247 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.247 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.247 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.248 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.248 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.248 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.248 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.249 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.249 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.249 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.249 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.249 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.250 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.250 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.250 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.250 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.250 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.251 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.251 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.251 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.251 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.252 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.252 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.252 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.252 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.253 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.253 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.253 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.253 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.254 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.254 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.254 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.255 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.255 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.255 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.255 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.255 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.256 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.256 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.256 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.256 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.257 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.257 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.257 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.257 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.257 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.258 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.258 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.258 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.258 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.259 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.259 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.259 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.259 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.260 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.260 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.260 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.260 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.260 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.261 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.261 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.261 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.261 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.262 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.262 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.262 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.262 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.262 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.263 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.263 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.263 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.263 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.263 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.264 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.264 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.264 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.264 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.264 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.265 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.265 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.265 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.265 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.265 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.265 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.266 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.266 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.266 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.266 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.266 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.267 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.267 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.267 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.267 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.268 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.268 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.268 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.268 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.269 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.269 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.269 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.269 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.270 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.270 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.270 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.270 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.270 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.271 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.271 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.271 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.271 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.271 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.271 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.272 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.272 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.272 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.272 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.272 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.273 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.273 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.273 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.273 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.273 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.274 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.274 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.274 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.274 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.276 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.276 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.276 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.276 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.276 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.277 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.277 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.277 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.277 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.277 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.277 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.278 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.278 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.278 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.278 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.278 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.278 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.279 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.279 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.279 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.279 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.279 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.279 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.280 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.280 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.280 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.280 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.280 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.280 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.281 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.282 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.282 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.282 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.282 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.282 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.282 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.283 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.284 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.284 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.284 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.284 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.284 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.284 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.285 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.286 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.286 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.286 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.286 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.286 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.287 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.287 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.287 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.287 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.287 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.287 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.288 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.288 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.288 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.288 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.288 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.288 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.289 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.289 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.289 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.289 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.289 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.290 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.290 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.290 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.290 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.291 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.291 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.291 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.291 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.291 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.291 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.292 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.292 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.292 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.292 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.292 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.292 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.293 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.293 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.293 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.293 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.293 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.294 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.294 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.294 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.294 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.294 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.294 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.295 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.295 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.295 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.295 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.295 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.296 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.296 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.296 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.296 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.296 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.297 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.297 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.297 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.297 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.297 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.297 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.298 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.298 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.298 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.298 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.298 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.299 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.299 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.299 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.299 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.299 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.300 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.300 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.300 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.300 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.300 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.301 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.301 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.301 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.301 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.301 251294 DEBUG oslo_service.service [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.302 251294 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:36:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:08.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.390 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.391 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.391 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.391 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:36:08 np0005539563 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:36:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539563 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.780 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f28e7260a00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.782 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f28e7260a00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.783 251294 INFO nova.virt.libvirt.driver [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.800 251294 WARNING nova.virt.libvirt.driver [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 02:36:08 np0005539563 nova_compute[251290]: 2025-11-29 07:36:08.801 251294 DEBUG nova.virt.libvirt.volume.mount [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 02:36:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Nov 29 02:36:09 np0005539563 python3.9[251963]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 02:36:09 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:36:09 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.583 251294 INFO nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <host>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <uuid>9fe13708-3578-4487-abe9-9bea2dcb1209</uuid>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <arch>x86_64</arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <microcode version='16777317'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='x2apic'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='tsc-deadline'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='osxsave'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='hypervisor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='tsc_adjust'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='spec-ctrl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='stibp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='arch-capabilities'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='cmp_legacy'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='topoext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='virt-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='lbrv'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='tsc-scale'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='vmcb-clean'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='pause-filter'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='pfthreshold'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='rdctl-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='mds-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature name='pschange-mc-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <pages unit='KiB' size='4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <pages unit='KiB' size='2048'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <power_management>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <suspend_mem/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </power_management>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <iommu support='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <migration_features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <live/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <uri_transports>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <uri_transport>tcp</uri_transport>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <uri_transport>rdma</uri_transport>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </uri_transports>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </migration_features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <topology>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <cells num='1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <cell id='0'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          <memory unit='KiB'>7864324</memory>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          <distances>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <sibling id='0' value='10'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          </distances>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          <cpus num='8'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:          </cpus>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        </cell>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </cells>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </topology>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <cache>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </cache>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <secmodel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model>selinux</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <doi>0</doi>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </secmodel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <secmodel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model>dac</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <doi>0</doi>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </secmodel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </host>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <guest>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <os_type>hvm</os_type>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <arch name='i686'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <wordsize>32</wordsize>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <domain type='qemu'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <domain type='kvm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <pae/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <nonpae/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <apic default='on' toggle='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <cpuselection/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <deviceboot/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <externalSnapshot/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </guest>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <guest>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <os_type>hvm</os_type>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <arch name='x86_64'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <wordsize>64</wordsize>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <domain type='qemu'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <domain type='kvm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <apic default='on' toggle='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <cpuselection/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <deviceboot/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <externalSnapshot/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </guest>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 
Nov 29 02:36:09 np0005539563 nova_compute[251290]: </capabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: #033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.592 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.610 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:36:09 np0005539563 nova_compute[251290]: <domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <arch>i686</arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <vcpu max='4096'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <os supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='firmware'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>rom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pflash</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>yes</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='secure'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </loader>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </os>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>memfd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </memoryBacking>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>disk</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>floppy</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>lun</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>fdc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>sata</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </disk>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vnc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </graphics>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <video supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vga</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>none</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>bochs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </video>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='mode'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>requisite</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>optional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pci</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hostdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>random</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </rng>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>path</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>handle</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </filesystem>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emulator</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>external</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>2.0</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </tpm>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </redirdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </channel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </crypto>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>passt</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </interface>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>isa</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </panic>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <console supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>null</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dev</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pipe</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stdio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>udp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tcp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </console>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='features'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vapic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>runtime</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>synic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stimer</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reset</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ipi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>avic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hyperv>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tdx</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </launchSecurity>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: </domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
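The `<domainCapabilities>` XML logged above can be inspected programmatically to see which CPU models the host can run and why others are blocked. A minimal sketch, using only the standard library and a trimmed excerpt of the output above (element names and attributes follow the libvirt domain-capabilities schema; the snippet itself is illustrative, not the full dump):

```python
# Sketch: extract CPU-model usability and blocking features from a fragment
# of the libvirt <domainCapabilities> output that nova_compute logs above.
import xml.etree.ElementTree as ET

# Trimmed excerpt of the logged XML (one usable model, one blocked model).
snippet = """
<mode name='custom' supported='yes'>
  <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
  <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
  <blockers model='Skylake-Server-v3'>
    <feature name='avx512f'/>
    <feature name='pku'/>
  </blockers>
</mode>
"""

root = ET.fromstring(snippet)

# Map each model name to its 'usable' flag.
usability = {m.text: m.get('usable') for m in root.findall('model')}

# For non-usable models, libvirt lists the host-missing features as blockers.
blockers = {
    b.get('model'): [f.get('name') for f in b.findall('feature')]
    for b in root.findall('blockers')
}

print(usability)  # {'Westmere': 'yes', 'Skylake-Server-v3': 'no'}
print(blockers)   # {'Skylake-Server-v3': ['avx512f', 'pku']}
```

In a live deployment the same XML would come from libvirt's `getDomainCapabilities` API (which is what nova's `_get_domain_capabilities` wraps) rather than a literal string.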
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.616 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:36:09 np0005539563 nova_compute[251290]: <domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <arch>i686</arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <vcpu max='240'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <os supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='firmware'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>rom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pflash</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>yes</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='secure'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </loader>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </os>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>memfd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </memoryBacking>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>disk</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>floppy</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>lun</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ide</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>fdc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>sata</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </disk>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vnc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </graphics>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <video supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vga</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>none</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>bochs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </video>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='mode'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>requisite</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>optional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pci</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hostdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>random</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </rng>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>path</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>handle</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </filesystem>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emulator</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>external</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>2.0</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </tpm>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </redirdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </channel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </crypto>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>passt</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </interface>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>isa</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </panic>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <console supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>null</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dev</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pipe</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stdio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>udp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tcp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </console>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='features'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vapic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>runtime</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>synic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stimer</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reset</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ipi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>avic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hyperv>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tdx</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </launchSecurity>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: </domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.644 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.648 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:36:09 np0005539563 nova_compute[251290]: <domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <arch>x86_64</arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <vcpu max='4096'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <os supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='firmware'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>efi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>rom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pflash</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>yes</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='secure'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>yes</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </loader>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </os>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>memfd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </memoryBacking>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>disk</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>floppy</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>lun</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>fdc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>sata</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </disk>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vnc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </graphics>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <video supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vga</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>none</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>bochs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </video>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='mode'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>requisite</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>optional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pci</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hostdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>random</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </rng>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>path</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>handle</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </filesystem>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emulator</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>external</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>2.0</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </tpm>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </redirdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </channel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </crypto>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>passt</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </interface>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>isa</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </panic>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <console supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>null</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dev</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pipe</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stdio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>udp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tcp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </console>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='features'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vapic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>runtime</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>synic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stimer</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reset</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ipi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>avic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hyperv>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tdx</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </launchSecurity>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: </domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.707 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:36:09 np0005539563 nova_compute[251290]: <domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <domain>kvm</domain>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <arch>x86_64</arch>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <vcpu max='240'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <iothreads supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <os supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='firmware'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <loader supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>rom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pflash</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='readonly'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>yes</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='secure'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>no</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </loader>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </os>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='maximumMigratable'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>on</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>off</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <vendor>AMD</vendor>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='succor'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <mode name='custom' supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Denverton-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='auto-ibrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amd-psfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='stibp-always-on'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='EPYC-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-128'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-256'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx10-512'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='prefetchiti'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Haswell-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512er'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512pf'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fma4'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tbm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xop'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='amx-tile'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-bf16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-fp16'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bitalg'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrc'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fzrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='la57'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='taa-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xfd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ifma'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cmpccxadd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fbsdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='fsrs'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ibrs-all'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mcdt-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pbrsb-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='psdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='serialize'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vaes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='hle'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='rtm'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512bw'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512cd'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512dq'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512f'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='avx512vl'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='invpcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pcid'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='pku'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='mpx'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='core-capability'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='split-lock-detect'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='cldemote'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='erms'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='gfni'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdir64b'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='movdiri'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='xsaves'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='athlon-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='core2duo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='coreduo-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='n270-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='ss'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <blockers model='phenom-v1'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnow'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <feature name='3dnowext'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </blockers>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </mode>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <memoryBacking supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <enum name='sourceType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>anonymous</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <value>memfd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </memoryBacking>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <disk supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='diskDevice'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>disk</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cdrom</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>floppy</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>lun</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ide</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>fdc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>sata</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </disk>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <graphics supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vnc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egl-headless</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </graphics>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <video supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='modelType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vga</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>cirrus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>none</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>bochs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ramfb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </video>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hostdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='mode'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>subsystem</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='startupPolicy'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>mandatory</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>requisite</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>optional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='subsysType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pci</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>scsi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='capsType'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='pciBackend'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hostdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <rng supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtio-non-transitional</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>random</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>egd</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </rng>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <filesystem supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='driverType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>path</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>handle</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>virtiofs</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </filesystem>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <tpm supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-tis</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tpm-crb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emulator</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>external</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendVersion'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>2.0</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </tpm>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <redirdev supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='bus'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>usb</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </redirdev>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <channel supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </channel>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <crypto supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendModel'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>builtin</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </crypto>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <interface supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='backendType'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>default</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>passt</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </interface>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <panic supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='model'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>isa</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>hyperv</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </panic>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <console supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='type'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>null</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vc</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pty</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dev</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>file</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>pipe</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stdio</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>udp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tcp</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>unix</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>qemu-vdagent</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>dbus</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </console>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </devices>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <gic supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <genid supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <backup supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <async-teardown supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <ps2 supported='yes'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sev supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <sgx supported='no'/>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <hyperv supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='features'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>relaxed</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vapic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>spinlocks</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vpindex</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>runtime</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>synic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>stimer</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reset</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>vendor_id</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>frequencies</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>reenlightenment</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tlbflush</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>ipi</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>avic</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>emsr_bitmap</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>xmm_input</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </defaults>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </hyperv>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    <launchSecurity supported='yes'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      <enum name='sectype'>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:        <value>tdx</value>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:      </enum>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:    </launchSecurity>
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  </features>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: </domainCapabilities>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.774 251294 DEBUG nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.774 251294 INFO nova.virt.libvirt.host [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.777 251294 INFO nova.virt.libvirt.driver [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.789 251294 DEBUG nova.virt.libvirt.driver [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:36:09 np0005539563 nova_compute[251290]:  <model>Nehalem</model>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: </cpu>
Nov 29 02:36:09 np0005539563 nova_compute[251290]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
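The "cpu compare xml" logged by `_compare_cpu` is the document nova hands to libvirt to check that the destination host can run the guest CPU model. A sketch of building that XML (nova's real code goes through its own config objects; this is illustrative only):

```python
# Build a <cpu match="exact"> comparison document like the one in the log.
import xml.etree.ElementTree as ET

def build_cpu_compare_xml(model, match="exact"):
    cpu = ET.Element("cpu", match=match)
    ET.SubElement(cpu, "model").text = model
    return ET.tostring(cpu, encoding="unicode")

xml = build_cpu_compare_xml("Nehalem")
print(xml)  # <cpu match="exact"><model>Nehalem</model></cpu>
# With libvirt-python the result would then be passed to something like
# conn.compareCPU(xml, 0), which returns a VIR_CPU_COMPARE_* constant.
```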
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.793 251294 DEBUG nova.virt.libvirt.driver [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.813 251294 INFO nova.virt.node [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Determined node identity 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from /var/lib/nova/compute_id#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.832 251294 WARNING nova.compute.manager [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Compute nodes ['190eff98-dce8-46c0-8a7d-870d6fa5cbbd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.865 251294 INFO nova.compute.manager [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.903 251294 WARNING nova.compute.manager [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.903 251294 DEBUG oslo_concurrency.lockutils [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.903 251294 DEBUG oslo_concurrency.lockutils [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.904 251294 DEBUG oslo_concurrency.lockutils [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.904 251294 DEBUG nova.compute.resource_tracker [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:36:09 np0005539563 nova_compute[251290]: 2025-11-29 07:36:09.904 251294 DEBUG oslo_concurrency.processutils [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:36:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198467867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:36:10 np0005539563 nova_compute[251290]: 2025-11-29 07:36:10.319 251294 DEBUG oslo_concurrency.processutils [None req-e9f7c981-b32e-4a83-8415-58a1984cad5a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
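The `ceph df --format=json` call above is how the resource audit reads cluster capacity. A minimal sketch of parsing that output for one pool (the sample payload below is illustrative, not from this cluster; the `stats`/`pools` field names follow the `ceph df` JSON schema):

```python
# Parse a (sample) `ceph df --format=json` payload and pull the usage
# figures for a named pool, as a capacity audit would.
import json

SAMPLE = json.dumps({
    "stats": {"total_bytes": 22548578304, "total_avail_bytes": 22388342784},
    "pools": [
        {"name": "vms",
         "stats": {"bytes_used": 163840, "max_avail": 7462780928}},
    ],
})

def pool_capacity(df_json, pool_name):
    df = json.loads(df_json)
    for pool in df["pools"]:
        if pool["name"] == pool_name:
            s = pool["stats"]
            return s["bytes_used"], s["max_avail"]
    raise KeyError(pool_name)

used, avail = pool_capacity(SAMPLE, "vms")
print(used, avail)
```

In production the JSON would come from the subprocess call shown in the log (`ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`) rather than a literal.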
Nov 29 02:36:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:10.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539563 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:36:10 np0005539563 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:36:10 np0005539563 python3.9[252167]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:36:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:10.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539563 systemd[1]: Stopping nova_compute container...
Nov 29 02:36:10 np0005539563 nova_compute[251290]: 2025-11-29 07:36:10.572 251294 DEBUG oslo_concurrency.lockutils [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:36:10 np0005539563 nova_compute[251290]: 2025-11-29 07:36:10.573 251294 DEBUG oslo_concurrency.lockutils [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:36:10 np0005539563 nova_compute[251290]: 2025-11-29 07:36:10.573 251294 DEBUG oslo_concurrency.lockutils [None req-394b8dc7-efa0-4cab-a913-cb8413943331 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:36:10 np0005539563 virtqemud[251807]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 02:36:10 np0005539563 virtqemud[251807]: hostname: compute-0
Nov 29 02:36:10 np0005539563 virtqemud[251807]: End of file while reading data: Input/output error
Nov 29 02:36:10 np0005539563 systemd[1]: libpod-3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781.scope: Deactivated successfully.
Nov 29 02:36:10 np0005539563 systemd[1]: libpod-3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781.scope: Consumed 3.949s CPU time.
Nov 29 02:36:10 np0005539563 podman[252194]: 2025-11-29 07:36:10.98991168 +0000 UTC m=+0.494612669 container died 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:36:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 02:36:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781-userdata-shm.mount: Deactivated successfully.
Nov 29 02:36:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378-merged.mount: Deactivated successfully.
Nov 29 02:36:11 np0005539563 podman[252194]: 2025-11-29 07:36:11.690498725 +0000 UTC m=+1.195199714 container cleanup 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:36:11 np0005539563 podman[252194]: nova_compute
Nov 29 02:36:11 np0005539563 podman[252225]: nova_compute
Nov 29 02:36:11 np0005539563 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 02:36:11 np0005539563 systemd[1]: Stopped nova_compute container.
Nov 29 02:36:11 np0005539563 systemd[1]: Starting nova_compute container...
Nov 29 02:36:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:36:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b7cab840a0cf5f79285bb06fa60145774474f50131210c57153a4ec10a0378/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:36:12 np0005539563 podman[252238]: 2025-11-29 07:36:12.172518003 +0000 UTC m=+0.394990784 container init 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:36:12 np0005539563 podman[252238]: 2025-11-29 07:36:12.178865365 +0000 UTC m=+0.401338126 container start 3f1097b88586750b576eec93ab9db414cf59fbe71af96c470ed5a09ec6f69781 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:36:12 np0005539563 podman[252238]: nova_compute
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + sudo -E kolla_set_configs
Nov 29 02:36:12 np0005539563 systemd[1]: Started nova_compute container.
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Validating config file
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying service configuration files
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Writing out command to execute
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:12 np0005539563 nova_compute[252253]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:36:12 np0005539563 nova_compute[252253]: ++ cat /run_command
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + CMD=nova-compute
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + ARGS=
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + sudo kolla_copy_cacerts
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + [[ ! -n '' ]]
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + . kolla_extend_start
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:36:12 np0005539563 nova_compute[252253]: Running command: 'nova-compute'
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + umask 0022
Nov 29 02:36:12 np0005539563 nova_compute[252253]: + exec nova-compute
Nov 29 02:36:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:12.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:36:12
Nov 29 02:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes']
Nov 29 02:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:13 np0005539563 python3.9[252417]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None

Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:36:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:36:13 np0005539563 systemd[1]: Started libpod-conmon-1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b.scope.
Nov 29 02:36:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:36:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9525f4cededab6cc6fc6bbd213e3c483ea3590e3e75c8649f783b45bb34e1cc1/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9525f4cededab6cc6fc6bbd213e3c483ea3590e3e75c8649f783b45bb34e1cc1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9525f4cededab6cc6fc6bbd213e3c483ea3590e3e75c8649f783b45bb34e1cc1/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539563 podman[252444]: 2025-11-29 07:36:13.992870119 +0000 UTC m=+0.623716555 container init 1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:14 np0005539563 podman[252444]: 2025-11-29 07:36:14.001932535 +0000 UTC m=+0.632778951 container start 1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 02:36:14 np0005539563 nova_compute_init[252463]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 02:36:14 np0005539563 systemd[1]: libpod-1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539563 python3.9[252417]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 02:36:14 np0005539563 podman[252464]: 2025-11-29 07:36:14.112818527 +0000 UTC m=+0.040020244 container died 1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 02:36:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b-userdata-shm.mount: Deactivated successfully.
Nov 29 02:36:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9525f4cededab6cc6fc6bbd213e3c483ea3590e3e75c8649f783b45bb34e1cc1-merged.mount: Deactivated successfully.
Nov 29 02:36:14 np0005539563 podman[252464]: 2025-11-29 07:36:14.271564664 +0000 UTC m=+0.198766361 container cleanup 1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:14 np0005539563 systemd[1]: libpod-conmon-1c2d44f1e56aea9dac26b05108671c9c0534d3ccdae214e23e18a75022c6d58b.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:14.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:36:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.485 252257 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.486 252257 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.486 252257 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.486 252257 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.641 252257 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.666 252257 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:36:14 np0005539563 nova_compute[252253]: 2025-11-29 07:36:14.667 252257 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:36:14 np0005539563 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539563 systemd[1]: session-50.scope: Consumed 2min 26.446s CPU time.
Nov 29 02:36:14 np0005539563 systemd-logind[785]: Session 50 logged out. Waiting for processes to exit.
Nov 29 02:36:14 np0005539563 systemd-logind[785]: Removed session 50.
Nov 29 02:36:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.181 252257 INFO nova.virt.driver [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.298 252257 INFO nova.compute.provider_config [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.310 252257 DEBUG oslo_concurrency.lockutils [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.311 252257 DEBUG oslo_concurrency.lockutils [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.311 252257 DEBUG oslo_concurrency.lockutils [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.312 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.312 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.312 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.312 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.312 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.313 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.313 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.313 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.313 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.313 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.313 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.314 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.314 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.314 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.314 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.314 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.315 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.315 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.315 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.315 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.315 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.315 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.316 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.316 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.316 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.316 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.316 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.316 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.317 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.317 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.317 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.317 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.317 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.318 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.318 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.318 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.318 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.318 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.318 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.319 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.319 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.319 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.319 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.319 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.320 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.320 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.320 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.320 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.320 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.320 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.321 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.321 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.321 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.321 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.321 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.321 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.322 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.322 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.322 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.322 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.322 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.322 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.323 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.324 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.324 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.324 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.324 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.324 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.324 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.325 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.326 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.326 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.326 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.326 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.326 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.326 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.327 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.328 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.328 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.328 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.328 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.328 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.328 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.329 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.329 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.329 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.329 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.330 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.330 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.330 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.330 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.330 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.330 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.331 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.331 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.331 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.331 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.331 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.332 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.333 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.334 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.334 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.334 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.334 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.334 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.334 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.335 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.336 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.337 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.337 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.337 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.337 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.337 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.338 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.338 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.338 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.338 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.338 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.339 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.339 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.339 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.339 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.339 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.339 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.340 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.340 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.340 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.340 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.340 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.341 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.341 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.341 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.341 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.341 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.342 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.342 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.342 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.342 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.342 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.343 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.343 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.343 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.343 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.343 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.344 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.344 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.344 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.344 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.344 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.345 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.345 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.345 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.345 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.345 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.345 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.346 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.346 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.346 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.346 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.347 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.347 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.347 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.347 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.347 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.348 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.348 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.348 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.348 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.348 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.349 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.349 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.349 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.349 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.349 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.350 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.350 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.350 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.350 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.350 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.351 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.351 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.351 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.351 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.351 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.352 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.352 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.352 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.352 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.352 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.353 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.353 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.353 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.353 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.353 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.354 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.354 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.354 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.354 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.355 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.355 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.355 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.355 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.355 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.356 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.356 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.356 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.356 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.356 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.357 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.357 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.357 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.357 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.357 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.358 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.358 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.358 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.358 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.358 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.359 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.359 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.359 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.359 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.359 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.360 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.360 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.360 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.360 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.361 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.361 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.361 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.361 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.361 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.362 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.362 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.362 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.362 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.362 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.363 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.363 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.363 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.363 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.363 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.364 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.364 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.364 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.364 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.364 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.365 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.365 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.365 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.365 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.365 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.366 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.366 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.366 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.366 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.366 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.366 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.367 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.367 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.367 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.367 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.367 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.367 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.368 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.368 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.368 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.368 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.368 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.368 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.369 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.369 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.369 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.369 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.369 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.369 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.370 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.370 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.370 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.370 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.370 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.371 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.371 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.371 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.371 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.371 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.372 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.372 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.372 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.372 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.372 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.373 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.373 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.373 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.373 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.373 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.374 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.374 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.374 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.374 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.374 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.375 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.375 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.375 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.375 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.375 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.376 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.376 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.376 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.376 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.376 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.376 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.377 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.377 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.377 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.378 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.379 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.380 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.381 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.382 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.382 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.382 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.382 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.382 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.382 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.383 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.384 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.385 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.386 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.387 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.388 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.389 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.390 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.391 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.392 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.393 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.394 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.395 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.396 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.396 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.396 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.396 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.396 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.396 252257 WARNING oslo_config.cfg [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:36:15 np0005539563 nova_compute[252253]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:36:15 np0005539563 nova_compute[252253]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:36:15 np0005539563 nova_compute[252253]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:36:15 np0005539563 nova_compute[252253]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.397 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.398 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.399 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.399 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.399 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.399 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.399 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rbd_secret_uuid        = 38a37ed2-442a-5e0d-a69a-881fdd186450 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.399 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.400 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.401 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.401 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.401 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.401 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.401 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.401 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.402 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.403 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.404 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.405 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.406 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.407 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.408 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.409 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.409 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.409 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.409 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.409 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.409 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.410 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.411 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.412 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.413 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.414 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.414 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.414 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.414 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.414 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.414 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.415 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.416 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.417 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.417 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.417 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.417 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.417 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.418 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.419 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.420 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.421 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.421 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.421 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.421 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.421 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.421 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.422 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.423 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.423 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.423 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.423 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.423 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.423 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.424 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.425 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.425 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.425 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.425 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.425 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.425 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.426 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.426 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.426 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.426 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.426 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.426 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.427 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.428 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.429 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.429 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.429 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.429 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.429 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.430 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.431 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.432 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.432 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.432 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.432 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.432 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.433 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.433 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.433 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.433 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.434 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.434 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.434 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.434 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.435 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.435 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.435 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.435 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.435 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.436 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.437 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.437 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.437 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.437 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.437 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.437 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.438 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.438 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.438 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.438 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.438 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.439 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.439 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.439 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.439 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.439 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.439 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.440 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.440 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.440 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.440 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.440 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.440 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.441 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.441 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.441 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.441 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.441 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.441 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.442 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.443 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.443 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.443 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.443 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.443 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.443 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.444 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.445 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.445 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.445 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.445 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.445 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.445 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.446 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.446 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.446 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.446 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.446 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.446 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.447 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.447 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.447 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.447 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.447 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.447 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.448 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.448 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.448 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.448 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.448 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.449 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.449 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.449 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.449 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.450 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.450 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.450 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.450 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.450 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.451 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.451 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.451 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.451 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.451 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.451 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.452 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.452 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.452 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.452 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.452 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.453 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.453 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.453 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.453 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.453 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.454 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.454 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.454 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.454 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.454 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.455 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.455 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.455 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.455 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.455 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.455 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.456 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.456 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.456 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.456 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.456 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.456 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.457 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.457 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.457 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.457 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.457 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.458 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.458 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.458 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.458 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.458 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.458 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.459 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.459 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.459 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.459 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.459 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.460 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.460 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.460 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.460 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.460 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.461 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.461 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.461 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.461 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.461 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.461 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.462 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.462 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.462 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.462 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.463 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.463 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.463 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.463 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.463 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.463 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.464 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.464 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.464 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.464 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.464 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.464 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.465 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.465 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.465 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.465 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.465 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.465 252257 DEBUG oslo_service.service [None req-e442bbd7-bf9c-4ae2-abd6-00fe827b42cf - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.467 252257 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.510 252257 INFO nova.virt.node [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Determined node identity 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from /var/lib/nova/compute_id#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.511 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.512 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.512 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.512 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.527 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff0c4af06d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.530 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff0c4af06d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.531 252257 INFO nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.536 252257 INFO nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <host>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <uuid>9fe13708-3578-4487-abe9-9bea2dcb1209</uuid>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <arch>x86_64</arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <microcode version='16777317'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='x2apic'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='tsc-deadline'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='osxsave'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='hypervisor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='tsc_adjust'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='spec-ctrl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='stibp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='arch-capabilities'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='cmp_legacy'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='topoext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='virt-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='lbrv'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='tsc-scale'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='vmcb-clean'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='pause-filter'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='pfthreshold'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='rdctl-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='mds-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature name='pschange-mc-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <pages unit='KiB' size='4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <pages unit='KiB' size='2048'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <power_management>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <suspend_mem/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </power_management>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <iommu support='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <migration_features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <live/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <uri_transports>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <uri_transport>tcp</uri_transport>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <uri_transport>rdma</uri_transport>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </uri_transports>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </migration_features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <topology>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <cells num='1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <cell id='0'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          <memory unit='KiB'>7864324</memory>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          <distances>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <sibling id='0' value='10'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          </distances>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          <cpus num='8'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:          </cpus>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        </cell>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </cells>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </topology>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <cache>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </cache>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <secmodel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model>selinux</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <doi>0</doi>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </secmodel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <secmodel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model>dac</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <doi>0</doi>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </secmodel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </host>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <guest>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <os_type>hvm</os_type>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <arch name='i686'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <wordsize>32</wordsize>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <domain type='qemu'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <domain type='kvm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <pae/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <nonpae/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <apic default='on' toggle='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <cpuselection/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <deviceboot/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <externalSnapshot/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </guest>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <guest>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <os_type>hvm</os_type>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <arch name='x86_64'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <wordsize>64</wordsize>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <domain type='qemu'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <domain type='kvm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <acpi default='on' toggle='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <apic default='on' toggle='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <cpuselection/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <deviceboot/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <externalSnapshot/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </guest>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 
Nov 29 02:36:15 np0005539563 nova_compute[252253]: </capabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.544 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.548 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:36:15 np0005539563 nova_compute[252253]: <domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <arch>i686</arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <vcpu max='240'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <os supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='firmware'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>rom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pflash</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>yes</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='secure'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </loader>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>file</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>memfd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </memoryBacking>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>disk</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>floppy</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>lun</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='bus'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ide</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>fdc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>scsi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>sata</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vnc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dbus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <video supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vga</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>none</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>bochs</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='mode'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>default</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>requisite</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>optional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pci</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>scsi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </hostdev>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>random</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>egd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>builtin</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>path</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>handle</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </filesystem>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>emulator</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>external</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>2.0</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </tpm>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='bus'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </redirdev>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pty</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>unix</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </channel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>qemu</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>builtin</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </crypto>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>default</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>passt</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>isa</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </panic>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <console supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>null</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pty</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dev</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>file</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pipe</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>stdio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>udp</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tcp</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>unix</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dbus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </console>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='features'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vapic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>runtime</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>synic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>stimer</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>reset</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ipi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>avic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <defaults>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </defaults>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </hyperv>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tdx</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </launchSecurity>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: </domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.560 252257 DEBUG nova.virt.libvirt.volume.mount [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.563 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:36:15 np0005539563 nova_compute[252253]: <domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <arch>i686</arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <vcpu max='4096'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <os supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='firmware'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>rom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pflash</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>yes</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='secure'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </loader>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>file</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>memfd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </memoryBacking>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>disk</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>floppy</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>lun</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='bus'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>fdc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>scsi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>sata</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vnc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dbus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <video supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vga</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>none</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>bochs</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='mode'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>default</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>requisite</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>optional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pci</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>scsi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </hostdev>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>random</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>egd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>builtin</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>path</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>handle</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </filesystem>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>emulator</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>external</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>2.0</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </tpm>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='bus'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </redirdev>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pty</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>unix</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </channel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>qemu</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>builtin</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </crypto>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>default</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>passt</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>isa</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </panic>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <console supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>null</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pty</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dev</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>file</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pipe</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>stdio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>udp</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tcp</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>unix</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dbus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </console>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='features'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vapic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>runtime</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>synic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>stimer</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>reset</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ipi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>avic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <defaults>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </defaults>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </hyperv>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tdx</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </launchSecurity>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: </domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.584 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.588 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:36:15 np0005539563 nova_compute[252253]: <domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <arch>x86_64</arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <vcpu max='4096'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <os supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='firmware'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>efi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>rom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pflash</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>yes</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='secure'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>yes</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </loader>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='IvyBridge-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='KnightsMill'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='KnightsMill-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4fmaps'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-4vnniw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512er'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512pf'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fma4'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tbm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xop'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SierraForest'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='SierraForest-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-ne-convert'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cmpccxadd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='core-capability'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='split-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Snowridge-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='athlon'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='athlon-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='core2duo'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='core2duo-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='coreduo'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='coreduo-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='n270'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='n270-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='phenom'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='phenom-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnow'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='3dnowext'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <memoryBacking supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='sourceType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>file</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>anonymous</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>memfd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </memoryBacking>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <disk supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='diskDevice'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>disk</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>cdrom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>floppy</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>lun</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='bus'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>fdc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>scsi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>sata</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <graphics supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vnc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>egl-headless</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dbus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <video supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='modelType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vga</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>cirrus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>none</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>bochs</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ramfb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <hostdev supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='mode'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>subsystem</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='startupPolicy'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>default</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>mandatory</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>requisite</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>optional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='subsysType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pci</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>scsi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='capsType'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='pciBackend'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </hostdev>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <rng supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtio-non-transitional</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>random</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>egd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>builtin</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <filesystem supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='driverType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>path</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>handle</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>virtiofs</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </filesystem>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <tpm supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tpm-tis</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tpm-crb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>emulator</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>external</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendVersion'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>2.0</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </tpm>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <redirdev supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='bus'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>usb</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </redirdev>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <channel supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pty</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>unix</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </channel>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <crypto supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>qemu</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendModel'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>builtin</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </crypto>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <interface supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='backendType'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>default</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>passt</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <panic supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='model'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>isa</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>hyperv</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </panic>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <console supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>null</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vc</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pty</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dev</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>file</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pipe</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>stdio</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>udp</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tcp</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>unix</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>qemu-vdagent</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>dbus</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </console>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <gic supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <vmcoreinfo supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <genid supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <backingStoreInput supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <backup supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <async-teardown supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <ps2 supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <sev supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <sgx supported='no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <hyperv supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='features'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>relaxed</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vapic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>spinlocks</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vpindex</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>runtime</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>synic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>stimer</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>reset</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>vendor_id</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>frequencies</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>reenlightenment</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tlbflush</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>ipi</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>avic</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>emsr_bitmap</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>xmm_input</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <defaults>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <spinlocks>4095</spinlocks>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <stimer_direct>on</stimer_direct>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </defaults>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </hyperv>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <launchSecurity supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='sectype'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>tdx</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </launchSecurity>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: </domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 02:36:15 np0005539563 nova_compute[252253]: 2025-11-29 07:36:15.649 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:36:15 np0005539563 nova_compute[252253]: <domainCapabilities>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <domain>kvm</domain>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <arch>x86_64</arch>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <vcpu max='240'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <iothreads supported='yes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <os supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <enum name='firmware'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <loader supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='type'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>rom</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>pflash</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='readonly'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>yes</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='secure'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>no</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </loader>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:  <cpu>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='maximum' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <enum name='maximumMigratable'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>on</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <value>off</value>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </enum>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='host-model' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <vendor>AMD</vendor>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='x2apic'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='stibp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='succor'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lbrv'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    </mode>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:    <mode name='custom' supported='yes'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Broadwell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Cooperlake-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mpx'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Denverton-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Dhyana-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='auto-ibrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amd-psfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='no-nested-data-bp'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='null-sel-clr-base'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='stibp-always-on'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='EPYC-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-int8'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='amx-tile'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx-vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-128'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-256'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx10-512'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-bf16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-fp16'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='bus-lock-detect'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='cldemote'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fbsdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrc'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrs'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fzrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='mcdt-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdir64b'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='movdiri'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pbrsb-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='prefetchiti'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='psdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='serialize'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ss'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='tsx-ldtrk'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xfd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Haswell-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bitalg'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512bw'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512cd'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512dq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512f'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512ifma'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vbmi2'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vl'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='avx512vnni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='erms'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='fsrm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='gfni'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='hle'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='ibrs-all'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='invpcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='la57'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pcid'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='pku'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='rtm'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='taa-no'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vaes'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='vpclmulqdq'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:        <feature name='xsaves'/>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      </blockers>
Nov 29 02:36:15 np0005539563 nova_compute[252253]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:37:58 np0005539563 rsyslogd[1003]: imjournal: 1667 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 02:37:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:37:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:37:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:58.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 28m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:37:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:37:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:37:59 np0005539563 ceph-mon[74338]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:37:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:38:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:38:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:38:04.881 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:38:04.881 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:38:04.881 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 355eeb5d-3fb9-446c-a05b-954efcfd2c01 does not exist
Nov 29 02:38:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 75c7ce97-ed7d-4a5d-958e-b6af19958869 does not exist
Nov 29 02:38:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4d57b3a5-4b34-4f47-b585-6b35586b22df does not exist
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:38:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:38:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:06 np0005539563 podman[254804]: 2025-11-29 07:38:06.511588512 +0000 UTC m=+0.031171758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:06.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:06 np0005539563 podman[254804]: 2025-11-29 07:38:06.690026091 +0000 UTC m=+0.209609337 container create e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:07 np0005539563 systemd[1]: Started libpod-conmon-e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248.scope.
Nov 29 02:38:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:38:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(49) init, last seen epoch 49, mid-election, bumping
Nov 29 02:38:07 np0005539563 podman[254804]: 2025-11-29 07:38:07.344332722 +0000 UTC m=+0.863915948 container init e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:38:07 np0005539563 podman[254804]: 2025-11-29 07:38:07.356279452 +0000 UTC m=+0.875862678 container start e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:38:07 np0005539563 mystifying_cori[254821]: 167 167
Nov 29 02:38:07 np0005539563 systemd[1]: libpod-e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248.scope: Deactivated successfully.
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:07 np0005539563 podman[254804]: 2025-11-29 07:38:07.465047452 +0000 UTC m=+0.984630678 container attach e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:07 np0005539563 podman[254804]: 2025-11-29 07:38:07.466311586 +0000 UTC m=+0.985894812 container died e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f09ad41411237a75f59665fdb09f6c1db8506ac2e7917525e0057d6f770aed80-merged.mount: Deactivated successfully.
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 28m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:38:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:38:08 np0005539563 podman[254804]: 2025-11-29 07:38:08.070518152 +0000 UTC m=+1.590101388 container remove e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:08 np0005539563 systemd[1]: libpod-conmon-e73dc3dd8d67f66b4336f42baca42931e1c2c68057ba536d235a7cdf6dad9248.scope: Deactivated successfully.
Nov 29 02:38:08 np0005539563 podman[254847]: 2025-11-29 07:38:08.317530521 +0000 UTC m=+0.042547603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:08 np0005539563 podman[254847]: 2025-11-29 07:38:08.458576716 +0000 UTC m=+0.183593768 container create 2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:38:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:08.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539563 systemd[1]: Started libpod-conmon-2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0.scope.
Nov 29 02:38:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:38:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71861ed9c1d0f81fe70c808f42c1257d124206847248b60cb30d7ecb299bf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71861ed9c1d0f81fe70c808f42c1257d124206847248b60cb30d7ecb299bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71861ed9c1d0f81fe70c808f42c1257d124206847248b60cb30d7ecb299bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71861ed9c1d0f81fe70c808f42c1257d124206847248b60cb30d7ecb299bf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71861ed9c1d0f81fe70c808f42c1257d124206847248b60cb30d7ecb299bf1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:09 np0005539563 podman[254847]: 2025-11-29 07:38:09.010896801 +0000 UTC m=+0.735913853 container init 2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:38:09 np0005539563 podman[254847]: 2025-11-29 07:38:09.017751254 +0000 UTC m=+0.742768256 container start 2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:38:09 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 02:38:09 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 02:38:09 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:09 np0005539563 ceph-mon[74338]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:38:09 np0005539563 ceph-mon[74338]: Cluster is now healthy
Nov 29 02:38:09 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:38:09 np0005539563 podman[254847]: 2025-11-29 07:38:09.137059767 +0000 UTC m=+0.862076759 container attach 2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:38:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:09 np0005539563 nice_mayer[254865]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:38:09 np0005539563 nice_mayer[254865]: --> relative data size: 1.0
Nov 29 02:38:09 np0005539563 nice_mayer[254865]: --> All data devices are unavailable
Nov 29 02:38:09 np0005539563 systemd[1]: libpod-2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0.scope: Deactivated successfully.
Nov 29 02:38:09 np0005539563 podman[254847]: 2025-11-29 07:38:09.85181189 +0000 UTC m=+1.576828892 container died 2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1e71861ed9c1d0f81fe70c808f42c1257d124206847248b60cb30d7ecb299bf1-merged.mount: Deactivated successfully.
Nov 29 02:38:10 np0005539563 podman[254847]: 2025-11-29 07:38:10.425930399 +0000 UTC m=+2.150947401 container remove 2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:38:10 np0005539563 systemd[1]: libpod-conmon-2f2253cefa591acba425eca74804f7905d1ad4418288800dc8d4dc91152870c0.scope: Deactivated successfully.
Nov 29 02:38:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:10.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.070601122 +0000 UTC m=+0.070508024 container create 3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.024103683 +0000 UTC m=+0.024010635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:11 np0005539563 systemd[1]: Started libpod-conmon-3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806.scope.
Nov 29 02:38:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.308719222 +0000 UTC m=+0.308626124 container init 3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.31459858 +0000 UTC m=+0.314505482 container start 3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:11 np0005539563 eloquent_mahavira[255052]: 167 167
Nov 29 02:38:11 np0005539563 systemd[1]: libpod-3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806.scope: Deactivated successfully.
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.369837213 +0000 UTC m=+0.369744145 container attach 3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.370558462 +0000 UTC m=+0.370465364 container died 3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:38:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-36d3770f4f740dcb3faeb8aedd27ffc9b74d66930a970f5c69b49bd384bbd0e1-merged.mount: Deactivated successfully.
Nov 29 02:38:11 np0005539563 podman[255035]: 2025-11-29 07:38:11.819211644 +0000 UTC m=+0.819118556 container remove 3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:38:11 np0005539563 systemd[1]: libpod-conmon-3330c20caa95f73f78f959d1ee4059ac96d34618f095985b7615a02ffabdc806.scope: Deactivated successfully.
Nov 29 02:38:12 np0005539563 podman[255075]: 2025-11-29 07:38:12.044394617 +0000 UTC m=+0.088976028 container create d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:12 np0005539563 podman[255075]: 2025-11-29 07:38:11.976841604 +0000 UTC m=+0.021423075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:12 np0005539563 systemd[1]: Started libpod-conmon-d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb.scope.
Nov 29 02:38:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:38:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d64d4742e5ea22c50bcbefe25afcefdad86c6b26ee0366c830a42ac5961631/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d64d4742e5ea22c50bcbefe25afcefdad86c6b26ee0366c830a42ac5961631/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d64d4742e5ea22c50bcbefe25afcefdad86c6b26ee0366c830a42ac5961631/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d64d4742e5ea22c50bcbefe25afcefdad86c6b26ee0366c830a42ac5961631/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:12 np0005539563 podman[255075]: 2025-11-29 07:38:12.250975162 +0000 UTC m=+0.295556603 container init d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_goldstine, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:38:12 np0005539563 podman[255075]: 2025-11-29 07:38:12.258832233 +0000 UTC m=+0.303413674 container start d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:38:12 np0005539563 podman[255075]: 2025-11-29 07:38:12.327038483 +0000 UTC m=+0.371619884 container attach d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 02:38:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:12.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:12.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:38:12
Nov 29 02:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.meta']
Nov 29 02:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]: {
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:    "0": [
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:        {
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "devices": [
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "/dev/loop3"
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            ],
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "lv_name": "ceph_lv0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "lv_size": "7511998464",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "name": "ceph_lv0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "tags": {
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.cluster_name": "ceph",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.crush_device_class": "",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.encrypted": "0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.osd_id": "0",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.type": "block",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:                "ceph.vdo": "0"
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            },
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "type": "block",
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:            "vg_name": "ceph_vg0"
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:        }
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]:    ]
Nov 29 02:38:13 np0005539563 priceless_goldstine[255092]: }
Nov 29 02:38:13 np0005539563 systemd[1]: libpod-d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb.scope: Deactivated successfully.
Nov 29 02:38:13 np0005539563 podman[255075]: 2025-11-29 07:38:13.080178017 +0000 UTC m=+1.124759428 container died d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-12d64d4742e5ea22c50bcbefe25afcefdad86c6b26ee0366c830a42ac5961631-merged.mount: Deactivated successfully.
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:13 np0005539563 podman[255075]: 2025-11-29 07:38:13.182061732 +0000 UTC m=+1.226643133 container remove d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:38:13 np0005539563 systemd[1]: libpod-conmon-d293f3d5326a51d2d64352d518a6d4f38820ea0cf4b44def16e1d7bd4b9dccdb.scope: Deactivated successfully.
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:38:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.770435243 +0000 UTC m=+0.046392966 container create f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:13 np0005539563 systemd[1]: Started libpod-conmon-f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68.scope.
Nov 29 02:38:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.83142645 +0000 UTC m=+0.107384243 container init f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.838330095 +0000 UTC m=+0.114287828 container start f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.841636454 +0000 UTC m=+0.117594197 container attach f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.747165448 +0000 UTC m=+0.023123281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:13 np0005539563 systemd[1]: libpod-f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68.scope: Deactivated successfully.
Nov 29 02:38:13 np0005539563 vibrant_carver[255271]: 167 167
Nov 29 02:38:13 np0005539563 conmon[255271]: conmon f66675bfae71d0a95f59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68.scope/container/memory.events
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.844609084 +0000 UTC m=+0.120566817 container died f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:38:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3adf6fd9858206df4d12aa13738a9e76bcf90e55870b0daef4c001d46d887b40-merged.mount: Deactivated successfully.
Nov 29 02:38:13 np0005539563 podman[255255]: 2025-11-29 07:38:13.880672232 +0000 UTC m=+0.156629965 container remove f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:13 np0005539563 systemd[1]: libpod-conmon-f66675bfae71d0a95f59fc7925f5a8f5a335d51dd6465e933f00cd5c565b3e68.scope: Deactivated successfully.
Nov 29 02:38:14 np0005539563 podman[255293]: 2025-11-29 07:38:14.045187728 +0000 UTC m=+0.045650787 container create 55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:38:14 np0005539563 systemd[1]: Started libpod-conmon-55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0.scope.
Nov 29 02:38:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:38:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e1c1e3f268f2ff64cef06f056f428056d350765e87a42bf7c8d780640ff8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:14 np0005539563 podman[255293]: 2025-11-29 07:38:14.023315901 +0000 UTC m=+0.023778980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e1c1e3f268f2ff64cef06f056f428056d350765e87a42bf7c8d780640ff8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e1c1e3f268f2ff64cef06f056f428056d350765e87a42bf7c8d780640ff8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee7e1c1e3f268f2ff64cef06f056f428056d350765e87a42bf7c8d780640ff8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:14 np0005539563 podman[255293]: 2025-11-29 07:38:14.137352501 +0000 UTC m=+0.137815580 container init 55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:38:14 np0005539563 podman[255293]: 2025-11-29 07:38:14.14513737 +0000 UTC m=+0.145600429 container start 55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:38:14 np0005539563 podman[255293]: 2025-11-29 07:38:14.148752227 +0000 UTC m=+0.149215296 container attach 55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:14.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]: {
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:        "osd_id": 0,
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:        "type": "bluestore"
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]:    }
Nov 29 02:38:14 np0005539563 hopeful_bohr[255309]: }
Nov 29 02:38:15 np0005539563 systemd[1]: libpod-55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0.scope: Deactivated successfully.
Nov 29 02:38:15 np0005539563 podman[255293]: 2025-11-29 07:38:15.018135831 +0000 UTC m=+1.018598890 container died 55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:38:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ee7e1c1e3f268f2ff64cef06f056f428056d350765e87a42bf7c8d780640ff8c-merged.mount: Deactivated successfully.
Nov 29 02:38:15 np0005539563 podman[255293]: 2025-11-29 07:38:15.32621006 +0000 UTC m=+1.326673119 container remove 55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:38:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:38:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:38:15 np0005539563 systemd[1]: libpod-conmon-55e3ccf4da5cdbe776e40dfa01d73e7f3ce1607170b4048b13ad8795b97bc0e0.scope: Deactivated successfully.
Nov 29 02:38:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e984a0cb-1cdc-4552-86eb-ceb18d705ef5 does not exist
Nov 29 02:38:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 46a6f02f-fc8b-4498-8d3e-4f0612f5fa72 does not exist
Nov 29 02:38:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 428c6727-2210-4d10-a4fe-6f224a9dd202 does not exist
Nov 29 02:38:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:16.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:38:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:16.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:38:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4232 writes, 18K keys, 4229 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4232 writes, 4229 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 918 writes, 4283 keys, 918 commit groups, 1.0 writes per commit group, ingest: 6.49 MB, 0.01 MB/s#012Interval WAL: 918 writes, 918 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.9      3.26              0.08         9    0.362       0      0       0.0       0.0#012  L6      1/0    8.34 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0     12.4     10.3      6.63              0.23         8    0.829     36K   4381       0.0       0.0#012 Sum      1/0    8.34 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0      8.3      9.2      9.89              0.31        17    0.582     36K   4381       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.2      5.4      5.0      5.78              0.12         6    0.964     15K   2008       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     12.4     10.3      6.63              0.23         8    0.829     36K   4381       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.9      3.26              0.08         8    0.407       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.022, interval 0.005#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 9.9 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 5.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 308.00 MB usage: 5.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000188 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(285,5.09 MB,1.65237%) FilterBlock(18,111.80 KB,0.035447%) IndexBlock(18,216.45 KB,0.0686299%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
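The RocksDB stats dump above is a single multi-line journald record: journald renders embedded control characters as `#NNN` octal escapes, so `#012` is a newline and `#033` is ESC (the start of an ANSI color sequence such as `#033[00m`). A small helper (a sketch, not part of any tool logged here) restores the original layout for reading:

```python
import re

def decode_journald_escapes(line: str) -> str:
    """Turn journald's #NNN octal escapes back into control characters."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

def strip_ansi(text: str) -> str:
    """Drop ANSI color sequences (e.g. ESC[00m) left by colorized loggers."""
    return re.sub(r"\x1b\[[0-9;]*m", "", text)

raw = "Uptime(secs): 1800.0 total#012Cumulative writes: 4232 writes#033[00m"
print(strip_ansi(decode_journald_escapes(raw)))
```

Piping a `journalctl` dump through these two functions turns the `** DB Stats **` block back into the readable table RocksDB originally emitted.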
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.926 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.927 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.957 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.957 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.958 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.969 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.969 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.970 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.970 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.970 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.970 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.970 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.971 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.971 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.998 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.998 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.999 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.999 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:38:16 np0005539563 nova_compute[252253]: 2025-11-29 07:38:16.999 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:38:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:38:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753995405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.511 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.680 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.681 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5232MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.682 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.682 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.778 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.778 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:38:17 np0005539563 nova_compute[252253]: 2025-11-29 07:38:17.798 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:38:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:38:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142815245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:38:18 np0005539563 nova_compute[252253]: 2025-11-29 07:38:18.243 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:38:18 np0005539563 nova_compute[252253]: 2025-11-29 07:38:18.248 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:38:18 np0005539563 nova_compute[252253]: 2025-11-29 07:38:18.271 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
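Placement derives the schedulable capacity of each inventory class as (total − reserved) × allocation_ratio. Applying that formula to the inventory data reported in the line above (a sketch; only the fields needed for the computation are kept):

```python
# Inventory as reported by nova for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
}

def schedulable(inv: dict) -> dict:
    # Effective capacity per resource class: (total - reserved) * allocation_ratio
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(schedulable(inventory))
```

So this node can hand out up to 32 vCPUs (8 physical × 4.0 overcommit), 7168 MB of RAM, and 18 GB of disk, which is consistent with the "Final resource view" line reporting 0 of each currently used.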
Nov 29 02:38:18 np0005539563 nova_compute[252253]: 2025-11-29 07:38:18.272 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:38:18 np0005539563 nova_compute[252253]: 2025-11-29 07:38:18.273 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:38:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:18.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:20.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:38:21.757 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:38:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:38:21.758 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:38:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:38:21.758 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:38:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
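The radosgw `beast:` access lines recurring above (the anonymous `HEAD /` probes from 192.168.122.100/.102 every two seconds look like load-balancer health checks) follow a fixed layout. A small parser can extract the fields; the group names are my own labels, not radosgw terminology:

```python
import re

# Matches: beast: 0x...: <client> - <user> [<time>] "<request>" <status> <bytes> ... latency=<secs>s
BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:07:38:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')
m = BEAST_RE.search(line)
print(m.group("client"), m.group("status"), float(m.group("latency")))
```

This is handy for aggregating per-client request counts or latency percentiles out of a journalctl dump without deploying a full log pipeline.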
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
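The pg_autoscaler's "pg target" figures above are reproducible: the raw target is the pool's share of raw capacity times its bias times the cluster-wide PG budget. The sketch below assumes the defaults that fit these numbers exactly, `mon_target_pg_per_osd = 100` with the 3 OSDs this 21 GiB cluster appears to have; treat both as assumptions read off the log, not configuration I can see:

```python
TARGET_PG_PER_OSD = 100  # assumed default mon_target_pg_per_osd
NUM_OSDS = 3             # assumed from cluster size; not stated in this excerpt

def raw_pg_target(usage_ratio: float, bias: float) -> float:
    # Raw (pre-quantization) pg target as printed by the autoscaler
    return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

# Pool '.mgr': "using 2.0538165363856318e-05 of space, bias 1.0"
print(raw_pg_target(2.0538165363856318e-05, 1.0))   # matches "pg target 0.006161..."
# Pool 'cephfs.cephfs.meta': "using 1.4540294062907128e-06 of space, bias 4.0"
print(raw_pg_target(1.4540294062907128e-06, 4.0))   # matches "pg target 0.001744..."
```

The raw target is then rounded to a power of two, and by default the autoscaler only changes `pg_num` when the result differs from the current value by roughly a factor of three, which is why every pool in this nearly empty cluster stays at its current count ("quantized to 32 (current 32)" and so on).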
Nov 29 02:38:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:23 np0005539563 podman[255493]: 2025-11-29 07:38:23.511035045 +0000 UTC m=+0.063032053 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 02:38:23 np0005539563 podman[255492]: 2025-11-29 07:38:23.510426358 +0000 UTC m=+0.062321604 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:38:23 np0005539563 podman[255494]: 2025-11-29 07:38:23.547779641 +0000 UTC m=+0.098851704 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=ovn_controller)
Nov 29 02:38:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:24.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:26.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:26.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:28.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:28.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:30.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:30.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:32.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:32.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:34.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:34.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:36.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:36.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:38.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:40.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:40.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:42.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:42.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:44.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:44.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:46.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:46.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:48.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:50.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:38:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:38:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:52.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:54 np0005539563 podman[255615]: 2025-11-29 07:38:54.506521986 +0000 UTC m=+0.056351693 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:38:54 np0005539563 podman[255614]: 2025-11-29 07:38:54.506537687 +0000 UTC m=+0.055962183 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:38:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:54 np0005539563 podman[255616]: 2025-11-29 07:38:54.537489507 +0000 UTC m=+0.079246837 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:38:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:38:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:54.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:56.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:38:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:58.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:00.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:00.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:02.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:39:04.882 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:39:04.883 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:39:04.883 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:06.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:39:12
Nov 29 02:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', 'volumes', 'default.rgw.log']
Nov 29 02:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:39:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:39:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:39:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:14.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:15 np0005539563 ceph-mgr[74636]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 02:39:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:16.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:39:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:39:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:39:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:39:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 60c17ec5-dc15-407e-b17c-569212a44d82 does not exist
Nov 29 02:39:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7d52232b-df9a-4f43-8179-c17807b49b6d does not exist
Nov 29 02:39:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1baf515d-70d1-42e0-b203-5694317ddc86 does not exist
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:39:17 np0005539563 podman[256010]: 2025-11-29 07:39:17.825293755 +0000 UTC m=+0.023525823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:18 np0005539563 podman[256010]: 2025-11-29 07:39:18.112525774 +0000 UTC m=+0.310757862 container create 6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_satoshi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:18 np0005539563 systemd[1]: Started libpod-conmon-6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1.scope.
Nov 29 02:39:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.276 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.277 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.277 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.278 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.314 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.314 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.315 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.315 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.315 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.315 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.317 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.317 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.318 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.362 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.363 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.363 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.363 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.363 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:39:18 np0005539563 podman[256010]: 2025-11-29 07:39:18.506288192 +0000 UTC m=+0.704520340 container init 6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:39:18 np0005539563 podman[256010]: 2025-11-29 07:39:18.522684102 +0000 UTC m=+0.720916170 container start 6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:18 np0005539563 dreamy_satoshi[256026]: 167 167
Nov 29 02:39:18 np0005539563 systemd[1]: libpod-6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1.scope: Deactivated successfully.
Nov 29 02:39:18 np0005539563 podman[256010]: 2025-11-29 07:39:18.533846202 +0000 UTC m=+0.732078280 container attach 6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:39:18 np0005539563 conmon[256026]: conmon 6e2b7b5ea0a169459ff1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1.scope/container/memory.events
Nov 29 02:39:18 np0005539563 podman[256010]: 2025-11-29 07:39:18.534703794 +0000 UTC m=+0.732935842 container died 6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_satoshi, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:18.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-505626a8a5c5afdf4a91913cc96ad44de89ffcb43261d0d80dfeb652fb7f0bea-merged.mount: Deactivated successfully.
Nov 29 02:39:18 np0005539563 podman[256010]: 2025-11-29 07:39:18.583823133 +0000 UTC m=+0.782055191 container remove 6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:39:18 np0005539563 systemd[1]: libpod-conmon-6e2b7b5ea0a169459ff13efbcd24af266e50e05991dc3d5e51a50b8ad8fdcce1.scope: Deactivated successfully.
Nov 29 02:39:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:18.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539563 podman[256072]: 2025-11-29 07:39:18.772474266 +0000 UTC m=+0.055088719 container create 0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:39:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:39:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871685077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:39:18 np0005539563 nova_compute[252253]: 2025-11-29 07:39:18.820 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:39:18 np0005539563 podman[256072]: 2025-11-29 07:39:18.744071634 +0000 UTC m=+0.026686117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:18 np0005539563 systemd[1]: Started libpod-conmon-0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183.scope.
Nov 29 02:39:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:39:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77863aa754e371e25a6c6e2f7283b911e3d9f450b9aa5efdaa731c24968d17bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77863aa754e371e25a6c6e2f7283b911e3d9f450b9aa5efdaa731c24968d17bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77863aa754e371e25a6c6e2f7283b911e3d9f450b9aa5efdaa731c24968d17bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77863aa754e371e25a6c6e2f7283b911e3d9f450b9aa5efdaa731c24968d17bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77863aa754e371e25a6c6e2f7283b911e3d9f450b9aa5efdaa731c24968d17bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.015 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.017 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.018 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.018 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.090 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.090 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.106 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:39:19 np0005539563 podman[256072]: 2025-11-29 07:39:19.126079326 +0000 UTC m=+0.408693879 container init 0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shaw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:39:19 np0005539563 podman[256072]: 2025-11-29 07:39:19.138925751 +0000 UTC m=+0.421540234 container start 0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:39:19 np0005539563 podman[256072]: 2025-11-29 07:39:19.171021332 +0000 UTC m=+0.453635785 container attach 0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:39:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:39:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741539133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.543 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.550 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.565 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.567 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:39:19 np0005539563 nova_compute[252253]: 2025-11-29 07:39:19.568 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:39:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:19 np0005539563 great_shaw[256091]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:39:19 np0005539563 great_shaw[256091]: --> relative data size: 1.0
Nov 29 02:39:19 np0005539563 great_shaw[256091]: --> All data devices are unavailable
Nov 29 02:39:20 np0005539563 systemd[1]: libpod-0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183.scope: Deactivated successfully.
Nov 29 02:39:20 np0005539563 podman[256072]: 2025-11-29 07:39:20.024370196 +0000 UTC m=+1.306984689 container died 0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:39:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-77863aa754e371e25a6c6e2f7283b911e3d9f450b9aa5efdaa731c24968d17bd-merged.mount: Deactivated successfully.
Nov 29 02:39:20 np0005539563 podman[256072]: 2025-11-29 07:39:20.339852214 +0000 UTC m=+1.622466667 container remove 0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shaw, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:39:20 np0005539563 systemd[1]: libpod-conmon-0fcd958065a5ec90c1b266c2a06c43ef4b9a4bbac6cfd26e30c806ff73dd9183.scope: Deactivated successfully.
Nov 29 02:39:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:20.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:20 np0005539563 podman[256281]: 2025-11-29 07:39:20.932069248 +0000 UTC m=+0.070950715 container create 68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:39:20 np0005539563 podman[256281]: 2025-11-29 07:39:20.884712227 +0000 UTC m=+0.023593784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:21 np0005539563 systemd[1]: Started libpod-conmon-68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c.scope.
Nov 29 02:39:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:39:21 np0005539563 podman[256281]: 2025-11-29 07:39:21.409129881 +0000 UTC m=+0.548011448 container init 68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:21 np0005539563 podman[256281]: 2025-11-29 07:39:21.416962812 +0000 UTC m=+0.555844279 container start 68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:39:21 np0005539563 gracious_ramanujan[256297]: 167 167
Nov 29 02:39:21 np0005539563 systemd[1]: libpod-68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c.scope: Deactivated successfully.
Nov 29 02:39:21 np0005539563 podman[256281]: 2025-11-29 07:39:21.423130827 +0000 UTC m=+0.562012324 container attach 68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:39:21 np0005539563 podman[256281]: 2025-11-29 07:39:21.423539059 +0000 UTC m=+0.562420526 container died 68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:39:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c34b9ff7098c335a4212a10ffb73bf21bc56f230b4fce9a967d5d51d7a94309b-merged.mount: Deactivated successfully.
Nov 29 02:39:21 np0005539563 podman[256281]: 2025-11-29 07:39:21.50893463 +0000 UTC m=+0.647816117 container remove 68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:39:21 np0005539563 systemd[1]: libpod-conmon-68b141664e31ad1c7ef1f175a9666a99281848e529aa79bb8f9c39a10a1e856c.scope: Deactivated successfully.
Nov 29 02:39:21 np0005539563 podman[256321]: 2025-11-29 07:39:21.655702389 +0000 UTC m=+0.023377778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:22 np0005539563 podman[256321]: 2025-11-29 07:39:22.167212328 +0000 UTC m=+0.534887707 container create c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:39:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:22.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:39:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:23 np0005539563 systemd[1]: Started libpod-conmon-c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd.scope.
Nov 29 02:39:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:39:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2d43fa344ecc46bb34239d9093287bf48f7ca1791c36dccc0a9542c3262e37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2d43fa344ecc46bb34239d9093287bf48f7ca1791c36dccc0a9542c3262e37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2d43fa344ecc46bb34239d9093287bf48f7ca1791c36dccc0a9542c3262e37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2d43fa344ecc46bb34239d9093287bf48f7ca1791c36dccc0a9542c3262e37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:24 np0005539563 podman[256321]: 2025-11-29 07:39:24.13342077 +0000 UTC m=+2.501096189 container init c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:39:24 np0005539563 podman[256321]: 2025-11-29 07:39:24.143627813 +0000 UTC m=+2.511303182 container start c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:39:24 np0005539563 podman[256321]: 2025-11-29 07:39:24.161065322 +0000 UTC m=+2.528740701 container attach c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:39:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:24.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539563 charming_almeida[256388]: {
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:    "0": [
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:        {
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "devices": [
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "/dev/loop3"
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            ],
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "lv_name": "ceph_lv0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "lv_size": "7511998464",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "name": "ceph_lv0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "tags": {
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.cluster_name": "ceph",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.crush_device_class": "",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.encrypted": "0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.osd_id": "0",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.type": "block",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:                "ceph.vdo": "0"
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            },
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "type": "block",
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:            "vg_name": "ceph_vg0"
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:        }
Nov 29 02:39:24 np0005539563 charming_almeida[256388]:    ]
Nov 29 02:39:24 np0005539563 charming_almeida[256388]: }
Nov 29 02:39:24 np0005539563 systemd[1]: libpod-c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd.scope: Deactivated successfully.
Nov 29 02:39:24 np0005539563 podman[256321]: 2025-11-29 07:39:24.967148335 +0000 UTC m=+3.334823714 container died c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:39:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ae2d43fa344ecc46bb34239d9093287bf48f7ca1791c36dccc0a9542c3262e37-merged.mount: Deactivated successfully.
Nov 29 02:39:25 np0005539563 podman[256321]: 2025-11-29 07:39:25.047946604 +0000 UTC m=+3.415621973 container remove c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:39:25 np0005539563 systemd[1]: libpod-conmon-c7dd1f471666daa7e6890e0043cc5167b3f5b66161ddce04665c309195bd5bdd.scope: Deactivated successfully.
Nov 29 02:39:25 np0005539563 podman[256406]: 2025-11-29 07:39:25.080598091 +0000 UTC m=+0.080784959 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:39:25 np0005539563 podman[256399]: 2025-11-29 07:39:25.097536736 +0000 UTC m=+0.098909526 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:25 np0005539563 podman[256409]: 2025-11-29 07:39:25.129460032 +0000 UTC m=+0.127524963 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:39:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.640799096 +0000 UTC m=+0.039111210 container create eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:39:25 np0005539563 systemd[1]: Started libpod-conmon-eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7.scope.
Nov 29 02:39:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.624247352 +0000 UTC m=+0.022559486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.730824962 +0000 UTC m=+0.129137096 container init eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.738004356 +0000 UTC m=+0.136316470 container start eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:39:25 np0005539563 blissful_curran[256630]: 167 167
Nov 29 02:39:25 np0005539563 systemd[1]: libpod-eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7.scope: Deactivated successfully.
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.744198822 +0000 UTC m=+0.142510936 container attach eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:39:25 np0005539563 conmon[256630]: conmon eba89943ffeaedc15c56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7.scope/container/memory.events
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.745240969 +0000 UTC m=+0.143553103 container died eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:39:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4d1c590aae4dfa95f076ede107ad8ecdad9ec45162bedbc54b166ae6472d06bf-merged.mount: Deactivated successfully.
Nov 29 02:39:25 np0005539563 podman[256614]: 2025-11-29 07:39:25.790800792 +0000 UTC m=+0.189112936 container remove eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:39:25 np0005539563 systemd[1]: libpod-conmon-eba89943ffeaedc15c5658a4930e5f0d9c2080438ebb02f727ae49173227c8d7.scope: Deactivated successfully.
Nov 29 02:39:25 np0005539563 podman[256655]: 2025-11-29 07:39:25.95724859 +0000 UTC m=+0.046564472 container create 400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:25 np0005539563 systemd[1]: Started libpod-conmon-400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006.scope.
Nov 29 02:39:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:39:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c8639452e399de657bebd51e5211a6c9b1c1a63abfbfd5d2056a5e6ef0a66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c8639452e399de657bebd51e5211a6c9b1c1a63abfbfd5d2056a5e6ef0a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c8639452e399de657bebd51e5211a6c9b1c1a63abfbfd5d2056a5e6ef0a66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c8639452e399de657bebd51e5211a6c9b1c1a63abfbfd5d2056a5e6ef0a66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:26 np0005539563 podman[256655]: 2025-11-29 07:39:25.933315877 +0000 UTC m=+0.022631779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:26 np0005539563 podman[256655]: 2025-11-29 07:39:26.038615363 +0000 UTC m=+0.127931265 container init 400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:39:26 np0005539563 podman[256655]: 2025-11-29 07:39:26.045701923 +0000 UTC m=+0.135017805 container start 400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:39:26 np0005539563 podman[256655]: 2025-11-29 07:39:26.04967169 +0000 UTC m=+0.138987602 container attach 400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539563 nervous_germain[256671]: {
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:        "osd_id": 0,
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:        "type": "bluestore"
Nov 29 02:39:26 np0005539563 nervous_germain[256671]:    }
Nov 29 02:39:26 np0005539563 nervous_germain[256671]: }
Nov 29 02:39:26 np0005539563 systemd[1]: libpod-400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006.scope: Deactivated successfully.
Nov 29 02:39:26 np0005539563 podman[256655]: 2025-11-29 07:39:26.874352363 +0000 UTC m=+0.963668285 container died 400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:39:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e35c8639452e399de657bebd51e5211a6c9b1c1a63abfbfd5d2056a5e6ef0a66-merged.mount: Deactivated successfully.
Nov 29 02:39:28 np0005539563 podman[256655]: 2025-11-29 07:39:28.137295649 +0000 UTC m=+2.226611531 container remove 400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:39:28 np0005539563 systemd[1]: libpod-conmon-400836fd0281b57dfe3cfbe2b0567a36540ecad10db2556ea59ab1b38434e006.scope: Deactivated successfully.
Nov 29 02:39:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:39:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:39:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e2d530b8-df2b-4485-978e-ab8326778e62 does not exist
Nov 29 02:39:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6e9c2a8c-a957-4212-ac8f-0914cb2d4e9e does not exist
Nov 29 02:39:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4c188fc0-44b1-4512-a1ec-c3e2c9e965ec does not exist
Nov 29 02:39:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:28.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:39:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:30.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:32.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:32.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:34.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:34.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:36.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:36.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:38.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:39:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:42.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:44.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=404 latency=0.003000078s ======
Nov 29 02:39:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.096 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.003000078s
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - - [29/Nov/2025:07:39:52.117 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:52.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:39:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:55 np0005539563 podman[256823]: 2025-11-29 07:39:55.515664607 +0000 UTC m=+0.070210839 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:39:55 np0005539563 podman[256822]: 2025-11-29 07:39:55.533946169 +0000 UTC m=+0.088600083 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:39:55 np0005539563 podman[256824]: 2025-11-29 07:39:55.539597108 +0000 UTC m=+0.089254101 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:39:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 02:39:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:58.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:39:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 02:39:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 02:40:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:00.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:00.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 102 B/s wr, 0 op/s
Nov 29 02:40:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:40:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 02:40:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 02:40:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:02.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:02.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 02:40:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Nov 29 02:40:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:04.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:40:04.884 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:40:04.885 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:40:04.885 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Nov 29 02:40:05 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:40:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:06.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:06.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 383 B/s wr, 0 op/s
Nov 29 02:40:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:08.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 432 B/s rd, 648 B/s wr, 1 op/s
Nov 29 02:40:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:10.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:10.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 511 B/s wr, 1 op/s
Nov 29 02:40:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:12.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:40:12
Nov 29 02:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms']
Nov 29 02:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:40:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:12.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 586 B/s wr, 5 op/s
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:40:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:40:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:14.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 683 KiB/s wr, 6 op/s
Nov 29 02:40:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 02:40:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:16.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 683 KiB/s wr, 6 op/s
Nov 29 02:40:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:18.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:18 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 02:40:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:18.903610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:40:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 02:40:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402018903918, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2129, "num_deletes": 251, "total_data_size": 3920211, "memory_usage": 3963504, "flush_reason": "Manual Compaction"}
Nov 29 02:40:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 02:40:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 02:40:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 8.4 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 683 KiB/s wr, 8 op/s
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.570 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.571 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.571 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.572 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.588 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.588 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.589 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.589 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.589 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.589 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.589 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.590 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.608 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.608 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.608 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.608 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:40:19 np0005539563 nova_compute[252253]: 2025-11-29 07:40:19.609 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:40:19 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402019977740, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3821151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17843, "largest_seqno": 19971, "table_properties": {"data_size": 3811565, "index_size": 6016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20101, "raw_average_key_size": 20, "raw_value_size": 3792071, "raw_average_value_size": 3869, "num_data_blocks": 268, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401768, "oldest_key_time": 1764401768, "file_creation_time": 1764402018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:40:19 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 1074240 microseconds, and 22288 cpu microseconds.
Nov 29 02:40:19 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:40:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:19.977943) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3821151 bytes OK
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:19.978024) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:20.173519) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:20.173577) EVENT_LOG_v1 {"time_micros": 1764402020173568, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:20.173602) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3911513, prev total WAL file size 3913294, number of live WAL files 2.
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:20.175451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3731KB)], [41(8540KB)]
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402020175622, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 12566132, "oldest_snapshot_seqno": -1}
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:40:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232024832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.544 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:40:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:20.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.723 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.725 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5213MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.725 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.726 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.805 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.806 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:40:20 np0005539563 nova_compute[252253]: 2025-11-29 07:40:20.835 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:40:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:20.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 8.4 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 819 KiB/s wr, 9 op/s
Nov 29 02:40:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:40:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926580274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.246 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.251 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.267 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.268 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.268 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.358 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:21 np0005539563 nova_compute[252253]: 2025-11-29 07:40:21.380 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:40:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:22.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000745008317048204 of space, bias 1.0, pg target 0.2235024951144612 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:40:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 16 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 1.6 MiB/s wr, 4 op/s
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4837 keys, 10483296 bytes, temperature: kUnknown
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402023230255, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 10483296, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10448128, "index_size": 22002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 121960, "raw_average_key_size": 25, "raw_value_size": 10357601, "raw_average_value_size": 2141, "num_data_blocks": 910, "num_entries": 4837, "num_filter_entries": 4837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402020, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.425541) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10483296 bytes
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.889504) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 4.1 rd, 3.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 5366, records dropped: 529 output_compression: NoCompression
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.889656) EVENT_LOG_v1 {"time_micros": 1764402023889560, "job": 20, "event": "compaction_finished", "compaction_time_micros": 3054974, "compaction_time_cpu_micros": 45867, "output_level": 6, "num_output_files": 1, "total_output_size": 10483296, "num_input_records": 5366, "num_output_records": 4837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402023892481, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402023896196, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:20.175153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.896426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.896479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.896482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.896485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:40:23.896507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:40:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:24.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 16 MiB data, 169 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 819 KiB/s wr, 3 op/s
Nov 29 02:40:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:26 np0005539563 podman[257047]: 2025-11-29 07:40:26.531709966 +0000 UTC m=+0.084175057 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 02:40:26 np0005539563 podman[257046]: 2025-11-29 07:40:26.553314685 +0000 UTC m=+0.108733773 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 29 02:40:26 np0005539563 podman[257048]: 2025-11-29 07:40:26.591297204 +0000 UTC m=+0.130187857 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:40:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 02:40:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:26.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.2 MiB/s wr, 3 op/s
Nov 29 02:40:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 02:40:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:28.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 02:40:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Nov 29 02:40:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:40:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:40:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:30.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 1.2 MiB/s wr, 3 op/s
Nov 29 02:40:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:32.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 4.9 KiB/s rd, 455 KiB/s wr, 6 op/s
Nov 29 02:40:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:34.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:34.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 455 KiB/s wr, 6 op/s
Nov 29 02:40:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:40:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:36.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 614 B/s wr, 6 op/s
Nov 29 02:40:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:40:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:38.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 497 B/s wr, 4 op/s
Nov 29 02:40:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:40.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:40.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:40:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 426 B/s wr, 3 op/s
Nov 29 02:40:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:42.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d2a205fb-4000-4eaf-901c-e2fe19132ad9 does not exist
Nov 29 02:40:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2143f66c-ad2e-4718-b91f-07a18554ee44 does not exist
Nov 29 02:40:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f36d7918-77ea-4bad-9b2e-b3a9740e14bd does not exist
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:40:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 29 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 683 KiB/s wr, 3 op/s
Nov 29 02:40:43 np0005539563 podman[257504]: 2025-11-29 07:40:43.464356722 +0000 UTC m=+0.019878334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:40:43 np0005539563 podman[257504]: 2025-11-29 07:40:43.733594249 +0000 UTC m=+0.289115881 container create 239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:40:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:40:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 9356 writes, 36K keys, 9356 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9356 writes, 2106 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 642 writes, 1218 keys, 642 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 642 writes, 276 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:40:44 np0005539563 systemd[1]: Started libpod-conmon-239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2.scope.
Nov 29 02:40:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:40:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:44.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:44 np0005539563 podman[257504]: 2025-11-29 07:40:44.699482782 +0000 UTC m=+1.255004394 container init 239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:40:44 np0005539563 podman[257504]: 2025-11-29 07:40:44.709029093 +0000 UTC m=+1.264550705 container start 239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:40:44 np0005539563 laughing_bardeen[257571]: 167 167
Nov 29 02:40:44 np0005539563 systemd[1]: libpod-239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2.scope: Deactivated successfully.
Nov 29 02:40:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 29 02:40:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 33 MiB data, 186 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 2 op/s
Nov 29 02:40:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:46.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:40:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:46.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:40:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Nov 29 02:40:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:40:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:48.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Nov 29 02:40:50 np0005539563 podman[257504]: 2025-11-29 07:40:50.471449264 +0000 UTC m=+7.026970946 container attach 239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:40:50 np0005539563 podman[257504]: 2025-11-29 07:40:50.472645755 +0000 UTC m=+7.028167347 container died 239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:40:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:50.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Nov 29 02:40:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.7 MiB/s wr, 11 op/s
Nov 29 02:40:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:54.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7d62a0921d7f753f995232bdb4966fd99fe4952aaa8269fe40ff667b500838e8-merged.mount: Deactivated successfully.
Nov 29 02:40:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 02:40:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 02:40:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.0 MiB/s wr, 11 op/s
Nov 29 02:40:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 29 02:40:56 np0005539563 podman[257504]: 2025-11-29 07:40:56.298439391 +0000 UTC m=+12.853960993 container remove 239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:40:56 np0005539563 systemd[1]: libpod-conmon-239d7fc079819ecc690b24b1ed311c6f2184eed4bac243cdbcb20a7e33e92bd2.scope: Deactivated successfully.
Nov 29 02:40:56 np0005539563 podman[257603]: 2025-11-29 07:40:56.469785582 +0000 UTC m=+0.021299282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:40:56 np0005539563 podman[257603]: 2025-11-29 07:40:56.620665532 +0000 UTC m=+0.172179212 container create 016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:40:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:40:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:40:57 np0005539563 systemd[1]: Started libpod-conmon-016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049.scope.
Nov 29 02:40:57 np0005539563 podman[257618]: 2025-11-29 07:40:57.055688783 +0000 UTC m=+0.395250394 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:40:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:40:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b59ea88c237488bd433e00cc798d117ca49777465059da98a36db13435557acb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b59ea88c237488bd433e00cc798d117ca49777465059da98a36db13435557acb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b59ea88c237488bd433e00cc798d117ca49777465059da98a36db13435557acb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b59ea88c237488bd433e00cc798d117ca49777465059da98a36db13435557acb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b59ea88c237488bd433e00cc798d117ca49777465059da98a36db13435557acb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:57 np0005539563 podman[257603]: 2025-11-29 07:40:57.155466219 +0000 UTC m=+0.706979929 container init 016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:40:57 np0005539563 podman[257619]: 2025-11-29 07:40:57.155807338 +0000 UTC m=+0.496120169 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:40:57 np0005539563 podman[257603]: 2025-11-29 07:40:57.164600569 +0000 UTC m=+0.716114249 container start 016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_yalow, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:40:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Nov 29 02:40:57 np0005539563 podman[257603]: 2025-11-29 07:40:57.30403077 +0000 UTC m=+0.855544460 container attach 016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:40:57 np0005539563 podman[257617]: 2025-11-29 07:40:57.439228758 +0000 UTC m=+0.783759910 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:40:58 np0005539563 vibrant_yalow[257660]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:40:58 np0005539563 vibrant_yalow[257660]: --> relative data size: 1.0
Nov 29 02:40:58 np0005539563 vibrant_yalow[257660]: --> All data devices are unavailable
Nov 29 02:40:58 np0005539563 systemd[1]: libpod-016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049.scope: Deactivated successfully.
Nov 29 02:40:58 np0005539563 podman[257603]: 2025-11-29 07:40:58.026992228 +0000 UTC m=+1.578505898 container died 016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:40:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:58.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:40:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:58.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s wr, 0 op/s
Nov 29 02:40:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b59ea88c237488bd433e00cc798d117ca49777465059da98a36db13435557acb-merged.mount: Deactivated successfully.
Nov 29 02:40:59 np0005539563 podman[257603]: 2025-11-29 07:40:59.838414485 +0000 UTC m=+3.389928165 container remove 016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:40:59 np0005539563 systemd[1]: libpod-conmon-016c0e3cea92da77d17534cd137bebc59708a1cc3298f3a2a36434e6ddb4e049.scope: Deactivated successfully.
Nov 29 02:41:00 np0005539563 podman[257846]: 2025-11-29 07:41:00.473120821 +0000 UTC m=+0.022974255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:41:00 np0005539563 podman[257846]: 2025-11-29 07:41:00.645332763 +0000 UTC m=+0.195186147 container create d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:41:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:00 np0005539563 systemd[1]: Started libpod-conmon-d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503.scope.
Nov 29 02:41:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:41:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 02:41:01 np0005539563 podman[257846]: 2025-11-29 07:41:01.344677441 +0000 UTC m=+0.894530855 container init d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:41:01 np0005539563 podman[257846]: 2025-11-29 07:41:01.351251234 +0000 UTC m=+0.901104618 container start d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noyce, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:41:01 np0005539563 epic_noyce[257864]: 167 167
Nov 29 02:41:01 np0005539563 systemd[1]: libpod-d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503.scope: Deactivated successfully.
Nov 29 02:41:01 np0005539563 podman[257846]: 2025-11-29 07:41:01.547271743 +0000 UTC m=+1.097125127 container attach d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:41:01 np0005539563 podman[257846]: 2025-11-29 07:41:01.548225359 +0000 UTC m=+1.098078743 container died d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:41:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2aae8d319c9a50aaf241c0c45edbbb3b3b7f55452c9e28b357b190f726292ac0-merged.mount: Deactivated successfully.
Nov 29 02:41:02 np0005539563 podman[257846]: 2025-11-29 07:41:02.328188857 +0000 UTC m=+1.878042251 container remove d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:41:02 np0005539563 systemd[1]: libpod-conmon-d6d22103715f49786cee1f29472839e09049a753639575bae0b076d9dbb6d503.scope: Deactivated successfully.
Nov 29 02:41:02 np0005539563 podman[257889]: 2025-11-29 07:41:02.525603434 +0000 UTC m=+0.041807032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:41:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:02.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:02 np0005539563 podman[257889]: 2025-11-29 07:41:02.706213777 +0000 UTC m=+0.222417345 container create 24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:41:02 np0005539563 systemd[1]: Started libpod-conmon-24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715.scope.
Nov 29 02:41:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:41:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17152f4dab3a3bc34b0275e815ef86a1ff874ec7bdafa77e1223d54113e40ad0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17152f4dab3a3bc34b0275e815ef86a1ff874ec7bdafa77e1223d54113e40ad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17152f4dab3a3bc34b0275e815ef86a1ff874ec7bdafa77e1223d54113e40ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17152f4dab3a3bc34b0275e815ef86a1ff874ec7bdafa77e1223d54113e40ad0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:02.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:03 np0005539563 podman[257889]: 2025-11-29 07:41:03.104554331 +0000 UTC m=+0.620757919 container init 24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:41:03 np0005539563 podman[257889]: 2025-11-29 07:41:03.110442746 +0000 UTC m=+0.626646314 container start 24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:41:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 02:41:03 np0005539563 podman[257889]: 2025-11-29 07:41:03.312626279 +0000 UTC m=+0.828829877 container attach 24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]: {
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:    "0": [
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:        {
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "devices": [
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "/dev/loop3"
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            ],
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "lv_name": "ceph_lv0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "lv_size": "7511998464",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "name": "ceph_lv0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "tags": {
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.cluster_name": "ceph",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.crush_device_class": "",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.encrypted": "0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.osd_id": "0",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.type": "block",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:                "ceph.vdo": "0"
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            },
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "type": "block",
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:            "vg_name": "ceph_vg0"
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:        }
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]:    ]
Nov 29 02:41:03 np0005539563 intelligent_cerf[257906]: }
Nov 29 02:41:03 np0005539563 systemd[1]: libpod-24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715.scope: Deactivated successfully.
Nov 29 02:41:03 np0005539563 podman[257889]: 2025-11-29 07:41:03.890498538 +0000 UTC m=+1.406702106 container died 24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:41:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:04.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:41:04.886 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:41:04.887 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:41:04.887 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 02:41:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-17152f4dab3a3bc34b0275e815ef86a1ff874ec7bdafa77e1223d54113e40ad0-merged.mount: Deactivated successfully.
Nov 29 02:41:05 np0005539563 podman[257889]: 2025-11-29 07:41:05.874768585 +0000 UTC m=+3.390972153 container remove 24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:41:05 np0005539563 systemd[1]: libpod-conmon-24458a8f378ac8dde23a0cc4c264433818535d538df0162ccd1efc6244863715.scope: Deactivated successfully.
Nov 29 02:41:06 np0005539563 podman[258118]: 2025-11-29 07:41:06.657369604 +0000 UTC m=+0.062552628 container create 237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:41:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539563 systemd[1]: Started libpod-conmon-237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8.scope.
Nov 29 02:41:06 np0005539563 podman[258118]: 2025-11-29 07:41:06.617201117 +0000 UTC m=+0.022384161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:41:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:41:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:06.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:07 np0005539563 podman[258118]: 2025-11-29 07:41:07.078645802 +0000 UTC m=+0.483828846 container init 237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:41:07 np0005539563 podman[258118]: 2025-11-29 07:41:07.085898093 +0000 UTC m=+0.491081117 container start 237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:41:07 np0005539563 stoic_stonebraker[258134]: 167 167
Nov 29 02:41:07 np0005539563 systemd[1]: libpod-237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8.scope: Deactivated successfully.
Nov 29 02:41:07 np0005539563 podman[258118]: 2025-11-29 07:41:07.207453353 +0000 UTC m=+0.612636377 container attach 237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:41:07 np0005539563 podman[258118]: 2025-11-29 07:41:07.207928975 +0000 UTC m=+0.613111999 container died 237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:41:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 427 B/s wr, 3 op/s
Nov 29 02:41:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e2f89fa1941dc7009100c632cbcc3cbdd2a2c9060452e8a2f3abe004d2c93fbe-merged.mount: Deactivated successfully.
Nov 29 02:41:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:08.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Nov 29 02:41:09 np0005539563 podman[258118]: 2025-11-29 07:41:09.250015233 +0000 UTC m=+2.655198297 container remove 237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:41:09 np0005539563 systemd[1]: libpod-conmon-237e319689351ec3e5734ace39b004a3bc8ab0139f4548cc1ede41de769a11a8.scope: Deactivated successfully.
Nov 29 02:41:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:09 np0005539563 podman[258161]: 2025-11-29 07:41:09.454381233 +0000 UTC m=+0.039158482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:41:09 np0005539563 podman[258161]: 2025-11-29 07:41:09.933970115 +0000 UTC m=+0.518747314 container create e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:41:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:10.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:10 np0005539563 systemd[1]: Started libpod-conmon-e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747.scope.
Nov 29 02:41:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:41:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36be8ee02e04fae49bc8b6d822ce27e641802e451d3ad7ac919dfd09290cf501/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36be8ee02e04fae49bc8b6d822ce27e641802e451d3ad7ac919dfd09290cf501/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36be8ee02e04fae49bc8b6d822ce27e641802e451d3ad7ac919dfd09290cf501/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36be8ee02e04fae49bc8b6d822ce27e641802e451d3ad7ac919dfd09290cf501/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:10 np0005539563 podman[258161]: 2025-11-29 07:41:10.932907158 +0000 UTC m=+1.517684377 container init e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:41:10 np0005539563 podman[258161]: 2025-11-29 07:41:10.950363037 +0000 UTC m=+1.535140206 container start e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:41:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]: {
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:        "osd_id": 0,
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:        "type": "bluestore"
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]:    }
Nov 29 02:41:11 np0005539563 priceless_meninsky[258177]: }
Nov 29 02:41:11 np0005539563 podman[258161]: 2025-11-29 07:41:11.827397331 +0000 UTC m=+2.412174520 container attach e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:41:11 np0005539563 systemd[1]: libpod-e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747.scope: Deactivated successfully.
Nov 29 02:41:11 np0005539563 podman[258161]: 2025-11-29 07:41:11.867391454 +0000 UTC m=+2.452168693 container died e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:41:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:12.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:41:12
Nov 29 02:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms']
Nov 29 02:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:41:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-36be8ee02e04fae49bc8b6d822ce27e641802e451d3ad7ac919dfd09290cf501-merged.mount: Deactivated successfully.
Nov 29 02:41:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:13 np0005539563 podman[258161]: 2025-11-29 07:41:13.36114723 +0000 UTC m=+3.945924439 container remove e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:41:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:41:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:41:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:41:13 np0005539563 systemd[1]: libpod-conmon-e6a356c317ccf92813f945c07cc52c0add9982fe214157c54b022bc55f4b1747.scope: Deactivated successfully.
Nov 29 02:41:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ebe7c162-bce2-4961-a7f6-2e11898373f8 does not exist
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 21b50fee-de1f-4681-810b-495bd3fcf2b8 does not exist
Nov 29 02:41:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5531ea05-a668-4c4a-ac2d-f45dc537832c does not exist
Nov 29 02:41:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:14.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:14 np0005539563 nova_compute[252253]: 2025-11-29 07:41:14.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:14 np0005539563 nova_compute[252253]: 2025-11-29 07:41:14.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:41:14 np0005539563 nova_compute[252253]: 2025-11-29 07:41:14.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:41:14 np0005539563 nova_compute[252253]: 2025-11-29 07:41:14.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:14 np0005539563 nova_compute[252253]: 2025-11-29 07:41:14.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:41:14 np0005539563 nova_compute[252253]: 2025-11-29 07:41:14.713 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:41:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:41:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:15 np0005539563 nova_compute[252253]: 2025-11-29 07:41:15.725 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:16 np0005539563 nova_compute[252253]: 2025-11-29 07:41:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:16.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:16 np0005539563 nova_compute[252253]: 2025-11-29 07:41:16.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:16 np0005539563 nova_compute[252253]: 2025-11-29 07:41:16.704 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:16 np0005539563 nova_compute[252253]: 2025-11-29 07:41:16.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:16 np0005539563 nova_compute[252253]: 2025-11-29 07:41:16.705 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:41:16 np0005539563 nova_compute[252253]: 2025-11-29 07:41:16.706 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:16.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:41:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821876993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:41:17 np0005539563 nova_compute[252253]: 2025-11-29 07:41:17.597 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:17 np0005539563 nova_compute[252253]: 2025-11-29 07:41:17.795 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:41:17 np0005539563 nova_compute[252253]: 2025-11-29 07:41:17.796 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5223MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:41:17 np0005539563 nova_compute[252253]: 2025-11-29 07:41:17.796 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:17 np0005539563 nova_compute[252253]: 2025-11-29 07:41:17.797 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.123 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.123 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.292 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.501 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.502 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.538 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.575 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:41:18 np0005539563 nova_compute[252253]: 2025-11-29 07:41:18.600 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:18.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:18.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:41:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175367771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:41:19 np0005539563 nova_compute[252253]: 2025-11-29 07:41:19.070 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:19 np0005539563 nova_compute[252253]: 2025-11-29 07:41:19.075 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:41:19 np0005539563 nova_compute[252253]: 2025-11-29 07:41:19.110 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:41:19 np0005539563 nova_compute[252253]: 2025-11-29 07:41:19.112 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:41:19 np0005539563 nova_compute[252253]: 2025-11-29 07:41:19.112 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.108 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.108 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.109 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.109 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.141 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.142 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.142 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.143 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.143 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:20 np0005539563 nova_compute[252253]: 2025-11-29 07:41:20.143 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:41:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:21 np0005539563 nova_compute[252253]: 2025-11-29 07:41:21.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:41:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:41:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:22.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.003000078s ======
Nov 29 02:41:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:24.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Nov 29 02:41:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:24.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:41:25.220 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:41:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:41:25.221 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:41:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:26.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:27 np0005539563 podman[258364]: 2025-11-29 07:41:27.587566167 +0000 UTC m=+0.121078357 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:41:27 np0005539563 podman[258366]: 2025-11-29 07:41:27.591521141 +0000 UTC m=+0.111026023 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:41:27 np0005539563 podman[258365]: 2025-11-29 07:41:27.596826741 +0000 UTC m=+0.130737313 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:41:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:41:28.224 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:41:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:28.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.399 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7e858991-fb4d-470d-a63e-bf5f72d59c34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.399 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7e858991-fb4d-470d-a63e-bf5f72d59c34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.512 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.684 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.685 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.691 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.691 252257 INFO nova.compute.claims [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:41:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:30.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:30 np0005539563 nova_compute[252253]: 2025-11-29 07:41:30.855 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:30.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.531 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.537 252257 DEBUG nova.compute.provider_tree [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.559 252257 DEBUG nova.scheduler.client.report [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.604 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.606 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:41:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:32.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.887 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.943 252257 INFO nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:41:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:32.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:32 np0005539563 nova_compute[252253]: 2025-11-29 07:41:32.987 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.133 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.135 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.135 252257 INFO nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Creating image(s)#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.167 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.200 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.227 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.231 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:33 np0005539563 nova_compute[252253]: 2025-11-29 07:41:33.232 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:34 np0005539563 nova_compute[252253]: 2025-11-29 07:41:34.189 252257 DEBUG nova.virt.libvirt.imagebackend [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/1be11678-cfa4-4dee-b54c-6c7e547e5a6a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/1be11678-cfa4-4dee-b54c-6c7e547e5a6a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:41:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:34.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:34.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:36.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:38 np0005539563 nova_compute[252253]: 2025-11-29 07:41:38.010 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:38 np0005539563 nova_compute[252253]: 2025-11-29 07:41:38.106 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:38 np0005539563 nova_compute[252253]: 2025-11-29 07:41:38.108 252257 DEBUG nova.virt.images [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] 1be11678-cfa4-4dee-b54c-6c7e547e5a6a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:41:38 np0005539563 nova_compute[252253]: 2025-11-29 07:41:38.110 252257 DEBUG nova.privsep.utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:41:38 np0005539563 nova_compute[252253]: 2025-11-29 07:41:38.110 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:38.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:38.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:39 np0005539563 nova_compute[252253]: 2025-11-29 07:41:39.012 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.part /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted" returned: 0 in 0.902s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:39 np0005539563 nova_compute[252253]: 2025-11-29 07:41:39.017 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:39 np0005539563 nova_compute[252253]: 2025-11-29 07:41:39.079 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:39 np0005539563 nova_compute[252253]: 2025-11-29 07:41:39.080 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:39 np0005539563 nova_compute[252253]: 2025-11-29 07:41:39.105 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:39 np0005539563 nova_compute[252253]: 2025-11-29 07:41:39.108 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 687 KiB/s rd, 85 B/s wr, 5 op/s
Nov 29 02:41:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 29 02:41:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 02:41:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 29 02:41:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:40.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 29 02:41:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 29 02:41:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 02:41:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 29 02:41:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:42.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:42.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 55 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 303 KiB/s wr, 11 op/s
Nov 29 02:41:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:44.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:44 np0005539563 nova_compute[252253]: 2025-11-29 07:41:44.883 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:44.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 66 MiB data, 204 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 12 op/s
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.283 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] resizing rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.399 252257 DEBUG nova.objects.instance [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'migration_context' on Instance uuid 7e858991-fb4d-470d-a63e-bf5f72d59c34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.424 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.424 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Ensure instance console log exists: /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.425 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.426 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.426 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.429 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.435 252257 WARNING nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.485 252257 DEBUG nova.virt.libvirt.host [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.487 252257 DEBUG nova.virt.libvirt.host [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.490 252257 DEBUG nova.virt.libvirt.host [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.491 252257 DEBUG nova.virt.libvirt.host [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.492 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.492 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.493 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.493 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.493 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.494 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.494 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.494 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.494 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.494 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.495 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.495 252257 DEBUG nova.virt.hardware [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.498 252257 DEBUG nova.privsep.utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:41:45 np0005539563 nova_compute[252253]: 2025-11-29 07:41:45.498 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:41:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680067597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:41:46 np0005539563 nova_compute[252253]: 2025-11-29 07:41:46.033 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:46 np0005539563 nova_compute[252253]: 2025-11-29 07:41:46.060 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:46 np0005539563 nova_compute[252253]: 2025-11-29 07:41:46.064 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:41:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806798456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:41:46 np0005539563 nova_compute[252253]: 2025-11-29 07:41:46.483 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:46 np0005539563 nova_compute[252253]: 2025-11-29 07:41:46.486 252257 DEBUG nova.objects.instance [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e858991-fb4d-470d-a63e-bf5f72d59c34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:41:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:46.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:46.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 79 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 41 op/s
Nov 29 02:41:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:48.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539563 nova_compute[252253]: 2025-11-29 07:41:48.713 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <uuid>7e858991-fb4d-470d-a63e-bf5f72d59c34</uuid>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <name>instance-00000001</name>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:name>tempest-AutoAllocateNetworkTest-server-1397617352</nova:name>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:41:45</nova:creationTime>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:user uuid="cf2495f54add463c8ce9d2dd8623347c">tempest-AutoAllocateNetworkTest-752491155-project-member</nova:user>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <nova:project uuid="0d3a6ccbb2794f6e85d683953ac4b5fd">tempest-AutoAllocateNetworkTest-752491155</nova:project>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <entry name="serial">7e858991-fb4d-470d-a63e-bf5f72d59c34</entry>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <entry name="uuid">7e858991-fb4d-470d-a63e-bf5f72d59c34</entry>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/7e858991-fb4d-470d-a63e-bf5f72d59c34_disk">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/7e858991-fb4d-470d-a63e-bf5f72d59c34_disk.config">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/console.log" append="off"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:41:48 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:41:48 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:41:48 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:41:48 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:41:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:48.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.4 MiB/s wr, 36 op/s
Nov 29 02:41:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:50.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:50.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Nov 29 02:41:52 np0005539563 nova_compute[252253]: 2025-11-29 07:41:52.364 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:41:52 np0005539563 nova_compute[252253]: 2025-11-29 07:41:52.364 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:41:52 np0005539563 nova_compute[252253]: 2025-11-29 07:41:52.364 252257 INFO nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Using config drive#033[00m
Nov 29 02:41:52 np0005539563 nova_compute[252253]: 2025-11-29 07:41:52.388 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:52.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:52.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:53 np0005539563 nova_compute[252253]: 2025-11-29 07:41:53.062 252257 INFO nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Creating config drive at /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/disk.config#033[00m
Nov 29 02:41:53 np0005539563 nova_compute[252253]: 2025-11-29 07:41:53.070 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzxaiq6ze execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:53 np0005539563 nova_compute[252253]: 2025-11-29 07:41:53.215 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzxaiq6ze" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:41:53 np0005539563 nova_compute[252253]: 2025-11-29 07:41:53.246 252257 DEBUG nova.storage.rbd_utils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:41:53 np0005539563 nova_compute[252253]: 2025-11-29 07:41:53.250 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/disk.config 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:41:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Nov 29 02:41:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 29 02:41:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:54.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:54.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Nov 29 02:41:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 02:41:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 29 02:41:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:56.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:56.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:57 np0005539563 nova_compute[252253]: 2025-11-29 07:41:57.256 252257 DEBUG oslo_concurrency.processutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/disk.config 7e858991-fb4d-470d-a63e-bf5f72d59c34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:41:57 np0005539563 nova_compute[252253]: 2025-11-29 07:41:57.258 252257 INFO nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Deleting local config drive /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34/disk.config because it was imported into RBD.
Nov 29 02:41:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 818 B/s rd, 362 KiB/s wr, 1 op/s
Nov 29 02:41:57 np0005539563 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:41:57 np0005539563 systemd[1]: Started libvirt secret daemon.
Nov 29 02:41:57 np0005539563 systemd-machined[213024]: New machine qemu-1-instance-00000001.
Nov 29 02:41:57 np0005539563 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 29 02:41:58 np0005539563 podman[258880]: 2025-11-29 07:41:58.526438412 +0000 UTC m=+0.063611315 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 02:41:58 np0005539563 podman[258879]: 2025-11-29 07:41:58.534025012 +0000 UTC m=+0.083557650 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:41:58 np0005539563 podman[258881]: 2025-11-29 07:41:58.542044864 +0000 UTC m=+0.074539064 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:41:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:58.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:41:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.128 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402119.1271229, 7e858991-fb4d-470d-a63e-bf5f72d59c34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.129 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] VM Resumed (Lifecycle Event)
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.144 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.146 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.152 252257 INFO nova.virt.libvirt.driver [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance spawned successfully.
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.152 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:41:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 921 B/s rd, 53 KiB/s wr, 1 op/s
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.370 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.377 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.491 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.491 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402119.144132, 7e858991-fb4d-470d-a63e-bf5f72d59c34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.492 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] VM Started (Lifecycle Event)
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.595 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.596 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.597 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.597 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.598 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:41:59 np0005539563 nova_compute[252253]: 2025-11-29 07:41:59.599 252257 DEBUG nova.virt.libvirt.driver [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.200 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.205 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.243 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.260 252257 INFO nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Took 27.13 seconds to spawn the instance on the hypervisor.
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.263 252257 DEBUG nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.339 252257 INFO nova.compute.manager [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Took 29.72 seconds to build instance.
Nov 29 02:42:00 np0005539563 nova_compute[252253]: 2025-11-29 07:42:00.372 252257 DEBUG oslo_concurrency.lockutils [None req-ba91d262-805c-498b-8436-c97cc6986d51 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7e858991-fb4d-470d-a63e-bf5f72d59c34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:42:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:00.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:01.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 15 KiB/s wr, 22 op/s
Nov 29 02:42:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:02.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:03.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 985 KiB/s rd, 15 KiB/s wr, 45 op/s
Nov 29 02:42:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:04.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:04.887 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:42:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:04.887 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:42:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:04.887 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:42:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:05.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 80 op/s
Nov 29 02:42:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:06.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:07.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 02:42:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:08.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:09.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 02:42:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:10.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.052 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.052 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.105 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.255 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.256 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.263 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.263 252257 INFO nova.compute.claims [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:42:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 02:42:11 np0005539563 nova_compute[252253]: 2025-11-29 07:42:11.434 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133733945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.094 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.103 252257 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.149 252257 ERROR nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [req-f901d565-5fde-4010-8483-0893c58d77ba] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 190eff98-dce8-46c0-8a7d-870d6fa5cbbd.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-f901d565-5fde-4010-8483-0893c58d77ba"}]}
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.188 252257 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.231 252257 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.232 252257 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.279 252257 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.320 252257 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:42:12 np0005539563 nova_compute[252253]: 2025-11-29 07:42:12.458 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 29 02:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:42:12
Nov 29 02:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr']
Nov 29 02:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:42:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:12.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478996777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.060 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.068 252257 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:42:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.144 252257 DEBUG nova.scheduler.client.report [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updated inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.145 252257 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.145 252257 DEBUG nova.compute.provider_tree [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.197 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.199 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:42:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 88 MiB data, 221 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 523 KiB/s wr, 52 op/s
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.276 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.277 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.304 252257 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.336 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:42:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.817 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.819 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.819 252257 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Creating image(s)
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.849 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.880 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.908 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.912 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.966 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.966 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.967 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.967 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:42:13 np0005539563 nova_compute[252253]: 2025-11-29 07:42:13.997 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:42:14 np0005539563 nova_compute[252253]: 2025-11-29 07:42:14.001 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 29 02:42:14 np0005539563 nova_compute[252253]: 2025-11-29 07:42:14.227 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Automatically allocating a network for project 0d3a6ccbb2794f6e85d683953ac4b5fd. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Nov 29 02:42:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:42:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:14.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 98 MiB data, 230 MiB used, 21 GiB / 21 GiB avail; 289 KiB/s rd, 1.4 MiB/s wr, 23 op/s
Nov 29 02:42:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:42:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:42:16 np0005539563 nova_compute[252253]: 2025-11-29 07:42:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:42:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:16.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 124 MiB data, 257 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 3.0 MiB/s wr, 44 op/s
Nov 29 02:42:17 np0005539563 nova_compute[252253]: 2025-11-29 07:42:17.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:42:17 np0005539563 nova_compute[252253]: 2025-11-29 07:42:17.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:42:17 np0005539563 nova_compute[252253]: 2025-11-29 07:42:17.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:42:17 np0005539563 nova_compute[252253]: 2025-11-29 07:42:17.705 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 02:42:18 np0005539563 nova_compute[252253]: 2025-11-29 07:42:18.089 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:42:18 np0005539563 nova_compute[252253]: 2025-11-29 07:42:18.089 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:42:18 np0005539563 nova_compute[252253]: 2025-11-29 07:42:18.089 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:42:18 np0005539563 nova_compute[252253]: 2025-11-29 07:42:18.090 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7e858991-fb4d-470d-a63e-bf5f72d59c34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:42:18 np0005539563 nova_compute[252253]: 2025-11-29 07:42:18.655 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:42:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:18.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.227 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.245 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.245 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.247 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.248 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:42:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 136 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 93 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.451 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.452 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.452 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.452 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:42:19 np0005539563 nova_compute[252253]: 2025-11-29 07:42:19.453 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:42:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 29 02:42:20 np0005539563 nova_compute[252253]: 2025-11-29 07:42:20.555 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:42:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:42:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:20.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 159 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 389 KiB/s rd, 5.0 MiB/s wr, 84 op/s
Nov 29 02:42:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:42:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455888677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.473 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.523 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] resizing rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.690 252257 DEBUG nova.objects.instance [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'migration_context' on Instance uuid 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.715 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.715 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Ensure instance console log exists: /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.715 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.716 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.716 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.721 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.722 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.944 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.945 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4985MB free_disk=20.962398529052734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.945 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:21 np0005539563 nova_compute[252253]: 2025-11-29 07:42:21.945 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.055 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7e858991-fb4d-470d-a63e-bf5f72d59c34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.056 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.056 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.056 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.129 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355188258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.589 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.597 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.629 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.687 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:42:22 np0005539563 nova_compute[252253]: 2025-11-29 07:42:22.688 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031159850176809977 of space, bias 1.0, pg target 0.9347955053042993 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.119 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.120 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.121 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.121 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.121 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.121 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:42:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 159 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 89 op/s
Nov 29 02:42:23 np0005539563 nova_compute[252253]: 2025-11-29 07:42:23.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:42:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:24 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 02:42:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:24 np0005539563 podman[259661]: 2025-11-29 07:42:24.202785012 +0000 UTC m=+0.021744821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:24.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:24 np0005539563 podman[259661]: 2025-11-29 07:42:24.75243402 +0000 UTC m=+0.571393849 container create 3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:42:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 159 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 89 op/s
Nov 29 02:42:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:42:25 np0005539563 systemd[1]: Started libpod-conmon-3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca.scope.
Nov 29 02:42:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:42:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:25.792 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:42:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:25.793 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:42:26 np0005539563 podman[259661]: 2025-11-29 07:42:26.057012868 +0000 UTC m=+1.875972687 container init 3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:42:26 np0005539563 podman[259661]: 2025-11-29 07:42:26.06849576 +0000 UTC m=+1.887455559 container start 3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:42:26 np0005539563 awesome_solomon[259729]: 167 167
Nov 29 02:42:26 np0005539563 systemd[1]: libpod-3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca.scope: Deactivated successfully.
Nov 29 02:42:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:26.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:26 np0005539563 nova_compute[252253]: 2025-11-29 07:42:26.960 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Automatically allocated network: {'id': '6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'name': 'auto_allocated_network', 'tenant_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['54f77d9b-4fc0-4513-9e8e-0b66d5a5d1b2', 'd3409058-7381-4024-9d79-5f6d3aec308c'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-11-29T07:42:14Z', 'updated_at': '2025-11-29T07:42:25Z', 'revision_number': 4, 'project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Nov 29 02:42:26 np0005539563 nova_compute[252253]: 2025-11-29 07:42:26.984 252257 WARNING oslo_policy.policy [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 29 02:42:26 np0005539563 nova_compute[252253]: 2025-11-29 07:42:26.986 252257 WARNING oslo_policy.policy [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 29 02:42:26 np0005539563 nova_compute[252253]: 2025-11-29 07:42:26.992 252257 DEBUG nova.policy [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cf2495f54add463c8ce9d2dd8623347c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:42:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 172 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 29 02:42:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:42:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2934978996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:42:27 np0005539563 podman[259661]: 2025-11-29 07:42:27.731027493 +0000 UTC m=+3.549987312 container attach 3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:42:27 np0005539563 podman[259661]: 2025-11-29 07:42:27.732705609 +0000 UTC m=+3.551665428 container died 3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 29 02:42:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:28 np0005539563 nova_compute[252253]: 2025-11-29 07:42:28.518 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Successfully created port: c5179400-1023-4dc7-b67b-d922a2453db4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:42:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:28.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 179 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Nov 29 02:42:29 np0005539563 nova_compute[252253]: 2025-11-29 07:42:29.351 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Successfully updated port: c5179400-1023-4dc7-b67b-d922a2453db4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:42:29 np0005539563 nova_compute[252253]: 2025-11-29 07:42:29.379 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:42:29 np0005539563 nova_compute[252253]: 2025-11-29 07:42:29.380 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquired lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:42:29 np0005539563 nova_compute[252253]: 2025-11-29 07:42:29.380 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:42:29 np0005539563 nova_compute[252253]: 2025-11-29 07:42:29.686 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:42:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-02ffdf65994852cf3d75a8128260b14e67264cc617e6526fc891191b64424e7f-merged.mount: Deactivated successfully.
Nov 29 02:42:30 np0005539563 nova_compute[252253]: 2025-11-29 07:42:30.139 252257 DEBUG nova.compute.manager [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-changed-c5179400-1023-4dc7-b67b-d922a2453db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:42:30 np0005539563 nova_compute[252253]: 2025-11-29 07:42:30.139 252257 DEBUG nova.compute.manager [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Refreshing instance network info cache due to event network-changed-c5179400-1023-4dc7-b67b-d922a2453db4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:42:30 np0005539563 nova_compute[252253]: 2025-11-29 07:42:30.139 252257 DEBUG oslo_concurrency.lockutils [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:42:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:42:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:30.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:30.795 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 226 MiB data, 321 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 29 02:42:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.573 252257 DEBUG nova.network.neutron [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Updating instance_info_cache with network_info: [{"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.594 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Releasing lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.595 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Instance network_info: |[{"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.595 252257 DEBUG oslo_concurrency.lockutils [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.595 252257 DEBUG nova.network.neutron [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Refreshing network info cache for port c5179400-1023-4dc7-b67b-d922a2453db4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.598 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Start _get_guest_xml network_info=[{"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.603 252257 WARNING nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.608 252257 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.608 252257 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.611 252257 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.612 252257 DEBUG nova.virt.libvirt.host [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.613 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.613 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.613 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.614 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.614 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.614 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.614 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.614 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.615 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.615 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.615 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.615 252257 DEBUG nova.virt.hardware [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:42:31 np0005539563 nova_compute[252253]: 2025-11-29 07:42:31.618 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:31 np0005539563 podman[259661]: 2025-11-29 07:42:31.644648762 +0000 UTC m=+7.463608561 container remove 3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:42:31 np0005539563 systemd[1]: libpod-conmon-3414732a8af68dff0f9547e6a940740c59aa0df1bce61baf39104d3f67b667ca.scope: Deactivated successfully.
Nov 29 02:42:31 np0005539563 podman[259746]: 2025-11-29 07:42:31.759366336 +0000 UTC m=+2.305013762 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 02:42:31 np0005539563 podman[259747]: 2025-11-29 07:42:31.778839685 +0000 UTC m=+2.314831708 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:42:31 np0005539563 podman[259812]: 2025-11-29 07:42:31.799258428 +0000 UTC m=+0.025023840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:31 np0005539563 podman[259812]: 2025-11-29 07:42:31.913057468 +0000 UTC m=+0.138822860 container create d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:42:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:31 np0005539563 podman[259753]: 2025-11-29 07:42:31.932562357 +0000 UTC m=+2.461805598 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:42:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 02:42:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:31 np0005539563 systemd[1]: Started libpod-conmon-d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab.scope.
Nov 29 02:42:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 29 02:42:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:42:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da917cc4fd82da4585aa4ae4da49ba3a1a1d853c5cdc24978d846dc51b10eb14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da917cc4fd82da4585aa4ae4da49ba3a1a1d853c5cdc24978d846dc51b10eb14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da917cc4fd82da4585aa4ae4da49ba3a1a1d853c5cdc24978d846dc51b10eb14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da917cc4fd82da4585aa4ae4da49ba3a1a1d853c5cdc24978d846dc51b10eb14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455864033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:32 np0005539563 podman[259812]: 2025-11-29 07:42:32.129090211 +0000 UTC m=+0.354855663 container init d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:42:32 np0005539563 podman[259812]: 2025-11-29 07:42:32.141550059 +0000 UTC m=+0.367315451 container start d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.154 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:32 np0005539563 podman[259812]: 2025-11-29 07:42:32.155633021 +0000 UTC m=+0.381398443 container attach d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.184 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.188 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940101142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.625 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.627 252257 DEBUG nova.virt.libvirt.vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-3',id=4,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:13Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=7a2ac9f8-c588-434e-9da9-98a9d77f2e72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.627 252257 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.628 252257 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.631 252257 DEBUG nova.objects.instance [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.678 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <uuid>7a2ac9f8-c588-434e-9da9-98a9d77f2e72</uuid>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <name>instance-00000004</name>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:name>tempest-tempest.common.compute-instance-1667508244-3</nova:name>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:42:31</nova:creationTime>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:user uuid="cf2495f54add463c8ce9d2dd8623347c">tempest-AutoAllocateNetworkTest-752491155-project-member</nova:user>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:project uuid="0d3a6ccbb2794f6e85d683953ac4b5fd">tempest-AutoAllocateNetworkTest-752491155</nova:project>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <nova:port uuid="c5179400-1023-4dc7-b67b-d922a2453db4">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="fdfe:381f:8400::d4" ipVersion="6"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.1.0.26" ipVersion="4"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <entry name="serial">7a2ac9f8-c588-434e-9da9-98a9d77f2e72</entry>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <entry name="uuid">7a2ac9f8-c588-434e-9da9-98a9d77f2e72</entry>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk.config">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:44:0e:b2"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <target dev="tapc5179400-10"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/console.log" append="off"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:42:32 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:42:32 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:42:32 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:42:32 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.680 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Preparing to wait for external event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.681 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.681 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.682 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.683 252257 DEBUG nova.virt.libvirt.vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-3',id=4,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:42:13Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=7a2ac9f8-c588-434e-9da9-98a9d77f2e72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.683 252257 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.684 252257 DEBUG nova.network.os_vif_util [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.685 252257 DEBUG os_vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.737 252257 DEBUG ovsdbapp.backend.ovs_idl [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.738 252257 DEBUG ovsdbapp.backend.ovs_idl [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.738 252257 DEBUG ovsdbapp.backend.ovs_idl [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.760 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.761 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:42:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:32 np0005539563 nova_compute[252253]: 2025-11-29 07:42:32.762 252257 INFO oslo.privsep.daemon [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpcotd1dzb/privsep.sock']#033[00m
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2586963344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:42:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:33.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 251 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 74 op/s
Nov 29 02:42:33 np0005539563 charming_neumann[259855]: [
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:    {
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "available": false,
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "ceph_device": false,
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "lsm_data": {},
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "lvs": [],
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "path": "/dev/sr0",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "rejected_reasons": [
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "Has a FileSystem",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "Insufficient space (<5GB)"
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        ],
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        "sys_api": {
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "actuators": null,
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "device_nodes": "sr0",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "devname": "sr0",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "human_readable_size": "482.00 KB",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "id_bus": "ata",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "model": "QEMU DVD-ROM",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "nr_requests": "2",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "parent": "/dev/sr0",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "partitions": {},
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "path": "/dev/sr0",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "removable": "1",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "rev": "2.5+",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "ro": "0",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "rotational": "1",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "sas_address": "",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "sas_device_handle": "",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "scheduler_mode": "mq-deadline",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "sectors": 0,
Nov 29 02:42:33 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "sectorsize": "2048",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "size": 493568.0,
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "support_discard": "2048",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "type": "disk",
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:            "vendor": "QEMU"
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:        }
Nov 29 02:42:33 np0005539563 charming_neumann[259855]:    }
Nov 29 02:42:33 np0005539563 charming_neumann[259855]: ]
Nov 29 02:42:33 np0005539563 systemd[1]: libpod-d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab.scope: Deactivated successfully.
Nov 29 02:42:33 np0005539563 systemd[1]: libpod-d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab.scope: Consumed 1.318s CPU time.
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.468 252257 DEBUG nova.network.neutron [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Updated VIF entry in instance network info cache for port c5179400-1023-4dc7-b67b-d922a2453db4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.469 252257 DEBUG nova.network.neutron [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Updating instance_info_cache with network_info: [{"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.482 252257 INFO oslo.privsep.daemon [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.342 260824 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.346 260824 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.348 260824 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.349 260824 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260824#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.490 252257 DEBUG oslo_concurrency.lockutils [req-958bc8f9-5e12-42ff-a82b-4f65ed4d052c req-0e79f7dd-aeb3-4caa-80f3-79d45f0dfd5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:42:33 np0005539563 podman[261211]: 2025-11-29 07:42:33.491092427 +0000 UTC m=+0.026710215 container died d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.817 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.818 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5179400-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.819 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5179400-10, col_values=(('external_ids', {'iface-id': 'c5179400-1023-4dc7-b67b-d922a2453db4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:0e:b2', 'vm-uuid': '7a2ac9f8-c588-434e-9da9-98a9d77f2e72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:33 np0005539563 NetworkManager[48981]: <info>  [1764402153.8223] manager: (tapc5179400-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.820 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.824 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.829 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:33 np0005539563 nova_compute[252253]: 2025-11-29 07:42:33.830 252257 INFO os_vif [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10')#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.088 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.089 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.089 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] No VIF found with MAC fa:16:3e:44:0e:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.091 252257 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Using config drive#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.137 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-da917cc4fd82da4585aa4ae4da49ba3a1a1d853c5cdc24978d846dc51b10eb14-merged.mount: Deactivated successfully.
Nov 29 02:42:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:42:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.966 252257 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Creating config drive at /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/disk.config#033[00m
Nov 29 02:42:34 np0005539563 nova_compute[252253]: 2025-11-29 07:42:34.973 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tw3atty execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:35 np0005539563 nova_compute[252253]: 2025-11-29 07:42:35.103 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tw3atty" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:35 np0005539563 nova_compute[252253]: 2025-11-29 07:42:35.277 252257 DEBUG nova.storage.rbd_utils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] rbd image 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:42:35 np0005539563 nova_compute[252253]: 2025-11-29 07:42:35.281 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/disk.config 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:42:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 298 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.2 MiB/s wr, 122 op/s
Nov 29 02:42:35 np0005539563 podman[261211]: 2025-11-29 07:42:35.677481618 +0000 UTC m=+2.213099396 container remove d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:35 np0005539563 systemd[1]: libpod-conmon-d8e358240c7f5ad22e5aa42d8ef1d6bb53644667244eb79bafbe6c280889bcab.scope: Deactivated successfully.
Nov 29 02:42:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:42:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:42:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:36.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:42:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 342 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 133 KiB/s rd, 7.9 MiB/s wr, 112 op/s
Nov 29 02:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:38.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:38 np0005539563 nova_compute[252253]: 2025-11-29 07:42:38.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:39 np0005539563 nova_compute[252253]: 2025-11-29 07:42:39.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 352 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 118 KiB/s rd, 7.9 MiB/s wr, 107 op/s
Nov 29 02:42:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:42:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:40.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 6.2 MiB/s wr, 111 op/s
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:41 np0005539563 nova_compute[252253]: 2025-11-29 07:42:41.867 252257 DEBUG oslo_concurrency.processutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/disk.config 7a2ac9f8-c588-434e-9da9-98a9d77f2e72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:42:41 np0005539563 nova_compute[252253]: 2025-11-29 07:42:41.868 252257 INFO nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Deleting local config drive /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72/disk.config because it was imported into RBD.#033[00m
Nov 29 02:42:41 np0005539563 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:42:41 np0005539563 kernel: tapc5179400-10: entered promiscuous mode
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:41 np0005539563 NetworkManager[48981]: <info>  [1764402161.9309] manager: (tapc5179400-10): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 02:42:41 np0005539563 ovn_controller[148841]: 2025-11-29T07:42:41Z|00027|binding|INFO|Claiming lport c5179400-1023-4dc7-b67b-d922a2453db4 for this chassis.
Nov 29 02:42:41 np0005539563 ovn_controller[148841]: 2025-11-29T07:42:41Z|00028|binding|INFO|c5179400-1023-4dc7-b67b-d922a2453db4: Claiming fa:16:3e:44:0e:b2 10.1.0.26 fdfe:381f:8400::d4
Nov 29 02:42:41 np0005539563 nova_compute[252253]: 2025-11-29 07:42:41.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:42:41 np0005539563 nova_compute[252253]: 2025-11-29 07:42:41.938 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:41 np0005539563 systemd-udevd[261310]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:42:41 np0005539563 NetworkManager[48981]: <info>  [1764402161.9788] device (tapc5179400-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:42:41 np0005539563 NetworkManager[48981]: <info>  [1764402161.9796] device (tapc5179400-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:42:41 np0005539563 systemd-machined[213024]: New machine qemu-2-instance-00000004.
Nov 29 02:42:42 np0005539563 nova_compute[252253]: 2025-11-29 07:42:42.016 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.018 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0e:b2 10.1.0.26 fdfe:381f:8400::d4'], port_security=['fa:16:3e:44:0e:b2 10.1.0.26 fdfe:381f:8400::d4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.26/26 fdfe:381f:8400::d4/64', 'neutron:device_id': '7a2ac9f8-c588-434e-9da9-98a9d77f2e72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '441b5877-d47a-4ccc-b96a-381864fe0f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab4638fe-12b3-4f0f-a7fc-23f58f536508, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c5179400-1023-4dc7-b67b-d922a2453db4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.019 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c5179400-1023-4dc7-b67b-d922a2453db4 in datapath 6c117dd1-5064-4e69-b07c-c93c3d729d3c bound to our chassis#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.021 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c117dd1-5064-4e69-b07c-c93c3d729d3c#033[00m
Nov 29 02:42:42 np0005539563 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.022 158990 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmph9lwifxo/privsep.sock']#033[00m
Nov 29 02:42:42 np0005539563 ovn_controller[148841]: 2025-11-29T07:42:42Z|00029|binding|INFO|Setting lport c5179400-1023-4dc7-b67b-d922a2453db4 ovn-installed in OVS
Nov 29 02:42:42 np0005539563 ovn_controller[148841]: 2025-11-29T07:42:42Z|00030|binding|INFO|Setting lport c5179400-1023-4dc7-b67b-d922a2453db4 up in Southbound
Nov 29 02:42:42 np0005539563 nova_compute[252253]: 2025-11-29 07:42:42.035 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.058941) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162059059, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1259, "num_deletes": 257, "total_data_size": 2283846, "memory_usage": 2341560, "flush_reason": "Manual Compaction"}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162357471, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1527264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19973, "largest_seqno": 21230, "table_properties": {"data_size": 1521991, "index_size": 2541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13628, "raw_average_key_size": 21, "raw_value_size": 1510587, "raw_average_value_size": 2352, "num_data_blocks": 111, "num_entries": 642, "num_filter_entries": 642, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402019, "oldest_key_time": 1764402019, "file_creation_time": 1764402162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 298568 microseconds, and 5821 cpu microseconds.
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.357551) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1527264 bytes OK
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.357590) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.360748) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.360776) EVENT_LOG_v1 {"time_micros": 1764402162360769, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.360796) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2277860, prev total WAL file size 2278150, number of live WAL files 2.
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.361749) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373630' seq:0, type:0; will stop at (end)
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1491KB)], [44(10237KB)]
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162361854, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 12010560, "oldest_snapshot_seqno": -1}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4987 keys, 8704680 bytes, temperature: kUnknown
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162722186, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8704680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8671598, "index_size": 19532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125887, "raw_average_key_size": 25, "raw_value_size": 8581427, "raw_average_value_size": 1720, "num_data_blocks": 803, "num_entries": 4987, "num_filter_entries": 4987, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.722576) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8704680 bytes
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.728967) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.3 rd, 24.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(13.6) write-amplify(5.7) OK, records in: 5479, records dropped: 492 output_compression: NoCompression
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.729000) EVENT_LOG_v1 {"time_micros": 1764402162728986, "job": 22, "event": "compaction_finished", "compaction_time_micros": 360467, "compaction_time_cpu_micros": 20318, "output_level": 6, "num_output_files": 1, "total_output_size": 8704680, "num_input_records": 5479, "num_output_records": 4987, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162729455, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402162731405, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.361566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.731470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.731475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.731477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.731478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cedbbd74-fd8c-4c19-b2d6-acb49f263d4f does not exist
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:42:42.731479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:42:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e89b6909-7407-4f64-88df-c4719937ec41 does not exist
Nov 29 02:42:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7f81fac9-482c-4220-af23-8b0f25a5780b does not exist
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:42:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:42:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:42:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:42.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.860 158990 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.861 158990 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmph9lwifxo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.702 261364 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.708 261364 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.710 261364 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.710 261364 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261364#033[00m
Nov 29 02:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:42.863 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0be096-9938-4620-a880-1fc56e37a86a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 5.5 MiB/s wr, 99 op/s
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.396 252257 DEBUG nova.compute.manager [req-2f6991e1-349b-4525-bdcb-2c0f0a0730bb req-6fe49887-7dd7-4363-be41-39045fd72c58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.397 252257 DEBUG oslo_concurrency.lockutils [req-2f6991e1-349b-4525-bdcb-2c0f0a0730bb req-6fe49887-7dd7-4363-be41-39045fd72c58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.398 252257 DEBUG oslo_concurrency.lockutils [req-2f6991e1-349b-4525-bdcb-2c0f0a0730bb req-6fe49887-7dd7-4363-be41-39045fd72c58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.398 252257 DEBUG oslo_concurrency.lockutils [req-2f6991e1-349b-4525-bdcb-2c0f0a0730bb req-6fe49887-7dd7-4363-be41-39045fd72c58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.398 252257 DEBUG nova.compute.manager [req-2f6991e1-349b-4525-bdcb-2c0f0a0730bb req-6fe49887-7dd7-4363-be41-39045fd72c58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Processing event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:42:43 np0005539563 podman[261512]: 2025-11-29 07:42:43.32355805 +0000 UTC m=+0.022983344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:43.472 261364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:43.472 261364 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:43.472 261364 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:43 np0005539563 podman[261512]: 2025-11-29 07:42:43.50888058 +0000 UTC m=+0.208305824 container create 2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:42:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:42:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:43 np0005539563 systemd[1]: Started libpod-conmon-2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7.scope.
Nov 29 02:42:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.635 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.639 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.640 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402163.6401784, 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.640 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] VM Started (Lifecycle Event)#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.645 252257 INFO nova.virt.libvirt.driver [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Instance spawned successfully.#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.645 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.668 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.672 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.673 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:42:43 np0005539563 podman[261512]: 2025-11-29 07:42:43.673022406 +0000 UTC m=+0.372447670 container init 2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.673 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.673 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.674 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.674 252257 DEBUG nova.virt.libvirt.driver [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.678 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:42:43 np0005539563 podman[261512]: 2025-11-29 07:42:43.682099532 +0000 UTC m=+0.381524776 container start 2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:42:43 np0005539563 zealous_ride[261534]: 167 167
Nov 29 02:42:43 np0005539563 systemd[1]: libpod-2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7.scope: Deactivated successfully.
Nov 29 02:42:43 np0005539563 conmon[261534]: conmon 2a20ab094430d6add956 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7.scope/container/memory.events
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.737 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.738 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402163.6411715, 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.738 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:42:43 np0005539563 nova_compute[252253]: 2025-11-29 07:42:43.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:43 np0005539563 podman[261512]: 2025-11-29 07:42:43.838663171 +0000 UTC m=+0.538088415 container attach 2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 02:42:43 np0005539563 podman[261512]: 2025-11-29 07:42:43.839698909 +0000 UTC m=+0.539124153 container died 2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.013 252257 INFO nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Took 30.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.014 252257 DEBUG nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.016 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.023 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402163.643022, 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.023 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:42:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d72ed2f3091372b06d4ebefd61acc29b6c89bcf3c9ab9cf60d446cf1d956aed1-merged.mount: Deactivated successfully.
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.174 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ce31a6fa-5619-40b3-904d-f1ba08b35cd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.175 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c117dd1-51 in ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.177 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c117dd1-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.178 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[905593f2-2bdf-44ea-a896-e96913b896a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.187 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[199ff4b4-0014-4dda-8c9f-af2bc42b8bff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.216 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.219 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[6a57772b-2327-42e4-8324-fcdc04854bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.237 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8fadea58-f868-4b0b-9fd4-c08c85193da3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.239 158990 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpevmce5tc/privsep.sock']#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.434 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.438 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:42:44 np0005539563 podman[261512]: 2025-11-29 07:42:44.636479336 +0000 UTC m=+1.335904580 container remove 2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:42:44 np0005539563 nova_compute[252253]: 2025-11-29 07:42:44.643 252257 INFO nova.compute.manager [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Took 33.44 seconds to build instance.#033[00m
Nov 29 02:42:44 np0005539563 systemd[1]: libpod-conmon-2a20ab094430d6add9567dd5b86579fd5ebcc068b12114fbc98317f88b5ce9d7.scope: Deactivated successfully.
Nov 29 02:42:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:44.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:44 np0005539563 podman[261567]: 2025-11-29 07:42:44.792095459 +0000 UTC m=+0.027464647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:45.009 158990 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:45.011 158990 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpevmce5tc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.880 261631 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.887 261631 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.890 261631 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:44.891 261631 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261631#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:45.013 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e86d8c2e-5fd3-4782-893a-e511ca43b296]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.016 252257 DEBUG oslo_concurrency.lockutils [None req-209096e3-087e-43d3-9230-cb8d6dbbd9d4 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 725 KiB/s rd, 3.7 MiB/s wr, 107 op/s
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.497 252257 DEBUG nova.compute.manager [req-6dbfd042-3611-43c0-a403-d46fec6b965c req-ab72dd93-0dda-43e1-ac10-dbc4bbdbf563 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.498 252257 DEBUG oslo_concurrency.lockutils [req-6dbfd042-3611-43c0-a403-d46fec6b965c req-ab72dd93-0dda-43e1-ac10-dbc4bbdbf563 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.498 252257 DEBUG oslo_concurrency.lockutils [req-6dbfd042-3611-43c0-a403-d46fec6b965c req-ab72dd93-0dda-43e1-ac10-dbc4bbdbf563 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.498 252257 DEBUG oslo_concurrency.lockutils [req-6dbfd042-3611-43c0-a403-d46fec6b965c req-ab72dd93-0dda-43e1-ac10-dbc4bbdbf563 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.499 252257 DEBUG nova.compute.manager [req-6dbfd042-3611-43c0-a403-d46fec6b965c req-ab72dd93-0dda-43e1-ac10-dbc4bbdbf563 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] No waiting events found dispatching network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:42:45 np0005539563 nova_compute[252253]: 2025-11-29 07:42:45.499 252257 WARNING nova.compute.manager [req-6dbfd042-3611-43c0-a403-d46fec6b965c req-ab72dd93-0dda-43e1-ac10-dbc4bbdbf563 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received unexpected event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:45.671 261631 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:45.671 261631 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:42:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:45.671 261631 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.483 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbfcb29-63db-417a-b4a3-f59c70934798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 NetworkManager[48981]: <info>  [1764402166.5095] manager: (tap6c117dd1-50): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.508 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4366d176-630a-4051-b911-52283eb37a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 systemd-udevd[261645]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.546 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[706ce10b-57c3-4bc7-805a-1c3df2987311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.549 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[23c7c435-6d45-4550-a92f-0c72da9ef029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 NetworkManager[48981]: <info>  [1764402166.5713] device (tap6c117dd1-50): carrier: link connected
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.575 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f7126d7e-83ab-4da5-a53b-ffaf11ce9b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.593 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a8761c8b-6f97-4d71-bd7e-9369d50284cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c117dd1-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e4:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513434, 'reachable_time': 42837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261663, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.608 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b173c269-b76d-45ee-b350-5b189c438ad9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e465'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 513434, 'tstamp': 513434}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261664, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.622 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dcda348d-cefe-4245-9dd5-c02e63c72e2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c117dd1-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e4:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513434, 'reachable_time': 42837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261665, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.651 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[76d4da79-6865-4d28-bac8-efffbac2afca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.711 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[be4c8750-fd26-45d2-99aa-0b3101168cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.713 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c117dd1-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.713 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.714 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c117dd1-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:46 np0005539563 nova_compute[252253]: 2025-11-29 07:42:46.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539563 NetworkManager[48981]: <info>  [1764402166.7696] manager: (tap6c117dd1-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 02:42:46 np0005539563 kernel: tap6c117dd1-50: entered promiscuous mode
Nov 29 02:42:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:46.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:46 np0005539563 nova_compute[252253]: 2025-11-29 07:42:46.774 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.776 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c117dd1-50, col_values=(('external_ids', {'iface-id': '32f6a270-f2be-48b5-9316-7ff23d26e5c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:42:46 np0005539563 nova_compute[252253]: 2025-11-29 07:42:46.778 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:42:46Z|00031|binding|INFO|Releasing lport 32f6a270-f2be-48b5-9316-7ff23d26e5c2 from this chassis (sb_readonly=0)
Nov 29 02:42:46 np0005539563 nova_compute[252253]: 2025-11-29 07:42:46.810 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.812 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.813 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1ef3de-4a87-40c8-be3a-40caff8560e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.815 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/6c117dd1-5064-4e69-b07c-c93c3d729d3c.pid.haproxy
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 6c117dd1-5064-4e69-b07c-c93c3d729d3c
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:42:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:42:46.819 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'env', 'PROCESS_TAG=haproxy-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c117dd1-5064-4e69-b07c-c93c3d729d3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:42:47 np0005539563 podman[261567]: 2025-11-29 07:42:47.019277207 +0000 UTC m=+2.254646445 container create 948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:42:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 29 02:42:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:48 np0005539563 systemd[1]: Started libpod-conmon-948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670.scope.
Nov 29 02:42:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:42:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f70176f19e3d7e6cfcd9f9a1514c165dfe3bca5027997ae32582333db4c5a4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f70176f19e3d7e6cfcd9f9a1514c165dfe3bca5027997ae32582333db4c5a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f70176f19e3d7e6cfcd9f9a1514c165dfe3bca5027997ae32582333db4c5a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f70176f19e3d7e6cfcd9f9a1514c165dfe3bca5027997ae32582333db4c5a4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f70176f19e3d7e6cfcd9f9a1514c165dfe3bca5027997ae32582333db4c5a4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:48.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:48 np0005539563 nova_compute[252253]: 2025-11-29 07:42:48.826 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:49.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:49 np0005539563 nova_compute[252253]: 2025-11-29 07:42:49.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 458 KiB/s wr, 125 op/s
Nov 29 02:42:49 np0005539563 podman[261567]: 2025-11-29 07:42:49.379064485 +0000 UTC m=+4.614433773 container init 948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:42:49 np0005539563 podman[261567]: 2025-11-29 07:42:49.3906761 +0000 UTC m=+4.626045288 container start 948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:42:50 np0005539563 jolly_austin[261693]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:42:50 np0005539563 jolly_austin[261693]: --> relative data size: 1.0
Nov 29 02:42:50 np0005539563 jolly_austin[261693]: --> All data devices are unavailable
Nov 29 02:42:50 np0005539563 systemd[1]: libpod-948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670.scope: Deactivated successfully.
Nov 29 02:42:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:50.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 162 op/s
Nov 29 02:42:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:52.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 145 op/s
Nov 29 02:42:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:53 np0005539563 podman[261567]: 2025-11-29 07:42:53.651486164 +0000 UTC m=+8.886855442 container attach 948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:42:53 np0005539563 podman[261567]: 2025-11-29 07:42:53.653219201 +0000 UTC m=+8.888588419 container died 948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:42:53 np0005539563 nova_compute[252253]: 2025-11-29 07:42:53.828 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:54 np0005539563 nova_compute[252253]: 2025-11-29 07:42:54.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:42:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:54.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:42:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 352 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 KiB/s wr, 148 op/s
Nov 29 02:42:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:42:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:56.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:42:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 353 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 14 KiB/s wr, 123 op/s
Nov 29 02:42:58 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.668887138s, txc = 0x561be1e97b00
Nov 29 02:42:58 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.758665085s, txc = 0x561be0e98f00
Nov 29 02:42:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3f70176f19e3d7e6cfcd9f9a1514c165dfe3bca5027997ae32582333db4c5a4b-merged.mount: Deactivated successfully.
Nov 29 02:42:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:58.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:58 np0005539563 nova_compute[252253]: 2025-11-29 07:42:58.832 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:42:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:59.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:59 np0005539563 nova_compute[252253]: 2025-11-29 07:42:59.262 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:42:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 353 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 75 op/s
Nov 29 02:43:00 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 02:43:00 np0005539563 podman[261567]: 2025-11-29 07:43:00.105938547 +0000 UTC m=+15.341307735 container remove 948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:43:00 np0005539563 systemd[1]: libpod-conmon-948e7f7ff800bb5269fdaf2bf2aec2432c7e20b6ad0211064a7359bb28735670.scope: Deactivated successfully.
Nov 29 02:43:00 np0005539563 podman[261705]: 2025-11-29 07:43:00.172131753 +0000 UTC m=+11.827971198 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:43:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:00.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 356 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 974 KiB/s wr, 65 op/s
Nov 29 02:43:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:03 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 02:43:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:03 np0005539563 podman[261705]: 2025-11-29 07:43:03.17188666 +0000 UTC m=+14.827726095 container create 0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:43:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 369 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Nov 29 02:43:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:03 np0005539563 systemd[1]: Started libpod-conmon-0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523.scope.
Nov 29 02:43:03 np0005539563 podman[261864]: 2025-11-29 07:43:03.6107074 +0000 UTC m=+1.154788633 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, 
org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:43:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:43:03 np0005539563 podman[261865]: 2025-11-29 07:43:03.624038562 +0000 UTC m=+1.159371558 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:43:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b64e6d5a3724e79de7c7dc8046c875b957d9b83dfb8e47d49d652f54692bf91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:03 np0005539563 nova_compute[252253]: 2025-11-29 07:43:03.835 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:04 np0005539563 nova_compute[252253]: 2025-11-29 07:43:04.294 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:43:04.888 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:43:04.889 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:43:04.890 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 373 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 259 KiB/s rd, 2.5 MiB/s wr, 68 op/s
Nov 29 02:43:06 np0005539563 podman[261705]: 2025-11-29 07:43:06.285589689 +0000 UTC m=+17.941429174 container init 0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:43:06 np0005539563 podman[261705]: 2025-11-29 07:43:06.293988277 +0000 UTC m=+17.949827712 container start 0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:43:06 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [NOTICE]   (261999) : New worker (262001) forked
Nov 29 02:43:06 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [NOTICE]   (261999) : Loading success.
Nov 29 02:43:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:06.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:06 np0005539563 podman[261866]: 2025-11-29 07:43:06.996673089 +0000 UTC m=+4.535178211 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:07 np0005539563 podman[262029]: 2025-11-29 07:43:07.178800372 +0000 UTC m=+0.046111583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 373 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Nov 29 02:43:07 np0005539563 podman[262029]: 2025-11-29 07:43:07.819988084 +0000 UTC m=+0.687299195 container create e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:43:08 np0005539563 systemd[1]: Started libpod-conmon-e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973.scope.
Nov 29 02:43:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:43:08 np0005539563 podman[262029]: 2025-11-29 07:43:08.384430924 +0000 UTC m=+1.251742065 container init e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:43:08 np0005539563 podman[262029]: 2025-11-29 07:43:08.392143804 +0000 UTC m=+1.259454925 container start e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:43:08 np0005539563 competent_davinci[262045]: 167 167
Nov 29 02:43:08 np0005539563 systemd[1]: libpod-e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973.scope: Deactivated successfully.
Nov 29 02:43:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:08 np0005539563 podman[262029]: 2025-11-29 07:43:08.73464843 +0000 UTC m=+1.601959551 container attach e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:43:08 np0005539563 podman[262029]: 2025-11-29 07:43:08.735207135 +0000 UTC m=+1.602518256 container died e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:43:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:08 np0005539563 nova_compute[252253]: 2025-11-29 07:43:08.840 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:09 np0005539563 nova_compute[252253]: 2025-11-29 07:43:09.296 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 388 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 124 op/s
Nov 29 02:43:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2887c28f8e307859d0ef7e8ec94b3c2e7fce0adf1e649a20b0593a82d25832ff-merged.mount: Deactivated successfully.
Nov 29 02:43:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:10.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 395 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 132 op/s
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.682 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "399998c5-6728-4cec-8516-c37c97b56a72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.683 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "399998c5-6728-4cec-8516-c37c97b56a72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.719 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.919 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.920 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.927 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:43:11 np0005539563 nova_compute[252253]: 2025-11-29 07:43:11.927 252257 INFO nova.compute.claims [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:43:12 np0005539563 nova_compute[252253]: 2025-11-29 07:43:12.161 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:43:12
Nov 29 02:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'volumes', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 29 02:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:43:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:12.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:13 np0005539563 podman[262029]: 2025-11-29 07:43:13.161229832 +0000 UTC m=+6.028540963 container remove e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:43:13 np0005539563 systemd[1]: libpod-conmon-e7bcc70023cd0c35b8d03d0fd8549dcf3e8651ef90455858684b4ddfde176973.scope: Deactivated successfully.
Nov 29 02:43:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294385100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.2 MiB/s wr, 189 op/s
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.325 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.332 252257 DEBUG nova.compute.provider_tree [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.365 252257 DEBUG nova.scheduler.client.report [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.392 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.393 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.443 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.444 252257 DEBUG nova.network.neutron [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:43:13 np0005539563 podman[262095]: 2025-11-29 07:43:13.37192079 +0000 UTC m=+0.032877762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.468 252257 INFO nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.485 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:43:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.574 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.576 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.577 252257 INFO nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Creating image(s)#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.623 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.663 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.696 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.701 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:43:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.726 252257 DEBUG nova.network.neutron [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.727 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.773 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.773 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.774 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.775 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.808 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.812 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 399998c5-6728-4cec-8516-c37c97b56a72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:13 np0005539563 nova_compute[252253]: 2025-11-29 07:43:13.843 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:43:13 np0005539563 podman[262095]: 2025-11-29 07:43:13.868275262 +0000 UTC m=+0.529232184 container create ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wescoff, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:43:14 np0005539563 systemd[1]: Started libpod-conmon-ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb.scope.
Nov 29 02:43:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:43:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd7e3298b99e01a3ccffb69a3c6262854b2f0571d0ec85baffac6c7b25bcee6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd7e3298b99e01a3ccffb69a3c6262854b2f0571d0ec85baffac6c7b25bcee6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd7e3298b99e01a3ccffb69a3c6262854b2f0571d0ec85baffac6c7b25bcee6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd7e3298b99e01a3ccffb69a3c6262854b2f0571d0ec85baffac6c7b25bcee6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:14 np0005539563 nova_compute[252253]: 2025-11-29 07:43:14.358 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:43:14 np0005539563 podman[262095]: 2025-11-29 07:43:14.386158318 +0000 UTC m=+1.047115300 container init ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wescoff, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:43:14 np0005539563 podman[262095]: 2025-11-29 07:43:14.39282902 +0000 UTC m=+1.053785942 container start ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:43:14 np0005539563 podman[262095]: 2025-11-29 07:43:14.477309703 +0000 UTC m=+1.138266635 container attach ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wescoff, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:43:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:14.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]: {
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:    "0": [
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:        {
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "devices": [
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "/dev/loop3"
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            ],
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "lv_name": "ceph_lv0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "lv_size": "7511998464",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "name": "ceph_lv0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "tags": {
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.cluster_name": "ceph",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.crush_device_class": "",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.encrypted": "0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.osd_id": "0",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.type": "block",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:                "ceph.vdo": "0"
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            },
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "type": "block",
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:            "vg_name": "ceph_vg0"
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:        }
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]:    ]
Nov 29 02:43:15 np0005539563 kind_wescoff[262203]: }
Nov 29 02:43:15 np0005539563 systemd[1]: libpod-ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb.scope: Deactivated successfully.
Nov 29 02:43:15 np0005539563 podman[262095]: 2025-11-29 07:43:15.213271998 +0000 UTC m=+1.874228930 container died ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wescoff, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:43:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 239 op/s
Nov 29 02:43:15 np0005539563 ovn_controller[148841]: 2025-11-29T07:43:15Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:0e:b2 10.1.0.26
Nov 29 02:43:15 np0005539563 ovn_controller[148841]: 2025-11-29T07:43:15Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:0e:b2 10.1.0.26
Nov 29 02:43:16 np0005539563 ovn_controller[148841]: 2025-11-29T07:43:16Z|00032|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 02:43:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:16.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.6 MiB/s wr, 224 op/s
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.714 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.965 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.966 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.966 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:43:17 np0005539563 nova_compute[252253]: 2025-11-29 07:43:17.966 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7e858991-fb4d-470d-a63e-bf5f72d59c34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:43:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ffd7e3298b99e01a3ccffb69a3c6262854b2f0571d0ec85baffac6c7b25bcee6-merged.mount: Deactivated successfully.
Nov 29 02:43:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:18 np0005539563 nova_compute[252253]: 2025-11-29 07:43:18.846 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:43:18 np0005539563 nova_compute[252253]: 2025-11-29 07:43:18.850 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:43:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:19.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 396 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.6 MiB/s wr, 270 op/s
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.361 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.852 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.898 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.898 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.899 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.900 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.923 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.923 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.924 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.924 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:43:19 np0005539563 nova_compute[252253]: 2025-11-29 07:43:19.925 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428679417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.411 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.519 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.519 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.525 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.526 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.777 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.778 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4429MB free_disk=20.790821075439453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.778 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.779 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:20.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.883 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7e858991-fb4d-470d-a63e-bf5f72d59c34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.884 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.884 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 399998c5-6728-4cec-8516-c37c97b56a72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.884 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:43:20 np0005539563 nova_compute[252253]: 2025-11-29 07:43:20.885 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:43:21 np0005539563 nova_compute[252253]: 2025-11-29 07:43:21.007 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 404 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 670 KiB/s wr, 219 op/s
Nov 29 02:43:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878732894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:21 np0005539563 nova_compute[252253]: 2025-11-29 07:43:21.559 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:21 np0005539563 nova_compute[252253]: 2025-11-29 07:43:21.566 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:43:21 np0005539563 nova_compute[252253]: 2025-11-29 07:43:21.588 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:43:21 np0005539563 nova_compute[252253]: 2025-11-29 07:43:21.618 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:43:21 np0005539563 nova_compute[252253]: 2025-11-29 07:43:21.619 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:22 np0005539563 nova_compute[252253]: 2025-11-29 07:43:22.397 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:22 np0005539563 nova_compute[252253]: 2025-11-29 07:43:22.398 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:22 np0005539563 nova_compute[252253]: 2025-11-29 07:43:22.398 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:22 np0005539563 nova_compute[252253]: 2025-11-29 07:43:22.399 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:22 np0005539563 nova_compute[252253]: 2025-11-29 07:43:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:22 np0005539563 nova_compute[252253]: 2025-11-29 07:43:22.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:43:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:22.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009431379990228922 of space, bias 1.0, pg target 2.8294139970686767 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 02:43:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 404 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 208 KiB/s wr, 211 op/s
Nov 29 02:43:23 np0005539563 nova_compute[252253]: 2025-11-29 07:43:23.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:43:23 np0005539563 nova_compute[252253]: 2025-11-29 07:43:23.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:24 np0005539563 nova_compute[252253]: 2025-11-29 07:43:24.364 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 422 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 837 KiB/s wr, 145 op/s
Nov 29 02:43:25 np0005539563 podman[262095]: 2025-11-29 07:43:25.738545237 +0000 UTC m=+12.399502199 container remove ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wescoff, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:25 np0005539563 systemd[1]: libpod-conmon-ffb35968a2b29c9cdeba217788add0eabf23c0a4a8a7d89d3ef3f5a9b59e18fb.scope: Deactivated successfully.
Nov 29 02:43:26 np0005539563 podman[262471]: 2025-11-29 07:43:26.485414807 +0000 UTC m=+0.028396342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:26.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:27.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 422 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 477 KiB/s rd, 836 KiB/s wr, 69 op/s
Nov 29 02:43:27 np0005539563 podman[262471]: 2025-11-29 07:43:27.344598097 +0000 UTC m=+0.887579542 container create 44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chebyshev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:43:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:28 np0005539563 nova_compute[252253]: 2025-11-29 07:43:28.852 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:29.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 442 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 482 KiB/s rd, 1.7 MiB/s wr, 74 op/s
Nov 29 02:43:29 np0005539563 systemd[1]: Started libpod-conmon-44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180.scope.
Nov 29 02:43:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:43:29 np0005539563 nova_compute[252253]: 2025-11-29 07:43:29.368 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:29 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.701327801s, txc = 0x561be1b9af00
Nov 29 02:43:30 np0005539563 podman[262471]: 2025-11-29 07:43:30.166573678 +0000 UTC m=+3.709555203 container init 44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:43:30 np0005539563 podman[262471]: 2025-11-29 07:43:30.178335828 +0000 UTC m=+3.721317313 container start 44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:43:30 np0005539563 crazy_chebyshev[262490]: 167 167
Nov 29 02:43:30 np0005539563 systemd[1]: libpod-44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180.scope: Deactivated successfully.
Nov 29 02:43:30 np0005539563 conmon[262490]: conmon 44c475989953035e7d4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180.scope/container/memory.events
Nov 29 02:43:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:30.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:31.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 450 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 116 KiB/s rd, 2.9 MiB/s wr, 47 op/s
Nov 29 02:43:31 np0005539563 podman[262471]: 2025-11-29 07:43:31.419370731 +0000 UTC m=+4.962352286 container attach 44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chebyshev, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:43:31 np0005539563 podman[262471]: 2025-11-29 07:43:31.420899742 +0000 UTC m=+4.963881247 container died 44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:43:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:32.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 451 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Nov 29 02:43:33 np0005539563 nova_compute[252253]: 2025-11-29 07:43:33.854 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:34 np0005539563 nova_compute[252253]: 2025-11-29 07:43:34.429 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cbeaf87e1e3fcbd0cfb7e4b9c3dce97f394575787edb2efd59d38a98f2951989-merged.mount: Deactivated successfully.
Nov 29 02:43:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:35.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 451 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.2 MiB/s wr, 41 op/s
Nov 29 02:43:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:36.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:37 np0005539563 podman[262471]: 2025-11-29 07:43:37.083175404 +0000 UTC m=+10.626156859 container remove 44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:43:37 np0005539563 podman[262509]: 2025-11-29 07:43:37.146461302 +0000 UTC m=+3.202036209 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:37.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:37 np0005539563 podman[262510]: 2025-11-29 07:43:37.154981284 +0000 UTC m=+3.209780690 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:43:37 np0005539563 systemd[1]: libpod-conmon-44c475989953035e7d4ff76c4dfef9f5c8c07d6409e481722dc87b41a97c5180.scope: Deactivated successfully.
Nov 29 02:43:37 np0005539563 podman[262551]: 2025-11-29 07:43:37.292618749 +0000 UTC m=+0.120416299 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:43:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 451 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.4 MiB/s wr, 32 op/s
Nov 29 02:43:37 np0005539563 podman[262582]: 2025-11-29 07:43:37.301990914 +0000 UTC m=+0.028459074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:38 np0005539563 podman[262582]: 2025-11-29 07:43:38.345468606 +0000 UTC m=+1.071936786 container create 2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:43:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:38.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:38 np0005539563 nova_compute[252253]: 2025-11-29 07:43:38.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 475 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 3.4 MiB/s wr, 60 op/s
Nov 29 02:43:39 np0005539563 nova_compute[252253]: 2025-11-29 07:43:39.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:39 np0005539563 systemd[1]: Started libpod-conmon-2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84.scope.
Nov 29 02:43:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:43:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacea6b187db8c3dfc384b8c561e8b1da60203caa85959ae3cc9d779353d388d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacea6b187db8c3dfc384b8c561e8b1da60203caa85959ae3cc9d779353d388d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacea6b187db8c3dfc384b8c561e8b1da60203caa85959ae3cc9d779353d388d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacea6b187db8c3dfc384b8c561e8b1da60203caa85959ae3cc9d779353d388d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:40 np0005539563 podman[262582]: 2025-11-29 07:43:40.219941402 +0000 UTC m=+2.946409582 container init 2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:43:40 np0005539563 podman[262582]: 2025-11-29 07:43:40.229377848 +0000 UTC m=+2.955845988 container start 2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:43:40 np0005539563 nova_compute[252253]: 2025-11-29 07:43:40.290 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 399998c5-6728-4cec-8516-c37c97b56a72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 26.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:40 np0005539563 nova_compute[252253]: 2025-11-29 07:43:40.384 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] resizing rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:43:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/640078798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]: {
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:        "osd_id": 0,
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:        "type": "bluestore"
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]:    }
Nov 29 02:43:41 np0005539563 exciting_chaum[262600]: }
Nov 29 02:43:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:41 np0005539563 systemd[1]: libpod-2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84.scope: Deactivated successfully.
Nov 29 02:43:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 484 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 3.0 MiB/s wr, 65 op/s
Nov 29 02:43:41 np0005539563 podman[262582]: 2025-11-29 07:43:41.642914333 +0000 UTC m=+4.369382513 container attach 2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaum, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:43:41 np0005539563 podman[262582]: 2025-11-29 07:43:41.645638927 +0000 UTC m=+4.372107087 container died 2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:43:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-eacea6b187db8c3dfc384b8c561e8b1da60203caa85959ae3cc9d779353d388d-merged.mount: Deactivated successfully.
Nov 29 02:43:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:42.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:43 np0005539563 podman[262582]: 2025-11-29 07:43:43.12840429 +0000 UTC m=+5.854872480 container remove 2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:43 np0005539563 systemd[1]: libpod-conmon-2bfb28c4fcbb7b2ae76322353e1b3e2a0f60d3b549ede915249e1befca738f84.scope: Deactivated successfully.
Nov 29 02:43:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 482 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 2.9 MiB/s wr, 63 op/s
Nov 29 02:43:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:43:43 np0005539563 nova_compute[252253]: 2025-11-29 07:43:43.860 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:44 np0005539563 nova_compute[252253]: 2025-11-29 07:43:44.434 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 471 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 233 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Nov 29 02:43:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:46.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 471 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 230 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.218 252257 DEBUG nova.objects.instance [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lazy-loading 'migration_context' on Instance uuid 399998c5-6728-4cec-8516-c37c97b56a72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.281 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.281 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Ensure instance console log exists: /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.282 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.282 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.282 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.283 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.500 252257 WARNING nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.510 252257 DEBUG nova.virt.libvirt.host [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.510 252257 DEBUG nova.virt.libvirt.host [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.513 252257 DEBUG nova.virt.libvirt.host [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.514 252257 DEBUG nova.virt.libvirt.host [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.515 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.515 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.516 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.516 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.516 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.517 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.517 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.517 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.518 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.518 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.518 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.518 252257 DEBUG nova.virt.hardware [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.521 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:48.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:43:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376785573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.927 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.965 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:43:48 np0005539563 nova_compute[252253]: 2025-11-29 07:43:48.969 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:43:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Nov 29 02:43:49 np0005539563 nova_compute[252253]: 2025-11-29 07:43:49.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:43:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:43:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4037608517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:43:49 np0005539563 nova_compute[252253]: 2025-11-29 07:43:49.987 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:49 np0005539563 nova_compute[252253]: 2025-11-29 07:43:49.990 252257 DEBUG nova.objects.instance [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lazy-loading 'pci_devices' on Instance uuid 399998c5-6728-4cec-8516-c37c97b56a72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.007 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <uuid>399998c5-6728-4cec-8516-c37c97b56a72</uuid>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <name>instance-00000007</name>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiagnosticsTest-server-1576442586</nova:name>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:43:48</nova:creationTime>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:user uuid="3d83bf995dec4307afb9c1594d6ed7da">tempest-ServerDiagnosticsTest-1173827368-project-member</nova:user>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <nova:project uuid="62c09de590ff4880a2343e4e0a4e9474">tempest-ServerDiagnosticsTest-1173827368</nova:project>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <entry name="serial">399998c5-6728-4cec-8516-c37c97b56a72</entry>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <entry name="uuid">399998c5-6728-4cec-8516-c37c97b56a72</entry>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/399998c5-6728-4cec-8516-c37c97b56a72_disk">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/399998c5-6728-4cec-8516-c37c97b56a72_disk.config">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/console.log" append="off"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:43:50 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:43:50 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:43:50 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:43:50 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.456 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.456 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.457 252257 INFO nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Using config drive#033[00m
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.483 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:43:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:50.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.866 252257 INFO nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Creating config drive at /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/disk.config#033[00m
Nov 29 02:43:50 np0005539563 nova_compute[252253]: 2025-11-29 07:43:50.874 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wdakssc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:51 np0005539563 nova_compute[252253]: 2025-11-29 07:43:51.014 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wdakssc" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:51 np0005539563 nova_compute[252253]: 2025-11-29 07:43:51.060 252257 DEBUG nova.storage.rbd_utils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] rbd image 399998c5-6728-4cec-8516-c37c97b56a72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:43:51 np0005539563 nova_compute[252253]: 2025-11-29 07:43:51.066 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/disk.config 399998c5-6728-4cec-8516-c37c97b56a72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:51.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 465 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 29 02:43:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:52.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 466 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 1.4 MiB/s wr, 77 op/s
Nov 29 02:43:53 np0005539563 nova_compute[252253]: 2025-11-29 07:43:53.662 252257 DEBUG oslo_concurrency.processutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/disk.config 399998c5-6728-4cec-8516-c37c97b56a72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:53 np0005539563 nova_compute[252253]: 2025-11-29 07:43:53.663 252257 INFO nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Deleting local config drive /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72/disk.config because it was imported into RBD.#033[00m
Nov 29 02:43:53 np0005539563 systemd-machined[213024]: New machine qemu-3-instance-00000007.
Nov 29 02:43:53 np0005539563 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Nov 29 02:43:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:43:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 76ebb715-2e20-44d2-a856-fd04956935e7 does not exist
Nov 29 02:43:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c4ffb7b4-2333-4abf-9346-d8b7cfe2ca31 does not exist
Nov 29 02:43:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5a37c08a-d37c-470e-8a6a-7456e3de47c5 does not exist
Nov 29 02:43:53 np0005539563 nova_compute[252253]: 2025-11-29 07:43:53.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.491 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.717 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:43:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:43:54.717 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:43:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:43:54.718 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:43:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:54.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.890 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402234.8901181, 399998c5-6728-4cec-8516-c37c97b56a72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.891 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.894 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.894 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.898 252257 INFO nova.virt.libvirt.driver [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Instance spawned successfully.#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.898 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.919 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.926 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.931 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.932 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.932 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.933 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.933 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.933 252257 DEBUG nova.virt.libvirt.driver [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.983 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.984 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402234.8921978, 399998c5-6728-4cec-8516-c37c97b56a72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:43:54 np0005539563 nova_compute[252253]: 2025-11-29 07:43:54.984 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] VM Started (Lifecycle Event)#033[00m
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.012 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.016 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.023 252257 INFO nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Took 41.45 seconds to spawn the instance on the hypervisor.
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.023 252257 DEBUG nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.055 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.116 252257 INFO nova.compute.manager [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Took 43.27 seconds to build instance.
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.162 252257 DEBUG oslo_concurrency.lockutils [None req-8c3f34c8-3dbc-4067-b52b-390bdbcb7204 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "399998c5-6728-4cec-8516-c37c97b56a72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 43.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:55.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 470 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 598 KiB/s rd, 338 KiB/s wr, 100 op/s
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.673 252257 DEBUG nova.compute.manager [None req-cf128ffc-b709-45c7-8a8f-7a85ef48c80c 77145c59c1984a66b3462c26650d8842 fceedf22e63c4557b85dc90d1d71f96d - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.677 252257 INFO nova.compute.manager [None req-cf128ffc-b709-45c7-8a8f-7a85ef48c80c 77145c59c1984a66b3462c26650d8842 fceedf22e63c4557b85dc90d1d71f96d - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Retrieving diagnostics
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.947 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "399998c5-6728-4cec-8516-c37c97b56a72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.949 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "399998c5-6728-4cec-8516-c37c97b56a72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.949 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "399998c5-6728-4cec-8516-c37c97b56a72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.950 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "399998c5-6728-4cec-8516-c37c97b56a72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.950 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "399998c5-6728-4cec-8516-c37c97b56a72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.952 252257 INFO nova.compute.manager [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Terminating instance
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.953 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "refresh_cache-399998c5-6728-4cec-8516-c37c97b56a72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.954 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquired lock "refresh_cache-399998c5-6728-4cec-8516-c37c97b56a72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:43:55 np0005539563 nova_compute[252253]: 2025-11-29 07:43:55.954 252257 DEBUG nova.network.neutron [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:43:56 np0005539563 nova_compute[252253]: 2025-11-29 07:43:56.330 252257 DEBUG nova.network.neutron [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:43:56 np0005539563 nova_compute[252253]: 2025-11-29 07:43:56.639 252257 DEBUG nova.network.neutron [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:43:56 np0005539563 nova_compute[252253]: 2025-11-29 07:43:56.722 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Releasing lock "refresh_cache-399998c5-6728-4cec-8516-c37c97b56a72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:43:56 np0005539563 nova_compute[252253]: 2025-11-29 07:43:56.723 252257 DEBUG nova.compute.manager [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:43:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:56.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:43:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:43:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:57.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:43:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 470 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 192 KiB/s wr, 82 op/s
Nov 29 02:43:57 np0005539563 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 29 02:43:57 np0005539563 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 3.026s CPU time.
Nov 29 02:43:57 np0005539563 systemd-machined[213024]: Machine qemu-3-instance-00000007 terminated.
Nov 29 02:43:57 np0005539563 nova_compute[252253]: 2025-11-29 07:43:57.945 252257 INFO nova.virt.libvirt.driver [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Instance destroyed successfully.
Nov 29 02:43:57 np0005539563 nova_compute[252253]: 2025-11-29 07:43:57.946 252257 DEBUG nova.objects.instance [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lazy-loading 'resources' on Instance uuid 399998c5-6728-4cec-8516-c37c97b56a72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:43:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:43:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:43:58 np0005539563 nova_compute[252253]: 2025-11-29 07:43:58.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:43:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:43:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:59.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 474 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 272 KiB/s wr, 141 op/s
Nov 29 02:43:59 np0005539563 nova_compute[252253]: 2025-11-29 07:43:59.497 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:43:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:43:59.720 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:44:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 437 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 189 KiB/s wr, 182 op/s
Nov 29 02:44:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:03.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.191 252257 INFO nova.virt.libvirt.driver [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Deleting instance files /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72_del
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.192 252257 INFO nova.virt.libvirt.driver [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Deletion of /var/lib/nova/instances/399998c5-6728-4cec-8516-c37c97b56a72_del complete
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.278 252257 DEBUG nova.virt.libvirt.host [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.279 252257 INFO nova.virt.libvirt.host [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] UEFI support detected
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.280 252257 INFO nova.compute.manager [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Took 6.56 seconds to destroy the instance on the hypervisor.
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.280 252257 DEBUG oslo.service.loopingcall [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.281 252257 DEBUG nova.compute.manager [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.281 252257 DEBUG nova.network.neutron [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:44:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 412 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 238 op/s
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.861 252257 DEBUG nova.network.neutron [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.872 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.914 252257 DEBUG nova.network.neutron [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:44:03 np0005539563 nova_compute[252253]: 2025-11-29 07:44:03.982 252257 INFO nova.compute.manager [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Took 0.70 seconds to deallocate network for instance.
Nov 29 02:44:04 np0005539563 nova_compute[252253]: 2025-11-29 07:44:04.402 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:04 np0005539563 nova_compute[252253]: 2025-11-29 07:44:04.402 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:04 np0005539563 nova_compute[252253]: 2025-11-29 07:44:04.531 252257 DEBUG oslo_concurrency.processutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:44:04 np0005539563 nova_compute[252253]: 2025-11-29 07:44:04.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:04.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:04.889 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:04.889 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:04.890 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917334749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:05 np0005539563 nova_compute[252253]: 2025-11-29 07:44:05.014 252257 DEBUG oslo_concurrency.processutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:44:05 np0005539563 nova_compute[252253]: 2025-11-29 07:44:05.020 252257 DEBUG nova.compute.provider_tree [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:44:05 np0005539563 nova_compute[252253]: 2025-11-29 07:44:05.043 252257 DEBUG nova.scheduler.client.report [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:44:05 np0005539563 nova_compute[252253]: 2025-11-29 07:44:05.071 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:05 np0005539563 nova_compute[252253]: 2025-11-29 07:44:05.113 252257 INFO nova.scheduler.client.report [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Deleted allocations for instance 399998c5-6728-4cec-8516-c37c97b56a72
Nov 29 02:44:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:05.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:05 np0005539563 nova_compute[252253]: 2025-11-29 07:44:05.234 252257 DEBUG oslo_concurrency.lockutils [None req-75cd9ae9-eb22-4af6-89a7-a75da51fb18f 3d83bf995dec4307afb9c1594d6ed7da 62c09de590ff4880a2343e4e0a4e9474 - - default default] Lock "399998c5-6728-4cec-8516-c37c97b56a72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 267 op/s
Nov 29 02:44:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:06.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:07.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 225 op/s
Nov 29 02:44:07 np0005539563 podman[263045]: 2025-11-29 07:44:07.514861743 +0000 UTC m=+0.067836928 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:44:07 np0005539563 podman[263044]: 2025-11-29 07:44:07.535726278 +0000 UTC m=+0.093013887 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 02:44:07 np0005539563 podman[263046]: 2025-11-29 07:44:07.560882618 +0000 UTC m=+0.102537281 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:44:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:08 np0005539563 nova_compute[252253]: 2025-11-29 07:44:08.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:09.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 255 op/s
Nov 29 02:44:09 np0005539563 nova_compute[252253]: 2025-11-29 07:44:09.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:11.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 407 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Nov 29 02:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:44:12
Nov 29 02:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 02:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:44:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:12.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:12 np0005539563 nova_compute[252253]: 2025-11-29 07:44:12.945 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402237.9430969, 399998c5-6728-4cec-8516-c37c97b56a72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:44:12 np0005539563 nova_compute[252253]: 2025-11-29 07:44:12.950 252257 INFO nova.compute.manager [-] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:44:12 np0005539563 nova_compute[252253]: 2025-11-29 07:44:12.983 252257 DEBUG nova.compute.manager [None req-5fc2406c-5ac1-4d54-8615-e773d4f3346c - - - - - -] [instance: 399998c5-6728-4cec-8516-c37c97b56a72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:13.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 399 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:44:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:44:13 np0005539563 nova_compute[252253]: 2025-11-29 07:44:13.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:14 np0005539563 nova_compute[252253]: 2025-11-29 07:44:14.563 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:14.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:15.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 357 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 565 KiB/s wr, 103 op/s
Nov 29 02:44:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:16.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 357 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 36 KiB/s wr, 62 op/s
Nov 29 02:44:18 np0005539563 nova_compute[252253]: 2025-11-29 07:44:18.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:18.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:18 np0005539563 nova_compute[252253]: 2025-11-29 07:44:18.881 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 112 op/s
Nov 29 02:44:19 np0005539563 nova_compute[252253]: 2025-11-29 07:44:19.566 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:19 np0005539563 nova_compute[252253]: 2025-11-29 07:44:19.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:19 np0005539563 nova_compute[252253]: 2025-11-29 07:44:19.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:44:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:20.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:20 np0005539563 nova_compute[252253]: 2025-11-29 07:44:20.921 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:44:20 np0005539563 nova_compute[252253]: 2025-11-29 07:44:20.921 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:44:20 np0005539563 nova_compute[252253]: 2025-11-29 07:44:20.922 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:44:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:44:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2719169375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:44:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:21.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Nov 29 02:44:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:22.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007509698376139959 of space, bias 1.0, pg target 2.252909512841988 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 02:44:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 105 op/s
Nov 29 02:44:23 np0005539563 nova_compute[252253]: 2025-11-29 07:44:23.926 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:23 np0005539563 nova_compute[252253]: 2025-11-29 07:44:23.978 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Updating instance_info_cache with network_info: [{"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.029 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-7a2ac9f8-c588-434e-9da9-98a9d77f2e72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.029 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.030 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.031 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.031 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.031 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.062 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.063 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.064 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.064 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.065 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.569 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1393983249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.862 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.797s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:24.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.993 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.994 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.998 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:24 np0005539563 nova_compute[252253]: 2025-11-29 07:44:24.999 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:44:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.218 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.219 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4475MB free_disk=20.830665588378906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.219 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.220 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 99 op/s
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.348 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7e858991-fb4d-470d-a63e-bf5f72d59c34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.349 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.349 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.349 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:44:25 np0005539563 nova_compute[252253]: 2025-11-29 07:44:25.534 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:44:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276293911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.017 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.027 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.051 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.119 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.119 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.766 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.767 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.806 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.806 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:26 np0005539563 nova_compute[252253]: 2025-11-29 07:44:26.807 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:44:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 KiB/s wr, 86 op/s
Nov 29 02:44:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:28.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:28 np0005539563 nova_compute[252253]: 2025-11-29 07:44:28.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 328 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 94 op/s
Nov 29 02:44:29 np0005539563 nova_compute[252253]: 2025-11-29 07:44:29.571 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:30.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 330 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 608 KiB/s rd, 240 KiB/s wr, 56 op/s
Nov 29 02:44:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:32.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:33.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 330 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 241 KiB/s wr, 37 op/s
Nov 29 02:44:33 np0005539563 nova_compute[252253]: 2025-11-29 07:44:33.931 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:34 np0005539563 nova_compute[252253]: 2025-11-29 07:44:34.606 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:35.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 335 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 713 KiB/s wr, 27 op/s
Nov 29 02:44:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:36.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 335 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 709 KiB/s wr, 23 op/s
Nov 29 02:44:38 np0005539563 podman[263270]: 2025-11-29 07:44:38.509039036 +0000 UTC m=+0.063367407 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 02:44:38 np0005539563 podman[263269]: 2025-11-29 07:44:38.526152573 +0000 UTC m=+0.080707290 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:44:38 np0005539563 podman[263271]: 2025-11-29 07:44:38.578930488 +0000 UTC m=+0.124573078 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:44:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:38 np0005539563 nova_compute[252253]: 2025-11-29 07:44:38.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 344 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 1.7 MiB/s wr, 40 op/s
Nov 29 02:44:39 np0005539563 nova_compute[252253]: 2025-11-29 07:44:39.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:41.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 349 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Nov 29 02:44:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 349 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 02:44:43 np0005539563 nova_compute[252253]: 2025-11-29 07:44:43.935 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:44 np0005539563 nova_compute[252253]: 2025-11-29 07:44:44.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:45.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 353 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 262 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Nov 29 02:44:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:44:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030426042' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:44:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:44:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030426042' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.714 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.715 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.715 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.716 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.716 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.717 252257 INFO nova.compute.manager [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Terminating instance
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.719 252257 DEBUG nova.compute.manager [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:44:46 np0005539563 kernel: tapc5179400-10 (unregistering): left promiscuous mode
Nov 29 02:44:46 np0005539563 NetworkManager[48981]: <info>  [1764402286.7772] device (tapc5179400-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:44:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:44:46Z|00033|binding|INFO|Releasing lport c5179400-1023-4dc7-b67b-d922a2453db4 from this chassis (sb_readonly=0)
Nov 29 02:44:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:44:46Z|00034|binding|INFO|Setting lport c5179400-1023-4dc7-b67b-d922a2453db4 down in Southbound
Nov 29 02:44:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:44:46Z|00035|binding|INFO|Removing iface tapc5179400-10 ovn-installed in OVS
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.786 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.787 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.789 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:46.804 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:0e:b2 10.1.0.26 fdfe:381f:8400::d4'], port_security=['fa:16:3e:44:0e:b2 10.1.0.26 fdfe:381f:8400::d4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.26/26 fdfe:381f:8400::d4/64', 'neutron:device_id': '7a2ac9f8-c588-434e-9da9-98a9d77f2e72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d3a6ccbb2794f6e85d683953ac4b5fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '441b5877-d47a-4ccc-b96a-381864fe0f87', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab4638fe-12b3-4f0f-a7fc-23f58f536508, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c5179400-1023-4dc7-b67b-d922a2453db4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:44:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:46.806 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c5179400-1023-4dc7-b67b-d922a2453db4 in datapath 6c117dd1-5064-4e69-b07c-c93c3d729d3c unbound from our chassis
Nov 29 02:44:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:46.809 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c117dd1-5064-4e69-b07c-c93c3d729d3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:46.811 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[287c5c96-64f9-48b4-ada1-0be325a03061]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:44:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:46.811 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c namespace which is not needed anymore
Nov 29 02:44:46 np0005539563 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 29 02:44:46 np0005539563 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 20.822s CPU time.
Nov 29 02:44:46 np0005539563 systemd-machined[213024]: Machine qemu-2-instance-00000004 terminated.
Nov 29 02:44:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.951 252257 INFO nova.virt.libvirt.driver [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Instance destroyed successfully.
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.952 252257 DEBUG nova.objects.instance [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'resources' on Instance uuid 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.991 252257 DEBUG nova.virt.libvirt.vif [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:42:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1667508244-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1667508244-3',id=4,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-29T07:42:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d3a6ccbb2794f6e85d683953ac4b5fd',ramdisk_id='',reservation_id='r-2hb64b0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-AutoAllocateNetworkTest-752491155',owner_user_name='tempest-AutoAllocateNetworkTest-752491155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:42:44Z,user_data=None,user_id='cf2495f54add463c8ce9d2dd8623347c',uuid=7a2ac9f8-c588-434e-9da9-98a9d77f2e72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.992 252257 DEBUG nova.network.os_vif_util [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converting VIF {"id": "c5179400-1023-4dc7-b67b-d922a2453db4", "address": "fa:16:3e:44:0e:b2", "network": {"id": "6c117dd1-5064-4e69-b07c-c93c3d729d3c", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::d4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d3a6ccbb2794f6e85d683953ac4b5fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5179400-10", "ovs_interfaceid": "c5179400-1023-4dc7-b67b-d922a2453db4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.993 252257 DEBUG nova.network.os_vif_util [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.994 252257 DEBUG os_vif [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.995 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.996 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5179400-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.997 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:46 np0005539563 nova_compute[252253]: 2025-11-29 07:44:46.999 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.002 252257 INFO os_vif [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:0e:b2,bridge_name='br-int',has_traffic_filtering=True,id=c5179400-1023-4dc7-b67b-d922a2453db4,network=Network(6c117dd1-5064-4e69-b07c-c93c3d729d3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5179400-10')
Nov 29 02:44:47 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [NOTICE]   (261999) : haproxy version is 2.8.14-c23fe91
Nov 29 02:44:47 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [NOTICE]   (261999) : path to executable is /usr/sbin/haproxy
Nov 29 02:44:47 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [WARNING]  (261999) : Exiting Master process...
Nov 29 02:44:47 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [WARNING]  (261999) : Exiting Master process...
Nov 29 02:44:47 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [ALERT]    (261999) : Current worker (262001) exited with code 143 (Terminated)
Nov 29 02:44:47 np0005539563 neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c[261926]: [WARNING]  (261999) : All workers exited. Exiting... (0)
Nov 29 02:44:47 np0005539563 systemd[1]: libpod-0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523.scope: Deactivated successfully.
Nov 29 02:44:47 np0005539563 podman[263362]: 2025-11-29 07:44:47.05757705 +0000 UTC m=+0.157724861 container died 0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:44:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:44:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:47.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:44:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 353 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 262 KiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.580 252257 DEBUG nova.compute.manager [req-895af723-09ad-4075-9f8e-b46ef45bc229 req-2a3ab5df-9a08-4658-b4a5-450bd215c8ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-vif-unplugged-c5179400-1023-4dc7-b67b-d922a2453db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.581 252257 DEBUG oslo_concurrency.lockutils [req-895af723-09ad-4075-9f8e-b46ef45bc229 req-2a3ab5df-9a08-4658-b4a5-450bd215c8ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.582 252257 DEBUG oslo_concurrency.lockutils [req-895af723-09ad-4075-9f8e-b46ef45bc229 req-2a3ab5df-9a08-4658-b4a5-450bd215c8ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.582 252257 DEBUG oslo_concurrency.lockutils [req-895af723-09ad-4075-9f8e-b46ef45bc229 req-2a3ab5df-9a08-4658-b4a5-450bd215c8ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.582 252257 DEBUG nova.compute.manager [req-895af723-09ad-4075-9f8e-b46ef45bc229 req-2a3ab5df-9a08-4658-b4a5-450bd215c8ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] No waiting events found dispatching network-vif-unplugged-c5179400-1023-4dc7-b67b-d922a2453db4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:44:47 np0005539563 nova_compute[252253]: 2025-11-29 07:44:47.583 252257 DEBUG nova.compute.manager [req-895af723-09ad-4075-9f8e-b46ef45bc229 req-2a3ab5df-9a08-4658-b4a5-450bd215c8ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-vif-unplugged-c5179400-1023-4dc7-b67b-d922a2453db4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:44:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523-userdata-shm.mount: Deactivated successfully.
Nov 29 02:44:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8b64e6d5a3724e79de7c7dc8046c875b957d9b83dfb8e47d49d652f54692bf91-merged.mount: Deactivated successfully.
Nov 29 02:44:48 np0005539563 podman[263362]: 2025-11-29 07:44:48.102347271 +0000 UTC m=+1.202495082 container cleanup 0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:44:48 np0005539563 systemd[1]: libpod-conmon-0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523.scope: Deactivated successfully.
Nov 29 02:44:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:49.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 334 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 293 KiB/s rd, 1.4 MiB/s wr, 83 op/s
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.714 252257 DEBUG nova.compute.manager [req-df057f28-cbbc-4f13-a5eb-57e8edf63309 req-bc0351b4-29e0-4f1a-9144-26b4181ef35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.714 252257 DEBUG oslo_concurrency.lockutils [req-df057f28-cbbc-4f13-a5eb-57e8edf63309 req-bc0351b4-29e0-4f1a-9144-26b4181ef35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.715 252257 DEBUG oslo_concurrency.lockutils [req-df057f28-cbbc-4f13-a5eb-57e8edf63309 req-bc0351b4-29e0-4f1a-9144-26b4181ef35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.715 252257 DEBUG oslo_concurrency.lockutils [req-df057f28-cbbc-4f13-a5eb-57e8edf63309 req-bc0351b4-29e0-4f1a-9144-26b4181ef35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.715 252257 DEBUG nova.compute.manager [req-df057f28-cbbc-4f13-a5eb-57e8edf63309 req-bc0351b4-29e0-4f1a-9144-26b4181ef35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] No waiting events found dispatching network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:44:49 np0005539563 nova_compute[252253]: 2025-11-29 07:44:49.716 252257 WARNING nova.compute.manager [req-df057f28-cbbc-4f13-a5eb-57e8edf63309 req-bc0351b4-29e0-4f1a-9144-26b4181ef35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received unexpected event network-vif-plugged-c5179400-1023-4dc7-b67b-d922a2453db4 for instance with vm_state active and task_state deleting.
Nov 29 02:44:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:51 np0005539563 podman[263423]: 2025-11-29 07:44:51.349273554 +0000 UTC m=+3.221698872 container remove 0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:44:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 310 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 324 KiB/s rd, 417 KiB/s wr, 95 op/s
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.357 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8e080e57-917f-4002-ae1b-d71c99b69081]: (4, ('Sat Nov 29 07:44:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c (0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523)\n0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523\nSat Nov 29 07:44:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c (0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523)\n0e3faf27e5086e8dedaeb88afbd966a36dabb5f5462156507d86b77863306523\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.360 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[65fbf446-cb31-46a5-b93a-6d4ed4e60302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.361 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c117dd1-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:44:51 np0005539563 nova_compute[252253]: 2025-11-29 07:44:51.364 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:51 np0005539563 kernel: tap6c117dd1-50: left promiscuous mode
Nov 29 02:44:51 np0005539563 nova_compute[252253]: 2025-11-29 07:44:51.395 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.398 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f8abedfe-b7b1-4741-9371-e6e0b6b12979]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.415 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eb353670-b174-4eed-9b3b-98005a4ed011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.417 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1d9c2b-2445-44fa-8d81-a8f12c3e78ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.446 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bea566-d415-433d-a775-d025af9b4ab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 513425, 'reachable_time': 25695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263440, 'error': None, 'target': 'ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.459 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c117dd1-5064-4e69-b07c-c93c3d729d3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 02:44:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:44:51.460 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b8aa0ea6-7885-4a76-9a71-e99ad9971c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:44:51 np0005539563 systemd[1]: run-netns-ovnmeta\x2d6c117dd1\x2d5064\x2d4e69\x2db07c\x2dc93c3d729d3c.mount: Deactivated successfully.
Nov 29 02:44:52 np0005539563 nova_compute[252253]: 2025-11-29 07:44:51.999 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:44:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 253 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 329 KiB/s rd, 168 KiB/s wr, 100 op/s
Nov 29 02:44:54 np0005539563 nova_compute[252253]: 2025-11-29 07:44:54.669 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:44:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 334 KiB/s rd, 129 KiB/s wr, 116 op/s
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:44:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:55 np0005539563 nova_compute[252253]: 2025-11-29 07:44:55.755 252257 INFO nova.virt.libvirt.driver [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Deleting instance files /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72_del#033[00m
Nov 29 02:44:55 np0005539563 nova_compute[252253]: 2025-11-29 07:44:55.756 252257 INFO nova.virt.libvirt.driver [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Deletion of /var/lib/nova/instances/7a2ac9f8-c588-434e-9da9-98a9d77f2e72_del complete#033[00m
Nov 29 02:44:55 np0005539563 nova_compute[252253]: 2025-11-29 07:44:55.819 252257 INFO nova.compute.manager [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Took 9.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:44:55 np0005539563 nova_compute[252253]: 2025-11-29 07:44:55.820 252257 DEBUG oslo.service.loopingcall [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:44:55 np0005539563 nova_compute[252253]: 2025-11-29 07:44:55.820 252257 DEBUG nova.compute.manager [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:44:55 np0005539563 nova_compute[252253]: 2025-11-29 07:44:55.820 252257 DEBUG nova.network.neutron [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:44:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:44:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:44:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:44:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:44:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:44:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:56.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:57 np0005539563 nova_compute[252253]: 2025-11-29 07:44:57.001 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:44:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 95 KiB/s rd, 76 KiB/s wr, 81 op/s
Nov 29 02:44:57 np0005539563 nova_compute[252253]: 2025-11-29 07:44:57.461 252257 DEBUG nova.network.neutron [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:44:57 np0005539563 nova_compute[252253]: 2025-11-29 07:44:57.479 252257 INFO nova.compute.manager [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Took 1.66 seconds to deallocate network for instance.#033[00m
Nov 29 02:44:57 np0005539563 nova_compute[252253]: 2025-11-29 07:44:57.548 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:57 np0005539563 nova_compute[252253]: 2025-11-29 07:44:57.548 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:57 np0005539563 nova_compute[252253]: 2025-11-29 07:44:57.680 252257 DEBUG oslo_concurrency.processutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b61860d9-b038-48c5-8bf6-8a8cef8b3210 does not exist
Nov 29 02:44:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 795d2757-4d18-479f-95b9-2a91abd60a2f does not exist
Nov 29 02:44:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 53ee4838-0fd7-4fa0-bb25-ff09f5bb2edb does not exist
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:44:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.016 252257 DEBUG nova.compute.manager [req-bcbe5687-25ac-44af-998e-8ce6d95b3c75 req-514cb9ef-a567-4b1b-88d8-9f02d7396c07 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Received event network-vif-deleted-c5179400-1023-4dc7-b67b-d922a2453db4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:44:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130116835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.195 252257 DEBUG oslo_concurrency.processutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.202 252257 DEBUG nova.compute.provider_tree [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.225 252257 DEBUG nova.scheduler.client.report [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.380 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.435 252257 INFO nova.scheduler.client.report [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Deleted allocations for instance 7a2ac9f8-c588-434e-9da9-98a9d77f2e72#033[00m
Nov 29 02:44:58 np0005539563 podman[263793]: 2025-11-29 07:44:58.34370328 +0000 UTC m=+0.018018131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:44:58 np0005539563 podman[263793]: 2025-11-29 07:44:58.451988003 +0000 UTC m=+0.126302834 container create 755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:44:58 np0005539563 nova_compute[252253]: 2025-11-29 07:44:58.625 252257 DEBUG oslo_concurrency.lockutils [None req-a15a5fa4-7413-4da7-9147-d9854afe3272 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7a2ac9f8-c588-434e-9da9-98a9d77f2e72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:58 np0005539563 systemd[1]: Started libpod-conmon-755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5.scope.
Nov 29 02:44:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:44:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:58.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:59 np0005539563 podman[263793]: 2025-11-29 07:44:59.168315849 +0000 UTC m=+0.842630730 container init 755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:44:59 np0005539563 podman[263793]: 2025-11-29 07:44:59.179775214 +0000 UTC m=+0.854090055 container start 755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:44:59 np0005539563 objective_faraday[263808]: 167 167
Nov 29 02:44:59 np0005539563 systemd[1]: libpod-755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5.scope: Deactivated successfully.
Nov 29 02:44:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:44:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:44:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:44:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:44:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:59.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 147 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 101 KiB/s rd, 77 KiB/s wr, 90 op/s
Nov 29 02:44:59 np0005539563 podman[263793]: 2025-11-29 07:44:59.517422476 +0000 UTC m=+1.191737357 container attach 755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:44:59 np0005539563 podman[263793]: 2025-11-29 07:44:59.520836556 +0000 UTC m=+1.195151417 container died 755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:44:59 np0005539563 nova_compute[252253]: 2025-11-29 07:44:59.671 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:00.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-57753064b35301a4bfa58c9082b2fa244aaf5cb8831a11da2f01e155abeddf08-merged.mount: Deactivated successfully.
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.241 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.241 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.262 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:45:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 16 KiB/s wr, 92 op/s
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.470 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.471 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.478 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.478 252257 INFO nova.compute.claims [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:45:01 np0005539563 podman[263793]: 2025-11-29 07:45:01.504779438 +0000 UTC m=+3.179094269 container remove 755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:45:01 np0005539563 systemd[1]: libpod-conmon-755f7ece78ee76882d4759da79fa1f9b54ff944a869b199fa10a7a8526cfe2e5.scope: Deactivated successfully.
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.659 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:01 np0005539563 podman[263835]: 2025-11-29 07:45:01.643886583 +0000 UTC m=+0.021801552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.950 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402286.9485207, 7a2ac9f8-c588-434e-9da9-98a9d77f2e72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:01 np0005539563 nova_compute[252253]: 2025-11-29 07:45:01.951 252257 INFO nova.compute.manager [-] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:45:02 np0005539563 nova_compute[252253]: 2025-11-29 07:45:02.004 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:02 np0005539563 nova_compute[252253]: 2025-11-29 07:45:02.042 252257 DEBUG nova.compute.manager [None req-45afd02f-7725-4f6a-81e7-08c2094e395f - - - - - -] [instance: 7a2ac9f8-c588-434e-9da9-98a9d77f2e72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:02 np0005539563 podman[263835]: 2025-11-29 07:45:02.657861644 +0000 UTC m=+1.035776633 container create d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:45:02 np0005539563 systemd[1]: Started libpod-conmon-d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc.scope.
Nov 29 02:45:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:02.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:45:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221d8663b5c21b4135927d3b2b574d1a151880413dc6e06bc5b9014e165c491e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221d8663b5c21b4135927d3b2b574d1a151880413dc6e06bc5b9014e165c491e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221d8663b5c21b4135927d3b2b574d1a151880413dc6e06bc5b9014e165c491e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221d8663b5c21b4135927d3b2b574d1a151880413dc6e06bc5b9014e165c491e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221d8663b5c21b4135927d3b2b574d1a151880413dc6e06bc5b9014e165c491e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 135 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 259 KiB/s wr, 66 op/s
Nov 29 02:45:03 np0005539563 nova_compute[252253]: 2025-11-29 07:45:03.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:03.551 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:45:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:03.553 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:45:03 np0005539563 podman[263835]: 2025-11-29 07:45:03.58942874 +0000 UTC m=+1.967343739 container init d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:45:03 np0005539563 podman[263835]: 2025-11-29 07:45:03.599434367 +0000 UTC m=+1.977349326 container start d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481720125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.390 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.399 252257 DEBUG nova.compute.provider_tree [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.424 252257 DEBUG nova.scheduler.client.report [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:45:04 np0005539563 gracious_raman[263874]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:45:04 np0005539563 gracious_raman[263874]: --> relative data size: 1.0
Nov 29 02:45:04 np0005539563 gracious_raman[263874]: --> All data devices are unavailable
Nov 29 02:45:04 np0005539563 systemd[1]: libpod-d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc.scope: Deactivated successfully.
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.512 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.513 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.587 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.587 252257 DEBUG nova.network.neutron [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.623 252257 INFO nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.677 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.818 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.820 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.821 252257 INFO nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Creating image(s)#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.862 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:04.890 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:04.890 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:04.891 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.904 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:04.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.941 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:04 np0005539563 nova_compute[252253]: 2025-11-29 07:45:04.945 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.007 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.008 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.008 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.009 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.037 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.040 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 15f1608c-ffc9-4864-a004-20b44eea0709_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:05 np0005539563 nova_compute[252253]: 2025-11-29 07:45:05.058 252257 DEBUG nova.policy [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd94c707cca604d72a8e1d49b636095e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96ea84545e71401fb69d21be6e2472f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:45:05 np0005539563 podman[263835]: 2025-11-29 07:45:05.213812227 +0000 UTC m=+3.591727206 container attach d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:45:05 np0005539563 podman[263835]: 2025-11-29 07:45:05.215077531 +0000 UTC m=+3.592992480 container died d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 29 02:45:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:06 np0005539563 nova_compute[252253]: 2025-11-29 07:45:06.809 252257 DEBUG nova.network.neutron [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Successfully created port: 1f92e4a4-86c8-48d0-b614-42644f6def7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:45:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:06.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-221d8663b5c21b4135927d3b2b574d1a151880413dc6e06bc5b9014e165c491e-merged.mount: Deactivated successfully.
Nov 29 02:45:07 np0005539563 nova_compute[252253]: 2025-11-29 07:45:07.006 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:07.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 02:45:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:08.555 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:08.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:08 np0005539563 nova_compute[252253]: 2025-11-29 07:45:08.996 252257 DEBUG nova.network.neutron [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Successfully updated port: 1f92e4a4-86c8-48d0-b614-42644f6def7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.071 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.072 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquired lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.072 252257 DEBUG nova.network.neutron [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.287 252257 DEBUG nova.compute.manager [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-changed-1f92e4a4-86c8-48d0-b614-42644f6def7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.287 252257 DEBUG nova.compute.manager [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Refreshing instance network info cache due to event network-changed-1f92e4a4-86c8-48d0-b614-42644f6def7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.287 252257 DEBUG oslo_concurrency.lockutils [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:45:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:09.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.482 252257 DEBUG nova.network.neutron [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:45:09 np0005539563 nova_compute[252253]: 2025-11-29 07:45:09.677 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:10 np0005539563 podman[263835]: 2025-11-29 07:45:10.400630167 +0000 UTC m=+8.778545126 container remove d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:45:10 np0005539563 nova_compute[252253]: 2025-11-29 07:45:10.439 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:10 np0005539563 podman[263998]: 2025-11-29 07:45:10.464550668 +0000 UTC m=+1.007279663 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 02:45:10 np0005539563 podman[263999]: 2025-11-29 07:45:10.469679485 +0000 UTC m=+1.012502362 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:45:10 np0005539563 systemd[1]: libpod-conmon-d045e7995f7cae53e1757de9377a09b5263cd64ba8c9225d0759c395b89efddc.scope: Deactivated successfully.
Nov 29 02:45:10 np0005539563 podman[264000]: 2025-11-29 07:45:10.493612073 +0000 UTC m=+1.033757659 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:45:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:10.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539563 podman[264203]: 2025-11-29 07:45:11.124839411 +0000 UTC m=+0.041520437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:11.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 185 MiB data, 338 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 2.4 MiB/s wr, 70 op/s
Nov 29 02:45:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:12 np0005539563 nova_compute[252253]: 2025-11-29 07:45:12.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:12 np0005539563 nova_compute[252253]: 2025-11-29 07:45:12.178 252257 DEBUG nova.network.neutron [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updating instance_info_cache with network_info: [{"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:12 np0005539563 nova_compute[252253]: 2025-11-29 07:45:12.366 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Releasing lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:12 np0005539563 nova_compute[252253]: 2025-11-29 07:45:12.367 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Instance network_info: |[{"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:45:12 np0005539563 nova_compute[252253]: 2025-11-29 07:45:12.367 252257 DEBUG oslo_concurrency.lockutils [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:45:12 np0005539563 nova_compute[252253]: 2025-11-29 07:45:12.367 252257 DEBUG nova.network.neutron [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Refreshing network info cache for port 1f92e4a4-86c8-48d0-b614-42644f6def7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:45:12
Nov 29 02:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Nov 29 02:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:45:12 np0005539563 podman[264203]: 2025-11-29 07:45:12.922461611 +0000 UTC m=+1.839142547 container create c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:45:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:12 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.125797749s, txc = 0x561be0eee900
Nov 29 02:45:12 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.129529476s, txc = 0x561be002b500
Nov 29 02:45:12 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.128532887s, txc = 0x561be002af00
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:13.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 190 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.9 MiB/s wr, 51 op/s
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:45:13 np0005539563 nova_compute[252253]: 2025-11-29 07:45:13.582 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 15f1608c-ffc9-4864-a004-20b44eea0709_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:13 np0005539563 systemd[1]: Started libpod-conmon-c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d.scope.
Nov 29 02:45:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:45:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:45:14 np0005539563 podman[264203]: 2025-11-29 07:45:14.280459483 +0000 UTC m=+3.197140469 container init c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:45:14 np0005539563 podman[264203]: 2025-11-29 07:45:14.287396857 +0000 UTC m=+3.204077793 container start c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:45:14 np0005539563 eager_shirley[264228]: 167 167
Nov 29 02:45:14 np0005539563 systemd[1]: libpod-c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d.scope: Deactivated successfully.
Nov 29 02:45:14 np0005539563 nova_compute[252253]: 2025-11-29 07:45:14.564 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] resizing rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:45:14 np0005539563 nova_compute[252253]: 2025-11-29 07:45:14.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:15.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 190 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.440 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7e858991-fb4d-470d-a63e-bf5f72d59c34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.441 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7e858991-fb4d-470d-a63e-bf5f72d59c34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.441 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "7e858991-fb4d-470d-a63e-bf5f72d59c34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.441 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7e858991-fb4d-470d-a63e-bf5f72d59c34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.442 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7e858991-fb4d-470d-a63e-bf5f72d59c34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.443 252257 INFO nova.compute.manager [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Terminating instance#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.444 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.444 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquired lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.444 252257 DEBUG nova.network.neutron [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.753 252257 DEBUG nova.network.neutron [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updated VIF entry in instance network info cache for port 1f92e4a4-86c8-48d0-b614-42644f6def7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.754 252257 DEBUG nova.network.neutron [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updating instance_info_cache with network_info: [{"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:15 np0005539563 nova_compute[252253]: 2025-11-29 07:45:15.782 252257 DEBUG oslo_concurrency.lockutils [req-2f6e2d67-58c9-4474-a41b-c3cc3bba6dc3 req-0cb73d01-4bc5-47f7-b395-707234048993 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:16 np0005539563 podman[264203]: 2025-11-29 07:45:16.109327124 +0000 UTC m=+5.026008090 container attach c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:45:16 np0005539563 podman[264203]: 2025-11-29 07:45:16.110425863 +0000 UTC m=+5.027106799 container died c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.140 252257 DEBUG nova.objects.instance [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 15f1608c-ffc9-4864-a004-20b44eea0709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.212 252257 DEBUG nova.network.neutron [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.236 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.237 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Ensure instance console log exists: /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.238 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.239 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.239 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.243 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Start _get_guest_xml network_info=[{"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.251 252257 WARNING nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.259 252257 DEBUG nova.virt.libvirt.host [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.260 252257 DEBUG nova.virt.libvirt.host [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.265 252257 DEBUG nova.virt.libvirt.host [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.266 252257 DEBUG nova.virt.libvirt.host [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.267 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.268 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.268 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.268 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.269 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.269 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.269 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.269 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.270 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.270 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.270 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.271 252257 DEBUG nova.virt.hardware [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:45:16 np0005539563 nova_compute[252253]: 2025-11-29 07:45:16.274 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:16.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.012 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.056 252257 DEBUG nova.network.neutron [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.077 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Releasing lock "refresh_cache-7e858991-fb4d-470d-a63e-bf5f72d59c34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.077 252257 DEBUG nova.compute.manager [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:45:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:17.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 02:45:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:45:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568009851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.706 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.731 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:17 np0005539563 nova_compute[252253]: 2025-11-29 07:45:17.734 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:45:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695214977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.122 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.123 252257 DEBUG nova.virt.libvirt.vif [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:44:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1140999470',display_name='tempest-ServersAdminTestJSON-server-1140999470',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1140999470',id=10,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-lnqmhc1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:45:04Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=15f1608c-ffc9-4864-a004-20b44eea0709,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.124 252257 DEBUG nova.network.os_vif_util [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.124 252257 DEBUG nova.network.os_vif_util [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.126 252257 DEBUG nova.objects.instance [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 15f1608c-ffc9-4864-a004-20b44eea0709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.158 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <uuid>15f1608c-ffc9-4864-a004-20b44eea0709</uuid>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <name>instance-0000000a</name>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersAdminTestJSON-server-1140999470</nova:name>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:45:16</nova:creationTime>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:user uuid="d94c707cca604d72a8e1d49b636095e1">tempest-ServersAdminTestJSON-1807764482-project-member</nova:user>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:project uuid="96ea84545e71401fb69d21be6e2472f7">tempest-ServersAdminTestJSON-1807764482</nova:project>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <nova:port uuid="1f92e4a4-86c8-48d0-b614-42644f6def7a">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <entry name="serial">15f1608c-ffc9-4864-a004-20b44eea0709</entry>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <entry name="uuid">15f1608c-ffc9-4864-a004-20b44eea0709</entry>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/15f1608c-ffc9-4864-a004-20b44eea0709_disk">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/15f1608c-ffc9-4864-a004-20b44eea0709_disk.config">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:c2:a6:c0"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <target dev="tap1f92e4a4-86"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/console.log" append="off"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:45:18 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:45:18 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:45:18 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:45:18 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.158 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Preparing to wait for external event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.159 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.159 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.159 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.159 252257 DEBUG nova.virt.libvirt.vif [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:44:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1140999470',display_name='tempest-ServersAdminTestJSON-server-1140999470',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1140999470',id=10,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-lnqmhc1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:45:04Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=15f1608c-ffc9-4864-a004-20b44eea0709,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.160 252257 DEBUG nova.network.os_vif_util [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.160 252257 DEBUG nova.network.os_vif_util [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.160 252257 DEBUG os_vif [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.161 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.161 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.164 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.164 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f92e4a4-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.165 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f92e4a4-86, col_values=(('external_ids', {'iface-id': '1f92e4a4-86c8-48d0-b614-42644f6def7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:a6:c0', 'vm-uuid': '15f1608c-ffc9-4864-a004-20b44eea0709'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.166 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:18 np0005539563 NetworkManager[48981]: <info>  [1764402318.1673] manager: (tap1f92e4a4-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.169 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.174 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.175 252257 INFO os_vif [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86')#033[00m
Nov 29 02:45:18 np0005539563 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 29 02:45:18 np0005539563 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 23.500s CPU time.
Nov 29 02:45:18 np0005539563 systemd-machined[213024]: Machine qemu-1-instance-00000001 terminated.
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.703 252257 INFO nova.virt.libvirt.driver [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance destroyed successfully.#033[00m
Nov 29 02:45:18 np0005539563 nova_compute[252253]: 2025-11-29 07:45:18.704 252257 DEBUG nova.objects.instance [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lazy-loading 'resources' on Instance uuid 7e858991-fb4d-470d-a63e-bf5f72d59c34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:45:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539563 nova_compute[252253]: 2025-11-29 07:45:19.238 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:45:19 np0005539563 nova_compute[252253]: 2025-11-29 07:45:19.238 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:45:19 np0005539563 nova_compute[252253]: 2025-11-29 07:45:19.238 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] No VIF found with MAC fa:16:3e:c2:a6:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:45:19 np0005539563 nova_compute[252253]: 2025-11-29 07:45:19.239 252257 INFO nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Using config drive#033[00m
Nov 29 02:45:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-45921679850c30d04a6849a7305376ee08df2a179dbb922d6881fadd05ac451b-merged.mount: Deactivated successfully.
Nov 29 02:45:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:19.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.030 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.036 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.038 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.038 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.038 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.080 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.080 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.080 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:45:20 np0005539563 nova_compute[252253]: 2025-11-29 07:45:20.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:20.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:21 np0005539563 nova_compute[252253]: 2025-11-29 07:45:21.106 252257 INFO nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Creating config drive at /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/disk.config#033[00m
Nov 29 02:45:21 np0005539563 nova_compute[252253]: 2025-11-29 07:45:21.111 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5eu3q2oc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:21 np0005539563 nova_compute[252253]: 2025-11-29 07:45:21.239 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5eu3q2oc" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 02:45:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:21.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:22 np0005539563 podman[264203]: 2025-11-29 07:45:22.305533206 +0000 UTC m=+11.222214182 container remove c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:45:22 np0005539563 systemd[1]: libpod-conmon-c3b55adfb044105350f9105cbad57f773ec2916b45d768bebd40e85a7468905d.scope: Deactivated successfully.
Nov 29 02:45:22 np0005539563 podman[264486]: 2025-11-29 07:45:22.530003093 +0000 UTC m=+0.041997589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.793 252257 DEBUG nova.storage.rbd_utils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] rbd image 15f1608c-ffc9-4864-a004-20b44eea0709_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.797 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/disk.config 15f1608c-ffc9-4864-a004-20b44eea0709_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.813 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.814 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.814 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.815 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.848 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.849 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.850 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.850 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:45:22 np0005539563 nova_compute[252253]: 2025-11-29 07:45:22.851 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:22.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004144529068955891 of space, bias 1.0, pg target 1.2433587206867671 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.169 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099044415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.310 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 191 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Nov 29 02:45:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:23.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.407 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.409 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.415 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.415 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:45:23 np0005539563 podman[264486]: 2025-11-29 07:45:23.471244778 +0000 UTC m=+0.983239264 container create 348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.633 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.635 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4826MB free_disk=20.90129852294922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.635 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.636 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.744 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7e858991-fb4d-470d-a63e-bf5f72d59c34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.745 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 15f1608c-ffc9-4864-a004-20b44eea0709 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.745 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.746 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:45:23 np0005539563 nova_compute[252253]: 2025-11-29 07:45:23.837 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:23 np0005539563 systemd[1]: Started libpod-conmon-348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e.scope.
Nov 29 02:45:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:45:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdda71fe32eea16da171f3d8d16a569d73f5b99546959e97ff0067c4ad33958/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdda71fe32eea16da171f3d8d16a569d73f5b99546959e97ff0067c4ad33958/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdda71fe32eea16da171f3d8d16a569d73f5b99546959e97ff0067c4ad33958/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdda71fe32eea16da171f3d8d16a569d73f5b99546959e97ff0067c4ad33958/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833253687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:24 np0005539563 nova_compute[252253]: 2025-11-29 07:45:24.375 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:24 np0005539563 nova_compute[252253]: 2025-11-29 07:45:24.385 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:45:24 np0005539563 nova_compute[252253]: 2025-11-29 07:45:24.469 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:45:24 np0005539563 nova_compute[252253]: 2025-11-29 07:45:24.512 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:45:24 np0005539563 nova_compute[252253]: 2025-11-29 07:45:24.512 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:24 np0005539563 nova_compute[252253]: 2025-11-29 07:45:24.683 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:24.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 140 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 714 KiB/s wr, 35 op/s
Nov 29 02:45:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:25.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:25 np0005539563 podman[264486]: 2025-11-29 07:45:25.865357981 +0000 UTC m=+3.377352487 container init 348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:45:25 np0005539563 podman[264486]: 2025-11-29 07:45:25.874200266 +0000 UTC m=+3.386194712 container start 348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.169 252257 DEBUG oslo_concurrency.processutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/disk.config 15f1608c-ffc9-4864-a004-20b44eea0709_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.170 252257 INFO nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Deleting local config drive /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709/disk.config because it was imported into RBD.#033[00m
Nov 29 02:45:26 np0005539563 kernel: tap1f92e4a4-86: entered promiscuous mode
Nov 29 02:45:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:26Z|00036|binding|INFO|Claiming lport 1f92e4a4-86c8-48d0-b614-42644f6def7a for this chassis.
Nov 29 02:45:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:26Z|00037|binding|INFO|1f92e4a4-86c8-48d0-b614-42644f6def7a: Claiming fa:16:3e:c2:a6:c0 10.100.0.7
Nov 29 02:45:26 np0005539563 NetworkManager[48981]: <info>  [1764402326.2336] manager: (tap1f92e4a4-86): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.243 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539563 systemd-udevd[264596]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:45:26 np0005539563 NetworkManager[48981]: <info>  [1764402326.2700] device (tap1f92e4a4-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:45:26 np0005539563 systemd-machined[213024]: New machine qemu-4-instance-0000000a.
Nov 29 02:45:26 np0005539563 NetworkManager[48981]: <info>  [1764402326.2725] device (tap1f92e4a4-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.275 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:a6:c0 10.100.0.7'], port_security=['fa:16:3e:c2:a6:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '15f1608c-ffc9-4864-a004-20b44eea0709', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1f92e4a4-86c8-48d0-b614-42644f6def7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.276 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1f92e4a4-86c8-48d0-b614-42644f6def7a in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 bound to our chassis#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.277 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 788595a6-8f3f-45f7-807d-f88c9bf0e050#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.289 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7028cc-6269-44f3-99c2-85ba50e5b0e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.290 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap788595a6-81 in ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.292 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap788595a6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.292 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[550bde52-b33b-4c12-8548-f527989b6c6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.293 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35b077a7-def8-44df-8cf2-ada63104ff1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.305 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5c6406-4071-449f-b649-e6ffe81a0ddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:26Z|00038|binding|INFO|Setting lport 1f92e4a4-86c8-48d0-b614-42644f6def7a ovn-installed in OVS
Nov 29 02:45:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:26Z|00039|binding|INFO|Setting lport 1f92e4a4-86c8-48d0-b614-42644f6def7a up in Southbound
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.322 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[999c2840-e94f-4175-b9f4-ad940cb3bbc4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.353 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d758a005-89ca-48aa-a191-941beadd75f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 systemd-udevd[264599]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.375 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.375 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.376 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.375 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa16882-9623-454b-b1ad-b8c92f58a19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 NetworkManager[48981]: <info>  [1764402326.3773] manager: (tap788595a6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.403 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[415cc161-8628-4635-9e78-f4376edc1565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.407 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a8c170-5300-4c40-819b-51cd2a5a8524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 NetworkManager[48981]: <info>  [1764402326.4379] device (tap788595a6-80): carrier: link connected
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.443 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e45517a0-ef87-4204-a2f7-6c21da076864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.462 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd04c17-da59-4c00-a000-ac89e8bde7a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529421, 'reachable_time': 39910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264630, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.478 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7df69dcd-3f54-47c7-ab06-1ab95e70c605]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:529d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529421, 'tstamp': 529421}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264631, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.490 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[87ae85bf-e3a6-4fe7-95fc-e41c103084e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap788595a6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:52:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529421, 'reachable_time': 39910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264632, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.535 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[83afa991-8d6f-4b6d-a282-c2815ca0be11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.614 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c3c5af-a1c4-4927-a0ba-fe06fb32d43c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.616 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.617 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.617 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap788595a6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.654 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539563 NetworkManager[48981]: <info>  [1764402326.6546] manager: (tap788595a6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 02:45:26 np0005539563 kernel: tap788595a6-80: entered promiscuous mode
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.657 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.658 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap788595a6-80, col_values=(('external_ids', {'iface-id': '4a1365a2-9549-4214-ba8d-c7bb361501a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.660 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:26Z|00040|binding|INFO|Releasing lport 4a1365a2-9549-4214-ba8d-c7bb361501a6 from this chassis (sb_readonly=0)
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]: {
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:    "0": [
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:        {
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "devices": [
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "/dev/loop3"
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            ],
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "lv_name": "ceph_lv0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "lv_size": "7511998464",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "name": "ceph_lv0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "tags": {
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.cluster_name": "ceph",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.crush_device_class": "",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.encrypted": "0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.osd_id": "0",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.type": "block",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:                "ceph.vdo": "0"
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            },
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "type": "block",
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:            "vg_name": "ceph_vg0"
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:        }
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]:    ]
Nov 29 02:45:26 np0005539563 eloquent_archimedes[264556]: }
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:45:26 np0005539563 nova_compute[252253]: 2025-11-29 07:45:26.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.681 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.682 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1968618d-35ef-4733-b613-dea007d5e775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.682 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/788595a6-8f3f-45f7-807d-f88c9bf0e050.pid.haproxy
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 788595a6-8f3f-45f7-807d-f88c9bf0e050
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:45:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:45:26.683 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'env', 'PROCESS_TAG=haproxy-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/788595a6-8f3f-45f7-807d-f88c9bf0e050.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:45:26 np0005539563 podman[264486]: 2025-11-29 07:45:26.697930682 +0000 UTC m=+4.209925228 container attach 348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:45:26 np0005539563 systemd[1]: libpod-348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e.scope: Deactivated successfully.
Nov 29 02:45:26 np0005539563 podman[264486]: 2025-11-29 07:45:26.712614222 +0000 UTC m=+4.224608768 container died 348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:45:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:26.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.257 252257 DEBUG nova.compute.manager [req-5b22ab8c-47f9-4b85-b54c-a68f60112860 req-446f712f-ed66-4cd9-beb2-b4e433a0b1f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.258 252257 DEBUG oslo_concurrency.lockutils [req-5b22ab8c-47f9-4b85-b54c-a68f60112860 req-446f712f-ed66-4cd9-beb2-b4e433a0b1f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.258 252257 DEBUG oslo_concurrency.lockutils [req-5b22ab8c-47f9-4b85-b54c-a68f60112860 req-446f712f-ed66-4cd9-beb2-b4e433a0b1f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.259 252257 DEBUG oslo_concurrency.lockutils [req-5b22ab8c-47f9-4b85-b54c-a68f60112860 req-446f712f-ed66-4cd9-beb2-b4e433a0b1f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.259 252257 DEBUG nova.compute.manager [req-5b22ab8c-47f9-4b85-b54c-a68f60112860 req-446f712f-ed66-4cd9-beb2-b4e433a0b1f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Processing event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:45:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 140 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 714 KiB/s wr, 29 op/s
Nov 29 02:45:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:27.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.794 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.794 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402327.7934632, 15f1608c-ffc9-4864-a004-20b44eea0709 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.795 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] VM Started (Lifecycle Event)#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.799 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.803 252257 INFO nova.virt.libvirt.driver [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Instance spawned successfully.#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.803 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.820 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.826 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.829 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.829 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.830 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.830 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.830 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.831 252257 DEBUG nova.virt.libvirt.driver [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.853 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.854 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402327.7944357, 15f1608c-ffc9-4864-a004-20b44eea0709 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.854 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:45:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:45:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/797657266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:45:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:45:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/797657266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.890 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.895 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402327.798313, 15f1608c-ffc9-4864-a004-20b44eea0709 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.895 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.925 252257 INFO nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Took 23.11 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.925 252257 DEBUG nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.927 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.932 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:45:27 np0005539563 nova_compute[252253]: 2025-11-29 07:45:27.988 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:45:28 np0005539563 nova_compute[252253]: 2025-11-29 07:45:28.008 252257 INFO nova.compute.manager [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Took 26.58 seconds to build instance.#033[00m
Nov 29 02:45:28 np0005539563 nova_compute[252253]: 2025-11-29 07:45:28.033 252257 DEBUG oslo_concurrency.lockutils [None req-e46660c9-a1c8-4614-9c14-c296e6f9febc d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:28 np0005539563 nova_compute[252253]: 2025-11-29 07:45:28.174 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ffdda71fe32eea16da171f3d8d16a569d73f5b99546959e97ff0067c4ad33958-merged.mount: Deactivated successfully.
Nov 29 02:45:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:28.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 29 02:45:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.407 252257 DEBUG nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.409 252257 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.409 252257 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.410 252257 DEBUG oslo_concurrency.lockutils [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.410 252257 DEBUG nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] No waiting events found dispatching network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.411 252257 WARNING nova.compute.manager [req-63fcd31c-39be-45bd-a0fb-6b2bfb4107df req-23d6da64-fbe3-44d1-adfc-3973b121d887 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received unexpected event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a for instance with vm_state active and task_state None.#033[00m
Nov 29 02:45:29 np0005539563 nova_compute[252253]: 2025-11-29 07:45:29.686 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:29 np0005539563 podman[264486]: 2025-11-29 07:45:29.82335855 +0000 UTC m=+7.335353036 container remove 348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:45:29 np0005539563 systemd[1]: libpod-conmon-348db731b0c245387d8a1d554dd2665d853099ab98455950f3a88b345306817e.scope: Deactivated successfully.
Nov 29 02:45:30 np0005539563 podman[264724]: 2025-11-29 07:45:29.93449208 +0000 UTC m=+1.466214016 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:45:30 np0005539563 podman[264724]: 2025-11-29 07:45:30.941822134 +0000 UTC m=+2.473544050 container create 84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:45:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:30.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:31 np0005539563 systemd[1]: Started libpod-conmon-84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7.scope.
Nov 29 02:45:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:45:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8e9f45818df43423c2e321107943d9904c4cb4bc3137914f8f4c9b4291f6ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:31 np0005539563 podman[264724]: 2025-11-29 07:45:31.161623018 +0000 UTC m=+2.693344954 container init 84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:45:31 np0005539563 podman[264724]: 2025-11-29 07:45:31.168578592 +0000 UTC m=+2.700300548 container start 84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:45:31 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [NOTICE]   (264871) : New worker (264873) forked
Nov 29 02:45:31 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [NOTICE]   (264871) : Loading success.
Nov 29 02:45:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 26 KiB/s wr, 93 op/s
Nov 29 02:45:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:31 np0005539563 podman[264896]: 2025-11-29 07:45:31.345815452 +0000 UTC m=+0.021087942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:31 np0005539563 podman[264896]: 2025-11-29 07:45:31.487965557 +0000 UTC m=+0.163238017 container create edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:45:31 np0005539563 systemd[1]: Started libpod-conmon-edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e.scope.
Nov 29 02:45:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:45:31 np0005539563 podman[264896]: 2025-11-29 07:45:31.713575206 +0000 UTC m=+0.388847666 container init edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:45:31 np0005539563 podman[264896]: 2025-11-29 07:45:31.726999372 +0000 UTC m=+0.402271832 container start edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:45:31 np0005539563 strange_wilbur[264912]: 167 167
Nov 29 02:45:31 np0005539563 systemd[1]: libpod-edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e.scope: Deactivated successfully.
Nov 29 02:45:31 np0005539563 podman[264896]: 2025-11-29 07:45:31.83090402 +0000 UTC m=+0.506176490 container attach edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:31 np0005539563 podman[264896]: 2025-11-29 07:45:31.83242216 +0000 UTC m=+0.507694620 container died edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:45:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-20119ebe4ebea4c303df018dd168f2c77ded51fd30d171c20fee2b4238f43cac-merged.mount: Deactivated successfully.
Nov 29 02:45:32 np0005539563 podman[264896]: 2025-11-29 07:45:32.30708565 +0000 UTC m=+0.982358100 container remove edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:45:32 np0005539563 systemd[1]: libpod-conmon-edc2d175bc6ab29dc0178885a2c824f7502a3e54d5139003c7ff33f63ac77c5e.scope: Deactivated successfully.
Nov 29 02:45:32 np0005539563 podman[264937]: 2025-11-29 07:45:32.470869302 +0000 UTC m=+0.028774577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:32 np0005539563 podman[264937]: 2025-11-29 07:45:32.698301258 +0000 UTC m=+0.256206553 container create cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:32 np0005539563 nova_compute[252253]: 2025-11-29 07:45:32.768 252257 INFO nova.virt.libvirt.driver [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Deleting instance files /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34_del#033[00m
Nov 29 02:45:32 np0005539563 nova_compute[252253]: 2025-11-29 07:45:32.770 252257 INFO nova.virt.libvirt.driver [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Deletion of /var/lib/nova/instances/7e858991-fb4d-470d-a63e-bf5f72d59c34_del complete#033[00m
Nov 29 02:45:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:32 np0005539563 systemd[1]: Started libpod-conmon-cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607.scope.
Nov 29 02:45:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:45:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35ee3b9e737e77d1aea77111874036b657727d115bd1e7a4f18d3d957da28ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35ee3b9e737e77d1aea77111874036b657727d115bd1e7a4f18d3d957da28ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35ee3b9e737e77d1aea77111874036b657727d115bd1e7a4f18d3d957da28ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35ee3b9e737e77d1aea77111874036b657727d115bd1e7a4f18d3d957da28ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539563 nova_compute[252253]: 2025-11-29 07:45:32.844 252257 INFO nova.compute.manager [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Took 15.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:45:32 np0005539563 nova_compute[252253]: 2025-11-29 07:45:32.846 252257 DEBUG oslo.service.loopingcall [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:45:32 np0005539563 nova_compute[252253]: 2025-11-29 07:45:32.846 252257 DEBUG nova.compute.manager [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:45:32 np0005539563 nova_compute[252253]: 2025-11-29 07:45:32.847 252257 DEBUG nova.network.neutron [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:45:32 np0005539563 podman[264937]: 2025-11-29 07:45:32.93159883 +0000 UTC m=+0.489504115 container init cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:45:32 np0005539563 podman[264937]: 2025-11-29 07:45:32.938930866 +0000 UTC m=+0.496836121 container start cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:32.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.010 252257 DEBUG nova.network.neutron [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.023 252257 DEBUG nova.network.neutron [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.042 252257 INFO nova.compute.manager [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Took 0.20 seconds to deallocate network for instance.#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.114 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.114 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:45:33 np0005539563 podman[264937]: 2025-11-29 07:45:33.152364339 +0000 UTC m=+0.710269624 container attach cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.177 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.203 252257 DEBUG oslo_concurrency.processutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:45:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 25 KiB/s wr, 114 op/s
Nov 29 02:45:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.702 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402318.7002876, 7e858991-fb4d-470d-a63e-bf5f72d59c34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.703 252257 INFO nova.compute.manager [-] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.723 252257 DEBUG nova.compute.manager [None req-c913f845-323d-4220-99a1-ddd7cb50115a - - - - - -] [instance: 7e858991-fb4d-470d-a63e-bf5f72d59c34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:45:33 np0005539563 laughing_williams[264954]: {
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:        "osd_id": 0,
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:        "type": "bluestore"
Nov 29 02:45:33 np0005539563 laughing_williams[264954]:    }
Nov 29 02:45:33 np0005539563 laughing_williams[264954]: }
Nov 29 02:45:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714524971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.844 252257 DEBUG oslo_concurrency.processutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.852 252257 DEBUG nova.compute.provider_tree [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:45:33 np0005539563 systemd[1]: libpod-cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607.scope: Deactivated successfully.
Nov 29 02:45:33 np0005539563 podman[264937]: 2025-11-29 07:45:33.86210598 +0000 UTC m=+1.420011235 container died cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.870 252257 DEBUG nova.scheduler.client.report [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.889 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:33 np0005539563 nova_compute[252253]: 2025-11-29 07:45:33.972 252257 INFO nova.scheduler.client.report [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Deleted allocations for instance 7e858991-fb4d-470d-a63e-bf5f72d59c34#033[00m
Nov 29 02:45:34 np0005539563 nova_compute[252253]: 2025-11-29 07:45:34.148 252257 DEBUG oslo_concurrency.lockutils [None req-40d2cc04-f851-4a68-bcd3-44e472ea0ea8 cf2495f54add463c8ce9d2dd8623347c 0d3a6ccbb2794f6e85d683953ac4b5fd - - default default] Lock "7e858991-fb4d-470d-a63e-bf5f72d59c34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:45:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e35ee3b9e737e77d1aea77111874036b657727d115bd1e7a4f18d3d957da28ca-merged.mount: Deactivated successfully.
Nov 29 02:45:34 np0005539563 podman[264937]: 2025-11-29 07:45:34.418388512 +0000 UTC m=+1.976293767 container remove cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:34 np0005539563 systemd[1]: libpod-conmon-cc53e92e34cc00fa67621dd56b38391cf69c051b03e6d1227107e5a0c0a73607.scope: Deactivated successfully.
Nov 29 02:45:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:45:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:45:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:45:34 np0005539563 nova_compute[252253]: 2025-11-29 07:45:34.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:45:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 56168b33-299a-4348-b0a3-6a4fed978091 does not exist
Nov 29 02:45:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a30ad5ab-5e61-4186-aa54-c650aa2c1e1a does not exist
Nov 29 02:45:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 76c570e4-c3b3-464d-9e35-7676b7ec3626 does not exist
Nov 29 02:45:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:34.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 163 op/s
Nov 29 02:45:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:45:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:45:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:36.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 153 op/s
Nov 29 02:45:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:38 np0005539563 nova_compute[252253]: 2025-11-29 07:45:38.181 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:38.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 151 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 692 KiB/s wr, 157 op/s
Nov 29 02:45:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:39 np0005539563 nova_compute[252253]: 2025-11-29 07:45:39.691 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:40.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 181 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Nov 29 02:45:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539563 podman[265113]: 2025-11-29 07:45:41.5318987 +0000 UTC m=+0.084253364 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:41 np0005539563 podman[265112]: 2025-11-29 07:45:41.574995299 +0000 UTC m=+0.119746050 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:45:41 np0005539563 podman[265114]: 2025-11-29 07:45:41.597091257 +0000 UTC m=+0.145569417 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 02:45:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:42.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:43 np0005539563 nova_compute[252253]: 2025-11-29 07:45:43.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 187 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Nov 29 02:45:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:44 np0005539563 nova_compute[252253]: 2025-11-29 07:45:44.749 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:44.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 211 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.8 MiB/s wr, 117 op/s
Nov 29 02:45:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:46Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:a6:c0 10.100.0.7
Nov 29 02:45:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:45:46Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:a6:c0 10.100.0.7
Nov 29 02:45:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:46.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 211 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 4.8 MiB/s wr, 67 op/s
Nov 29 02:45:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:48 np0005539563 nova_compute[252253]: 2025-11-29 07:45:48.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:48.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 216 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 5.2 MiB/s wr, 76 op/s
Nov 29 02:45:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:49.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:49 np0005539563 nova_compute[252253]: 2025-11-29 07:45:49.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:50.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 232 MiB data, 368 MiB used, 21 GiB / 21 GiB avail; 436 KiB/s rd, 5.3 MiB/s wr, 116 op/s
Nov 29 02:45:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:53 np0005539563 nova_compute[252253]: 2025-11-29 07:45:53.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 254 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 145 op/s
Nov 29 02:45:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:53.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:54 np0005539563 nova_compute[252253]: 2025-11-29 07:45:54.753 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.6 MiB/s wr, 217 op/s
Nov 29 02:45:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:56.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 188 op/s
Nov 29 02:45:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:57.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:58 np0005539563 nova_compute[252253]: 2025-11-29 07:45:58.196 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 284 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 210 op/s
Nov 29 02:45:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:45:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:45:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.754444) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359754571, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1390, "num_deletes": 251, "total_data_size": 2398146, "memory_usage": 2437744, "flush_reason": "Manual Compaction"}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 02:45:59 np0005539563 nova_compute[252253]: 2025-11-29 07:45:59.756 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359788395, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2358792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21231, "largest_seqno": 22620, "table_properties": {"data_size": 2352056, "index_size": 3807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15172, "raw_average_key_size": 20, "raw_value_size": 2338385, "raw_average_value_size": 3194, "num_data_blocks": 169, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402162, "oldest_key_time": 1764402162, "file_creation_time": 1764402359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 34016 microseconds, and 7499 cpu microseconds.
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.788520) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2358792 bytes OK
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.788555) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.791499) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.791529) EVENT_LOG_v1 {"time_micros": 1764402359791521, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.791550) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2391945, prev total WAL file size 2391945, number of live WAL files 2.
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.792637) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2303KB)], [47(8500KB)]
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359792856, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 11063472, "oldest_snapshot_seqno": -1}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5202 keys, 8878398 bytes, temperature: kUnknown
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359897712, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8878398, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8843851, "index_size": 20421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131489, "raw_average_key_size": 25, "raw_value_size": 8750103, "raw_average_value_size": 1682, "num_data_blocks": 838, "num_entries": 5202, "num_filter_entries": 5202, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.898042) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8878398 bytes
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.899896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.4 rd, 84.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 8.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 5719, records dropped: 517 output_compression: NoCompression
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.899924) EVENT_LOG_v1 {"time_micros": 1764402359899912, "job": 24, "event": "compaction_finished", "compaction_time_micros": 104965, "compaction_time_cpu_micros": 33764, "output_level": 6, "num_output_files": 1, "total_output_size": 8878398, "num_input_records": 5719, "num_output_records": 5202, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359900562, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402359901985, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.792306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.902088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.902094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.902096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.902097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:45:59 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:45:59.902099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 277 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 295 op/s
Nov 29 02:46:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:01.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.523 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.524 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.548 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.660 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.661 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.669 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.669 252257 INFO nova.compute.claims [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:46:02 np0005539563 nova_compute[252253]: 2025-11-29 07:46:02.804 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.199 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 293 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 254 op/s
Nov 29 02:46:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769101896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.528 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.535 252257 DEBUG nova.compute.provider_tree [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.553 252257 DEBUG nova.scheduler.client.report [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.574 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.574 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.619 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.620 252257 DEBUG nova.network.neutron [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.646 252257 INFO nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.670 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.820 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.822 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.823 252257 INFO nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Creating image(s)#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.856 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.897 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.936 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:03 np0005539563 nova_compute[252253]: 2025-11-29 07:46:03.941 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.008 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.009 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.010 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.010 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.040 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.043 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.411 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.494 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] resizing rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.533 252257 DEBUG nova.network.neutron [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.534 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.605 252257 DEBUG nova.objects.instance [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lazy-loading 'migration_context' on Instance uuid bda72fee-0917-4dd9-a8f2-5c74d0ce7276 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.623 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.624 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Ensure instance console log exists: /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.625 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.625 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.625 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.628 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.634 252257 WARNING nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.639 252257 DEBUG nova.virt.libvirt.host [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.640 252257 DEBUG nova.virt.libvirt.host [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.643 252257 DEBUG nova.virt.libvirt.host [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.643 252257 DEBUG nova.virt.libvirt.host [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.645 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.645 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.646 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.646 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.646 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.647 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.647 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.647 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.648 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.648 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.648 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.649 252257 DEBUG nova.virt.hardware [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.652 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:04 np0005539563 nova_compute[252253]: 2025-11-29 07:46:04.759 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:46:04.890 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:46:04.891 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:46:04.892 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:04.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:46:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16859782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.094 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.125 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.130 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 350 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.8 MiB/s wr, 271 op/s
Nov 29 02:46:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:46:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957404881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.718 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.721 252257 DEBUG nova.objects.instance [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lazy-loading 'pci_devices' on Instance uuid bda72fee-0917-4dd9-a8f2-5c74d0ce7276 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.738 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <uuid>bda72fee-0917-4dd9-a8f2-5c74d0ce7276</uuid>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <name>instance-0000000e</name>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-84073055</nova:name>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:46:04</nova:creationTime>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:user uuid="37822b5c62cd45aebbcbd953e06c4516">tempest-DeleteServersAdminTestJSON-119371266-project-member</nova:user>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <nova:project uuid="a1de18be9de849f9885ffa928cd531bb">tempest-DeleteServersAdminTestJSON-119371266</nova:project>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <entry name="serial">bda72fee-0917-4dd9-a8f2-5c74d0ce7276</entry>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <entry name="uuid">bda72fee-0917-4dd9-a8f2-5c74d0ce7276</entry>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk.config">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/console.log" append="off"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:46:05 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:46:05 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:46:05 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:46:05 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.788 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.788 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.789 252257 INFO nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Using config drive#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.813 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.968 252257 INFO nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Creating config drive at /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/disk.config#033[00m
Nov 29 02:46:05 np0005539563 nova_compute[252253]: 2025-11-29 07:46:05.972 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw8n1irgy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:06 np0005539563 nova_compute[252253]: 2025-11-29 07:46:06.099 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw8n1irgy" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:46:06.100 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:46:06.101 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:46:06 np0005539563 nova_compute[252253]: 2025-11-29 07:46:06.133 252257 DEBUG nova.storage.rbd_utils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] rbd image bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:06 np0005539563 nova_compute[252253]: 2025-11-29 07:46:06.137 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/disk.config bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:06 np0005539563 nova_compute[252253]: 2025-11-29 07:46:06.156 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:06 np0005539563 nova_compute[252253]: 2025-11-29 07:46:06.304 252257 DEBUG oslo_concurrency.processutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/disk.config bda72fee-0917-4dd9-a8f2-5c74d0ce7276_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:06 np0005539563 nova_compute[252253]: 2025-11-29 07:46:06.304 252257 INFO nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Deleting local config drive /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276/disk.config because it was imported into RBD.#033[00m
Nov 29 02:46:06 np0005539563 systemd-machined[213024]: New machine qemu-5-instance-0000000e.
Nov 29 02:46:06 np0005539563 systemd[1]: Started Virtual Machine qemu-5-instance-0000000e.
Nov 29 02:46:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:06.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.208 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402367.2082279, bda72fee-0917-4dd9-a8f2-5c74d0ce7276 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.209 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.212 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.213 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.216 252257 INFO nova.virt.libvirt.driver [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Instance spawned successfully.#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.217 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.232 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.238 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.242 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.243 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.243 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.243 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.244 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.244 252257 DEBUG nova.virt.libvirt.driver [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.268 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.268 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402367.2094123, bda72fee-0917-4dd9-a8f2-5c74d0ce7276 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.269 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] VM Started (Lifecycle Event)#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.298 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.301 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.339 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.371 252257 INFO nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Took 3.55 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.371 252257 DEBUG nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 350 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.5 MiB/s wr, 182 op/s
Nov 29 02:46:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:07.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.458 252257 INFO nova.compute.manager [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Took 4.83 seconds to build instance.#033[00m
Nov 29 02:46:07 np0005539563 nova_compute[252253]: 2025-11-29 07:46:07.476 252257 DEBUG oslo_concurrency.lockutils [None req-62442ed5-5ff5-4d55-9627-7fce07bccfb3 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:08 np0005539563 nova_compute[252253]: 2025-11-29 07:46:08.202 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.204 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.205 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.205 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.205 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.205 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.206 252257 INFO nova.compute.manager [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Terminating instance#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.207 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "refresh_cache-bda72fee-0917-4dd9-a8f2-5c74d0ce7276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.207 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquired lock "refresh_cache-bda72fee-0917-4dd9-a8f2-5c74d0ce7276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.208 252257 DEBUG nova.network.neutron [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:46:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 366 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 230 op/s
Nov 29 02:46:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.592 252257 DEBUG nova.network.neutron [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:46:09 np0005539563 nova_compute[252253]: 2025-11-29 07:46:09.761 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.005 252257 DEBUG nova.network.neutron [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.021 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Releasing lock "refresh_cache-bda72fee-0917-4dd9-a8f2-5c74d0ce7276" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.022 252257 DEBUG nova.compute.manager [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:46:10 np0005539563 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 29 02:46:10 np0005539563 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Consumed 3.727s CPU time.
Nov 29 02:46:10 np0005539563 systemd-machined[213024]: Machine qemu-5-instance-0000000e terminated.
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.235 252257 INFO nova.virt.libvirt.driver [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Instance destroyed successfully.#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.236 252257 DEBUG nova.objects.instance [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lazy-loading 'resources' on Instance uuid bda72fee-0917-4dd9-a8f2-5c74d0ce7276 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.860 252257 INFO nova.virt.libvirt.driver [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Deleting instance files /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276_del#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.860 252257 INFO nova.virt.libvirt.driver [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Deletion of /var/lib/nova/instances/bda72fee-0917-4dd9-a8f2-5c74d0ce7276_del complete#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.919 252257 INFO nova.compute.manager [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.920 252257 DEBUG oslo.service.loopingcall [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.920 252257 DEBUG nova.compute.manager [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:46:10 np0005539563 nova_compute[252253]: 2025-11-29 07:46:10.920 252257 DEBUG nova.network.neutron [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:46:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:46:11.102 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.201 252257 DEBUG nova.network.neutron [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.215 252257 DEBUG nova.network.neutron [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.231 252257 INFO nova.compute.manager [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Took 0.31 seconds to deallocate network for instance.#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.288 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.289 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.383 252257 DEBUG oslo_concurrency.processutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 372 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 318 op/s
Nov 29 02:46:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:11.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761511784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.882 252257 DEBUG oslo_concurrency.processutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.890 252257 DEBUG nova.compute.provider_tree [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.914 252257 DEBUG nova.scheduler.client.report [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.934 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:11 np0005539563 nova_compute[252253]: 2025-11-29 07:46:11.977 252257 INFO nova.scheduler.client.report [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Deleted allocations for instance bda72fee-0917-4dd9-a8f2-5c74d0ce7276#033[00m
Nov 29 02:46:12 np0005539563 nova_compute[252253]: 2025-11-29 07:46:12.057 252257 DEBUG oslo_concurrency.lockutils [None req-3a25ca08-941d-4faf-896c-07ce1821725b 37822b5c62cd45aebbcbd953e06c4516 a1de18be9de849f9885ffa928cd531bb - - default default] Lock "bda72fee-0917-4dd9-a8f2-5c74d0ce7276" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:12 np0005539563 podman[265646]: 2025-11-29 07:46:12.520038033 +0000 UTC m=+0.068261398 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 29 02:46:12 np0005539563 podman[265647]: 2025-11-29 07:46:12.530402019 +0000 UTC m=+0.075430219 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:46:12 np0005539563 podman[265648]: 2025-11-29 07:46:12.550643398 +0000 UTC m=+0.099325065 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:46:12
Nov 29 02:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Nov 29 02:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:46:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:13 np0005539563 nova_compute[252253]: 2025-11-29 07:46:13.204 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 361 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 243 op/s
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:46:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:46:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:46:14 np0005539563 nova_compute[252253]: 2025-11-29 07:46:14.804 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:14.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 326 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Nov 29 02:46:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:15.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:16.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 326 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 201 op/s
Nov 29 02:46:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:17.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.160881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378160906, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 413, "num_deletes": 260, "total_data_size": 310469, "memory_usage": 319960, "flush_reason": "Manual Compaction"}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378167714, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 307523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22621, "largest_seqno": 23033, "table_properties": {"data_size": 305138, "index_size": 485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5558, "raw_average_key_size": 17, "raw_value_size": 300316, "raw_average_value_size": 926, "num_data_blocks": 22, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402361, "oldest_key_time": 1764402361, "file_creation_time": 1764402378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 6916 microseconds, and 1387 cpu microseconds.
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.167794) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 307523 bytes OK
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.167810) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169831) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169843) EVENT_LOG_v1 {"time_micros": 1764402378169839, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.169856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 307870, prev total WAL file size 307870, number of live WAL files 2.
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170178) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353037' seq:0, type:0; will stop at (end)
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(300KB)], [50(8670KB)]
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378170241, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 9185921, "oldest_snapshot_seqno": -1}
Nov 29 02:46:18 np0005539563 nova_compute[252253]: 2025-11-29 07:46:18.206 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4998 keys, 9033562 bytes, temperature: kUnknown
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378310543, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9033562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8999669, "index_size": 20279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 128460, "raw_average_key_size": 25, "raw_value_size": 8908796, "raw_average_value_size": 1782, "num_data_blocks": 827, "num_entries": 4998, "num_filter_entries": 4998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.310907) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9033562 bytes
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.313467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.4 rd, 64.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(59.2) write-amplify(29.4) OK, records in: 5526, records dropped: 528 output_compression: NoCompression
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.313509) EVENT_LOG_v1 {"time_micros": 1764402378313495, "job": 26, "event": "compaction_finished", "compaction_time_micros": 140406, "compaction_time_cpu_micros": 42538, "output_level": 6, "num_output_files": 1, "total_output_size": 9033562, "num_input_records": 5526, "num_output_records": 4998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378313967, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402378315694, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.170088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:46:18.315888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:46:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:18.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 332 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Nov 29 02:46:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:19 np0005539563 nova_compute[252253]: 2025-11-29 07:46:19.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:19 np0005539563 nova_compute[252253]: 2025-11-29 07:46:19.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:46:19 np0005539563 nova_compute[252253]: 2025-11-29 07:46:19.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:46:19 np0005539563 nova_compute[252253]: 2025-11-29 07:46:19.806 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:21.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 301 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 230 op/s
Nov 29 02:46:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:21.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:21 np0005539563 nova_compute[252253]: 2025-11-29 07:46:21.690 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:21 np0005539563 nova_compute[252253]: 2025-11-29 07:46:21.690 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:21 np0005539563 nova_compute[252253]: 2025-11-29 07:46:21.690 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:46:21 np0005539563 nova_compute[252253]: 2025-11-29 07:46:21.690 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:46:22 np0005539563 nova_compute[252253]: 2025-11-29 07:46:22.122 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:46:22 np0005539563 nova_compute[252253]: 2025-11-29 07:46:22.122 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:46:22 np0005539563 nova_compute[252253]: 2025-11-29 07:46:22.123 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:46:22 np0005539563 nova_compute[252253]: 2025-11-29 07:46:22.123 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 15f1608c-ffc9-4864-a004-20b44eea0709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006882829948352875 of space, bias 1.0, pg target 2.0648489845058626 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 02:46:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:23.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:23 np0005539563 nova_compute[252253]: 2025-11-29 07:46:23.210 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 279 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 866 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 29 02:46:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:23.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.668 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updating instance_info_cache with network_info: [{"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.698 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.698 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.700 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.731 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.731 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.731 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.732 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.732 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:24 np0005539563 nova_compute[252253]: 2025-11-29 07:46:24.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:25.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527428038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.206 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.234 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402370.2339737, bda72fee-0917-4dd9-a8f2-5c74d0ce7276 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.235 252257 INFO nova.compute.manager [-] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.330 252257 DEBUG nova.compute.manager [None req-ca6efbcc-6b92-420d-9b8a-da8c75958c42 - - - - - -] [instance: bda72fee-0917-4dd9-a8f2-5c74d0ce7276] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.338 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.338 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:46:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 279 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 117 op/s
Nov 29 02:46:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.521 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.523 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4661MB free_disk=20.851795196533203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.523 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.523 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.770 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 15f1608c-ffc9-4864-a004-20b44eea0709 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.770 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.770 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:46:25 np0005539563 nova_compute[252253]: 2025-11-29 07:46:25.953 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:46:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378973378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.398 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.404 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.501 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.528 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.528 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.529 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.688 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.689 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:26 np0005539563 nova_compute[252253]: 2025-11-29 07:46:26.689 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 02:46:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:27.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 279 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 29 02:46:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:27.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:28 np0005539563 nova_compute[252253]: 2025-11-29 07:46:28.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:46:28 np0005539563 nova_compute[252253]: 2025-11-29 07:46:28.767 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:28 np0005539563 nova_compute[252253]: 2025-11-29 07:46:28.788 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 295 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 104 op/s
Nov 29 02:46:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:29.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:29 np0005539563 nova_compute[252253]: 2025-11-29 07:46:29.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:46:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 325 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 123 op/s
Nov 29 02:46:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:33.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:33 np0005539563 nova_compute[252253]: 2025-11-29 07:46:33.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:46:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 29 02:46:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:34 np0005539563 nova_compute[252253]: 2025-11-29 07:46:34.859 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:46:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:35.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 29 02:46:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:35.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b4c9845b-39f1-4c88-851c-617ef15b2a6c does not exist
Nov 29 02:46:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6112a2b1-d92f-4cc2-9391-2c81e1f0b3d1 does not exist
Nov 29 02:46:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 563f39b1-71b7-4209-9a86-c40bfdb2eaa2 does not exist
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:46:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:46:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:37.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 02:46:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:37.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.4849044 +0000 UTC m=+0.041753253 container create c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:46:37 np0005539563 systemd[1]: Started libpod-conmon-c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07.scope.
Nov 29 02:46:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:46:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.465989786 +0000 UTC m=+0.022838669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.577456845 +0000 UTC m=+0.134305718 container init c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.585959061 +0000 UTC m=+0.142807914 container start c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.589325721 +0000 UTC m=+0.146174604 container attach c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:46:37 np0005539563 jovial_pare[266153]: 167 167
Nov 29 02:46:37 np0005539563 systemd[1]: libpod-c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07.scope: Deactivated successfully.
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.593044129 +0000 UTC m=+0.149892982 container died c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:46:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-599326e5827efbbffbaab7ed8153787fca4a4691a4fe3a56a83294498dfa276d-merged.mount: Deactivated successfully.
Nov 29 02:46:37 np0005539563 podman[266138]: 2025-11-29 07:46:37.670516023 +0000 UTC m=+0.227364876 container remove c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:46:37 np0005539563 systemd[1]: libpod-conmon-c12511cb42c6fd8a777498f1f7c980188c74fe6dd7d5c95b6ecee05366caac07.scope: Deactivated successfully.
Nov 29 02:46:37 np0005539563 podman[266179]: 2025-11-29 07:46:37.833437461 +0000 UTC m=+0.023790104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:38 np0005539563 nova_compute[252253]: 2025-11-29 07:46:38.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:46:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:39.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539563 podman[266179]: 2025-11-29 07:46:39.210705747 +0000 UTC m=+1.401058360 container create b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 02:46:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:39 np0005539563 systemd[1]: Started libpod-conmon-b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751.scope.
Nov 29 02:46:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:46:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714004daefd326acd2983882a22cd63b7343cb55c39ac8556ae8aa0460b191ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714004daefd326acd2983882a22cd63b7343cb55c39ac8556ae8aa0460b191ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714004daefd326acd2983882a22cd63b7343cb55c39ac8556ae8aa0460b191ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714004daefd326acd2983882a22cd63b7343cb55c39ac8556ae8aa0460b191ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714004daefd326acd2983882a22cd63b7343cb55c39ac8556ae8aa0460b191ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:39 np0005539563 nova_compute[252253]: 2025-11-29 07:46:39.862 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:46:40 np0005539563 podman[266179]: 2025-11-29 07:46:40.159659577 +0000 UTC m=+2.350012220 container init b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_thompson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:46:40 np0005539563 podman[266179]: 2025-11-29 07:46:40.17259857 +0000 UTC m=+2.362951183 container start b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_thompson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:40 np0005539563 podman[266179]: 2025-11-29 07:46:40.24243204 +0000 UTC m=+2.432784673 container attach b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_thompson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:46:41 np0005539563 nice_thompson[266198]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:46:41 np0005539563 nice_thompson[266198]: --> relative data size: 1.0
Nov 29 02:46:41 np0005539563 nice_thompson[266198]: --> All data devices are unavailable
Nov 29 02:46:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:41.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:41 np0005539563 systemd[1]: libpod-b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751.scope: Deactivated successfully.
Nov 29 02:46:41 np0005539563 podman[266179]: 2025-11-29 07:46:41.042798464 +0000 UTC m=+3.233151117 container died b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:46:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 99 op/s
Nov 29 02:46:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:41.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-714004daefd326acd2983882a22cd63b7343cb55c39ac8556ae8aa0460b191ea-merged.mount: Deactivated successfully.
Nov 29 02:46:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:43.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:43 np0005539563 nova_compute[252253]: 2025-11-29 07:46:43.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:43 np0005539563 podman[266179]: 2025-11-29 07:46:43.255974129 +0000 UTC m=+5.446326752 container remove b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:43 np0005539563 podman[266228]: 2025-11-29 07:46:43.339696029 +0000 UTC m=+0.414672183 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:46:43 np0005539563 podman[266229]: 2025-11-29 07:46:43.34049638 +0000 UTC m=+0.419421630 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 29 02:46:43 np0005539563 systemd[1]: libpod-conmon-b05d21e1195dffaeec2a4ae4fc090a49262ac97bb257426edeb84cbcb5dfe751.scope: Deactivated successfully.
Nov 29 02:46:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 69 op/s
Nov 29 02:46:43 np0005539563 podman[266230]: 2025-11-29 07:46:43.424902817 +0000 UTC m=+0.495881365 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:46:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:43.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:44 np0005539563 podman[266433]: 2025-11-29 07:46:44.084535164 +0000 UTC m=+0.097830427 container create e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:46:44 np0005539563 podman[266433]: 2025-11-29 07:46:44.012572757 +0000 UTC m=+0.025868090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:44 np0005539563 nova_compute[252253]: 2025-11-29 07:46:44.899 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:45.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:45 np0005539563 systemd[1]: Started libpod-conmon-e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225.scope.
Nov 29 02:46:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:46:45 np0005539563 podman[266433]: 2025-11-29 07:46:45.192361124 +0000 UTC m=+1.205656377 container init e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:46:45 np0005539563 podman[266433]: 2025-11-29 07:46:45.203979943 +0000 UTC m=+1.217275186 container start e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:46:45 np0005539563 podman[266433]: 2025-11-29 07:46:45.209565392 +0000 UTC m=+1.222860635 container attach e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:45 np0005539563 strange_ishizaka[266450]: 167 167
Nov 29 02:46:45 np0005539563 systemd[1]: libpod-e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225.scope: Deactivated successfully.
Nov 29 02:46:45 np0005539563 podman[266433]: 2025-11-29 07:46:45.214669808 +0000 UTC m=+1.227965051 container died e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:46:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.3 KiB/s wr, 51 op/s
Nov 29 02:46:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:45.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8c4abf91c472b30374b75bb8e498e75e48e5d73736f036971d7ec8cb77b7659d-merged.mount: Deactivated successfully.
Nov 29 02:46:45 np0005539563 podman[266433]: 2025-11-29 07:46:45.919089848 +0000 UTC m=+1.932385091 container remove e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:46:45 np0005539563 systemd[1]: libpod-conmon-e476bca56ceff2d9fcb3384e15931b6a1cec4b4abbfa4975eb9d585ed3789225.scope: Deactivated successfully.
Nov 29 02:46:46 np0005539563 podman[266475]: 2025-11-29 07:46:46.142418244 +0000 UTC m=+0.086842604 container create be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:46 np0005539563 podman[266475]: 2025-11-29 07:46:46.077625939 +0000 UTC m=+0.022050319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:46 np0005539563 systemd[1]: Started libpod-conmon-be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231.scope.
Nov 29 02:46:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:46:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0506a18acbd042433c019498683f5affe471374696671e1c47f66626f6fdb36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0506a18acbd042433c019498683f5affe471374696671e1c47f66626f6fdb36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0506a18acbd042433c019498683f5affe471374696671e1c47f66626f6fdb36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0506a18acbd042433c019498683f5affe471374696671e1c47f66626f6fdb36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:47.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:47 np0005539563 podman[266475]: 2025-11-29 07:46:47.089611167 +0000 UTC m=+1.034035627 container init be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:46:47 np0005539563 podman[266475]: 2025-11-29 07:46:47.096403268 +0000 UTC m=+1.040827628 container start be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kirch, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:46:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 326 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 725 KiB/s rd, 75 KiB/s wr, 27 op/s
Nov 29 02:46:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:47.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]: {
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:    "0": [
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:        {
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "devices": [
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "/dev/loop3"
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            ],
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "lv_name": "ceph_lv0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "lv_size": "7511998464",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "name": "ceph_lv0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "tags": {
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.cluster_name": "ceph",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.crush_device_class": "",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.encrypted": "0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.osd_id": "0",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.type": "block",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:                "ceph.vdo": "0"
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            },
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "type": "block",
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:            "vg_name": "ceph_vg0"
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:        }
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]:    ]
Nov 29 02:46:47 np0005539563 romantic_kirch[266491]: }
Nov 29 02:46:47 np0005539563 systemd[1]: libpod-be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231.scope: Deactivated successfully.
Nov 29 02:46:48 np0005539563 nova_compute[252253]: 2025-11-29 07:46:48.229 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:48 np0005539563 podman[266475]: 2025-11-29 07:46:48.917102422 +0000 UTC m=+2.861526822 container attach be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:48 np0005539563 podman[266475]: 2025-11-29 07:46:48.918569601 +0000 UTC m=+2.862993991 container died be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:46:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:49.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 337 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 1.0 MiB/s wr, 16 op/s
Nov 29 02:46:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:49.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539563 nova_compute[252253]: 2025-11-29 07:46:49.902 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c0506a18acbd042433c019498683f5affe471374696671e1c47f66626f6fdb36-merged.mount: Deactivated successfully.
Nov 29 02:46:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:51.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:51 np0005539563 podman[266475]: 2025-11-29 07:46:51.31546781 +0000 UTC m=+5.259892170 container remove be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:46:51 np0005539563 systemd[1]: libpod-conmon-be4566fed7f3ba04df097140428405d9aaf39e4e0564fa9cf9aefb070f372231.scope: Deactivated successfully.
Nov 29 02:46:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 347 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 150 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Nov 29 02:46:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:51.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:51 np0005539563 nova_compute[252253]: 2025-11-29 07:46:51.978 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:51 np0005539563 nova_compute[252253]: 2025-11-29 07:46:51.979 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:52 np0005539563 nova_compute[252253]: 2025-11-29 07:46:52.019 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:51.934630108 +0000 UTC m=+0.028747227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:52.073943348 +0000 UTC m=+0.168060447 container create bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moore, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:52 np0005539563 systemd[1]: Started libpod-conmon-bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550.scope.
Nov 29 02:46:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:52.280569069 +0000 UTC m=+0.374686188 container init bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moore, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:52.287513855 +0000 UTC m=+0.381630984 container start bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:46:52 np0005539563 nova_compute[252253]: 2025-11-29 07:46:52.292 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:52 np0005539563 quizzical_moore[266676]: 167 167
Nov 29 02:46:52 np0005539563 systemd[1]: libpod-bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550.scope: Deactivated successfully.
Nov 29 02:46:52 np0005539563 nova_compute[252253]: 2025-11-29 07:46:52.294 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:52 np0005539563 nova_compute[252253]: 2025-11-29 07:46:52.309 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:46:52 np0005539563 nova_compute[252253]: 2025-11-29 07:46:52.309 252257 INFO nova.compute.claims [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:52.363033186 +0000 UTC m=+0.457150285 container attach bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moore, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:52.363486698 +0000 UTC m=+0.457603807 container died bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:46:52 np0005539563 nova_compute[252253]: 2025-11-29 07:46:52.680 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-36ac9627b253c1ca3c0fd5595ce94ad88f28886a9a5aac7cab1e032e63d16509-merged.mount: Deactivated successfully.
Nov 29 02:46:52 np0005539563 podman[266659]: 2025-11-29 07:46:52.923129751 +0000 UTC m=+1.017246850 container remove bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:46:52 np0005539563 systemd[1]: libpod-conmon-bfed51162655b7290b68f3cfd94090c4bbf155c54ca6af9c50479069b07f4550.scope: Deactivated successfully.
Nov 29 02:46:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:53.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689649829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.155 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.161 252257 DEBUG nova.compute.provider_tree [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:46:53 np0005539563 podman[266722]: 2025-11-29 07:46:53.112553884 +0000 UTC m=+0.028229822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.231 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.274 252257 DEBUG nova.scheduler.client.report [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:46:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 347 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 245 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.450 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.451 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:46:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:53.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.511 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.511 252257 DEBUG nova.network.neutron [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.560 252257 INFO nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:46:53 np0005539563 podman[266722]: 2025-11-29 07:46:53.562964439 +0000 UTC m=+0.478640347 container create 4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 29 02:46:53 np0005539563 nova_compute[252253]: 2025-11-29 07:46:53.631 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:46:54 np0005539563 systemd[1]: Started libpod-conmon-4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545.scope.
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.066 252257 DEBUG nova.policy [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd15fa4897cba4410b8d341f62586c091', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f3f16345721743ccb9afb374deec67b5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:46:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:46:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b487ac78a0c5a29ede6ca47e382e85d47410c53f0b9097816c5ccda86945b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b487ac78a0c5a29ede6ca47e382e85d47410c53f0b9097816c5ccda86945b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b487ac78a0c5a29ede6ca47e382e85d47410c53f0b9097816c5ccda86945b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b487ac78a0c5a29ede6ca47e382e85d47410c53f0b9097816c5ccda86945b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.206 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.208 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.209 252257 INFO nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Creating image(s)#033[00m
Nov 29 02:46:54 np0005539563 podman[266722]: 2025-11-29 07:46:54.317310017 +0000 UTC m=+1.232985975 container init 4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:46:54 np0005539563 podman[266722]: 2025-11-29 07:46:54.325228928 +0000 UTC m=+1.240904846 container start 4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:46:54 np0005539563 podman[266722]: 2025-11-29 07:46:54.343130195 +0000 UTC m=+1.258806213 container attach 4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.370 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.401 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.427 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.431 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.485 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.486 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.486 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.487 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.513 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.516 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:46:54 np0005539563 nova_compute[252253]: 2025-11-29 07:46:54.903 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:46:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:55.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]: {
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:        "osd_id": 0,
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:        "type": "bluestore"
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]:    }
Nov 29 02:46:55 np0005539563 crazy_kepler[266741]: }
Nov 29 02:46:55 np0005539563 systemd[1]: libpod-4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545.scope: Deactivated successfully.
Nov 29 02:46:55 np0005539563 podman[266722]: 2025-11-29 07:46:55.223329574 +0000 UTC m=+2.139005492 container died 4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:46:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 351 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 02:46:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:55.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b24b487ac78a0c5a29ede6ca47e382e85d47410c53f0b9097816c5ccda86945b-merged.mount: Deactivated successfully.
Nov 29 02:46:56 np0005539563 podman[266722]: 2025-11-29 07:46:56.480724127 +0000 UTC m=+3.396400035 container remove 4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:46:56 np0005539563 systemd[1]: libpod-conmon-4b92c9a510c98cd5333fb2ce3f91add118977f37b3ff501560eb62a1f1090545.scope: Deactivated successfully.
Nov 29 02:46:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:46:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:57.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:46:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 356 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 29 02:46:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev daf73261-6a66-40b2-8384-b1d2cfbeb431 does not exist
Nov 29 02:46:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8ed6e2d7-83ef-49fa-a8af-fa160479b256 does not exist
Nov 29 02:46:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 57b40176-6364-4fc3-8c47-46e9e525a753 does not exist
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.235 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.411 252257 DEBUG nova.network.neutron [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Successfully updated port: da69d7f6-de64-485f-96a1-c51ad9274372 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:46:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.532 252257 DEBUG nova.compute.manager [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-changed-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.532 252257 DEBUG nova.compute.manager [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Refreshing instance network info cache due to event network-changed-da69d7f6-de64-485f-96a1-c51ad9274372. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.532 252257 DEBUG oslo_concurrency.lockutils [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.533 252257 DEBUG oslo_concurrency.lockutils [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.533 252257 DEBUG nova.network.neutron [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Refreshing network info cache for port da69d7f6-de64-485f-96a1-c51ad9274372 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.613 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.816 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:46:58 np0005539563 nova_compute[252253]: 2025-11-29 07:46:58.881 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] resizing rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:46:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:59.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:59 np0005539563 nova_compute[252253]: 2025-11-29 07:46:59.158 252257 DEBUG nova.network.neutron [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:46:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 359 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 02:46:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:46:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:59 np0005539563 nova_compute[252253]: 2025-11-29 07:46:59.835 252257 DEBUG nova.network.neutron [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:46:59 np0005539563 nova_compute[252253]: 2025-11-29 07:46:59.914 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:46:59 np0005539563 nova_compute[252253]: 2025-11-29 07:46:59.923 252257 DEBUG oslo_concurrency.lockutils [req-10c37a32-4427-4adf-931e-fbe0aed7060c req-48decaf3-b6e9-46b5-b06a-476719272803 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:46:59 np0005539563 nova_compute[252253]: 2025-11-29 07:46:59.924 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquired lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:46:59 np0005539563 nova_compute[252253]: 2025-11-29 07:46:59.924 252257 DEBUG nova.network.neutron [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.105 252257 DEBUG nova.network.neutron [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:47:00 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.193 252257 DEBUG nova.objects.instance [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lazy-loading 'migration_context' on Instance uuid bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.303 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.303 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Ensure instance console log exists: /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.303 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.304 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:00 np0005539563 nova_compute[252253]: 2025-11-29 07:47:00.304 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:01.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 389 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 314 KiB/s rd, 2.2 MiB/s wr, 79 op/s
Nov 29 02:47:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:01.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.679 252257 DEBUG nova.network.neutron [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating instance_info_cache with network_info: [{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.704 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Releasing lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.705 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance network_info: |[{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.707 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Start _get_guest_xml network_info=[{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.712 252257 WARNING nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.719 252257 DEBUG nova.virt.libvirt.host [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.720 252257 DEBUG nova.virt.libvirt.host [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.723 252257 DEBUG nova.virt.libvirt.host [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.723 252257 DEBUG nova.virt.libvirt.host [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.725 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.725 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.725 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.726 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.726 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.726 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.726 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.727 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.727 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.727 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.727 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.728 252257 DEBUG nova.virt.hardware [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:47:01 np0005539563 nova_compute[252253]: 2025-11-29 07:47:01.731 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:47:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814134550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.192 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.225 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.231 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.678 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.680 252257 DEBUG nova.virt.libvirt.vif [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:46:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1845987537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1845987537',id=15,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fd51d8b6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-3626911
00-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:53Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.680 252257 DEBUG nova.network.os_vif_util [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converting VIF {"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.681 252257 DEBUG nova.network.os_vif_util [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:02 np0005539563 nova_compute[252253]: 2025-11-29 07:47:02.683 252257 DEBUG nova.objects.instance [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.019 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <uuid>bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0</uuid>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <name>instance-0000000f</name>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1845987537</nova:name>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:47:01</nova:creationTime>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:user uuid="d15fa4897cba4410b8d341f62586c091">tempest-LiveAutoBlockMigrationV225Test-362691100-project-member</nova:user>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:project uuid="f3f16345721743ccb9afb374deec67b5">tempest-LiveAutoBlockMigrationV225Test-362691100</nova:project>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <nova:port uuid="da69d7f6-de64-485f-96a1-c51ad9274372">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <entry name="serial">bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0</entry>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <entry name="uuid">bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0</entry>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk.config">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:27:b0:27"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <target dev="tapda69d7f6-de"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/console.log" append="off"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:47:03 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:47:03 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:47:03 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:47:03 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.021 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Preparing to wait for external event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.022 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.022 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.023 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.023 252257 DEBUG nova.virt.libvirt.vif [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:46:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1845987537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1845987537',id=15,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fd51d8b6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Te
st-362691100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:53Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.023 252257 DEBUG nova.network.os_vif_util [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converting VIF {"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.024 252257 DEBUG nova.network.os_vif_util [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.024 252257 DEBUG os_vif [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.025 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.026 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.026 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.030 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.030 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda69d7f6-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.030 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda69d7f6-de, col_values=(('external_ids', {'iface-id': 'da69d7f6-de64-485f-96a1-c51ad9274372', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:b0:27', 'vm-uuid': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:03 np0005539563 NetworkManager[48981]: <info>  [1764402423.0340] manager: (tapda69d7f6-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.034 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.038 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.039 252257 INFO os_vif [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de')#033[00m
Nov 29 02:47:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.235 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.236 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.236 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] No VIF found with MAC fa:16:3e:27:b0:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.237 252257 INFO nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Using config drive#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.263 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 405 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 193 KiB/s rd, 1.9 MiB/s wr, 63 op/s
Nov 29 02:47:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:03.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.893 252257 INFO nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Creating config drive at /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/disk.config#033[00m
Nov 29 02:47:03 np0005539563 nova_compute[252253]: 2025-11-29 07:47:03.902 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2owzpy_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.030 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2owzpy_s" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.061 252257 DEBUG nova.storage.rbd_utils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.066 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/disk.config bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.238 252257 DEBUG oslo_concurrency.processutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/disk.config bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.239 252257 INFO nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Deleting local config drive /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0/disk.config because it was imported into RBD.#033[00m
Nov 29 02:47:04 np0005539563 NetworkManager[48981]: <info>  [1764402424.2952] manager: (tapda69d7f6-de): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 02:47:04 np0005539563 kernel: tapda69d7f6-de: entered promiscuous mode
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.298 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00041|binding|INFO|Claiming lport da69d7f6-de64-485f-96a1-c51ad9274372 for this chassis.
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00042|binding|INFO|da69d7f6-de64-485f-96a1-c51ad9274372: Claiming fa:16:3e:27:b0:27 10.100.0.8
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00043|binding|INFO|Claiming lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b for this chassis.
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00044|binding|INFO|d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b: Claiming fa:16:3e:e7:8a:05 19.80.0.53
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.303 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.305 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 systemd-udevd[267181]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.323 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:b0:27 10.100.0.8'], port_security=['fa:16:3e:27:b0:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1120887272', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1120887272', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=da69d7f6-de64-485f-96a1-c51ad9274372) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.325 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:8a:05 19.80.0.53'], port_security=['fa:16:3e:e7:8a:05 19.80.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['da69d7f6-de64-485f-96a1-c51ad9274372'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-716824560', 'neutron:cidrs': '19.80.0.53/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-716824560', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=cfe6824c-d376-41ab-9fc4-a90c757d1a0a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.326 158990 INFO neutron.agent.ovn.metadata.agent [-] Port da69d7f6-de64-485f-96a1-c51ad9274372 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 bound to our chassis#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.327 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610#033[00m
Nov 29 02:47:04 np0005539563 systemd-machined[213024]: New machine qemu-6-instance-0000000f.
Nov 29 02:47:04 np0005539563 NetworkManager[48981]: <info>  [1764402424.3388] device (tapda69d7f6-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:47:04 np0005539563 NetworkManager[48981]: <info>  [1764402424.3400] device (tapda69d7f6-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.339 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[37d54d58-41da-4b8a-a7ac-37066baeedc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.340 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64f65ccd-71 in ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.343 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64f65ccd-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.343 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[415192d2-a0f9-4e15-a46a-947e766cbdb5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.343 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b684f7f-c11e-401a-aae1-a9b1e9cf5bb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 systemd[1]: Started Virtual Machine qemu-6-instance-0000000f.
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.358 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e39d79ed-eaa2-4db4-ba89-02cb200dd7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.382 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[518da984-80a9-4ba9-b256-4a8521eae8c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.409 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1ae534-5af2-4c02-9d5b-c25e7c4adc53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00045|binding|INFO|Setting lport da69d7f6-de64-485f-96a1-c51ad9274372 ovn-installed in OVS
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00046|binding|INFO|Setting lport da69d7f6-de64-485f-96a1-c51ad9274372 up in Southbound
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00047|binding|INFO|Setting lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b up in Southbound
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.414 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e40f4d-bb39-45a9-a1b0-b6b4072a08a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 systemd-udevd[267185]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:47:04 np0005539563 NetworkManager[48981]: <info>  [1764402424.4161] manager: (tap64f65ccd-70): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.444 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[166ea4cc-5173-4dad-9cb5-00a67b77d61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.447 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[53b91604-9ae0-4298-8276-104dfa3985dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 NetworkManager[48981]: <info>  [1764402424.4647] device (tap64f65ccd-70): carrier: link connected
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.469 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2a835d32-bef1-4843-8e35-5e871e865b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.483 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[213ae015-b7c8-4940-8181-9902dcd8b4ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539223, 'reachable_time': 34900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267215, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.499 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1862b802-3721-4470-84fd-6cc2a4f46469]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:be36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539223, 'tstamp': 539223}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267216, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.514 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e966f1f0-6a2b-4010-8c3d-b5133c60fc22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539223, 'reachable_time': 34900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267217, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.535 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3356b3da-e145-4d4f-87d0-041a4263ac6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.583 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cffd83-36ec-40fb-ad24-a28c4e72c590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.584 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.585 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.585 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f65ccd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.586 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 NetworkManager[48981]: <info>  [1764402424.5873] manager: (tap64f65ccd-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 02:47:04 np0005539563 kernel: tap64f65ccd-70: entered promiscuous mode
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.592 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64f65ccd-70, col_values=(('external_ids', {'iface-id': 'cbc2b067-53f5-4ead-84ea-8fcd92aff3f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:04Z|00048|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.597 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.598 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a08c7313-ef64-4248-810f-d4aae18189e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.599 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.600 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'env', 'PROCESS_TAG=haproxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.609 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.758 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402424.7584503, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.759 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Started (Lifecycle Event)#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.789 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.793 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402424.758626, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.793 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.814 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.817 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.838 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.891 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.891 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:04.892 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:04 np0005539563 nova_compute[252253]: 2025-11-29 07:47:04.916 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:04 np0005539563 podman[267292]: 2025-11-29 07:47:04.947019298 +0000 UTC m=+0.047955708 container create 1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:47:04 np0005539563 systemd[1]: Started libpod-conmon-1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41.scope.
Nov 29 02:47:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:47:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b17727fdcf6df84bf88845c460897d6eab4f1089f8530c864b440f188672cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:47:05 np0005539563 podman[267292]: 2025-11-29 07:47:04.920658766 +0000 UTC m=+0.021595186 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:47:05 np0005539563 podman[267292]: 2025-11-29 07:47:05.01885418 +0000 UTC m=+0.119790620 container init 1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:47:05 np0005539563 podman[267292]: 2025-11-29 07:47:05.026575196 +0000 UTC m=+0.127511596 container start 1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:47:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:05 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [NOTICE]   (267311) : New worker (267313) forked
Nov 29 02:47:05 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [NOTICE]   (267311) : Loading success.
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.078 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b in datapath ce6bdb9b-87f6-4011-9a56-230cbc6f4771 unbound from our chassis#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.080 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce6bdb9b-87f6-4011-9a56-230cbc6f4771#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.090 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[25851dcd-aeea-495a-babb-b2c834670599]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.091 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce6bdb9b-81 in ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.092 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce6bdb9b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.092 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7642e9f0-2bc9-48a9-8368-e6c363d393c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.093 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[def81221-8e97-4b18-98d0-8b0bb64638c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.104 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[15a5de32-ed8b-4993-9021-4e386dbac475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.128 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b76fce4-fdc8-4022-b3cf-e1ce36ed4980]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.158 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[40fa335d-d87f-4e25-b5e2-e67e0eb6d479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 NetworkManager[48981]: <info>  [1764402425.1648] manager: (tapce6bdb9b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.164 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[64480dfb-3496-4daf-97db-b074b5a6ad21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 systemd-udevd[267207]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.197 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c3219d1a-d6da-42f3-af93-a60711a375ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.200 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[24a180be-c192-4e94-b34a-5b3a53c6cc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 NetworkManager[48981]: <info>  [1764402425.2237] device (tapce6bdb9b-80): carrier: link connected
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.228 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9ca233-fb6b-43e8-a5f1-6ebf259ba2eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.244 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[573f7dc1-f3f4-4853-b255-11c968679a62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce6bdb9b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:fc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539299, 'reachable_time': 28369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267332, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.259 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6249e456-5d5e-4fb6-a9db-99cce2e88470]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:fc04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539299, 'tstamp': 539299}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267333, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.273 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[17b98c2d-9a15-4c95-9500-8ec1f1117e0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce6bdb9b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:fc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539299, 'reachable_time': 28369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267334, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.306 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[935fd973-fcca-4f7c-977c-57f027f89ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.365 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6429ea3e-961b-41d7-bff8-1e2ac38159d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.367 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce6bdb9b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.367 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.367 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce6bdb9b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:05 np0005539563 nova_compute[252253]: 2025-11-29 07:47:05.380 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:05 np0005539563 NetworkManager[48981]: <info>  [1764402425.3808] manager: (tapce6bdb9b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 02:47:05 np0005539563 kernel: tapce6bdb9b-80: entered promiscuous mode
Nov 29 02:47:05 np0005539563 nova_compute[252253]: 2025-11-29 07:47:05.382 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.383 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce6bdb9b-80, col_values=(('external_ids', {'iface-id': 'ef275590-b3a5-476c-87e4-00a73179899a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:05 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:05Z|00049|binding|INFO|Releasing lport ef275590-b3a5-476c-87e4-00a73179899a from this chassis (sb_readonly=0)
Nov 29 02:47:05 np0005539563 nova_compute[252253]: 2025-11-29 07:47:05.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:05 np0005539563 nova_compute[252253]: 2025-11-29 07:47:05.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.400 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.400 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c04424e7-f837-4257-9174-1ebfbc90ce19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.401 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-ce6bdb9b-87f6-4011-9a56-230cbc6f4771
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.pid.haproxy
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID ce6bdb9b-87f6-4011-9a56-230cbc6f4771
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:47:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:05.402 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'env', 'PROCESS_TAG=haproxy-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce6bdb9b-87f6-4011-9a56-230cbc6f4771.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:47:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 386 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 115 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Nov 29 02:47:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:05 np0005539563 podman[267366]: 2025-11-29 07:47:05.727271705 +0000 UTC m=+0.023363753 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:47:06 np0005539563 podman[267366]: 2025-11-29 07:47:06.480337549 +0000 UTC m=+0.776429537 container create 41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.505 252257 DEBUG nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.506 252257 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.506 252257 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.507 252257 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.507 252257 DEBUG nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Processing event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.507 252257 DEBUG nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.508 252257 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.508 252257 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.508 252257 DEBUG oslo_concurrency.lockutils [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.509 252257 DEBUG nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.509 252257 WARNING nova.compute.manager [req-5739bb5e-eefe-4caa-ad54-98a61ff3ef13 req-135bd338-a8b4-4f2f-a53f-9ee9c6b2b65e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received unexpected event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.510 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.515 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402426.5151598, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.515 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.517 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.520 252257 INFO nova.virt.libvirt.driver [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance spawned successfully.#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.521 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:47:06 np0005539563 systemd[1]: Started libpod-conmon-41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292.scope.
Nov 29 02:47:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:47:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58501bfa2349d6c81bdd5042f9d71923e79c3b8ffb56978247c780144d1f06eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.827 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.838 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.842 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.843 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.843 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.844 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.844 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.845 252257 DEBUG nova.virt.libvirt.driver [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:47:06 np0005539563 nova_compute[252253]: 2025-11-29 07:47:06.875 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:47:06 np0005539563 podman[267366]: 2025-11-29 07:47:06.90094941 +0000 UTC m=+1.197041508 container init 41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:47:06 np0005539563 podman[267366]: 2025-11-29 07:47:06.912048405 +0000 UTC m=+1.208140393 container start 41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:47:06 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [NOTICE]   (267387) : New worker (267389) forked
Nov 29 02:47:06 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [NOTICE]   (267387) : Loading success.
Nov 29 02:47:07 np0005539563 nova_compute[252253]: 2025-11-29 07:47:07.020 252257 INFO nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Took 12.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:47:07 np0005539563 nova_compute[252253]: 2025-11-29 07:47:07.020 252257 DEBUG nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:07.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:07 np0005539563 nova_compute[252253]: 2025-11-29 07:47:07.335 252257 INFO nova.compute.manager [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Took 15.09 seconds to build instance.#033[00m
Nov 29 02:47:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 381 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 2.9 MiB/s wr, 67 op/s
Nov 29 02:47:07 np0005539563 nova_compute[252253]: 2025-11-29 07:47:07.438 252257 DEBUG oslo_concurrency.lockutils [None req-75141862-76e4-4fd7-b5a0-1ca9e944d0ce d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:07.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:08 np0005539563 nova_compute[252253]: 2025-11-29 07:47:08.035 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 392 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 724 KiB/s rd, 4.2 MiB/s wr, 121 op/s
Nov 29 02:47:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539563 nova_compute[252253]: 2025-11-29 07:47:09.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:11.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 418 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.4 MiB/s wr, 182 op/s
Nov 29 02:47:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:12.525 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:12.527 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.547 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.581 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid 15f1608c-ffc9-4864-a004-20b44eea0709 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.582 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.582 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.582 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "15f1608c-ffc9-4864-a004-20b44eea0709" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.583 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.583 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.583 252257 INFO nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.584 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:12 np0005539563 nova_compute[252253]: 2025-11-29 07:47:12.608 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "15f1608c-ffc9-4864-a004-20b44eea0709" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:47:12
Nov 29 02:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images']
Nov 29 02:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:47:13 np0005539563 nova_compute[252253]: 2025-11-29 07:47:13.036 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:13.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 418 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Nov 29 02:47:13 np0005539563 podman[267401]: 2025-11-29 07:47:13.51809264 +0000 UTC m=+0.074155356 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:47:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:13 np0005539563 podman[267402]: 2025-11-29 07:47:13.541044552 +0000 UTC m=+0.093905262 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:47:13 np0005539563 podman[267403]: 2025-11-29 07:47:13.546454686 +0000 UTC m=+0.091487848 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:47:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:47:14 np0005539563 nova_compute[252253]: 2025-11-29 07:47:14.095 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Check if temp file /var/lib/nova/instances/tmpwgtxewll exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 29 02:47:14 np0005539563 nova_compute[252253]: 2025-11-29 07:47:14.096 252257 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 29 02:47:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:14 np0005539563 nova_compute[252253]: 2025-11-29 07:47:14.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:15.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 418 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 208 op/s
Nov 29 02:47:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:15.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:16 np0005539563 nova_compute[252253]: 2025-11-29 07:47:16.852 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:47:16 np0005539563 nova_compute[252253]: 2025-11-29 07:47:16.854 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:47:16 np0005539563 nova_compute[252253]: 2025-11-29 07:47:16.861 252257 INFO nova.compute.rpcapi [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Nov 29 02:47:16 np0005539563 nova_compute[252253]: 2025-11-29 07:47:16.862 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:17.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 418 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 207 op/s
Nov 29 02:47:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:17.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:18 np0005539563 nova_compute[252253]: 2025-11-29 07:47:18.040 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:19.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 419 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Nov 29 02:47:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:19.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:19 np0005539563 nova_compute[252253]: 2025-11-29 07:47:19.961 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:21.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 427 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.0 MiB/s wr, 168 op/s
Nov 29 02:47:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:21.529 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:21.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.635 252257 INFO nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Took 4.78 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.635 252257 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.652 252257 DEBUG nova.compute.manager [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.652 252257 DEBUG oslo_concurrency.lockutils [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.653 252257 DEBUG oslo_concurrency.lockutils [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.653 252257 DEBUG oslo_concurrency.lockutils [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.653 252257 DEBUG nova.compute.manager [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.654 252257 DEBUG nova.compute.manager [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.654 252257 DEBUG nova.compute.manager [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.654 252257 DEBUG oslo_concurrency.lockutils [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.654 252257 DEBUG oslo_concurrency.lockutils [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.655 252257 DEBUG oslo_concurrency.lockutils [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.655 252257 DEBUG nova.compute.manager [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.655 252257 WARNING nova.compute.manager [req-29526374-b839-4e96-9fd2-82fe2a447657 req-0211a710-0c4e-4714-b347-6b41df893e8b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received unexpected event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.675 252257 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwgtxewll',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(61e2127b-055f-4b3a-8c41-a0fe32b26029),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.678 252257 DEBUG nova.objects.instance [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lazy-loading 'migration_context' on Instance uuid bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.679 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.681 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.682 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.701 252257 DEBUG nova.virt.libvirt.vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:46:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1845987537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1845987537',id=15,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:47:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fd51d8b6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:47:07Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.701 252257 DEBUG nova.network.os_vif_util [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.702 252257 DEBUG nova.network.os_vif_util [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.703 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:47:21 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:27:b0:27"/>
Nov 29 02:47:21 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 02:47:21 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:47:21 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 02:47:21 np0005539563 nova_compute[252253]:  <target dev="tapda69d7f6-de"/>
Nov 29 02:47:21 np0005539563 nova_compute[252253]: </interface>
Nov 29 02:47:21 np0005539563 nova_compute[252253]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 02:47:21 np0005539563 nova_compute[252253]: 2025-11-29 07:47:21.703 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:47:22 np0005539563 nova_compute[252253]: 2025-11-29 07:47:22.184 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:47:22 np0005539563 nova_compute[252253]: 2025-11-29 07:47:22.184 252257 INFO nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 29 02:47:22 np0005539563 nova_compute[252253]: 2025-11-29 07:47:22.477 252257 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 29 02:47:22 np0005539563 nova_compute[252253]: 2025-11-29 07:47:22.708 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:22 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:22Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:b0:27 10.100.0.8
Nov 29 02:47:22 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:22Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:b0:27 10.100.0.8
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:47:22 np0005539563 nova_compute[252253]: 2025-11-29 07:47:22.980 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:47:22 np0005539563 nova_compute[252253]: 2025-11-29 07:47:22.981 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009949559719895775 of space, bias 1.0, pg target 2.9848679159687324 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.043 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:23.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 427 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 861 KiB/s wr, 103 op/s
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.484 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.485 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:47:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.989 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:47:23 np0005539563 nova_compute[252253]: 2025-11-29 07:47:23.989 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:47:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.254 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.255 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.255 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.256 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 15f1608c-ffc9-4864-a004-20b44eea0709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.421 252257 DEBUG nova.compute.manager [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-changed-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.421 252257 DEBUG nova.compute.manager [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Refreshing instance network info cache due to event network-changed-da69d7f6-de64-485f-96a1-c51ad9274372. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.422 252257 DEBUG oslo_concurrency.lockutils [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.422 252257 DEBUG oslo_concurrency.lockutils [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.422 252257 DEBUG nova.network.neutron [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Refreshing network info cache for port da69d7f6-de64-485f-96a1-c51ad9274372 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.460 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402444.4604752, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.461 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.479 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.483 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.492 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.492 252257 DEBUG nova.virt.libvirt.migration [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.509 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:47:24 np0005539563 kernel: tapda69d7f6-de (unregistering): left promiscuous mode
Nov 29 02:47:24 np0005539563 NetworkManager[48981]: <info>  [1764402444.9156] device (tapda69d7f6-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00050|binding|INFO|Releasing lport da69d7f6-de64-485f-96a1-c51ad9274372 from this chassis (sb_readonly=0)
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00051|binding|INFO|Setting lport da69d7f6-de64-485f-96a1-c51ad9274372 down in Southbound
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00052|binding|INFO|Releasing lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b from this chassis (sb_readonly=0)
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00053|binding|INFO|Setting lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b down in Southbound
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00054|binding|INFO|Removing iface tapda69d7f6-de ovn-installed in OVS
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00055|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00056|binding|INFO|Releasing lport ef275590-b3a5-476c-87e4-00a73179899a from this chassis (sb_readonly=0)
Nov 29 02:47:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:24Z|00057|binding|INFO|Releasing lport 4a1365a2-9549-4214-ba8d-c7bb361501a6 from this chassis (sb_readonly=0)
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:24.968 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:b0:27 10.100.0.8'], port_security=['fa:16:3e:27:b0:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1120887272', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1120887272', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=da69d7f6-de64-485f-96a1-c51ad9274372) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:24.971 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:8a:05 19.80.0.53'], port_security=['fa:16:3e:e7:8a:05 19.80.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['da69d7f6-de64-485f-96a1-c51ad9274372'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-716824560', 'neutron:cidrs': '19.80.0.53/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-716824560', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=cfe6824c-d376-41ab-9fc4-a90c757d1a0a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:24.973 158990 INFO neutron.agent.ovn.metadata.agent [-] Port da69d7f6-de64-485f-96a1-c51ad9274372 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis#033[00m
Nov 29 02:47:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:24.975 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:24.976 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[adb9d671-4ad3-45d2-a3b4-b6ff0d96b8e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:24.977 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace which is not needed anymore#033[00m
Nov 29 02:47:24 np0005539563 nova_compute[252253]: 2025-11-29 07:47:24.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 29 02:47:25 np0005539563 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Consumed 14.039s CPU time.
Nov 29 02:47:25 np0005539563 systemd-machined[213024]: Machine qemu-6-instance-0000000f terminated.
Nov 29 02:47:25 np0005539563 virtqemud[251807]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk: No such file or directory
Nov 29 02:47:25 np0005539563 virtqemud[251807]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_disk: No such file or directory
Nov 29 02:47:25 np0005539563 systemd-udevd[267527]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:47:25 np0005539563 kernel: tapda69d7f6-de: entered promiscuous mode
Nov 29 02:47:25 np0005539563 NetworkManager[48981]: <info>  [1764402445.0622] manager: (tapda69d7f6-de): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 02:47:25 np0005539563 kernel: tapda69d7f6-de (unregistering): left promiscuous mode
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00058|binding|INFO|Claiming lport da69d7f6-de64-485f-96a1-c51ad9274372 for this chassis.
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00059|binding|INFO|da69d7f6-de64-485f-96a1-c51ad9274372: Claiming fa:16:3e:27:b0:27 10.100.0.8
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00060|binding|INFO|Claiming lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b for this chassis.
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00061|binding|INFO|d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b: Claiming fa:16:3e:e7:8a:05 19.80.0.53
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.064 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:25.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.084 252257 DEBUG nova.virt.libvirt.guest [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.084 252257 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migration operation has completed#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.085 252257 INFO nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] _post_live_migration() is started..#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.090 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.091 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.091 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.091 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [NOTICE]   (267311) : haproxy version is 2.8.14-c23fe91
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [NOTICE]   (267311) : path to executable is /usr/sbin/haproxy
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [WARNING]  (267311) : Exiting Master process...
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [ALERT]    (267311) : Current worker (267313) exited with code 143 (Terminated)
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[267307]: [WARNING]  (267311) : All workers exited. Exiting... (0)
Nov 29 02:47:25 np0005539563 systemd[1]: libpod-1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41.scope: Deactivated successfully.
Nov 29 02:47:25 np0005539563 podman[267546]: 2025-11-29 07:47:25.120577646 +0000 UTC m=+0.053927366 container died 1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:47:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41-userdata-shm.mount: Deactivated successfully.
Nov 29 02:47:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f6b17727fdcf6df84bf88845c460897d6eab4f1089f8530c864b440f188672cf-merged.mount: Deactivated successfully.
Nov 29 02:47:25 np0005539563 podman[267546]: 2025-11-29 07:47:25.154320855 +0000 UTC m=+0.087670575 container cleanup 1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00062|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00063|binding|INFO|Releasing lport da69d7f6-de64-485f-96a1-c51ad9274372 from this chassis (sb_readonly=0)
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00064|binding|INFO|Releasing lport d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b from this chassis (sb_readonly=0)
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00065|binding|INFO|Releasing lport ef275590-b3a5-476c-87e4-00a73179899a from this chassis (sb_readonly=0)
Nov 29 02:47:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:25Z|00066|binding|INFO|Releasing lport 4a1365a2-9549-4214-ba8d-c7bb361501a6 from this chassis (sb_readonly=0)
Nov 29 02:47:25 np0005539563 systemd[1]: libpod-conmon-1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41.scope: Deactivated successfully.
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.165 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:b0:27 10.100.0.8'], port_security=['fa:16:3e:27:b0:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1120887272', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1120887272', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=da69d7f6-de64-485f-96a1-c51ad9274372) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.166 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:8a:05 19.80.0.53'], port_security=['fa:16:3e:e7:8a:05 19.80.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['da69d7f6-de64-485f-96a1-c51ad9274372'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-716824560', 'neutron:cidrs': '19.80.0.53/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-716824560', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=cfe6824c-d376-41ab-9fc4-a90c757d1a0a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.177 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:b0:27 10.100.0.8'], port_security=['fa:16:3e:27:b0:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1120887272', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1120887272', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=da69d7f6-de64-485f-96a1-c51ad9274372) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.179 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:8a:05 19.80.0.53'], port_security=['fa:16:3e:e7:8a:05 19.80.0.53'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['da69d7f6-de64-485f-96a1-c51ad9274372'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-716824560', 'neutron:cidrs': '19.80.0.53/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-716824560', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=cfe6824c-d376-41ab-9fc4-a90c757d1a0a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:25 np0005539563 podman[267579]: 2025-11-29 07:47:25.224816502 +0000 UTC m=+0.044302110 container remove 1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.229 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e1ae4f-5316-4bf4-bcb2-91bacc97e446]: (4, ('Sat Nov 29 07:47:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41)\n1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41\nSat Nov 29 07:47:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41)\n1e7a4070abfa299b66c3704c15e40adb92916d3e2764818e4f600b04be24fe41\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.231 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ad239f6c-488f-47b6-ace1-4837e466a9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.232 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.234 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 kernel: tap64f65ccd-70: left promiscuous mode
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.248 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 nova_compute[252253]: 2025-11-29 07:47:25.251 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.253 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[30862548-ad2e-4504-ad01-9400cc9913e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.278 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[14aa99ea-6240-4170-a745-ff31b0f31f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.279 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[049fe486-6558-484a-9913-1177f8cbc4b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.290 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[80f9e1a4-cec6-42ff-bc95-898c25c15cc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539217, 'reachable_time': 39197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267597, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.292 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.293 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3155e2-c75a-44e8-9ea9-66021f1a02d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 systemd[1]: run-netns-ovnmeta\x2d64f65ccd\x2d7749\x2d48ca\x2dba36\x2d8eb6d9ce3610.mount: Deactivated successfully.
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.294 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b in datapath ce6bdb9b-87f6-4011-9a56-230cbc6f4771 unbound from our chassis#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.295 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.296 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4b54fa60-10e8-41ec-9856-3abea2c6ac11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:25.296 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 namespace which is not needed anymore#033[00m
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [NOTICE]   (267387) : haproxy version is 2.8.14-c23fe91
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [NOTICE]   (267387) : path to executable is /usr/sbin/haproxy
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [WARNING]  (267387) : Exiting Master process...
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [WARNING]  (267387) : Exiting Master process...
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [ALERT]    (267387) : Current worker (267389) exited with code 143 (Terminated)
Nov 29 02:47:25 np0005539563 neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771[267382]: [WARNING]  (267387) : All workers exited. Exiting... (0)
Nov 29 02:47:25 np0005539563 systemd[1]: libpod-41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292.scope: Deactivated successfully.
Nov 29 02:47:25 np0005539563 podman[267615]: 2025-11-29 07:47:25.423631886 +0000 UTC m=+0.044597998 container died 41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:47:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 388 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 177 op/s
Nov 29 02:47:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292-userdata-shm.mount: Deactivated successfully.
Nov 29 02:47:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-58501bfa2349d6c81bdd5042f9d71923e79c3b8ffb56978247c780144d1f06eb-merged.mount: Deactivated successfully.
Nov 29 02:47:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.018 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updating instance_info_cache with network_info: [{"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.052 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-15f1608c-ffc9-4864-a004-20b44eea0709" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.053 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.054 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.055 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.055 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.055 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.056 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.089 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.090 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.091 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.092 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.093 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.127 252257 DEBUG nova.network.neutron [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updated VIF entry in instance network info cache for port da69d7f6-de64-485f-96a1-c51ad9274372. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.128 252257 DEBUG nova.network.neutron [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating instance_info_cache with network_info: [{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:26 np0005539563 podman[267615]: 2025-11-29 07:47:26.171242545 +0000 UTC m=+0.792208667 container cleanup 41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:47:26 np0005539563 systemd[1]: libpod-conmon-41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292.scope: Deactivated successfully.
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.181 252257 DEBUG oslo_concurrency.lockutils [req-9cbd1e11-4ff9-4297-be52-4ea379abf7be req-b0b12df7-f227-4e76-a723-2a83b82bbdcb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:47:26 np0005539563 podman[267647]: 2025-11-29 07:47:26.28069304 +0000 UTC m=+0.071301620 container remove 41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.287 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d977d1a1-1d87-48ed-a8c2-132d62b44a89]: (4, ('Sat Nov 29 07:47:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 (41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292)\n41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292\nSat Nov 29 07:47:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 (41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292)\n41ac9deb25be7bd1367e3829ffca06d8ef18240da70ac89289bacf1c143fa292\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.288 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[022ee0dc-1b2f-4880-a4fe-934793792596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.289 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce6bdb9b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.291 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:26 np0005539563 kernel: tapce6bdb9b-80: left promiscuous mode
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.313 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.316 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[629a43a8-52d8-4b23-8884-dbdd14e9ccb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.333 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f0950927-43d3-4e7a-9f80-57f6a5e45008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.334 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ac722bdb-e2a1-434c-b29e-8c5bc98f0efc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.348 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c07388-e089-41f3-bb7a-9a82436514a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539292, 'reachable_time': 33176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267685, 'error': None, 'target': 'ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.350 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce6bdb9b-87f6-4011-9a56-230cbc6f4771 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.350 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e4ded9-2930-43fe-9515-ca0c05635819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.351 158990 INFO neutron.agent.ovn.metadata.agent [-] Port da69d7f6-de64-485f-96a1-c51ad9274372 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis#033[00m
Nov 29 02:47:26 np0005539563 systemd[1]: run-netns-ovnmeta\x2dce6bdb9b\x2d87f6\x2d4011\x2d9a56\x2d230cbc6f4771.mount: Deactivated successfully.
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.353 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.354 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f6ec32-6133-4531-b9fc-652753ceca2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.354 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b in datapath ce6bdb9b-87f6-4011-9a56-230cbc6f4771 unbound from our chassis#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.355 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.356 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cba6c2-3409-415f-984b-44478e55b8cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.356 158990 INFO neutron.agent.ovn.metadata.agent [-] Port da69d7f6-de64-485f-96a1-c51ad9274372 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.357 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.357 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[143f2745-0533-4bec-bd95-b0f204861c2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.358 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d5ee6aac-d39c-4fb4-b83e-89d6fb507d8b in datapath ce6bdb9b-87f6-4011-9a56-230cbc6f4771 unbound from our chassis#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.359 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce6bdb9b-87f6-4011-9a56-230cbc6f4771, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:26.359 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[abc4bba8-6873-4857-80e8-79d2b03bae49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991815555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.574 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.670 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.670 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.684 252257 DEBUG nova.compute.manager [req-1cd0f5c3-ba3b-42c7-87f8-4c3e09c3e989 req-8754d5e4-3570-42e2-8c22-91be70f68cb5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.684 252257 DEBUG oslo_concurrency.lockutils [req-1cd0f5c3-ba3b-42c7-87f8-4c3e09c3e989 req-8754d5e4-3570-42e2-8c22-91be70f68cb5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.685 252257 DEBUG oslo_concurrency.lockutils [req-1cd0f5c3-ba3b-42c7-87f8-4c3e09c3e989 req-8754d5e4-3570-42e2-8c22-91be70f68cb5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.685 252257 DEBUG oslo_concurrency.lockutils [req-1cd0f5c3-ba3b-42c7-87f8-4c3e09c3e989 req-8754d5e4-3570-42e2-8c22-91be70f68cb5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.685 252257 DEBUG nova.compute.manager [req-1cd0f5c3-ba3b-42c7-87f8-4c3e09c3e989 req-8754d5e4-3570-42e2-8c22-91be70f68cb5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.686 252257 DEBUG nova.compute.manager [req-1cd0f5c3-ba3b-42c7-87f8-4c3e09c3e989 req-8754d5e4-3570-42e2-8c22-91be70f68cb5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.790 252257 DEBUG nova.network.neutron [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Activated binding for port da69d7f6-de64-485f-96a1-c51ad9274372 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.791 252257 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.792 252257 DEBUG nova.virt.libvirt.vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:46:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1845987537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1845987537',id=15,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:47:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fd51d8b6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:47:12Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.792 252257 DEBUG nova.network.os_vif_util [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "da69d7f6-de64-485f-96a1-c51ad9274372", "address": "fa:16:3e:27:b0:27", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda69d7f6-de", "ovs_interfaceid": "da69d7f6-de64-485f-96a1-c51ad9274372", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.793 252257 DEBUG nova.network.os_vif_util [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.794 252257 DEBUG os_vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.796 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.797 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda69d7f6-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.798 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.804 252257 INFO os_vif [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:b0:27,bridge_name='br-int',has_traffic_filtering=True,id=da69d7f6-de64-485f-96a1-c51ad9274372,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda69d7f6-de')#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.804 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.805 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.805 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.806 252257 DEBUG nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.806 252257 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Deleting instance files /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_del#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.807 252257 INFO nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Deletion of /var/lib/nova/instances/bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0_del complete#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.902 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.903 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4596MB free_disk=20.788719177246094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.904 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.904 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.957 252257 INFO nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Updating resource usage from migration 61e2127b-055f-4b3a-8c41-a0fe32b26029#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.981 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 15f1608c-ffc9-4864-a004-20b44eea0709 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.981 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Migration 61e2127b-055f-4b3a-8c41-a0fe32b26029 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.982 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.982 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:47:26 np0005539563 nova_compute[252253]: 2025-11-29 07:47:26.997 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.015 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.016 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.035 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:47:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:27.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.082 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.133 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 378 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 202 op/s
Nov 29 02:47:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:27.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3267796039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.588 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.597 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.616 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.640 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:47:27 np0005539563 nova_compute[252253]: 2025-11-29 07:47:27.640 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.264 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.264 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.543 252257 DEBUG nova.compute.manager [req-9d12e538-8736-4919-8488-ac259e1a12cd req-caf927e0-d9eb-437e-a5aa-38c8d7f321af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.543 252257 DEBUG oslo_concurrency.lockutils [req-9d12e538-8736-4919-8488-ac259e1a12cd req-caf927e0-d9eb-437e-a5aa-38c8d7f321af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.544 252257 DEBUG oslo_concurrency.lockutils [req-9d12e538-8736-4919-8488-ac259e1a12cd req-caf927e0-d9eb-437e-a5aa-38c8d7f321af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.544 252257 DEBUG oslo_concurrency.lockutils [req-9d12e538-8736-4919-8488-ac259e1a12cd req-caf927e0-d9eb-437e-a5aa-38c8d7f321af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.544 252257 DEBUG nova.compute.manager [req-9d12e538-8736-4919-8488-ac259e1a12cd req-caf927e0-d9eb-437e-a5aa-38c8d7f321af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.544 252257 DEBUG nova.compute.manager [req-9d12e538-8736-4919-8488-ac259e1a12cd req-caf927e0-d9eb-437e-a5aa-38c8d7f321af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-unplugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.766 252257 DEBUG nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.767 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.767 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.767 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.767 252257 DEBUG nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.768 252257 WARNING nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received unexpected event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.768 252257 DEBUG nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.769 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.769 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.769 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.770 252257 DEBUG nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.770 252257 WARNING nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received unexpected event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.770 252257 DEBUG nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.772 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.772 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.772 252257 DEBUG oslo_concurrency.lockutils [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.773 252257 DEBUG nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] No waiting events found dispatching network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:28 np0005539563 nova_compute[252253]: 2025-11-29 07:47:28.773 252257 WARNING nova.compute.manager [req-5e6004f7-8992-4ef8-a5d9-445dcc665baa req-3a955ea0-86a8-43b2-a265-3a654f6a4a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Received unexpected event network-vif-plugged-da69d7f6-de64-485f-96a1-c51ad9274372 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:47:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2339003117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:29.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 394 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.1 MiB/s wr, 224 op/s
Nov 29 02:47:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:29.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:30 np0005539563 nova_compute[252253]: 2025-11-29 07:47:30.066 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:30 np0005539563 nova_compute[252253]: 2025-11-29 07:47:30.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:31.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 285 op/s
Nov 29 02:47:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:31.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:31 np0005539563 nova_compute[252253]: 2025-11-29 07:47:31.799 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 372 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 263 op/s
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.505 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.505 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.505 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.525 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.526 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.527 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.527 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.527 252257 DEBUG oslo_concurrency.processutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.615 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.616 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.616 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.616 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.616 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.618 252257 INFO nova.compute.manager [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Terminating instance#033[00m
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.619 252257 DEBUG nova.compute.manager [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:47:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229400981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:33 np0005539563 nova_compute[252253]: 2025-11-29 07:47:33.956 252257 DEBUG oslo_concurrency.processutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:34 np0005539563 kernel: tap1f92e4a4-86 (unregistering): left promiscuous mode
Nov 29 02:47:34 np0005539563 NetworkManager[48981]: <info>  [1764402454.3183] device (tap1f92e4a4-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.325 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:34Z|00067|binding|INFO|Releasing lport 1f92e4a4-86c8-48d0-b614-42644f6def7a from this chassis (sb_readonly=0)
Nov 29 02:47:34 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:34Z|00068|binding|INFO|Setting lport 1f92e4a4-86c8-48d0-b614-42644f6def7a down in Southbound
Nov 29 02:47:34 np0005539563 ovn_controller[148841]: 2025-11-29T07:47:34Z|00069|binding|INFO|Removing iface tap1f92e4a4-86 ovn-installed in OVS
Nov 29 02:47:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:34.335 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:a6:c0 10.100.0.7'], port_security=['fa:16:3e:c2:a6:c0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '15f1608c-ffc9-4864-a004-20b44eea0709', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96ea84545e71401fb69d21be6e2472f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b094dd4e-cb76-48e4-81b4-a11d19d5f956', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=baaadbdd-7935-4514-9332-391647ab6336, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1f92e4a4-86c8-48d0-b614-42644f6def7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:34.336 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1f92e4a4-86c8-48d0-b614-42644f6def7a in datapath 788595a6-8f3f-45f7-807d-f88c9bf0e050 unbound from our chassis#033[00m
Nov 29 02:47:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:34.337 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 788595a6-8f3f-45f7-807d-f88c9bf0e050, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:47:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:34.338 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a49e6f-11ec-480a-ac77-60b26befd474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:34.338 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 namespace which is not needed anymore#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.355 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 29 02:47:34 np0005539563 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 19.512s CPU time.
Nov 29 02:47:34 np0005539563 systemd-machined[213024]: Machine qemu-4-instance-0000000a terminated.
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.437 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.442 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.449 252257 INFO nova.virt.libvirt.driver [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Instance destroyed successfully.#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.450 252257 DEBUG nova.objects.instance [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lazy-loading 'resources' on Instance uuid 15f1608c-ffc9-4864-a004-20b44eea0709 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.467 252257 DEBUG nova.virt.libvirt.vif [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:44:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1140999470',display_name='tempest-ServersAdminTestJSON-server-1140999470',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1140999470',id=10,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:45:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96ea84545e71401fb69d21be6e2472f7',ramdisk_id='',reservation_id='r-lnqmhc1j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1807764482',owner_user_name='tempest-ServersAdminTestJSON-1807764482-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:45:27Z,user_data=None,user_id='d94c707cca604d72a8e1d49b636095e1',uuid=15f1608c-ffc9-4864-a004-20b44eea0709,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.468 252257 DEBUG nova.network.os_vif_util [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converting VIF {"id": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "address": "fa:16:3e:c2:a6:c0", "network": {"id": "788595a6-8f3f-45f7-807d-f88c9bf0e050", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-275777888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96ea84545e71401fb69d21be6e2472f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f92e4a4-86", "ovs_interfaceid": "1f92e4a4-86c8-48d0-b614-42644f6def7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.469 252257 DEBUG nova.network.os_vif_util [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.469 252257 DEBUG os_vif [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.470 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.471 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f92e4a4-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.472 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.476 252257 INFO os_vif [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:a6:c0,bridge_name='br-int',has_traffic_filtering=True,id=1f92e4a4-86c8-48d0-b614-42644f6def7a,network=Network(788595a6-8f3f-45f7-807d-f88c9bf0e050),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f92e4a4-86')#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.515 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.516 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.651 252257 WARNING nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.652 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4809MB free_disk=20.810302734375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": 
"0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.653 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.653 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.708 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration for instance bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.728 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.767 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Instance 15f1608c-ffc9-4864-a004-20b44eea0709 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.768 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration 61e2127b-055f-4b3a-8c41-a0fe32b26029 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.768 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.768 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.775 252257 DEBUG nova.compute.manager [req-b2d0ad9b-5a08-423c-a4dc-7102dc0ed09c req-b9d54241-b0a3-40f9-b568-15138ed33110 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-vif-unplugged-1f92e4a4-86c8-48d0-b614-42644f6def7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.776 252257 DEBUG oslo_concurrency.lockutils [req-b2d0ad9b-5a08-423c-a4dc-7102dc0ed09c req-b9d54241-b0a3-40f9-b568-15138ed33110 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.776 252257 DEBUG oslo_concurrency.lockutils [req-b2d0ad9b-5a08-423c-a4dc-7102dc0ed09c req-b9d54241-b0a3-40f9-b568-15138ed33110 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.776 252257 DEBUG oslo_concurrency.lockutils [req-b2d0ad9b-5a08-423c-a4dc-7102dc0ed09c req-b9d54241-b0a3-40f9-b568-15138ed33110 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.777 252257 DEBUG nova.compute.manager [req-b2d0ad9b-5a08-423c-a4dc-7102dc0ed09c req-b9d54241-b0a3-40f9-b568-15138ed33110 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] No waiting events found dispatching network-vif-unplugged-1f92e4a4-86c8-48d0-b614-42644f6def7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.777 252257 DEBUG nova.compute.manager [req-b2d0ad9b-5a08-423c-a4dc-7102dc0ed09c req-b9d54241-b0a3-40f9-b568-15138ed33110 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-vif-unplugged-1f92e4a4-86c8-48d0-b614-42644f6def7a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:47:34 np0005539563 nova_compute[252253]: 2025-11-29 07:47:34.832 252257 DEBUG oslo_concurrency.processutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:35 np0005539563 nova_compute[252253]: 2025-11-29 07:47:35.069 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:35 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [NOTICE]   (264871) : haproxy version is 2.8.14-c23fe91
Nov 29 02:47:35 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [NOTICE]   (264871) : path to executable is /usr/sbin/haproxy
Nov 29 02:47:35 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [WARNING]  (264871) : Exiting Master process...
Nov 29 02:47:35 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [WARNING]  (264871) : Exiting Master process...
Nov 29 02:47:35 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [ALERT]    (264871) : Current worker (264873) exited with code 143 (Terminated)
Nov 29 02:47:35 np0005539563 neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050[264855]: [WARNING]  (264871) : All workers exited. Exiting... (0)
Nov 29 02:47:35 np0005539563 systemd[1]: libpod-84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7.scope: Deactivated successfully.
Nov 29 02:47:35 np0005539563 podman[267759]: 2025-11-29 07:47:35.1926931 +0000 UTC m=+0.761818328 container died 84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:47:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 375 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 340 op/s
Nov 29 02:47:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:35.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469695789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.614 252257 DEBUG oslo_concurrency.processutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.624 252257 DEBUG nova.compute.provider_tree [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.655 252257 DEBUG nova.scheduler.client.report [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.679 252257 DEBUG nova.compute.resource_tracker [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.680 252257 DEBUG oslo_concurrency.lockutils [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.691 252257 INFO nova.compute.manager [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.786 252257 INFO nova.scheduler.client.report [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Deleted allocation for migration 61e2127b-055f-4b3a-8c41-a0fe32b26029#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.787 252257 DEBUG nova.virt.libvirt.driver [None req-a75cecb9-bcb6-40c5-8bca-23307ede577e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.883 252257 DEBUG nova.compute.manager [req-b01b6158-1d5e-4270-9289-96dd9b9af1c5 req-60ba28dd-e030-4363-ba53-9cbba794c385 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.885 252257 DEBUG oslo_concurrency.lockutils [req-b01b6158-1d5e-4270-9289-96dd9b9af1c5 req-60ba28dd-e030-4363-ba53-9cbba794c385 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.886 252257 DEBUG oslo_concurrency.lockutils [req-b01b6158-1d5e-4270-9289-96dd9b9af1c5 req-60ba28dd-e030-4363-ba53-9cbba794c385 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.886 252257 DEBUG oslo_concurrency.lockutils [req-b01b6158-1d5e-4270-9289-96dd9b9af1c5 req-60ba28dd-e030-4363-ba53-9cbba794c385 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.887 252257 DEBUG nova.compute.manager [req-b01b6158-1d5e-4270-9289-96dd9b9af1c5 req-60ba28dd-e030-4363-ba53-9cbba794c385 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] No waiting events found dispatching network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:47:36 np0005539563 nova_compute[252253]: 2025-11-29 07:47:36.887 252257 WARNING nova.compute.manager [req-b01b6158-1d5e-4270-9289-96dd9b9af1c5 req-60ba28dd-e030-4363-ba53-9cbba794c385 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received unexpected event network-vif-plugged-1f92e4a4-86c8-48d0-b614-42644f6def7a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:47:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:37.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7-userdata-shm.mount: Deactivated successfully.
Nov 29 02:47:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ed8e9f45818df43423c2e321107943d9904c4cb4bc3137914f8f4c9b4291f6ab-merged.mount: Deactivated successfully.
Nov 29 02:47:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 381 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.0 MiB/s wr, 242 op/s
Nov 29 02:47:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:37.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:38 np0005539563 podman[267759]: 2025-11-29 07:47:38.099220969 +0000 UTC m=+3.668346187 container cleanup 84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:47:38 np0005539563 systemd[1]: libpod-conmon-84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7.scope: Deactivated successfully.
Nov 29 02:47:38 np0005539563 podman[267888]: 2025-11-29 07:47:38.918624389 +0000 UTC m=+0.783833543 container remove 84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:47:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.930 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6c194340-7c36-443d-8c99-dc585d997f53]: (4, ('Sat Nov 29 07:47:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 (84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7)\n84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7\nSat Nov 29 07:47:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 (84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7)\n84f92c8dc64d1eb148e2b9fe6f0dea0aadc419b8e574c40d5cd4385ab389f1b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.932 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ea24a9e5-2025-45e1-8062-1e8949597ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.934 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap788595a6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:38 np0005539563 nova_compute[252253]: 2025-11-29 07:47:38.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:38 np0005539563 kernel: tap788595a6-80: left promiscuous mode
Nov 29 02:47:38 np0005539563 nova_compute[252253]: 2025-11-29 07:47:38.957 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.961 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[117732e1-41f2-4dc8-8b5b-e2708580dec8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.980 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1d04535b-f2d5-4824-826f-29b683f814b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.981 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7fffed3b-7659-496c-b7c1-cea691e5d7bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:38.999 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d0df2371-0295-49b2-9552-5bf9f9c93b67]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529412, 'reachable_time': 39166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267904, 'error': None, 'target': 'ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:39.001 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-788595a6-8f3f-45f7-807d-f88c9bf0e050 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:47:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:39.001 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e057c1-b1a5-4267-9994-8f343693e957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:39 np0005539563 systemd[1]: run-netns-ovnmeta\x2d788595a6\x2d8f3f\x2d45f7\x2d807d\x2df88c9bf0e050.mount: Deactivated successfully.
Nov 29 02:47:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 388 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 206 op/s
Nov 29 02:47:39 np0005539563 nova_compute[252253]: 2025-11-29 07:47:39.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:39.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:40 np0005539563 nova_compute[252253]: 2025-11-29 07:47:40.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:40 np0005539563 nova_compute[252253]: 2025-11-29 07:47:40.084 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402445.08263, bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:40 np0005539563 nova_compute[252253]: 2025-11-29 07:47:40.084 252257 INFO nova.compute.manager [-] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:47:40 np0005539563 nova_compute[252253]: 2025-11-29 07:47:40.105 252257 DEBUG nova.compute.manager [None req-bb6f33e8-a65e-492e-82c8-cad9a0b59fa4 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:40 np0005539563 nova_compute[252253]: 2025-11-29 07:47:40.924 252257 INFO nova.virt.libvirt.driver [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Deleting instance files /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709_del#033[00m
Nov 29 02:47:40 np0005539563 nova_compute[252253]: 2025-11-29 07:47:40.925 252257 INFO nova.virt.libvirt.driver [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Deletion of /var/lib/nova/instances/15f1608c-ffc9-4864-a004-20b44eea0709_del complete#033[00m
Nov 29 02:47:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:41.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:41 np0005539563 nova_compute[252253]: 2025-11-29 07:47:41.185 252257 INFO nova.compute.manager [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Took 7.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:47:41 np0005539563 nova_compute[252253]: 2025-11-29 07:47:41.186 252257 DEBUG oslo.service.loopingcall [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:47:41 np0005539563 nova_compute[252253]: 2025-11-29 07:47:41.187 252257 DEBUG nova.compute.manager [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:47:41 np0005539563 nova_compute[252253]: 2025-11-29 07:47:41.187 252257 DEBUG nova.network.neutron [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:47:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 329 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 182 op/s
Nov 29 02:47:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:41.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:43.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:47:43 np0005539563 nova_compute[252253]: 2025-11-29 07:47:43.392 252257 DEBUG nova.network.neutron [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:47:43 np0005539563 nova_compute[252253]: 2025-11-29 07:47:43.416 252257 INFO nova.compute.manager [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Took 2.23 seconds to deallocate network for instance.#033[00m
Nov 29 02:47:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 329 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 118 op/s
Nov 29 02:47:43 np0005539563 nova_compute[252253]: 2025-11-29 07:47:43.462 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:43 np0005539563 nova_compute[252253]: 2025-11-29 07:47:43.462 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:43 np0005539563 nova_compute[252253]: 2025-11-29 07:47:43.503 252257 DEBUG nova.compute.manager [req-edcf363f-df2c-4af5-832a-a84f85b379d9 req-a8fd602a-3c3d-4230-b09f-58d80922fae1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Received event network-vif-deleted-1f92e4a4-86c8-48d0-b614-42644f6def7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:47:43 np0005539563 nova_compute[252253]: 2025-11-29 07:47:43.513 252257 DEBUG oslo_concurrency.processutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:43.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181592901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.010 252257 DEBUG oslo_concurrency.processutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.018 252257 DEBUG nova.compute.provider_tree [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.037 252257 DEBUG nova.scheduler.client.report [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.062 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.104 252257 INFO nova.scheduler.client.report [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Deleted allocations for instance 15f1608c-ffc9-4864-a004-20b44eea0709#033[00m
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.194 252257 DEBUG oslo_concurrency.lockutils [None req-1441716c-29bd-4895-9b37-b05c8e6c52f2 d94c707cca604d72a8e1d49b636095e1 96ea84545e71401fb69d21be6e2472f7 - - default default] Lock "15f1608c-ffc9-4864-a004-20b44eea0709" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:44 np0005539563 nova_compute[252253]: 2025-11-29 07:47:44.546 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:44 np0005539563 podman[267934]: 2025-11-29 07:47:44.567653458 +0000 UTC m=+0.115944528 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:47:44 np0005539563 podman[267935]: 2025-11-29 07:47:44.576016831 +0000 UTC m=+0.124600748 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:47:44 np0005539563 podman[267936]: 2025-11-29 07:47:44.594120304 +0000 UTC m=+0.136917557 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:47:45 np0005539563 nova_compute[252253]: 2025-11-29 07:47:45.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:45.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 336 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Nov 29 02:47:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 345 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 131 op/s
Nov 29 02:47:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:47.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 288 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 163 op/s
Nov 29 02:47:49 np0005539563 nova_compute[252253]: 2025-11-29 07:47:49.448 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402454.4471362, 15f1608c-ffc9-4864-a004-20b44eea0709 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:47:49 np0005539563 nova_compute[252253]: 2025-11-29 07:47:49.449 252257 INFO nova.compute.manager [-] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:47:49 np0005539563 nova_compute[252253]: 2025-11-29 07:47:49.542 252257 DEBUG nova.compute.manager [None req-b5443fd7-d7ba-4929-ab94-4ecbe6fa7dac - - - - - -] [instance: 15f1608c-ffc9-4864-a004-20b44eea0709] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:47:49 np0005539563 nova_compute[252253]: 2025-11-29 07:47:49.548 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:49.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:50 np0005539563 nova_compute[252253]: 2025-11-29 07:47:50.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:51.342 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:47:51 np0005539563 nova_compute[252253]: 2025-11-29 07:47:51.342 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:51.343 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:47:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 223 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 216 op/s
Nov 29 02:47:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:51.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 223 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Nov 29 02:47:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:54 np0005539563 nova_compute[252253]: 2025-11-29 07:47:54.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.255 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.255 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.273 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.399 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.399 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.404 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.405 252257 INFO nova.compute.claims [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:47:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 200 op/s
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.515 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:55.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 29 02:47:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 02:47:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 29 02:47:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628963730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.928 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:55 np0005539563 nova_compute[252253]: 2025-11-29 07:47:55.934 252257 DEBUG nova.compute.provider_tree [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.047 252257 DEBUG nova.scheduler.client.report [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.401 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.402 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.563 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.564 252257 DEBUG nova.network.neutron [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.698 252257 INFO nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:47:56 np0005539563 nova_compute[252253]: 2025-11-29 07:47:56.717 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:47:57 np0005539563 nova_compute[252253]: 2025-11-29 07:47:57.003 252257 INFO nova.virt.block_device [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Booting with volume 1c382493-5718-4c6c-93b8-8f2562c0a68a at /dev/vda#033[00m
Nov 29 02:47:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:57 np0005539563 nova_compute[252253]: 2025-11-29 07:47:57.183 252257 DEBUG nova.policy [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd15fa4897cba4410b8d341f62586c091', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f3f16345721743ccb9afb374deec67b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:47:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:47:57.345 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:47:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 182 KiB/s rd, 2.7 MiB/s wr, 142 op/s
Nov 29 02:47:57 np0005539563 nova_compute[252253]: 2025-11-29 07:47:57.486 252257 DEBUG os_brick.utils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:47:57 np0005539563 nova_compute[252253]: 2025-11-29 07:47:57.488 252257 INFO oslo.privsep.daemon [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp65peggr1/privsep.sock']#033[00m
Nov 29 02:47:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:57.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.274 252257 INFO oslo.privsep.daemon [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.140 268082 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.144 268082 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.146 268082 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.146 268082 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268082#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.277 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[04cac34e-5b4d-4454-9b0b-57517c908824]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.369 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.380 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.380 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0f90f8-c918-412b-8a41-f9dc7b1ea855]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.382 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.389 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.390 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[0f47c37e-4ce7-4fd0-a3c3-b966edb92b35]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.392 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.401 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.401 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[b252da1a-384e-4575-b4d3-28fd70a0e38a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.403 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[df9b8dc8-2b2e-4878-ae96-415ab1527429]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.404 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.425 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.428 252257 DEBUG os_brick.initiator.connectors.lightos [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.429 252257 DEBUG os_brick.initiator.connectors.lightos [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.429 252257 DEBUG os_brick.initiator.connectors.lightos [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.430 252257 DEBUG os_brick.utils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] <== get_connector_properties: return (942ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:47:58 np0005539563 nova_compute[252253]: 2025-11-29 07:47:58.430 252257 DEBUG nova.virt.block_device [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating existing volume attachment record: e8b3488f-f1e2-453e-93e6-a153b3c09a17 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 02:47:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:59 np0005539563 nova_compute[252253]: 2025-11-29 07:47:59.397 252257 DEBUG nova.network.neutron [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Successfully created port: 83ec9820-3713-4570-ab8a-a88fba3f29c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:47:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 264 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Nov 29 02:47:59 np0005539563 nova_compute[252253]: 2025-11-29 07:47:59.551 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:47:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:47:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:47:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:59.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:47:59 np0005539563 podman[268264]: 2025-11-29 07:47:59.616312552 +0000 UTC m=+0.320456923 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:47:59 np0005539563 podman[268264]: 2025-11-29 07:47:59.72811072 +0000 UTC m=+0.432255091 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.077 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:00 np0005539563 podman[268416]: 2025-11-29 07:48:00.301038636 +0000 UTC m=+0.049149359 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:48:00 np0005539563 podman[268416]: 2025-11-29 07:48:00.33948066 +0000 UTC m=+0.087591363 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:48:00 np0005539563 podman[268482]: 2025-11-29 07:48:00.540088572 +0000 UTC m=+0.048439121 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Nov 29 02:48:00 np0005539563 podman[268482]: 2025-11-29 07:48:00.577328594 +0000 UTC m=+0.085679133 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.28.2)
Nov 29 02:48:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:48:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.651 252257 DEBUG nova.network.neutron [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Successfully updated port: 83ec9820-3713-4570-ab8a-a88fba3f29c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.654 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.656 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.656 252257 INFO nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating image(s)#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.656 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.657 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Ensure instance console log exists: /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.657 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.657 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.658 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.686 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.686 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.687 252257 DEBUG nova.network.neutron [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.791 252257 DEBUG nova.compute.manager [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-changed-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.791 252257 DEBUG nova.compute.manager [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Refreshing instance network info cache due to event network-changed-83ec9820-3713-4570-ab8a-a88fba3f29c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.791 252257 DEBUG oslo_concurrency.lockutils [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:00 np0005539563 nova_compute[252253]: 2025-11-29 07:48:00.891 252257 DEBUG nova.network.neutron [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:48:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:01.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:48:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 96 op/s
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d5235b71-7651-4c0f-b9c5-a56df84b90bf does not exist
Nov 29 02:48:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c818181d-a569-4418-977d-cc11574ebc82 does not exist
Nov 29 02:48:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3af48f63-34b5-47c9-b6b3-a00d633459cc does not exist
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:48:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:01.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 02:48:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.810 252257 DEBUG nova.network.neutron [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.831 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.832 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance network_info: |[{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.832 252257 DEBUG oslo_concurrency.lockutils [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.832 252257 DEBUG nova.network.neutron [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Refreshing network info cache for port 83ec9820-3713-4570-ab8a-a88fba3f29c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.836 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Start _get_guest_xml network_info=[{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1c382493-5718-4c6c-93b8-8f2562c0a68a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1c382493-5718-4c6c-93b8-8f2562c0a68a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'attached_at': '', 'detached_at': '', 'volume_id': '1c382493-5718-4c6c-93b8-8f2562c0a68a', 'serial': '1c382493-5718-4c6c-93b8-8f2562c0a68a'}, 'attachment_id': 'e8b3488f-f1e2-453e-93e6-a153b3c09a17', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.842 252257 WARNING nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.848 252257 DEBUG nova.virt.libvirt.host [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.848 252257 DEBUG nova.virt.libvirt.host [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.856 252257 DEBUG nova.virt.libvirt.host [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.856 252257 DEBUG nova.virt.libvirt.host [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.858 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.859 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.859 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.859 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.860 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.860 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.860 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.860 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.861 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.861 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.861 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.861 252257 DEBUG nova.virt.hardware [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.889 252257 DEBUG nova.storage.rbd_utils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image 738ca4a4-91f6-4476-a500-4d85c8eb00ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:01 np0005539563 nova_compute[252253]: 2025-11-29 07:48:01.893 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:02 np0005539563 podman[268818]: 2025-11-29 07:48:02.117459977 +0000 UTC m=+0.020672903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:48:02 np0005539563 podman[268818]: 2025-11-29 07:48:02.318762867 +0000 UTC m=+0.221975773 container create b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ganguly, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:48:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397694028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.364 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.365 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.366 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.367 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.396 252257 DEBUG nova.virt.libvirt.vif [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updat
ed_at=2025-11-29T07:47:56Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.397 252257 DEBUG nova.network.os_vif_util [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.398 252257 DEBUG nova.network.os_vif_util [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.400 252257 DEBUG nova.objects.instance [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 738ca4a4-91f6-4476-a500-4d85c8eb00ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.417 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <uuid>738ca4a4-91f6-4476-a500-4d85c8eb00ef</uuid>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <name>instance-00000012</name>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-245400987</nova:name>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:48:01</nova:creationTime>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:user uuid="d15fa4897cba4410b8d341f62586c091">tempest-LiveAutoBlockMigrationV225Test-362691100-project-member</nova:user>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:project uuid="f3f16345721743ccb9afb374deec67b5">tempest-LiveAutoBlockMigrationV225Test-362691100</nova:project>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <nova:port uuid="83ec9820-3713-4570-ab8a-a88fba3f29c9">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <entry name="serial">738ca4a4-91f6-4476-a500-4d85c8eb00ef</entry>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <entry name="uuid">738ca4a4-91f6-4476-a500-4d85c8eb00ef</entry>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/738ca4a4-91f6-4476-a500-4d85c8eb00ef_disk.config">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-1c382493-5718-4c6c-93b8-8f2562c0a68a">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <serial>1c382493-5718-4c6c-93b8-8f2562c0a68a</serial>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:eb:30:0e"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <target dev="tap83ec9820-37"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/console.log" append="off"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:48:02 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:48:02 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:48:02 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:48:02 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.418 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Preparing to wait for external event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.419 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.419 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.419 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.420 252257 DEBUG nova.virt.libvirt.vif [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:47:56Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.420 252257 DEBUG nova.network.os_vif_util [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.420 252257 DEBUG nova.network.os_vif_util [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.421 252257 DEBUG os_vif [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.421 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.422 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.422 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.425 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.425 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83ec9820-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.426 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83ec9820-37, col_values=(('external_ids', {'iface-id': '83ec9820-3713-4570-ab8a-a88fba3f29c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:30:0e', 'vm-uuid': '738ca4a4-91f6-4476-a500-4d85c8eb00ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.475 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:02 np0005539563 NetworkManager[48981]: <info>  [1764402482.4769] manager: (tap83ec9820-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.481 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:02 np0005539563 nova_compute[252253]: 2025-11-29 07:48:02.486 252257 INFO os_vif [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37')#033[00m
Nov 29 02:48:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:03.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.346 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.346 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.347 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] No VIF found with MAC fa:16:3e:eb:30:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.347 252257 INFO nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Using config drive#033[00m
Nov 29 02:48:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 246 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.4 KiB/s wr, 112 op/s
Nov 29 02:48:03 np0005539563 systemd[1]: Started libpod-conmon-b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d.scope.
Nov 29 02:48:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:03.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.770 252257 DEBUG nova.storage.rbd_utils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image 738ca4a4-91f6-4476-a500-4d85c8eb00ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.776 252257 DEBUG nova.network.neutron [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updated VIF entry in instance network info cache for port 83ec9820-3713-4570-ab8a-a88fba3f29c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.777 252257 DEBUG nova.network.neutron [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:03 np0005539563 nova_compute[252253]: 2025-11-29 07:48:03.805 252257 DEBUG oslo_concurrency.lockutils [req-275e3d1f-8afc-411b-9492-4a4f05541761 req-a98312dc-b458-486b-90b1-b83d80b5c9e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:04 np0005539563 podman[268818]: 2025-11-29 07:48:04.207366049 +0000 UTC m=+2.110579005 container init b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:48:04 np0005539563 podman[268818]: 2025-11-29 07:48:04.216311387 +0000 UTC m=+2.119524293 container start b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:48:04 np0005539563 fervent_ganguly[268847]: 167 167
Nov 29 02:48:04 np0005539563 systemd[1]: libpod-b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d.scope: Deactivated successfully.
Nov 29 02:48:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.374 252257 INFO nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating config drive at /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/disk.config#033[00m
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.385 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoh8z6tg6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:04 np0005539563 podman[268818]: 2025-11-29 07:48:04.426124415 +0000 UTC m=+2.329337321 container attach b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:48:04 np0005539563 podman[268818]: 2025-11-29 07:48:04.426605667 +0000 UTC m=+2.329818583 container died b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.525 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoh8z6tg6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-322e69e903474255352f8554b7d635b025a2d9102058f79fb4d72c3028a49663-merged.mount: Deactivated successfully.
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.554 252257 DEBUG nova.storage.rbd_utils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] rbd image 738ca4a4-91f6-4476-a500-4d85c8eb00ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.558 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/disk.config 738ca4a4-91f6-4476-a500-4d85c8eb00ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:04 np0005539563 podman[268818]: 2025-11-29 07:48:04.568670411 +0000 UTC m=+2.471883317 container remove b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ganguly, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:48:04 np0005539563 systemd[1]: libpod-conmon-b2afdd91377ccd255559c01d7694692e263ec6f64e6ee34fab83ae8176081e9d.scope: Deactivated successfully.
Nov 29 02:48:04 np0005539563 podman[268920]: 2025-11-29 07:48:04.710876978 +0000 UTC m=+0.026294612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:48:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:04.892 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:04.893 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:04.893 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:04 np0005539563 podman[268920]: 2025-11-29 07:48:04.968053075 +0000 UTC m=+0.283470669 container create de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.996 252257 DEBUG oslo_concurrency.processutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/disk.config 738ca4a4-91f6-4476-a500-4d85c8eb00ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:04 np0005539563 nova_compute[252253]: 2025-11-29 07:48:04.997 252257 INFO nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deleting local config drive /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/disk.config because it was imported into RBD.#033[00m
Nov 29 02:48:05 np0005539563 systemd[1]: Started libpod-conmon-de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7.scope.
Nov 29 02:48:05 np0005539563 kernel: tap83ec9820-37: entered promiscuous mode
Nov 29 02:48:05 np0005539563 NetworkManager[48981]: <info>  [1764402485.0525] manager: (tap83ec9820-37): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 02:48:05 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:05Z|00070|binding|INFO|Claiming lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 for this chassis.
Nov 29 02:48:05 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:05Z|00071|binding|INFO|83ec9820-3713-4570-ab8a-a88fba3f29c9: Claiming fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.054 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.060 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.072 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:0e 10.100.0.6'], port_security=['fa:16:3e:eb:30:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=83ec9820-3713-4570-ab8a-a88fba3f29c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.074 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 bound to our chassis#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.077 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610#033[00m
Nov 29 02:48:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5066094cac81e63bf55226c99efe9ee115603277f39d93438a42a557574e898c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5066094cac81e63bf55226c99efe9ee115603277f39d93438a42a557574e898c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5066094cac81e63bf55226c99efe9ee115603277f39d93438a42a557574e898c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5066094cac81e63bf55226c99efe9ee115603277f39d93438a42a557574e898c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5066094cac81e63bf55226c99efe9ee115603277f39d93438a42a557574e898c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.104 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aee6d8d0-a2c6-4d42-b55f-61f212503186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 systemd-machined[213024]: New machine qemu-7-instance-00000012.
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.106 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64f65ccd-71 in ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:48:05 np0005539563 podman[268920]: 2025-11-29 07:48:05.109933704 +0000 UTC m=+0.425351308 container init de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.110 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64f65ccd-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.110 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[78a8aefc-fedd-4fc6-be45-d8ab36cf6cbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 systemd-udevd[268957]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.112 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c493086f-1f84-4adc-b95b-3e6122389f4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:05.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:05 np0005539563 podman[268920]: 2025-11-29 07:48:05.117625549 +0000 UTC m=+0.433043103 container start de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:48:05 np0005539563 NetworkManager[48981]: <info>  [1764402485.1227] device (tap83ec9820-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:48:05 np0005539563 NetworkManager[48981]: <info>  [1764402485.1235] device (tap83ec9820-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:48:05 np0005539563 podman[268920]: 2025-11-29 07:48:05.123878955 +0000 UTC m=+0.439296529 container attach de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:48:05 np0005539563 systemd[1]: Started Virtual Machine qemu-7-instance-00000012.
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.131 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3f397b20-b637-47ee-80ea-c4a2b6ab6138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:05Z|00072|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 ovn-installed in OVS
Nov 29 02:48:05 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:05Z|00073|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 up in Southbound
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.151 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1e1e1e-0b98-4edb-a596-bc48506669cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.190 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0c34a9d7-7c08-454f-b994-dbd7f3f66aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 NetworkManager[48981]: <info>  [1764402485.2116] manager: (tap64f65ccd-70): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.210 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[71130cec-238c-44b1-8ae4-d2b3c1da81de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.248 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[be7dac9c-c09c-41f8-a7bd-45223b00d381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.250 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[941767b2-2c91-46b1-a918-0eabbbe5e571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 NetworkManager[48981]: <info>  [1764402485.2730] device (tap64f65ccd-70): carrier: link connected
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.279 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc078d7-6d2f-42ab-a87d-e6125f7b0af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.296 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f87e57-2aa6-4d09-9235-64ffc880ab4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545304, 'reachable_time': 17908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268991, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.308 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[afa9abbe-1879-4bc7-84ef-8d74864e8646]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:be36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545304, 'tstamp': 545304}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268992, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.320 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d76138bf-cd7f-4069-b72b-32352fa0b152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545304, 'reachable_time': 17908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268993, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.349 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[24ebb107-f958-4f3d-8256-4a0e2c38fedc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.401 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a43140b3-d7fb-4d1b-b904-a7ab60544b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.402 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.402 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.403 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f65ccd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 kernel: tap64f65ccd-70: entered promiscuous mode
Nov 29 02:48:05 np0005539563 NetworkManager[48981]: <info>  [1764402485.4053] manager: (tap64f65ccd-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.410 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64f65ccd-70, col_values=(('external_ids', {'iface-id': 'cbc2b067-53f5-4ead-84ea-8fcd92aff3f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.411 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:05Z|00074|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.413 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.413 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[44ec1b10-c9cf-47b8-8597-7a8473e8a5cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.414 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:48:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:05.415 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'env', 'PROCESS_TAG=haproxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.435 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 249 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 123 KiB/s wr, 108 op/s
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.490 252257 DEBUG nova.compute.manager [req-3fd2b65f-3e09-4ade-a6e2-3964bb091386 req-5ae52e63-34bf-462f-aa48-3ded1d39741d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.490 252257 DEBUG oslo_concurrency.lockutils [req-3fd2b65f-3e09-4ade-a6e2-3964bb091386 req-5ae52e63-34bf-462f-aa48-3ded1d39741d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.490 252257 DEBUG oslo_concurrency.lockutils [req-3fd2b65f-3e09-4ade-a6e2-3964bb091386 req-5ae52e63-34bf-462f-aa48-3ded1d39741d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.491 252257 DEBUG oslo_concurrency.lockutils [req-3fd2b65f-3e09-4ade-a6e2-3964bb091386 req-5ae52e63-34bf-462f-aa48-3ded1d39741d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:05 np0005539563 nova_compute[252253]: 2025-11-29 07:48:05.491 252257 DEBUG nova.compute.manager [req-3fd2b65f-3e09-4ade-a6e2-3964bb091386 req-5ae52e63-34bf-462f-aa48-3ded1d39741d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Processing event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:48:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:05.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:05 np0005539563 podman[269061]: 2025-11-29 07:48:05.79062544 +0000 UTC m=+0.018526994 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:48:05 np0005539563 musing_sammet[268945]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:48:05 np0005539563 musing_sammet[268945]: --> relative data size: 1.0
Nov 29 02:48:05 np0005539563 musing_sammet[268945]: --> All data devices are unavailable
Nov 29 02:48:06 np0005539563 systemd[1]: libpod-de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7.scope: Deactivated successfully.
Nov 29 02:48:06 np0005539563 podman[268920]: 2025-11-29 07:48:06.002274037 +0000 UTC m=+1.317691621 container died de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:48:06 np0005539563 podman[269061]: 2025-11-29 07:48:06.068165931 +0000 UTC m=+0.296067485 container create bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:48:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5066094cac81e63bf55226c99efe9ee115603277f39d93438a42a557574e898c-merged.mount: Deactivated successfully.
Nov 29 02:48:06 np0005539563 systemd[1]: Started libpod-conmon-bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622.scope.
Nov 29 02:48:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aec15ba818be5a7bdd2c0e7b2a2a640e9ec610d57b96a8d8f0dae08c8b65150/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.172 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402486.1718361, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.174 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Started (Lifecycle Event)#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.177 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.180 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:48:06 np0005539563 podman[269061]: 2025-11-29 07:48:06.182598688 +0000 UTC m=+0.410500242 container init bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.184 252257 INFO nova.virt.libvirt.driver [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance spawned successfully.#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.185 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:48:06 np0005539563 podman[269061]: 2025-11-29 07:48:06.188810814 +0000 UTC m=+0.416712348 container start bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:48:06 np0005539563 podman[268920]: 2025-11-29 07:48:06.188715241 +0000 UTC m=+1.504132805 container remove de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.203 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:06 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [NOTICE]   (269111) : New worker (269113) forked
Nov 29 02:48:06 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [NOTICE]   (269111) : Loading success.
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.213 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.219 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.220 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.221 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.221 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.222 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.223 252257 DEBUG nova.virt.libvirt.driver [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.257 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.258 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402486.1720195, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.258 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.295 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.299 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402486.1803145, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.300 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:48:06 np0005539563 systemd[1]: libpod-conmon-de738623c63b0b2621d8aa5935f02b92d5ddc8effe0b5b8a68e2ec8996eecaa7.scope: Deactivated successfully.
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.319 252257 INFO nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 5.66 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.320 252257 DEBUG nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.322 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.331 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.383 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.425 252257 INFO nova.compute.manager [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 11.09 seconds to build instance.#033[00m
Nov 29 02:48:06 np0005539563 nova_compute[252253]: 2025-11-29 07:48:06.463 252257 DEBUG oslo_concurrency.lockutils [None req-8607cadc-ba62-4823-84fb-e713f443f4e4 d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:06 np0005539563 podman[269261]: 2025-11-29 07:48:06.859322579 +0000 UTC m=+0.081961274 container create 620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:48:06 np0005539563 podman[269261]: 2025-11-29 07:48:06.79890352 +0000 UTC m=+0.021542275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:48:06 np0005539563 systemd[1]: Started libpod-conmon-620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5.scope.
Nov 29 02:48:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:06 np0005539563 podman[269261]: 2025-11-29 07:48:06.947102767 +0000 UTC m=+0.169741452 container init 620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:48:06 np0005539563 podman[269261]: 2025-11-29 07:48:06.953938518 +0000 UTC m=+0.176577223 container start 620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:48:06 np0005539563 podman[269261]: 2025-11-29 07:48:06.957760021 +0000 UTC m=+0.180398736 container attach 620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:48:06 np0005539563 magical_stonebraker[269279]: 167 167
Nov 29 02:48:06 np0005539563 systemd[1]: libpod-620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5.scope: Deactivated successfully.
Nov 29 02:48:06 np0005539563 conmon[269279]: conmon 620ec64491424c07ef11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5.scope/container/memory.events
Nov 29 02:48:06 np0005539563 podman[269261]: 2025-11-29 07:48:06.963191355 +0000 UTC m=+0.185830040 container died 620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:48:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e997f58e8a1998a926499c0c0c43c48c77faf832f7d9d907ba41dd844fe2edea-merged.mount: Deactivated successfully.
Nov 29 02:48:07 np0005539563 podman[269261]: 2025-11-29 07:48:07.00469664 +0000 UTC m=+0.227335325 container remove 620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 02:48:07 np0005539563 systemd[1]: libpod-conmon-620ec64491424c07ef119f5cfd78d2cef18ee0cb8667be9019a2cc024bec41c5.scope: Deactivated successfully.
Nov 29 02:48:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:07.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:07 np0005539563 podman[269302]: 2025-11-29 07:48:07.170923817 +0000 UTC m=+0.038863626 container create aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:48:07 np0005539563 systemd[1]: Started libpod-conmon-aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3.scope.
Nov 29 02:48:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01609a160710e8390b4e9a886100de2f79f9daff15519bc6a2eae110cabfd266/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01609a160710e8390b4e9a886100de2f79f9daff15519bc6a2eae110cabfd266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01609a160710e8390b4e9a886100de2f79f9daff15519bc6a2eae110cabfd266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01609a160710e8390b4e9a886100de2f79f9daff15519bc6a2eae110cabfd266/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:07 np0005539563 podman[269302]: 2025-11-29 07:48:07.153843962 +0000 UTC m=+0.021783761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:48:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 253 MiB data, 452 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 347 KiB/s wr, 131 op/s
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.476 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:07.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.742 252257 DEBUG nova.compute.manager [req-b5d0c5cd-13b2-40d0-9d0f-86ae86b6553a req-17829722-3c0f-4d21-9eac-0c9c8785a18a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.743 252257 DEBUG oslo_concurrency.lockutils [req-b5d0c5cd-13b2-40d0-9d0f-86ae86b6553a req-17829722-3c0f-4d21-9eac-0c9c8785a18a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.743 252257 DEBUG oslo_concurrency.lockutils [req-b5d0c5cd-13b2-40d0-9d0f-86ae86b6553a req-17829722-3c0f-4d21-9eac-0c9c8785a18a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.743 252257 DEBUG oslo_concurrency.lockutils [req-b5d0c5cd-13b2-40d0-9d0f-86ae86b6553a req-17829722-3c0f-4d21-9eac-0c9c8785a18a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.743 252257 DEBUG nova.compute.manager [req-b5d0c5cd-13b2-40d0-9d0f-86ae86b6553a req-17829722-3c0f-4d21-9eac-0c9c8785a18a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:48:07 np0005539563 nova_compute[252253]: 2025-11-29 07:48:07.743 252257 WARNING nova.compute.manager [req-b5d0c5cd-13b2-40d0-9d0f-86ae86b6553a req-17829722-3c0f-4d21-9eac-0c9c8785a18a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:48:07 np0005539563 podman[269302]: 2025-11-29 07:48:07.976024916 +0000 UTC m=+0.843964765 container init aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:48:07 np0005539563 podman[269302]: 2025-11-29 07:48:07.989513735 +0000 UTC m=+0.857453544 container start aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:48:08 np0005539563 podman[269302]: 2025-11-29 07:48:08.012584129 +0000 UTC m=+0.880523948 container attach aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]: {
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:    "0": [
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:        {
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "devices": [
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "/dev/loop3"
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            ],
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "lv_name": "ceph_lv0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "lv_size": "7511998464",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "name": "ceph_lv0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "tags": {
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.cluster_name": "ceph",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.crush_device_class": "",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.encrypted": "0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.osd_id": "0",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.type": "block",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:                "ceph.vdo": "0"
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            },
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "type": "block",
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:            "vg_name": "ceph_vg0"
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:        }
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]:    ]
Nov 29 02:48:08 np0005539563 naughty_bassi[269319]: }
Nov 29 02:48:08 np0005539563 systemd[1]: libpod-aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3.scope: Deactivated successfully.
Nov 29 02:48:08 np0005539563 podman[269302]: 2025-11-29 07:48:08.790529106 +0000 UTC m=+1.658468875 container died aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:48:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-01609a160710e8390b4e9a886100de2f79f9daff15519bc6a2eae110cabfd266-merged.mount: Deactivated successfully.
Nov 29 02:48:09 np0005539563 podman[269302]: 2025-11-29 07:48:09.028833562 +0000 UTC m=+1.896773331 container remove aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:48:09 np0005539563 systemd[1]: libpod-conmon-aabe53657baba39563a6c52e68b2e9b420e7eae7e3f96e098bffc3242b9bb2c3.scope: Deactivated successfully.
Nov 29 02:48:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 29 02:48:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 02:48:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 29 02:48:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 291 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Nov 29 02:48:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:09.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:09 np0005539563 podman[269478]: 2025-11-29 07:48:09.761368848 +0000 UTC m=+0.023278800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:48:09 np0005539563 podman[269478]: 2025-11-29 07:48:09.864541306 +0000 UTC m=+0.126451258 container create f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:48:10 np0005539563 nova_compute[252253]: 2025-11-29 07:48:10.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:10 np0005539563 systemd[1]: Started libpod-conmon-f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b.scope.
Nov 29 02:48:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:10 np0005539563 podman[269478]: 2025-11-29 07:48:10.272484159 +0000 UTC m=+0.534394101 container init f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:48:10 np0005539563 podman[269478]: 2025-11-29 07:48:10.28001133 +0000 UTC m=+0.541921252 container start f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:48:10 np0005539563 podman[269478]: 2025-11-29 07:48:10.28414765 +0000 UTC m=+0.546057572 container attach f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:48:10 np0005539563 sharp_wilson[269494]: 167 167
Nov 29 02:48:10 np0005539563 systemd[1]: libpod-f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b.scope: Deactivated successfully.
Nov 29 02:48:10 np0005539563 podman[269478]: 2025-11-29 07:48:10.2916648 +0000 UTC m=+0.553574762 container died f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:48:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-233f82c3d8f0ab25819cf7e38bfcc656509016b40afadf1e99cfba4fc348ed21-merged.mount: Deactivated successfully.
Nov 29 02:48:10 np0005539563 podman[269478]: 2025-11-29 07:48:10.34499392 +0000 UTC m=+0.606903852 container remove f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wilson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:48:10 np0005539563 systemd[1]: libpod-conmon-f05917a6a7d9f9fb0be89dddba409c820852ed1d68c761fbedc03ec9aeb7a19b.scope: Deactivated successfully.
Nov 29 02:48:10 np0005539563 podman[269520]: 2025-11-29 07:48:10.579661068 +0000 UTC m=+0.063492761 container create 7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:48:10 np0005539563 systemd[1]: Started libpod-conmon-7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c.scope.
Nov 29 02:48:10 np0005539563 podman[269520]: 2025-11-29 07:48:10.556950434 +0000 UTC m=+0.040782157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:48:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb31ed57d599ae4abf17c8c428a5fef24dc010bc6ffdd4bea56634c845b008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb31ed57d599ae4abf17c8c428a5fef24dc010bc6ffdd4bea56634c845b008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb31ed57d599ae4abf17c8c428a5fef24dc010bc6ffdd4bea56634c845b008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb31ed57d599ae4abf17c8c428a5fef24dc010bc6ffdd4bea56634c845b008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:10 np0005539563 podman[269520]: 2025-11-29 07:48:10.671662989 +0000 UTC m=+0.155494682 container init 7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:48:10 np0005539563 podman[269520]: 2025-11-29 07:48:10.683869894 +0000 UTC m=+0.167701587 container start 7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kepler, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:48:10 np0005539563 podman[269520]: 2025-11-29 07:48:10.688117447 +0000 UTC m=+0.171949140 container attach 7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:48:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:11.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.4 MiB/s wr, 210 op/s
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]: {
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:        "osd_id": 0,
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:        "type": "bluestore"
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]:    }
Nov 29 02:48:11 np0005539563 mystifying_kepler[269537]: }
Nov 29 02:48:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:11.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:11 np0005539563 systemd[1]: libpod-7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c.scope: Deactivated successfully.
Nov 29 02:48:11 np0005539563 podman[269559]: 2025-11-29 07:48:11.669420028 +0000 UTC m=+0.028693185 container died 7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:48:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-00bb31ed57d599ae4abf17c8c428a5fef24dc010bc6ffdd4bea56634c845b008-merged.mount: Deactivated successfully.
Nov 29 02:48:11 np0005539563 podman[269559]: 2025-11-29 07:48:11.853592623 +0000 UTC m=+0.212865750 container remove 7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:48:11 np0005539563 systemd[1]: libpod-conmon-7b1f5be51d4a57df8008e193a3a59c4015cf4067752d53fa9ac9ac71308e682c.scope: Deactivated successfully.
Nov 29 02:48:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:48:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:48:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f017542b-6d02-405b-803c-092406d76d07 does not exist
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d74f567a-7258-44d9-9824-f2200870ea6b does not exist
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c89ae3c5-8dd7-4f40-9f2b-618d8fcf5f2a does not exist
Nov 29 02:48:12 np0005539563 nova_compute[252253]: 2025-11-29 07:48:12.488 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:48:12
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Nov 29 02:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:48:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:13.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:48:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:48:13 np0005539563 nova_compute[252253]: 2025-11-29 07:48:13.397 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Check if temp file /var/lib/nova/instances/tmp28xflw8r exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 29 02:48:13 np0005539563 nova_compute[252253]: 2025-11-29 07:48:13.398 252257 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Nov 29 02:48:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:13.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:48:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:48:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:15 np0005539563 nova_compute[252253]: 2025-11-29 07:48:15.181 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.2 MiB/s wr, 373 op/s
Nov 29 02:48:15 np0005539563 podman[269626]: 2025-11-29 07:48:15.509586319 +0000 UTC m=+0.059663330 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 02:48:15 np0005539563 podman[269627]: 2025-11-29 07:48:15.517917051 +0000 UTC m=+0.066311296 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:48:15 np0005539563 podman[269628]: 2025-11-29 07:48:15.569726501 +0000 UTC m=+0.116848683 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:48:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:15.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:48:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5404 writes, 23K keys, 5401 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 5404 writes, 5401 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1172 writes, 5254 keys, 1172 commit groups, 1.0 writes per commit group, ingest: 8.54 MB, 0.01 MB/s#012Interval WAL: 1172 writes, 1172 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.4      4.67              0.12        13    0.360       0      0       0.0       0.0#012  L6      1/0    8.62 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     12.2     10.1     10.29              0.37        12    0.858     58K   6447       0.0       0.0#012 Sum      1/0    8.62 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4      8.4      8.9     14.97              0.48        25    0.599     58K   6447       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6      8.4      8.5      5.07              0.18         8    0.634     22K   2066       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     12.2     10.1     10.29              0.37        12    0.858     58K   6447       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.4      4.67              0.12        12    0.389       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.12 GB read, 0.05 MB/s read, 15.0 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 5.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 11.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000133 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(615,10.91 MB,3.58822%) FilterBlock(26,170.17 KB,0.0546656%) IndexBlock(26,314.08 KB,0.100894%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.037 252257 DEBUG nova.compute.manager [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 02:48:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.152 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.152 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.176 252257 DEBUG nova.objects.instance [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_requests' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.195 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.196 252257 INFO nova.compute.claims [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.196 252257 DEBUG nova.objects.instance [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.209 252257 DEBUG nova.objects.instance [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.256 252257 INFO nova.compute.resource_tracker [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating resource usage from migration b40c1dc1-dccd-49f3-9f64-b6116e439e87#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.257 252257 DEBUG nova.compute.resource_tracker [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Starting to track incoming migration b40c1dc1-dccd-49f3-9f64-b6116e439e87 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.345 252257 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 4.0 MiB/s wr, 358 op/s
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.490 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:17.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083834170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.747 252257 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.753 252257 DEBUG nova.compute.provider_tree [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.771 252257 DEBUG nova.scheduler.client.report [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.790 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.791 252257 INFO nova.compute.manager [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Migrating#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.809 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "93122e14-3d0f-4a03-912f-b7cf58571516" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.810 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "93122e14-3d0f-4a03-912f-b7cf58571516" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.829 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.853 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "22535ce1-6453-42d0-b742-1030fb087035" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.854 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "22535ce1-6453-42d0-b742-1030fb087035" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.905 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.942 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.943 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.949 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:48:17 np0005539563 nova_compute[252253]: 2025-11-29 07:48:17.950 252257 INFO nova.compute.claims [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.018 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.135 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635298860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.581 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.589 252257 DEBUG nova.compute.provider_tree [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.614 252257 DEBUG nova.scheduler.client.report [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.644 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.645 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.651 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.652 252257 INFO nova.compute.claims [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.682 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "e25abdd0-8176-4063-bd53-74c30051638b" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.683 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "e25abdd0-8176-4063-bd53-74c30051638b" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.738 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "e25abdd0-8176-4063-bd53-74c30051638b" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.740 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.831 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.832 252257 DEBUG nova.network.neutron [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.854 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:48:18 np0005539563 nova_compute[252253]: 2025-11-29 07:48:18.874 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:48:19 np0005539563 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 02:48:19 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 02:48:19 np0005539563 systemd-logind[785]: New session 52 of user nova.
Nov 29 02:48:19 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 02:48:19 np0005539563 systemd[1]: Starting User Manager for UID 42436...
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.066 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:19.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.142 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.145 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.145 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Creating image(s)#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.182 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:19 np0005539563 systemd[269786]: Queued start job for default target Main User Target.
Nov 29 02:48:19 np0005539563 systemd[269786]: Created slice User Application Slice.
Nov 29 02:48:19 np0005539563 systemd[269786]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:48:19 np0005539563 systemd[269786]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:48:19 np0005539563 systemd[269786]: Reached target Paths.
Nov 29 02:48:19 np0005539563 systemd[269786]: Reached target Timers.
Nov 29 02:48:19 np0005539563 systemd[269786]: Starting D-Bus User Message Bus Socket...
Nov 29 02:48:19 np0005539563 systemd[269786]: Starting Create User's Volatile Files and Directories...
Nov 29 02:48:19 np0005539563 systemd[269786]: Finished Create User's Volatile Files and Directories.
Nov 29 02:48:19 np0005539563 systemd[269786]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:48:19 np0005539563 systemd[269786]: Reached target Sockets.
Nov 29 02:48:19 np0005539563 systemd[269786]: Reached target Basic System.
Nov 29 02:48:19 np0005539563 systemd[269786]: Reached target Main User Target.
Nov 29 02:48:19 np0005539563 systemd[269786]: Startup finished in 140ms.
Nov 29 02:48:19 np0005539563 systemd[1]: Started User Manager for UID 42436.
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.220 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:19 np0005539563 systemd[1]: Started Session 52 of User nova.
Nov 29 02:48:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.254 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.258 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:19 np0005539563 systemd[1]: session-52.scope: Deactivated successfully.
Nov 29 02:48:19 np0005539563 systemd-logind[785]: Session 52 logged out. Waiting for processes to exit.
Nov 29 02:48:19 np0005539563 systemd-logind[785]: Removed session 52.
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.331 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.332 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.333 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.333 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.367 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.379 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 93122e14-3d0f-4a03-912f-b7cf58571516_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:19 np0005539563 systemd-logind[785]: New session 54 of user nova.
Nov 29 02:48:19 np0005539563 systemd[1]: Started Session 54 of User nova.
Nov 29 02:48:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 339 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 292 op/s
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.471 252257 DEBUG nova.network.neutron [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.472 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:48:19 np0005539563 systemd[1]: session-54.scope: Deactivated successfully.
Nov 29 02:48:19 np0005539563 systemd-logind[785]: Session 54 logged out. Waiting for processes to exit.
Nov 29 02:48:19 np0005539563 systemd-logind[785]: Removed session 54.
Nov 29 02:48:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051929412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.562 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.567 252257 DEBUG nova.compute.provider_tree [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.583 252257 DEBUG nova.scheduler.client.report [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.604 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.618 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "e25abdd0-8176-4063-bd53-74c30051638b" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.619 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "e25abdd0-8176-4063-bd53-74c30051638b" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:19.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.639 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "e25abdd0-8176-4063-bd53-74c30051638b" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.640 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.688 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.689 252257 DEBUG nova.network.neutron [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.706 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.723 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.753 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 93122e14-3d0f-4a03-912f-b7cf58571516_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:19 np0005539563 nova_compute[252253]: 2025-11-29 07:48:19.839 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] resizing rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.062 252257 DEBUG nova.network.neutron [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.063 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.064 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.065 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.066 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Creating image(s)#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.105 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.140 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.184 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.189 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.212 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.279 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.280 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.280 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.281 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.310 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.315 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 22535ce1-6453-42d0-b742-1030fb087035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.832 252257 DEBUG nova.compute.manager [req-bbb9d093-bf11-4f37-828c-4539e904547f req-bdad7fff-7352-48f1-95d9-e1d4e2b35ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.833 252257 DEBUG oslo_concurrency.lockutils [req-bbb9d093-bf11-4f37-828c-4539e904547f req-bdad7fff-7352-48f1-95d9-e1d4e2b35ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.833 252257 DEBUG oslo_concurrency.lockutils [req-bbb9d093-bf11-4f37-828c-4539e904547f req-bdad7fff-7352-48f1-95d9-e1d4e2b35ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.834 252257 DEBUG oslo_concurrency.lockutils [req-bbb9d093-bf11-4f37-828c-4539e904547f req-bdad7fff-7352-48f1-95d9-e1d4e2b35ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.834 252257 DEBUG nova.compute.manager [req-bbb9d093-bf11-4f37-828c-4539e904547f req-bdad7fff-7352-48f1-95d9-e1d4e2b35ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:48:20 np0005539563 nova_compute[252253]: 2025-11-29 07:48:20.834 252257 DEBUG nova.compute.manager [req-bbb9d093-bf11-4f37-828c-4539e904547f req-bdad7fff-7352-48f1-95d9-e1d4e2b35ef2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:48:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:21 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:21Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 02:48:21 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:21Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 02:48:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 389 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.6 MiB/s wr, 302 op/s
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.488 252257 DEBUG nova.objects.instance [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'migration_context' on Instance uuid 93122e14-3d0f-4a03-912f-b7cf58571516 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.634 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.635 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Ensure instance console log exists: /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.635 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.636 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.636 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.638 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:48:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:21.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.643 252257 WARNING nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.652 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.654 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.657 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.658 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.660 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.660 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.661 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.661 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.661 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.662 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.662 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.662 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.662 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.663 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.663 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.663 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:48:21 np0005539563 nova_compute[252253]: 2025-11-29 07:48:21.667 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341268384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.149 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.389 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.396 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.501 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.503 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 22535ce1-6453-42d0-b742-1030fb087035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.592 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] resizing rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:48:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634316336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.883 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:22 np0005539563 nova_compute[252253]: 2025-11-29 07:48:22.886 252257 DEBUG nova.objects.instance [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93122e14-3d0f-4a03-912f-b7cf58571516 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:22 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.001 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <uuid>93122e14-3d0f-4a03-912f-b7cf58571516</uuid>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <name>instance-00000015</name>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersOnMultiNodesTest-server-97142810-1</nova:name>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:48:21</nova:creationTime>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:user uuid="386584ea971049e3b0c06b8237710848">tempest-ServersOnMultiNodesTest-893669333-project-member</nova:user>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <nova:project uuid="c80f8d4661784e8faaf78d28df3fb677">tempest-ServersOnMultiNodesTest-893669333</nova:project>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <entry name="serial">93122e14-3d0f-4a03-912f-b7cf58571516</entry>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <entry name="uuid">93122e14-3d0f-4a03-912f-b7cf58571516</entry>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/93122e14-3d0f-4a03-912f-b7cf58571516_disk">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/93122e14-3d0f-4a03-912f-b7cf58571516_disk.config">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/console.log" append="off"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:48:23 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:48:23 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:48:23 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:48:23 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0068521135771449845 of space, bias 1.0, pg target 2.055634073143495 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0018289872394379304 of space, bias 1.0, pg target 0.5450381973525033 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.063 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.063 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.064 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Using config drive#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.089 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.144 252257 DEBUG nova.objects.instance [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'migration_context' on Instance uuid 22535ce1-6453-42d0-b742-1030fb087035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.244 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.245 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Ensure instance console log exists: /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.245 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.245 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.245 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.246 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.250 252257 WARNING nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.253 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.254 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.256 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.256 252257 DEBUG nova.virt.libvirt.host [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.257 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.257 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.258 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.258 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.258 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.258 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.258 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.259 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.259 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.259 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.259 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.260 252257 DEBUG nova.virt.hardware [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.262 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.396 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Creating config drive at /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/disk.config#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.402 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdvmrh0vg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 389 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.5 MiB/s wr, 232 op/s
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.528 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdvmrh0vg" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.559 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 93122e14-3d0f-4a03-912f-b7cf58571516_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.562 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/disk.config 93122e14-3d0f-4a03-912f-b7cf58571516_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.638 252257 DEBUG nova.compute.manager [req-a9a52dab-2559-41cd-af76-fd448caf2976 req-07441614-8e91-4871-924a-524af587ebd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.640 252257 DEBUG oslo_concurrency.lockutils [req-a9a52dab-2559-41cd-af76-fd448caf2976 req-07441614-8e91-4871-924a-524af587ebd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.640 252257 DEBUG oslo_concurrency.lockutils [req-a9a52dab-2559-41cd-af76-fd448caf2976 req-07441614-8e91-4871-924a-524af587ebd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.640 252257 DEBUG oslo_concurrency.lockutils [req-a9a52dab-2559-41cd-af76-fd448caf2976 req-07441614-8e91-4871-924a-524af587ebd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:23.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.640 252257 DEBUG nova.compute.manager [req-a9a52dab-2559-41cd-af76-fd448caf2976 req-07441614-8e91-4871-924a-524af587ebd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.641 252257 WARNING nova.compute.manager [req-a9a52dab-2559-41cd-af76-fd448caf2976 req-07441614-8e91-4871-924a-524af587ebd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:48:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249590342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.691 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.722 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:23 np0005539563 nova_compute[252253]: 2025-11-29 07:48:23.727 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.079 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/disk.config 93122e14-3d0f-4a03-912f-b7cf58571516_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.080 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Deleting local config drive /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516/disk.config because it was imported into RBD.#033[00m
Nov 29 02:48:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135189243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:24 np0005539563 systemd-machined[213024]: New machine qemu-8-instance-00000015.
Nov 29 02:48:24 np0005539563 systemd[1]: Started Virtual Machine qemu-8-instance-00000015.
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.177 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.178 252257 DEBUG nova.objects.instance [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'pci_devices' on Instance uuid 22535ce1-6453-42d0-b742-1030fb087035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.197 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <uuid>22535ce1-6453-42d0-b742-1030fb087035</uuid>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <name>instance-00000016</name>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersOnMultiNodesTest-server-97142810-2</nova:name>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:48:23</nova:creationTime>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:user uuid="386584ea971049e3b0c06b8237710848">tempest-ServersOnMultiNodesTest-893669333-project-member</nova:user>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <nova:project uuid="c80f8d4661784e8faaf78d28df3fb677">tempest-ServersOnMultiNodesTest-893669333</nova:project>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <entry name="serial">22535ce1-6453-42d0-b742-1030fb087035</entry>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <entry name="uuid">22535ce1-6453-42d0-b742-1030fb087035</entry>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/22535ce1-6453-42d0-b742-1030fb087035_disk">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/22535ce1-6453-42d0-b742-1030fb087035_disk.config">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/console.log" append="off"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:48:24 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:48:24 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:48:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.282 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.283 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.283 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Using config drive#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.310 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.484 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Creating config drive at /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/disk.config#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.491 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppz6fkr60 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.539 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402504.5393813, 93122e14-3d0f-4a03-912f-b7cf58571516 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.541 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.544 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.544 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.551 252257 INFO nova.virt.libvirt.driver [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Instance spawned successfully.#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.551 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.573 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.578 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.583 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.584 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.584 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.584 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.585 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.585 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.618 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppz6fkr60" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.645 252257 DEBUG nova.storage.rbd_utils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] rbd image 22535ce1-6453-42d0-b742-1030fb087035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.648 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/disk.config 22535ce1-6453-42d0-b742-1030fb087035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.675 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.676 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.677 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402504.5406106, 93122e14-3d0f-4a03-912f-b7cf58571516 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.677 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] VM Started (Lifecycle Event)#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.681 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.683 252257 INFO nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Took 5.54 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.684 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.694 252257 INFO nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 8.93 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.695 252257 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.698 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.701 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.726 252257 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp28xflw8r',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(ae907aa6-ecdf-428a-a07c-0e5499b61879),old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='e8b3488f-f1e2-453e-93e6-a153b3c09a17'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.729 252257 DEBUG nova.objects.instance [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lazy-loading 'migration_context' on Instance uuid 738ca4a4-91f6-4476-a500-4d85c8eb00ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.730 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.732 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.732 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.733 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.757 252257 INFO nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Took 6.83 seconds to build instance.#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.763 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Find same serial number: pos=1, serial=1c382493-5718-4c6c-93b8-8f2562c0a68a _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.765 252257 DEBUG nova.virt.libvirt.vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:06Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.765 252257 DEBUG nova.network.os_vif_util [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.766 252257 DEBUG nova.network.os_vif_util [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.766 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:eb:30:0e"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]:  <target dev="tap83ec9820-37"/>
Nov 29 02:48:24 np0005539563 nova_compute[252253]: </interface>
Nov 29 02:48:24 np0005539563 nova_compute[252253]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.767 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:48:24 np0005539563 nova_compute[252253]: 2025-11-29 07:48:24.774 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "93122e14-3d0f-4a03-912f-b7cf58571516" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539563 nova_compute[252253]: 2025-11-29 07:48:25.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:25 np0005539563 nova_compute[252253]: 2025-11-29 07:48:25.235 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:25 np0005539563 nova_compute[252253]: 2025-11-29 07:48:25.235 252257 INFO nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 29 02:48:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 467 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 339 op/s
Nov 29 02:48:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:25 np0005539563 nova_compute[252253]: 2025-11-29 07:48:25.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:25 np0005539563 nova_compute[252253]: 2025-11-29 07:48:25.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.048 252257 DEBUG nova.compute.manager [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-changed-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.049 252257 DEBUG nova.compute.manager [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Refreshing instance network info cache due to event network-changed-83ec9820-3713-4570-ab8a-a88fba3f29c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.049 252257 DEBUG oslo_concurrency.lockutils [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.050 252257 DEBUG oslo_concurrency.lockutils [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.050 252257 DEBUG nova.network.neutron [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Refreshing network info cache for port 83ec9820-3713-4570-ab8a-a88fba3f29c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.076 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: bfcc8bec-b7c4-449e-9f0e-ce03a76df5d0] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.076 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.091 252257 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.594 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.594 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.938 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.940 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.940 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.941 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:48:26 np0005539563 nova_compute[252253]: 2025-11-29 07:48:26.942 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.098 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.100 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:48:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:27.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 492 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 7.7 MiB/s wr, 192 op/s
Nov 29 02:48:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671205973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.565 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.605 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.606 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:48:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.811 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.812 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.820 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.821 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.827 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:27 np0005539563 nova_compute[252253]: 2025-11-29 07:48:27.827 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.051 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.053 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=20.844467163085938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.053 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.054 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.108 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.109 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.147 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Migration for instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.187 252257 INFO nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating resource usage from migration ae907aa6-ecdf-428a-a07c-0e5499b61879#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.187 252257 INFO nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating resource usage from migration b40c1dc1-dccd-49f3-9f64-b6116e439e87#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.187 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Starting to track incoming migration b40c1dc1-dccd-49f3-9f64-b6116e439e87 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.224 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Migration ae907aa6-ecdf-428a-a07c-0e5499b61879 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.311 252257 WARNING nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.312 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 93122e14-3d0f-4a03-912f-b7cf58571516 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.312 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 22535ce1-6453-42d0-b742-1030fb087035 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.313 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.313 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1088MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.385 252257 DEBUG oslo_concurrency.processutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/disk.config 22535ce1-6453-42d0-b742-1030fb087035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.386 252257 INFO nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Deleting local config drive /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035/disk.config because it was imported into RBD.#033[00m
Nov 29 02:48:28 np0005539563 systemd-machined[213024]: New machine qemu-9-instance-00000016.
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.465 252257 DEBUG nova.network.neutron [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updated VIF entry in instance network info cache for port 83ec9820-3713-4570-ab8a-a88fba3f29c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.466 252257 DEBUG nova.network.neutron [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:28 np0005539563 systemd[1]: Started Virtual Machine qemu-9-instance-00000016.
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.484 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.615 252257 DEBUG oslo_concurrency.lockutils [req-2f93fda3-8156-48df-90f3-39ab5e25d184 req-054e55a2-0a7c-415d-b426-c175aa2d20ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.623 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:28 np0005539563 nova_compute[252253]: 2025-11-29 07:48:28.624 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.024 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402509.0244746, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.025 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.076 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.081 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.127 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.128 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Current 50 elapsed 4 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.129 252257 DEBUG nova.virt.libvirt.migration [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:48:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:29.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539478808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.235 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.243 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.386 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 493 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.8 MiB/s wr, 231 op/s
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.512 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.513 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:29 np0005539563 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 02:48:29 np0005539563 systemd[269786]: Activating special unit Exit the Session...
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped target Main User Target.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped target Basic System.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped target Paths.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped target Sockets.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped target Timers.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:48:29 np0005539563 systemd[269786]: Closed D-Bus User Message Bus Socket.
Nov 29 02:48:29 np0005539563 systemd[269786]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:48:29 np0005539563 systemd[269786]: Removed slice User Application Slice.
Nov 29 02:48:29 np0005539563 systemd[269786]: Reached target Shutdown.
Nov 29 02:48:29 np0005539563 systemd[269786]: Finished Exit the Session.
Nov 29 02:48:29 np0005539563 systemd[269786]: Reached target Exit the Session.
Nov 29 02:48:29 np0005539563 kernel: tap83ec9820-37 (unregistering): left promiscuous mode
Nov 29 02:48:29 np0005539563 NetworkManager[48981]: <info>  [1764402509.5550] device (tap83ec9820-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:48:29 np0005539563 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 02:48:29 np0005539563 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 02:48:29 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 02:48:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:29Z|00075|binding|INFO|Releasing lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 from this chassis (sb_readonly=0)
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.576 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:29Z|00076|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 down in Southbound
Nov 29 02:48:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:29Z|00077|binding|INFO|Removing iface tap83ec9820-37 ovn-installed in OVS
Nov 29 02:48:29 np0005539563 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 02:48:29 np0005539563 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 02:48:29 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 02:48:29 np0005539563 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.604 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:29 np0005539563 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 29 02:48:29 np0005539563 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000012.scope: Consumed 14.853s CPU time.
Nov 29 02:48:29 np0005539563 systemd-machined[213024]: Machine qemu-7-instance-00000012 terminated.
Nov 29 02:48:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:29Z|00078|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 02:48:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:29.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:29.651 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:0e 10.100.0.6'], port_security=['fa:16:3e:eb:30:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=83ec9820-3713-4570-ab8a-a88fba3f29c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:48:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:29.652 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis#033[00m
Nov 29 02:48:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:29.653 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:48:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:29.654 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[67241f7f-6612-4331-95eb-4a0adc831eb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:29.655 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace which is not needed anymore#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.659 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402509.659127, 22535ce1-6453-42d0-b742-1030fb087035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.660 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:48:29 np0005539563 virtqemud[251807]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-1c382493-5718-4c6c-93b8-8f2562c0a68a: No such file or directory
Nov 29 02:48:29 np0005539563 virtqemud[251807]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-1c382493-5718-4c6c-93b8-8f2562c0a68a: No such file or directory
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.679 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.680 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.684 252257 INFO nova.virt.libvirt.driver [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Instance spawned successfully.#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.684 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.701 252257 DEBUG nova.virt.libvirt.guest [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.701 252257 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration operation has completed#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.701 252257 INFO nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] _post_live_migration() is started..#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.704 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.704 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.704 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.715 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.716 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.732 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.733 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.733 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.734 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.734 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.735 252257 DEBUG nova.virt.libvirt.driver [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.742 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.955 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.956 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402509.6801171, 22535ce1-6453-42d0-b742-1030fb087035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.956 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] VM Started (Lifecycle Event)#033[00m
Nov 29 02:48:29 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.992 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:30 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [NOTICE]   (269111) : haproxy version is 2.8.14-c23fe91
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:29.999 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:30 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [NOTICE]   (269111) : path to executable is /usr/sbin/haproxy
Nov 29 02:48:30 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [WARNING]  (269111) : Exiting Master process...
Nov 29 02:48:30 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [ALERT]    (269111) : Current worker (269113) exited with code 143 (Terminated)
Nov 29 02:48:30 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[269106]: [WARNING]  (269111) : All workers exited. Exiting... (0)
Nov 29 02:48:30 np0005539563 systemd[1]: libpod-bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622.scope: Deactivated successfully.
Nov 29 02:48:30 np0005539563 podman[270603]: 2025-11-29 07:48:30.012690871 +0000 UTC m=+0.250567815 container died bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.085 252257 INFO nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Took 10.02 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.086 252257 DEBUG nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.092 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.187 252257 INFO nova.compute.manager [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Took 12.18 seconds to build instance.#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.211 252257 DEBUG oslo_concurrency.lockutils [None req-a3766f15-e789-4e41-9622-c4731a5ee40b 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "22535ce1-6453-42d0-b742-1030fb087035" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.636 252257 DEBUG nova.compute.manager [req-d7b6000b-144a-4ccf-86ca-4608273525c6 req-b3f156c0-0ea0-4722-82c7-97eb9a5f2484 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.637 252257 DEBUG oslo_concurrency.lockutils [req-d7b6000b-144a-4ccf-86ca-4608273525c6 req-b3f156c0-0ea0-4722-82c7-97eb9a5f2484 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.638 252257 DEBUG oslo_concurrency.lockutils [req-d7b6000b-144a-4ccf-86ca-4608273525c6 req-b3f156c0-0ea0-4722-82c7-97eb9a5f2484 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.638 252257 DEBUG oslo_concurrency.lockutils [req-d7b6000b-144a-4ccf-86ca-4608273525c6 req-b3f156c0-0ea0-4722-82c7-97eb9a5f2484 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.638 252257 DEBUG nova.compute.manager [req-d7b6000b-144a-4ccf-86ca-4608273525c6 req-b3f156c0-0ea0-4722-82c7-97eb9a5f2484 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.638 252257 DEBUG nova.compute.manager [req-d7b6000b-144a-4ccf-86ca-4608273525c6 req-b3f156c0-0ea0-4722-82c7-97eb9a5f2484 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:48:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622-userdata-shm.mount: Deactivated successfully.
Nov 29 02:48:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9aec15ba818be5a7bdd2c0e7b2a2a640e9ec610d57b96a8d8f0dae08c8b65150-merged.mount: Deactivated successfully.
Nov 29 02:48:30 np0005539563 podman[270603]: 2025-11-29 07:48:30.786364146 +0000 UTC m=+1.024241090 container cleanup bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:48:30 np0005539563 systemd[1]: libpod-conmon-bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622.scope: Deactivated successfully.
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.830 252257 DEBUG nova.network.neutron [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Activated binding for port 83ec9820-3713-4570-ab8a-a88fba3f29c9 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.830 252257 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.831 252257 DEBUG nova.virt.libvirt.vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_
user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:12Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.832 252257 DEBUG nova.network.os_vif_util [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.833 252257 DEBUG nova.network.os_vif_util [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.834 252257 DEBUG os_vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.836 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.837 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83ec9820-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.838 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.845 252257 INFO os_vif [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37')#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.846 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.846 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.847 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.847 252257 DEBUG nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.848 252257 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deleting instance files /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef_del#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.849 252257 INFO nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deletion of /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef_del complete#033[00m
Nov 29 02:48:30 np0005539563 podman[270634]: 2025-11-29 07:48:30.869068955 +0000 UTC m=+0.058069950 container remove bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.874 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[53155235-d387-4aca-bc3f-0c4e0bde592c]: (4, ('Sat Nov 29 07:48:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622)\nbf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622\nSat Nov 29 07:48:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (bf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622)\nbf8d362973a173c08540690c4127e084b5c11ec9a50d6940c61fc210ee815622\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.876 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4664f5c8-fa99-40c8-a398-f8496e79bd2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.877 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:30 np0005539563 kernel: tap64f65ccd-70: left promiscuous mode
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.880 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539563 nova_compute[252253]: 2025-11-29 07:48:30.897 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.902 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1c9dfd-0030-4d67-a708-f453d899d62a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.917 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[94df9ae0-a84f-4d64-9add-021147f66ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.921 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3d170be3-03dd-4b02-a869-675ea61c4b3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.936 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd323aa-99af-4045-ba57-8726aef82226]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545295, 'reachable_time': 39255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270647, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:30 np0005539563 systemd[1]: run-netns-ovnmeta\x2d64f65ccd\x2d7749\x2d48ca\x2dba36\x2d8eb6d9ce3610.mount: Deactivated successfully.
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.942 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:48:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:30.942 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b26ee8a6-9edb-4382-9547-4e9368500b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 504 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 8.9 MiB/s wr, 280 op/s
Nov 29 02:48:31 np0005539563 nova_compute[252253]: 2025-11-29 07:48:31.514 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:31.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:31 np0005539563 nova_compute[252253]: 2025-11-29 07:48:31.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:31 np0005539563 nova_compute[252253]: 2025-11-29 07:48:31.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:48:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 504 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.5 MiB/s wr, 226 op/s
Nov 29 02:48:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:33.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:35 np0005539563 nova_compute[252253]: 2025-11-29 07:48:35.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 529 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.5 MiB/s wr, 316 op/s
Nov 29 02:48:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:35 np0005539563 nova_compute[252253]: 2025-11-29 07:48:35.839 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.639 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.642 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.642 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.681 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.681 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.682 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.682 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:48:36 np0005539563 nova_compute[252253]: 2025-11-29 07:48:36.683 252257 DEBUG oslo_concurrency.processutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154217637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.129 252257 DEBUG oslo_concurrency.processutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:37.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.227 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.228 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.234 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.235 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.402 252257 WARNING nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.403 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4455MB free_disk=20.764629364013672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.404 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.404 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.462 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration for instance 738ca4a4-91f6-4476-a500-4d85c8eb00ef refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.463 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration for instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:48:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 532 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 232 op/s
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.484 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.503 252257 INFO nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating resource usage from migration b40c1dc1-dccd-49f3-9f64-b6116e439e87#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.503 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Starting to track incoming migration b40c1dc1-dccd-49f3-9f64-b6116e439e87 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.537 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Migration ae907aa6-ecdf-428a-a07c-0e5499b61879 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.569 252257 WARNING nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.570 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Instance 93122e14-3d0f-4a03-912f-b7cf58571516 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.571 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Instance 22535ce1-6453-42d0-b742-1030fb087035 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.572 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.572 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:48:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:37 np0005539563 nova_compute[252253]: 2025-11-29 07:48:37.676 252257 DEBUG oslo_concurrency.processutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.164 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating tmpfile /var/lib/nova/instances/tmp6kfl16am to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.555 252257 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 02:48:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:48:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437457782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.608 252257 DEBUG oslo_concurrency.processutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.933s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.615 252257 DEBUG nova.compute.provider_tree [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.634 252257 DEBUG nova.scheduler.client.report [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.660 252257 DEBUG nova.compute.resource_tracker [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.661 252257 DEBUG oslo_concurrency.lockutils [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.667 252257 INFO nova.compute.manager [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.753 252257 INFO nova.scheduler.client.report [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Deleted allocation for migration ae907aa6-ecdf-428a-a07c-0e5499b61879#033[00m
Nov 29 02:48:38 np0005539563 nova_compute[252253]: 2025-11-29 07:48:38.754 252257 DEBUG nova.virt.libvirt.driver [None req-2ed15a8c-a7c3-4981-bac8-0906b889d1b1 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:48:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:39.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 532 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Nov 29 02:48:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:39.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:39 np0005539563 nova_compute[252253]: 2025-11-29 07:48:39.851 252257 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 02:48:39 np0005539563 nova_compute[252253]: 2025-11-29 07:48:39.892 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:39 np0005539563 nova_compute[252253]: 2025-11-29 07:48:39.893 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:39 np0005539563 nova_compute[252253]: 2025-11-29 07:48:39.894 252257 DEBUG nova.network.neutron [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:40 np0005539563 nova_compute[252253]: 2025-11-29 07:48:40.219 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:40 np0005539563 nova_compute[252253]: 2025-11-29 07:48:40.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:41.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 646 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.8 MiB/s wr, 225 op/s
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.601 252257 DEBUG nova.network.neutron [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.623 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.626 252257 DEBUG os_brick.utils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.629 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.647 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.648 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c76ad1-1e68-4a7a-8c62-a3ea97c1ae93]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.650 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.662 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.663 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[01f92fc0-c192-4739-9f76-657a8de2c105]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.665 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.678 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.679 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3995ce-1baa-4e61-b76e-b72bfebb2e3b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.681 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[cb84435f-9845-4ae7-9ffd-1894aecc4c97]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.682 252257 DEBUG oslo_concurrency.processutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.714 252257 DEBUG oslo_concurrency.processutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.717 252257 DEBUG os_brick.initiator.connectors.lightos [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.717 252257 DEBUG os_brick.initiator.connectors.lightos [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.718 252257 DEBUG os_brick.initiator.connectors.lightos [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:48:41 np0005539563 nova_compute[252253]: 2025-11-29 07:48:41.718 252257 DEBUG os_brick.utils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:48:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:48:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:48:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 646 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 171 op/s
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.511 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='0293ab39-88ee-4fc9-98a5-4614a458bb2c'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.512 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Creating instance directory: /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.512 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Ensure instance console log exists: /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.513 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.515 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.516 252257 DEBUG nova.virt.libvirt.vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:35Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.516 252257 DEBUG nova.network.os_vif_util [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.517 252257 DEBUG nova.network.os_vif_util [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.517 252257 DEBUG os_vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.518 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.518 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.518 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.521 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83ec9820-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.521 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83ec9820-37, col_values=(('external_ids', {'iface-id': '83ec9820-3713-4570-ab8a-a88fba3f29c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:30:0e', 'vm-uuid': '738ca4a4-91f6-4476-a500-4d85c8eb00ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.522 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.523 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:48:43 np0005539563 NetworkManager[48981]: <info>  [1764402523.5243] manager: (tap83ec9820-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.528 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.529 252257 INFO os_vif [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37')#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.531 252257 DEBUG nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:48:43 np0005539563 nova_compute[252253]: 2025-11-29 07:48:43.531 252257 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='0293ab39-88ee-4fc9-98a5-4614a458bb2c'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:48:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:43.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:45 np0005539563 nova_compute[252253]: 2025-11-29 07:48:45.094 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402509.7005606, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:45 np0005539563 nova_compute[252253]: 2025-11-29 07:48:45.094 252257 INFO nova.compute.manager [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:48:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:45.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:45 np0005539563 nova_compute[252253]: 2025-11-29 07:48:45.219 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 653 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.0 MiB/s wr, 214 op/s
Nov 29 02:48:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:45.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:45 np0005539563 nova_compute[252253]: 2025-11-29 07:48:45.892 252257 DEBUG nova.compute.manager [None req-81b5e9c9-25d6-4476-a592-9fa5e796412c - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:46 np0005539563 nova_compute[252253]: 2025-11-29 07:48:46.489 252257 DEBUG nova.network.neutron [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 02:48:46 np0005539563 podman[270759]: 2025-11-29 07:48:46.500536709 +0000 UTC m=+0.053056155 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:48:46 np0005539563 podman[270760]: 2025-11-29 07:48:46.507388132 +0000 UTC m=+0.059771265 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:48:46 np0005539563 podman[270761]: 2025-11-29 07:48:46.533587636 +0000 UTC m=+0.082208228 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:48:46 np0005539563 nova_compute[252253]: 2025-11-29 07:48:46.780 252257 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6kfl16am',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='738ca4a4-91f6-4476-a500-4d85c8eb00ef',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={1c382493-5718-4c6c-93b8-8f2562c0a68a='0293ab39-88ee-4fc9-98a5-4614a458bb2c'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:48:46 np0005539563 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:48:46 np0005539563 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:48:47 np0005539563 kernel: tap83ec9820-37: entered promiscuous mode
Nov 29 02:48:47 np0005539563 NetworkManager[48981]: <info>  [1764402527.0699] manager: (tap83ec9820-37): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 02:48:47 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:47Z|00079|binding|INFO|Claiming lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 for this additional chassis.
Nov 29 02:48:47 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:47Z|00080|binding|INFO|83ec9820-3713-4570-ab8a-a88fba3f29c9: Claiming fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 02:48:47 np0005539563 systemd-udevd[270854]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:48:47 np0005539563 nova_compute[252253]: 2025-11-29 07:48:47.106 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:47 np0005539563 NetworkManager[48981]: <info>  [1764402527.1195] device (tap83ec9820-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:48:47 np0005539563 NetworkManager[48981]: <info>  [1764402527.1205] device (tap83ec9820-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:48:47 np0005539563 systemd-machined[213024]: New machine qemu-10-instance-00000012.
Nov 29 02:48:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:47 np0005539563 systemd[1]: Started Virtual Machine qemu-10-instance-00000012.
Nov 29 02:48:47 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:47Z|00081|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 ovn-installed in OVS
Nov 29 02:48:47 np0005539563 nova_compute[252253]: 2025-11-29 07:48:47.186 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 659 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 709 KiB/s rd, 6.8 MiB/s wr, 149 op/s
Nov 29 02:48:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:47.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:48 np0005539563 ceph-osd[84724]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 02:48:48 np0005539563 nova_compute[252253]: 2025-11-29 07:48:48.433 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402528.433381, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:48 np0005539563 nova_compute[252253]: 2025-11-29 07:48:48.435 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Started (Lifecycle Event)#033[00m
Nov 29 02:48:48 np0005539563 nova_compute[252253]: 2025-11-29 07:48:48.524 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:49.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 659 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 6.8 MiB/s wr, 125 op/s
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.604 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.608 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.662 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.663 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402528.9588087, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.663 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:48:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:49.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.704 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.707 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:49 np0005539563 nova_compute[252253]: 2025-11-29 07:48:49.753 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Nov 29 02:48:50 np0005539563 nova_compute[252253]: 2025-11-29 07:48:50.221 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:50 np0005539563 nova_compute[252253]: 2025-11-29 07:48:50.746 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:50 np0005539563 nova_compute[252253]: 2025-11-29 07:48:50.746 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:50 np0005539563 nova_compute[252253]: 2025-11-29 07:48:50.747 252257 DEBUG nova.network.neutron [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.092 252257 DEBUG nova.network.neutron [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:48:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:51.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.333 252257 DEBUG nova.network.neutron [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.352 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 691 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.9 MiB/s wr, 299 op/s
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.478 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.481 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.482 252257 INFO nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Creating image(s)#033[00m
Nov 29 02:48:51 np0005539563 nova_compute[252253]: 2025-11-29 07:48:51.539 252257 DEBUG nova.storage.rbd_utils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] creating snapshot(nova-resize) on rbd image(3efe6bb4-36be-4a30-832d-8da05e5baa50_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:48:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 29 02:48:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 02:48:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.460 252257 DEBUG nova.objects.instance [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.589 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.590 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Ensure instance console log exists: /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.591 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.591 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.592 252257 DEBUG oslo_concurrency.lockutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.594 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.599 252257 WARNING nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.605 252257 DEBUG nova.virt.libvirt.host [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.606 252257 DEBUG nova.virt.libvirt.host [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.610 252257 DEBUG nova.virt.libvirt.host [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.611 252257 DEBUG nova.virt.libvirt.host [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.613 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.613 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.614 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.614 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.614 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.615 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.615 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.615 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.616 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.616 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.616 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.617 252257 DEBUG nova.virt.hardware [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.617 252257 DEBUG nova.objects.instance [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:52 np0005539563 nova_compute[252253]: 2025-11-29 07:48:52.639 252257 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:52 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:52Z|00082|binding|INFO|Claiming lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 for this chassis.
Nov 29 02:48:52 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:52Z|00083|binding|INFO|83ec9820-3713-4570-ab8a-a88fba3f29c9: Claiming fa:16:3e:eb:30:0e 10.100.0.6
Nov 29 02:48:52 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:52Z|00084|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 up in Southbound
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.684 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:0e 10.100.0.6'], port_security=['fa:16:3e:eb:30:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '19', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=83ec9820-3713-4570-ab8a-a88fba3f29c9) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.686 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 bound to our chassis#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.688 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.704 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ea78f05f-253d-4e5d-b5d0-6fc42f30a0db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.705 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64f65ccd-71 in ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.707 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64f65ccd-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.707 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad94bab-cdca-46ff-ad30-85a0267249d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.708 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8663ed1d-ddcb-4cd4-b82f-f50d8b8122ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.721 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[948e9602-e2f0-458e-a25a-301eeb88ba84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.750 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba41701-579c-4d76-9ddb-6ef0c3fe3209]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.788 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b3f3fc-10d6-4319-bd4c-9cd9f36935eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 NetworkManager[48981]: <info>  [1764402532.7993] manager: (tap64f65ccd-70): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.798 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0868c9-a474-4f86-9322-f1634b22ce14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 systemd-udevd[271011]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.840 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f1558bc0-98bf-4e30-978e-96b8043d1a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.846 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcb4300-0620-4db0-90e9-56a4aebee3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 NetworkManager[48981]: <info>  [1764402532.8754] device (tap64f65ccd-70): carrier: link connected
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.887 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[012c4b28-8411-4cfe-b50c-71f3497ff32d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.908 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d675541a-060a-44bc-9ca7-3e4fcf2822df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550064, 'reachable_time': 42215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271030, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.927 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7c32a8-a038-45ee-abb0-7439ffebef5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:be36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550064, 'tstamp': 550064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271031, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:52.954 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[51ded810-f874-4d01-a3a4-cbb2b259a91c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64f65ccd-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:be:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550064, 'reachable_time': 42215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271032, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.003 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[12245571-2603-46f6-8030-37edbc41b316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.085 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d668bba-b8fa-4584-9f5f-cd56e708a337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.088 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.089 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.091 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64f65ccd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:53 np0005539563 NetworkManager[48981]: <info>  [1764402533.0957] manager: (tap64f65ccd-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 02:48:53 np0005539563 kernel: tap64f65ccd-70: entered promiscuous mode
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.096 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.099 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64f65ccd-70, col_values=(('external_ids', {'iface-id': 'cbc2b067-53f5-4ead-84ea-8fcd92aff3f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.101 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:53 np0005539563 ovn_controller[148841]: 2025-11-29T07:48:53Z|00085|binding|INFO|Releasing lport cbc2b067-53f5-4ead-84ea-8fcd92aff3f1 from this chassis (sb_readonly=0)
Nov 29 02:48:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309254414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.124 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.128 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.130 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4ef5ef-7761-447d-95ba-9a19b0425dc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.132 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.pid.haproxy
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 64f65ccd-7749-48ca-ba36-8eb6d9ce3610
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:48:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:48:53.137 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'env', 'PROCESS_TAG=haproxy-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64f65ccd-7749-48ca-ba36-8eb6d9ce3610.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.148 252257 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:53.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.202 252257 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.283 252257 INFO nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Post operation of migration started#033[00m
Nov 29 02:48:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 691 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.7 MiB/s wr, 290 op/s
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.526 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:53 np0005539563 podman[271105]: 2025-11-29 07:48:53.594852184 +0000 UTC m=+0.069389542 container create 23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:48:53 np0005539563 systemd[1]: Started libpod-conmon-23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3.scope.
Nov 29 02:48:53 np0005539563 podman[271105]: 2025-11-29 07:48:53.551612064 +0000 UTC m=+0.026149432 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:48:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:48:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702586891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.677 252257 DEBUG oslo_concurrency.processutils [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.681 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <uuid>3efe6bb4-36be-4a30-832d-8da05e5baa50</uuid>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <name>instance-00000014</name>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <memory>196608</memory>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:name>tempest-MigrationsAdminTest-server-65742869</nova:name>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:48:52</nova:creationTime>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.micro">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:memory>192</nova:memory>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:user uuid="e1c26cd8138e4114b4801d377b39933a">tempest-MigrationsAdminTest-845185139-project-member</nova:user>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <nova:project uuid="f7e8ae9fdefb4049959228954fb4250e">tempest-MigrationsAdminTest-845185139</nova:project>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <entry name="serial">3efe6bb4-36be-4a30-832d-8da05e5baa50</entry>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <entry name="uuid">3efe6bb4-36be-4a30-832d-8da05e5baa50</entry>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3efe6bb4-36be-4a30-832d-8da05e5baa50_disk">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3efe6bb4-36be-4a30-832d-8da05e5baa50_disk.config">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50/console.log" append="off"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:48:53 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:48:53 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:48:53 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:48:53 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:48:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:48:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30efad8b076f98dfca797abe1eef2b3febbf8c566ee62dfdef08c4bedbb0fc1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:48:53 np0005539563 podman[271105]: 2025-11-29 07:48:53.717623799 +0000 UTC m=+0.192161237 container init 23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:48:53 np0005539563 podman[271105]: 2025-11-29 07:48:53.724545326 +0000 UTC m=+0.199082674 container start 23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:48:53 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [NOTICE]   (271127) : New worker (271129) forked
Nov 29 02:48:53 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [NOTICE]   (271127) : Loading success.
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.848 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.849 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:48:53 np0005539563 nova_compute[252253]: 2025-11-29 07:48:53.850 252257 INFO nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Using config drive#033[00m
Nov 29 02:48:54 np0005539563 systemd-machined[213024]: New machine qemu-11-instance-00000014.
Nov 29 02:48:54 np0005539563 systemd[1]: Started Virtual Machine qemu-11-instance-00000014.
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.533 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402534.5330505, 3efe6bb4-36be-4a30-832d-8da05e5baa50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.535 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.537 252257 DEBUG nova.compute.manager [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.542 252257 INFO nova.virt.libvirt.driver [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance running successfully.#033[00m
Nov 29 02:48:54 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.545 252257 DEBUG nova.virt.libvirt.guest [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.546 252257 DEBUG nova.virt.libvirt.driver [None req-639c74b5-ce47-462a-bbe9-b493e298e522 e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.572 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.579 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.579 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquired lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.580 252257 DEBUG nova.network.neutron [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.583 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.657 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.658 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402534.536958, 3efe6bb4-36be-4a30-832d-8da05e5baa50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.658 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] VM Started (Lifecycle Event)#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.687 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:54 np0005539563 nova_compute[252253]: 2025-11-29 07:48:54.692 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:48:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:55 np0005539563 nova_compute[252253]: 2025-11-29 07:48:55.224 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 691 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 290 op/s
Nov 29 02:48:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:55.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:57.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 667 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.3 MiB/s wr, 328 op/s
Nov 29 02:48:57 np0005539563 nova_compute[252253]: 2025-11-29 07:48:57.481 252257 DEBUG nova.network.neutron [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [{"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:57 np0005539563 nova_compute[252253]: 2025-11-29 07:48:57.504 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Releasing lock "refresh_cache-738ca4a4-91f6-4476-a500-4d85c8eb00ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:57 np0005539563 nova_compute[252253]: 2025-11-29 07:48:57.522 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:57 np0005539563 nova_compute[252253]: 2025-11-29 07:48:57.523 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:57 np0005539563 nova_compute[252253]: 2025-11-29 07:48:57.523 252257 DEBUG oslo_concurrency.lockutils [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:57 np0005539563 nova_compute[252253]: 2025-11-29 07:48:57.527 252257 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:48:57 np0005539563 virtqemud[251807]: Domain id=10 name='instance-00000012' uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef is tainted: custom-monitor
Nov 29 02:48:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:48:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:57.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.528 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.537 252257 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:48:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 29 02:48:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 02:48:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.694 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "93122e14-3d0f-4a03-912f-b7cf58571516" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.695 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "93122e14-3d0f-4a03-912f-b7cf58571516" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.696 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "93122e14-3d0f-4a03-912f-b7cf58571516-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.696 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "93122e14-3d0f-4a03-912f-b7cf58571516-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.696 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "93122e14-3d0f-4a03-912f-b7cf58571516-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.698 252257 INFO nova.compute.manager [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Terminating instance#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.699 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "refresh_cache-93122e14-3d0f-4a03-912f-b7cf58571516" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.699 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquired lock "refresh_cache-93122e14-3d0f-4a03-912f-b7cf58571516" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.700 252257 DEBUG nova.network.neutron [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.855 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "22535ce1-6453-42d0-b742-1030fb087035" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.856 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "22535ce1-6453-42d0-b742-1030fb087035" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.856 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "22535ce1-6453-42d0-b742-1030fb087035-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.856 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "22535ce1-6453-42d0-b742-1030fb087035-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.857 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "22535ce1-6453-42d0-b742-1030fb087035-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.858 252257 INFO nova.compute.manager [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Terminating instance#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.859 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "refresh_cache-22535ce1-6453-42d0-b742-1030fb087035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.859 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquired lock "refresh_cache-22535ce1-6453-42d0-b742-1030fb087035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.860 252257 DEBUG nova.network.neutron [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:48:58 np0005539563 nova_compute[252253]: 2025-11-29 07:48:58.871 252257 DEBUG nova.network.neutron [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.002 252257 DEBUG nova.network.neutron [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.108 252257 DEBUG nova.network.neutron [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.126 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Releasing lock "refresh_cache-93122e14-3d0f-4a03-912f-b7cf58571516" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.127 252257 DEBUG nova.compute.manager [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:48:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:48:59.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:59 np0005539563 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 29 02:48:59 np0005539563 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000015.scope: Consumed 14.853s CPU time.
Nov 29 02:48:59 np0005539563 systemd-machined[213024]: Machine qemu-8-instance-00000015 terminated.
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.347 252257 INFO nova.virt.libvirt.driver [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Instance destroyed successfully.#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.348 252257 DEBUG nova.objects.instance [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'resources' on Instance uuid 93122e14-3d0f-4a03-912f-b7cf58571516 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.363 252257 DEBUG nova.network.neutron [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.384 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Releasing lock "refresh_cache-22535ce1-6453-42d0-b742-1030fb087035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.385 252257 DEBUG nova.compute.manager [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:48:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 667 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 39 KiB/s wr, 150 op/s
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.542 252257 INFO nova.virt.libvirt.driver [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.548 252257 DEBUG nova.compute.manager [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:48:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:48:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:48:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:48:59.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:48:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:48:59 np0005539563 nova_compute[252253]: 2025-11-29 07:48:59.890 252257 DEBUG nova.objects.instance [None req-326fec11-bc2c-4f9e-ba85-559e2b30a85e 59581c6281ec4338a6e50f15daba8f83 0ef7361aeeb6486f81bc1b66cbf76166 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:48:59 np0005539563 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 29 02:48:59 np0005539563 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000016.scope: Consumed 14.351s CPU time.
Nov 29 02:48:59 np0005539563 systemd-machined[213024]: Machine qemu-9-instance-00000016 terminated.
Nov 29 02:49:00 np0005539563 nova_compute[252253]: 2025-11-29 07:49:00.005 252257 INFO nova.virt.libvirt.driver [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Instance destroyed successfully.#033[00m
Nov 29 02:49:00 np0005539563 nova_compute[252253]: 2025-11-29 07:49:00.006 252257 DEBUG nova.objects.instance [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lazy-loading 'resources' on Instance uuid 22535ce1-6453-42d0-b742-1030fb087035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:00 np0005539563 nova_compute[252253]: 2025-11-29 07:49:00.224 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:01.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:01.254 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.255 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:01.257 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:49:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 598 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 38 KiB/s wr, 280 op/s
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.668 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.669 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.670 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.670 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.671 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.673 252257 INFO nova.compute.manager [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Terminating instance#033[00m
Nov 29 02:49:01 np0005539563 nova_compute[252253]: 2025-11-29 07:49:01.674 252257 DEBUG nova.compute.manager [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:49:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:01.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:02 np0005539563 kernel: tap83ec9820-37 (unregistering): left promiscuous mode
Nov 29 02:49:02 np0005539563 NetworkManager[48981]: <info>  [1764402542.3253] device (tap83ec9820-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:49:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:49:02Z|00086|binding|INFO|Releasing lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 from this chassis (sb_readonly=0)
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.335 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:49:02Z|00087|binding|INFO|Setting lport 83ec9820-3713-4570-ab8a-a88fba3f29c9 down in Southbound
Nov 29 02:49:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:49:02Z|00088|binding|INFO|Removing iface tap83ec9820-37 ovn-installed in OVS
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.342 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:30:0e 10.100.0.6'], port_security=['fa:16:3e:eb:30:0e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '738ca4a4-91f6-4476-a500-4d85c8eb00ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3f16345721743ccb9afb374deec67b5', 'neutron:revision_number': '21', 'neutron:security_group_ids': '4965281f-7261-4f0b-b0ca-fbb327add57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49e03573-97a7-4693-af53-f6975c853dfa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=83ec9820-3713-4570-ab8a-a88fba3f29c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.343 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 83ec9820-3713-4570-ab8a-a88fba3f29c9 in datapath 64f65ccd-7749-48ca-ba36-8eb6d9ce3610 unbound from our chassis#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.344 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.346 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5a854b08-fc39-4c31-ad65-085051e21982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.347 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 namespace which is not needed anymore#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.369 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 29 02:49:02 np0005539563 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000012.scope: Consumed 1.960s CPU time.
Nov 29 02:49:02 np0005539563 systemd-machined[213024]: Machine qemu-10-instance-00000012 terminated.
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.499 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.503 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.520 252257 INFO nova.virt.libvirt.driver [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Instance destroyed successfully.#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.521 252257 DEBUG nova.objects.instance [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lazy-loading 'resources' on Instance uuid 738ca4a4-91f6-4476-a500-4d85c8eb00ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.537 252257 DEBUG nova.virt.libvirt.vif [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-245400987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-245400987',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f3f16345721743ccb9afb374deec67b5',ramdisk_id='',reservation_id='r-fe8gqt5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-362691100',owner
_user_name='tempest-LiveAutoBlockMigrationV225Test-362691100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:48:59Z,user_data=None,user_id='d15fa4897cba4410b8d341f62586c091',uuid=738ca4a4-91f6-4476-a500-4d85c8eb00ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.538 252257 DEBUG nova.network.os_vif_util [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converting VIF {"id": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "address": "fa:16:3e:eb:30:0e", "network": {"id": "64f65ccd-7749-48ca-ba36-8eb6d9ce3610", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-323299976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3f16345721743ccb9afb374deec67b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83ec9820-37", "ovs_interfaceid": "83ec9820-3713-4570-ab8a-a88fba3f29c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.539 252257 DEBUG nova.network.os_vif_util [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.539 252257 DEBUG os_vif [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.542 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.542 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83ec9820-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:02 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [NOTICE]   (271127) : haproxy version is 2.8.14-c23fe91
Nov 29 02:49:02 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [NOTICE]   (271127) : path to executable is /usr/sbin/haproxy
Nov 29 02:49:02 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [WARNING]  (271127) : Exiting Master process...
Nov 29 02:49:02 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [WARNING]  (271127) : Exiting Master process...
Nov 29 02:49:02 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [ALERT]    (271127) : Current worker (271129) exited with code 143 (Terminated)
Nov 29 02:49:02 np0005539563 neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610[271120]: [WARNING]  (271127) : All workers exited. Exiting... (0)
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.584 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 systemd[1]: libpod-23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3.scope: Deactivated successfully.
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.587 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.589 252257 INFO os_vif [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:30:0e,bridge_name='br-int',has_traffic_filtering=True,id=83ec9820-3713-4570-ab8a-a88fba3f29c9,network=Network(64f65ccd-7749-48ca-ba36-8eb6d9ce3610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83ec9820-37')#033[00m
Nov 29 02:49:02 np0005539563 podman[271332]: 2025-11-29 07:49:02.60061995 +0000 UTC m=+0.136258907 container died 23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:49:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3-userdata-shm.mount: Deactivated successfully.
Nov 29 02:49:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-30efad8b076f98dfca797abe1eef2b3febbf8c566ee62dfdef08c4bedbb0fc1a-merged.mount: Deactivated successfully.
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.679 252257 DEBUG nova.compute.manager [req-245279c1-18a8-48fc-a1ce-e73a20509693 req-b67a45e1-4a7b-4a19-9026-b75bb3085d8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.681 252257 DEBUG oslo_concurrency.lockutils [req-245279c1-18a8-48fc-a1ce-e73a20509693 req-b67a45e1-4a7b-4a19-9026-b75bb3085d8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.681 252257 DEBUG oslo_concurrency.lockutils [req-245279c1-18a8-48fc-a1ce-e73a20509693 req-b67a45e1-4a7b-4a19-9026-b75bb3085d8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.681 252257 DEBUG oslo_concurrency.lockutils [req-245279c1-18a8-48fc-a1ce-e73a20509693 req-b67a45e1-4a7b-4a19-9026-b75bb3085d8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.682 252257 DEBUG nova.compute.manager [req-245279c1-18a8-48fc-a1ce-e73a20509693 req-b67a45e1-4a7b-4a19-9026-b75bb3085d8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.682 252257 DEBUG nova.compute.manager [req-245279c1-18a8-48fc-a1ce-e73a20509693 req-b67a45e1-4a7b-4a19-9026-b75bb3085d8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-unplugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:49:02 np0005539563 podman[271332]: 2025-11-29 07:49:02.739074757 +0000 UTC m=+0.274713704 container cleanup 23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:49:02 np0005539563 systemd[1]: libpod-conmon-23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3.scope: Deactivated successfully.
Nov 29 02:49:02 np0005539563 podman[271389]: 2025-11-29 07:49:02.878847458 +0000 UTC m=+0.113354264 container remove 23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.888 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2fad8e-c947-42f3-ad07-40dd68cdcd7c]: (4, ('Sat Nov 29 07:49:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3)\n23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3\nSat Nov 29 07:49:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 (23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3)\n23ce159b730765779047074964c23f9f5f294dd0691b3f892590c38015cbbaf3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.891 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f160bb9a-6762-4f30-a1c4-20cea7b2cf53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.893 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64f65ccd-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:49:02 np0005539563 kernel: tap64f65ccd-70: left promiscuous mode
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.908 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 nova_compute[252253]: 2025-11-29 07:49:02.916 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.919 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d84c87-e727-4ec7-8d66-5bf5b4409d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.935 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc85074-8232-433c-b940-c2f4cf0aa94d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.936 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9a033a-33dd-4017-bb1c-5e1b86b2c5b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.954 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8b925f-8994-4069-9e7a-fc39e2945bc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550055, 'reachable_time': 28459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271405, 'error': None, 'target': 'ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:02 np0005539563 systemd[1]: run-netns-ovnmeta\x2d64f65ccd\x2d7749\x2d48ca\x2dba36\x2d8eb6d9ce3610.mount: Deactivated successfully.
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.957 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64f65ccd-7749-48ca-ba36-8eb6d9ce3610 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:49:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:02.957 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf4ffa9-7c38-4ec3-b9eb-1440bdb1c405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:49:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:03.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.299 252257 INFO nova.virt.libvirt.driver [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deleting instance files /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef_del#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.300 252257 INFO nova.virt.libvirt.driver [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deletion of /var/lib/nova/instances/738ca4a4-91f6-4476-a500-4d85c8eb00ef_del complete#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.306 252257 INFO nova.virt.libvirt.driver [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Deleting instance files /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516_del#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.306 252257 INFO nova.virt.libvirt.driver [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Deletion of /var/lib/nova/instances/93122e14-3d0f-4a03-912f-b7cf58571516_del complete#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.428 252257 INFO nova.compute.manager [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 1.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.429 252257 DEBUG oslo.service.loopingcall [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.430 252257 DEBUG nova.compute.manager [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.430 252257 DEBUG nova.network.neutron [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.433 252257 INFO nova.compute.manager [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Took 4.31 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.433 252257 DEBUG oslo.service.loopingcall [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.433 252257 DEBUG nova.compute.manager [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.434 252257 DEBUG nova.network.neutron [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:49:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 598 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 35 KiB/s wr, 255 op/s
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.606 252257 DEBUG nova.network.neutron [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.622 252257 DEBUG nova.network.neutron [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.646 252257 INFO nova.compute.manager [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Took 0.21 seconds to deallocate network for instance.#033[00m
Nov 29 02:49:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.725 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.726 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:03 np0005539563 nova_compute[252253]: 2025-11-29 07:49:03.880 252257 DEBUG oslo_concurrency.processutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.279 252257 DEBUG nova.network.neutron [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.367 252257 INFO nova.compute.manager [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 0.94 seconds to deallocate network for instance.#033[00m
Nov 29 02:49:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816915665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.413 252257 DEBUG oslo_concurrency.processutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.419 252257 DEBUG nova.compute.provider_tree [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.438 252257 DEBUG nova.scheduler.client.report [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.464 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.475 252257 INFO nova.virt.libvirt.driver [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Deleting instance files /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035_del
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.475 252257 INFO nova.virt.libvirt.driver [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Deletion of /var/lib/nova/instances/22535ce1-6453-42d0-b742-1030fb087035_del complete
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.480 252257 DEBUG nova.compute.manager [req-2df9ec48-9465-4d17-9d04-18b841fe04d7 req-5c7e01cf-0596-4aa6-b69d-794c5f873f2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-deleted-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.489 252257 INFO nova.scheduler.client.report [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Deleted allocations for instance 93122e14-3d0f-4a03-912f-b7cf58571516
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.545 252257 INFO nova.compute.manager [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Took 5.16 seconds to destroy the instance on the hypervisor.
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.546 252257 DEBUG oslo.service.loopingcall [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.547 252257 DEBUG nova.compute.manager [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.547 252257 DEBUG nova.network.neutron [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.578 252257 DEBUG oslo_concurrency.lockutils [None req-19717246-fb2a-4c8a-92d5-464ec0f309d3 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "93122e14-3d0f-4a03-912f-b7cf58571516" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.653 252257 INFO nova.compute.manager [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Took 0.28 seconds to detach 1 volumes for instance.
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.654 252257 DEBUG nova.compute.manager [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Deleting volume: 1c382493-5718-4c6c-93b8-8f2562c0a68a _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.715 252257 DEBUG nova.network.neutron [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:49:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.732 252257 DEBUG nova.network.neutron [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.743 252257 INFO nova.compute.manager [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Took 0.20 seconds to deallocate network for instance.
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.780 252257 DEBUG nova.compute.manager [req-d40b3716-7e1c-4179-9137-225c1ce8e722 req-98151da7-2754-42ba-ae80-9d14f1322df4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.780 252257 DEBUG oslo_concurrency.lockutils [req-d40b3716-7e1c-4179-9137-225c1ce8e722 req-98151da7-2754-42ba-ae80-9d14f1322df4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.780 252257 DEBUG oslo_concurrency.lockutils [req-d40b3716-7e1c-4179-9137-225c1ce8e722 req-98151da7-2754-42ba-ae80-9d14f1322df4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.781 252257 DEBUG oslo_concurrency.lockutils [req-d40b3716-7e1c-4179-9137-225c1ce8e722 req-98151da7-2754-42ba-ae80-9d14f1322df4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.781 252257 DEBUG nova.compute.manager [req-d40b3716-7e1c-4179-9137-225c1ce8e722 req-98151da7-2754-42ba-ae80-9d14f1322df4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] No waiting events found dispatching network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.781 252257 WARNING nova.compute.manager [req-d40b3716-7e1c-4179-9137-225c1ce8e722 req-98151da7-2754-42ba-ae80-9d14f1322df4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Received unexpected event network-vif-plugged-83ec9820-3713-4570-ab8a-a88fba3f29c9 for instance with vm_state active and task_state deleting.
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.789 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.790 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.868 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:04 np0005539563 nova_compute[252253]: 2025-11-29 07:49:04.882 252257 DEBUG oslo_concurrency.processutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:49:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:04.893 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:04.894 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:04.894 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:05.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.227 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 554 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 729 KiB/s wr, 239 op/s
Nov 29 02:49:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216977296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.515 252257 DEBUG oslo_concurrency.processutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.522 252257 DEBUG nova.compute.provider_tree [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.551 252257 DEBUG nova.scheduler.client.report [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.572 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.578 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.584 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.624 252257 INFO nova.scheduler.client.report [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Deleted allocations for instance 738ca4a4-91f6-4476-a500-4d85c8eb00ef
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.638 252257 INFO nova.scheduler.client.report [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Deleted allocations for instance 22535ce1-6453-42d0-b742-1030fb087035
Nov 29 02:49:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:05.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.751 252257 DEBUG oslo_concurrency.lockutils [None req-f8733fe2-e9c2-4f9c-a694-22702e5ea5fc d15fa4897cba4410b8d341f62586c091 f3f16345721743ccb9afb374deec67b5 - - default default] Lock "738ca4a4-91f6-4476-a500-4d85c8eb00ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:05 np0005539563 nova_compute[252253]: 2025-11-29 07:49:05.766 252257 DEBUG oslo_concurrency.lockutils [None req-c633e92f-7f93-47e1-a707-ff2d1b31cbb9 386584ea971049e3b0c06b8237710848 c80f8d4661784e8faaf78d28df3fb677 - - default default] Lock "22535ce1-6453-42d0-b742-1030fb087035" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 02:49:05 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 29 02:49:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:07.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 259 op/s
Nov 29 02:49:07 np0005539563 nova_compute[252253]: 2025-11-29 07:49:07.619 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:09.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Nov 29 02:49:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:09.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:10 np0005539563 nova_compute[252253]: 2025-11-29 07:49:10.229 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:49:10.259 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:49:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:11.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 125 op/s
Nov 29 02:49:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:12 np0005539563 nova_compute[252253]: 2025-11-29 07:49:12.622 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:49:12
Nov 29 02:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data']
Nov 29 02:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:49:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:13.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 486 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 161 KiB/s rd, 2.1 MiB/s wr, 125 op/s
Nov 29 02:49:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:13.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:49:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:49:14 np0005539563 nova_compute[252253]: 2025-11-29 07:49:14.346 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402539.3448896, 93122e14-3d0f-4a03-912f-b7cf58571516 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:49:14 np0005539563 nova_compute[252253]: 2025-11-29 07:49:14.347 252257 INFO nova.compute.manager [-] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] VM Stopped (Lifecycle Event)
Nov 29 02:49:14 np0005539563 nova_compute[252253]: 2025-11-29 07:49:14.374 252257 DEBUG nova.compute.manager [None req-139959f2-21fe-4543-9e53-90a5a483ce36 - - - - - -] [instance: 93122e14-3d0f-4a03-912f-b7cf58571516] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:49:15 np0005539563 nova_compute[252253]: 2025-11-29 07:49:15.005 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402540.003892, 22535ce1-6453-42d0-b742-1030fb087035 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:49:15 np0005539563 nova_compute[252253]: 2025-11-29 07:49:15.005 252257 INFO nova.compute.manager [-] [instance: 22535ce1-6453-42d0-b742-1030fb087035] VM Stopped (Lifecycle Event)
Nov 29 02:49:15 np0005539563 nova_compute[252253]: 2025-11-29 07:49:15.048 252257 DEBUG nova.compute.manager [None req-033ad4f2-a7fc-4c43-bcef-4d94491b8b60 - - - - - -] [instance: 22535ce1-6453-42d0-b742-1030fb087035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:49:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:15.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:15 np0005539563 nova_compute[252253]: 2025-11-29 07:49:15.231 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 494 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Nov 29 02:49:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:49:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:49:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:15.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:49:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:49:17 np0005539563 podman[271612]: 2025-11-29 07:49:17.137519009 +0000 UTC m=+0.056216210 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:49:17 np0005539563 podman[271613]: 2025-11-29 07:49:17.14574412 +0000 UTC m=+0.066942548 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:49:17 np0005539563 podman[271614]: 2025-11-29 07:49:17.190424929 +0000 UTC m=+0.107320411 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:49:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:17.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 230564c0-adec-45fa-9cce-f306e9130e22 does not exist
Nov 29 02:49:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 04ee3653-99d6-4fc8-b2b5-c55559604f11 does not exist
Nov 29 02:49:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9307098c-d331-46d1-8903-f00ee204221b does not exist
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:49:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:49:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 510 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 1.9 MiB/s wr, 104 op/s
Nov 29 02:49:17 np0005539563 nova_compute[252253]: 2025-11-29 07:49:17.519 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402542.518401, 738ca4a4-91f6-4476-a500-4d85c8eb00ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:17 np0005539563 nova_compute[252253]: 2025-11-29 07:49:17.520 252257 INFO nova.compute.manager [-] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:49:17 np0005539563 nova_compute[252253]: 2025-11-29 07:49:17.539 252257 DEBUG nova.compute.manager [None req-ba2a4e30-5476-49d5-82d6-beceeee01bb9 - - - - - -] [instance: 738ca4a4-91f6-4476-a500-4d85c8eb00ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:17 np0005539563 nova_compute[252253]: 2025-11-29 07:49:17.625 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:17.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:18 np0005539563 podman[271840]: 2025-11-29 07:49:18.093397373 +0000 UTC m=+0.022078863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:49:19 np0005539563 podman[271840]: 2025-11-29 07:49:19.138281856 +0000 UTC m=+1.066963326 container create de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_payne, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:49:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:49:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:49:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:19.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 510 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 560 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 02:49:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:19.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:20 np0005539563 nova_compute[252253]: 2025-11-29 07:49:20.234 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:20 np0005539563 systemd[1]: Started libpod-conmon-de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a.scope.
Nov 29 02:49:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:49:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:21.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 298 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 29 02:49:21 np0005539563 podman[271840]: 2025-11-29 07:49:21.712245334 +0000 UTC m=+3.640926864 container init de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_payne, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:21 np0005539563 podman[271840]: 2025-11-29 07:49:21.726521828 +0000 UTC m=+3.655203298 container start de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:49:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:21.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:21 np0005539563 charming_payne[271859]: 167 167
Nov 29 02:49:21 np0005539563 systemd[1]: libpod-de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a.scope: Deactivated successfully.
Nov 29 02:49:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:22 np0005539563 nova_compute[252253]: 2025-11-29 07:49:22.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:22 np0005539563 podman[271840]: 2025-11-29 07:49:22.870791708 +0000 UTC m=+4.799473248 container attach de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:49:22 np0005539563 podman[271840]: 2025-11-29 07:49:22.871476226 +0000 UTC m=+4.800157706 container died de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006339204704075935 of space, bias 1.0, pg target 1.9017614112227805 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 02:49:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 298 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Nov 29 02:49:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d736b25118fd5365243d8f1aa40ed1e6c7e72f11e861c982ebd85b55a34e012e-merged.mount: Deactivated successfully.
Nov 29 02:49:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:23.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:49:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670377070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:49:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:49:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/670377070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:49:24 np0005539563 podman[271840]: 2025-11-29 07:49:24.48242198 +0000 UTC m=+6.411103460 container remove de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:49:24 np0005539563 systemd[1]: libpod-conmon-de670df56127fd18fae2197266578c4aa2a9591e581a3d3533c3929bfedd918a.scope: Deactivated successfully.
Nov 29 02:49:24 np0005539563 nova_compute[252253]: 2025-11-29 07:49:24.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:24 np0005539563 nova_compute[252253]: 2025-11-29 07:49:24.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:24 np0005539563 nova_compute[252253]: 2025-11-29 07:49:24.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:49:24 np0005539563 podman[271885]: 2025-11-29 07:49:24.710159792 +0000 UTC m=+0.066489455 container create 84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:24 np0005539563 systemd[1]: Started libpod-conmon-84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911.scope.
Nov 29 02:49:24 np0005539563 podman[271885]: 2025-11-29 07:49:24.683918727 +0000 UTC m=+0.040248420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:49:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:49:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfbe74891cf13fcfe9168416bc441cf786dfc2c48cbd9d4162512f8fb6589c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfbe74891cf13fcfe9168416bc441cf786dfc2c48cbd9d4162512f8fb6589c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfbe74891cf13fcfe9168416bc441cf786dfc2c48cbd9d4162512f8fb6589c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfbe74891cf13fcfe9168416bc441cf786dfc2c48cbd9d4162512f8fb6589c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bfbe74891cf13fcfe9168416bc441cf786dfc2c48cbd9d4162512f8fb6589c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:24 np0005539563 podman[271885]: 2025-11-29 07:49:24.82933322 +0000 UTC m=+0.185662873 container init 84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:49:24 np0005539563 podman[271885]: 2025-11-29 07:49:24.839777151 +0000 UTC m=+0.196106804 container start 84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:49:24 np0005539563 podman[271885]: 2025-11-29 07:49:24.84423044 +0000 UTC m=+0.200560083 container attach 84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:49:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:25.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:25 np0005539563 nova_compute[252253]: 2025-11-29 07:49:25.236 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 298 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Nov 29 02:49:25 np0005539563 practical_rhodes[271901]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:49:25 np0005539563 practical_rhodes[271901]: --> relative data size: 1.0
Nov 29 02:49:25 np0005539563 practical_rhodes[271901]: --> All data devices are unavailable
Nov 29 02:49:25 np0005539563 systemd[1]: libpod-84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911.scope: Deactivated successfully.
Nov 29 02:49:25 np0005539563 podman[271885]: 2025-11-29 07:49:25.677838162 +0000 UTC m=+1.034167835 container died 84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:49:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:25.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:26 np0005539563 nova_compute[252253]: 2025-11-29 07:49:26.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:27.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 165 op/s
Nov 29 02:49:27 np0005539563 nova_compute[252253]: 2025-11-29 07:49:27.664 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:27 np0005539563 nova_compute[252253]: 2025-11-29 07:49:27.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:27 np0005539563 nova_compute[252253]: 2025-11-29 07:49:27.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:49:27 np0005539563 nova_compute[252253]: 2025-11-29 07:49:27.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:49:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:27.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:28 np0005539563 nova_compute[252253]: 2025-11-29 07:49:28.568 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:49:28 np0005539563 nova_compute[252253]: 2025-11-29 07:49:28.568 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:49:28 np0005539563 nova_compute[252253]: 2025-11-29 07:49:28.568 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:49:28 np0005539563 nova_compute[252253]: 2025-11-29 07:49:28.569 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:49:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:29.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 121 op/s
Nov 29 02:49:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:29.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:29 np0005539563 nova_compute[252253]: 2025-11-29 07:49:29.823 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:49:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8bfbe74891cf13fcfe9168416bc441cf786dfc2c48cbd9d4162512f8fb6589c1-merged.mount: Deactivated successfully.
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.139 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.237 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.278 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.279 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.280 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.280 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.281 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.281 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.580 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.581 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.582 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.582 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:49:30 np0005539563 nova_compute[252253]: 2025-11-29 07:49:30.583 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141279700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.115 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.235 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.235 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.452 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.454 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4528MB free_disk=20.85523223876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.454 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.455 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 43 KiB/s wr, 193 op/s
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.538 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.538 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.539 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:49:31 np0005539563 nova_compute[252253]: 2025-11-29 07:49:31.586 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:31.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407377134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:32 np0005539563 nova_compute[252253]: 2025-11-29 07:49:32.528 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.942s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:32 np0005539563 nova_compute[252253]: 2025-11-29 07:49:32.534 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:32 np0005539563 nova_compute[252253]: 2025-11-29 07:49:32.553 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:32 np0005539563 nova_compute[252253]: 2025-11-29 07:49:32.578 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:49:32 np0005539563 nova_compute[252253]: 2025-11-29 07:49:32.578 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:32 np0005539563 nova_compute[252253]: 2025-11-29 07:49:32.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 297 MiB data, 462 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 25 KiB/s wr, 120 op/s
Nov 29 02:49:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:33.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:33 np0005539563 nova_compute[252253]: 2025-11-29 07:49:33.976 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:49:34 np0005539563 podman[271885]: 2025-11-29 07:49:34.517831907 +0000 UTC m=+9.874161590 container remove 84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:49:34 np0005539563 systemd[1]: libpod-conmon-84f95179c6237e7036a45047f60fae0947ec922ff9246ddb688998e809f63911.scope: Deactivated successfully.
Nov 29 02:49:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:35 np0005539563 nova_compute[252253]: 2025-11-29 07:49:35.242 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:35 np0005539563 podman[272120]: 2025-11-29 07:49:35.384536297 +0000 UTC m=+0.039893791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:49:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 297 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 383 KiB/s wr, 129 op/s
Nov 29 02:49:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:35.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:36 np0005539563 podman[272120]: 2025-11-29 07:49:36.249930313 +0000 UTC m=+0.905287747 container create 210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:36 np0005539563 systemd[1]: Started libpod-conmon-210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a.scope.
Nov 29 02:49:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:49:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:37.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 297 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 371 KiB/s wr, 108 op/s
Nov 29 02:49:37 np0005539563 nova_compute[252253]: 2025-11-29 07:49:37.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:37 np0005539563 nova_compute[252253]: 2025-11-29 07:49:37.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:37.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:37 np0005539563 podman[272120]: 2025-11-29 07:49:37.882182969 +0000 UTC m=+2.537540453 container init 210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 29 02:49:37 np0005539563 podman[272120]: 2025-11-29 07:49:37.896308318 +0000 UTC m=+2.551665762 container start 210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:49:37 np0005539563 compassionate_mayer[272136]: 167 167
Nov 29 02:49:37 np0005539563 systemd[1]: libpod-210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a.scope: Deactivated successfully.
Nov 29 02:49:38 np0005539563 podman[272120]: 2025-11-29 07:49:38.761017554 +0000 UTC m=+3.416375048 container attach 210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:49:38 np0005539563 podman[272120]: 2025-11-29 07:49:38.762272689 +0000 UTC m=+3.417630133 container died 210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:49:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-79ae09329213bc398d17544d64e96b5c61862d5d55dc4f1d3335d46dc95b44e5-merged.mount: Deactivated successfully.
Nov 29 02:49:39 np0005539563 podman[272120]: 2025-11-29 07:49:39.018232288 +0000 UTC m=+3.673589722 container remove 210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:39 np0005539563 systemd[1]: libpod-conmon-210c3b17ab148964b6ec50f5064163029149e8a46870aa5138731d695491bf6a.scope: Deactivated successfully.
Nov 29 02:49:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:39.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:39 np0005539563 podman[272212]: 2025-11-29 07:49:39.161218405 +0000 UTC m=+0.023458000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:49:39 np0005539563 podman[272212]: 2025-11-29 07:49:39.299886037 +0000 UTC m=+0.162125602 container create 46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:49:39 np0005539563 systemd[1]: Started libpod-conmon-46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a.scope.
Nov 29 02:49:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26f3031978948f2e214a394047883076168790a0374c1160ddea8607237c91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26f3031978948f2e214a394047883076168790a0374c1160ddea8607237c91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26f3031978948f2e214a394047883076168790a0374c1160ddea8607237c91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e26f3031978948f2e214a394047883076168790a0374c1160ddea8607237c91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:39 np0005539563 podman[272212]: 2025-11-29 07:49:39.479641531 +0000 UTC m=+0.341881136 container init 46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 29 02:49:39 np0005539563 podman[272212]: 2025-11-29 07:49:39.486613148 +0000 UTC m=+0.348852713 container start 46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:49:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 302 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 87 op/s
Nov 29 02:49:39 np0005539563 podman[272212]: 2025-11-29 07:49:39.56495342 +0000 UTC m=+0.427193065 container attach 46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:49:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:39.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:40 np0005539563 nova_compute[252253]: 2025-11-29 07:49:40.242 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]: {
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:    "0": [
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:        {
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "devices": [
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "/dev/loop3"
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            ],
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "lv_name": "ceph_lv0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "lv_size": "7511998464",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "name": "ceph_lv0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "tags": {
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.cluster_name": "ceph",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.crush_device_class": "",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.encrypted": "0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.osd_id": "0",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.type": "block",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:                "ceph.vdo": "0"
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            },
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "type": "block",
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:            "vg_name": "ceph_vg0"
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:        }
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]:    ]
Nov 29 02:49:40 np0005539563 fervent_vaughan[272228]: }
Nov 29 02:49:40 np0005539563 systemd[1]: libpod-46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a.scope: Deactivated successfully.
Nov 29 02:49:40 np0005539563 podman[272212]: 2025-11-29 07:49:40.363045609 +0000 UTC m=+1.225285174 container died 46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:49:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7e26f3031978948f2e214a394047883076168790a0374c1160ddea8607237c91-merged.mount: Deactivated successfully.
Nov 29 02:49:40 np0005539563 podman[272212]: 2025-11-29 07:49:40.42678876 +0000 UTC m=+1.289028325 container remove 46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_vaughan, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:49:40 np0005539563 systemd[1]: libpod-conmon-46340bacf4eaede898f125e339742a9da97ceac85a9fd74915e20c85ec93329a.scope: Deactivated successfully.
Nov 29 02:49:40 np0005539563 nova_compute[252253]: 2025-11-29 07:49:40.972 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:40 np0005539563 nova_compute[252253]: 2025-11-29 07:49:40.973 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:40 np0005539563 nova_compute[252253]: 2025-11-29 07:49:40.991 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.074134024 +0000 UTC m=+0.046752827 container create 927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.103 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.103 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:49:41 np0005539563 systemd[1]: Started libpod-conmon-927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80.scope.
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.113 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.113 252257 INFO nova.compute.claims [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:49:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.053073598 +0000 UTC m=+0.025692401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:49:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:41.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.224 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.413776899 +0000 UTC m=+0.386395702 container init 927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.425549424 +0000 UTC m=+0.398168197 container start 927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:49:41 np0005539563 systemd[1]: libpod-927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80.scope: Deactivated successfully.
Nov 29 02:49:41 np0005539563 practical_cray[272410]: 167 167
Nov 29 02:49:41 np0005539563 conmon[272410]: conmon 927ebe34bc24bf16fc9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80.scope/container/memory.events
Nov 29 02:49:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 336 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Nov 29 02:49:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:49:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258598670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.920499328 +0000 UTC m=+0.893118151 container attach 927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.921171016 +0000 UTC m=+0.893789859 container died 927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:49:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6902ae80e359b831bc23ecf5ba4ee4619e1499d1bb9a5dfce584d44ddf320717-merged.mount: Deactivated successfully.
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.961 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.970 252257 DEBUG nova.compute.provider_tree [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:49:41 np0005539563 podman[272393]: 2025-11-29 07:49:41.985097162 +0000 UTC m=+0.957715945 container remove 927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:49:41 np0005539563 nova_compute[252253]: 2025-11-29 07:49:41.985 252257 DEBUG nova.scheduler.client.report [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:49:42 np0005539563 systemd[1]: libpod-conmon-927ebe34bc24bf16fc9a37892eaf0c648ae2759eba75edae3afda8b18f0f4e80.scope: Deactivated successfully.
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.016 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.018 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.084 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.109 252257 INFO nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.131 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:49:42 np0005539563 podman[272456]: 2025-11-29 07:49:42.140543314 +0000 UTC m=+0.038128165 container create 5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:49:42 np0005539563 systemd[1]: Started libpod-conmon-5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea.scope.
Nov 29 02:49:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:49:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d25a6ba955dd8a00f0f02bf4ffe1b7966e053c0f6b078241b815cfe320eeb66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d25a6ba955dd8a00f0f02bf4ffe1b7966e053c0f6b078241b815cfe320eeb66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d25a6ba955dd8a00f0f02bf4ffe1b7966e053c0f6b078241b815cfe320eeb66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d25a6ba955dd8a00f0f02bf4ffe1b7966e053c0f6b078241b815cfe320eeb66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:49:42 np0005539563 podman[272456]: 2025-11-29 07:49:42.218375002 +0000 UTC m=+0.115959843 container init 5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:49:42 np0005539563 podman[272456]: 2025-11-29 07:49:42.124041211 +0000 UTC m=+0.021626072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.222 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.223 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.224 252257 INFO nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating image(s)
Nov 29 02:49:42 np0005539563 podman[272456]: 2025-11-29 07:49:42.225783661 +0000 UTC m=+0.123368502 container start 5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:49:42 np0005539563 podman[272456]: 2025-11-29 07:49:42.232025749 +0000 UTC m=+0.129610610 container attach 5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.248 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.273 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.300 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.303 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.377 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.377 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.378 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.378 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.402 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.406 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.673 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:42 np0005539563 nova_compute[252253]: 2025-11-29 07:49:42.968 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]: {
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:        "osd_id": 0,
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:        "type": "bluestore"
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]:    }
Nov 29 02:49:43 np0005539563 hungry_kalam[272471]: }
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.043 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] resizing rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:49:43 np0005539563 systemd[1]: libpod-5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea.scope: Deactivated successfully.
Nov 29 02:49:43 np0005539563 podman[272456]: 2025-11-29 07:49:43.059077305 +0000 UTC m=+0.956662166 container died 5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:49:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2d25a6ba955dd8a00f0f02bf4ffe1b7966e053c0f6b078241b815cfe320eeb66-merged.mount: Deactivated successfully.
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:49:43 np0005539563 podman[272456]: 2025-11-29 07:49:43.200566012 +0000 UTC m=+1.098150873 container remove 5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kalam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:49:43 np0005539563 systemd[1]: libpod-conmon-5b278dee124b1e9fe3eb2c17480c1da37885ad50f2345883c0f8f7f85d4ae2ea.scope: Deactivated successfully.
Nov 29 02:49:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:43.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.249 252257 DEBUG nova.objects.instance [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'migration_context' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:49:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:49:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.275 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.275 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Ensure instance console log exists: /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.276 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.276 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.276 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.278 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.283 252257 WARNING nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:49:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d814eecf-5261-40d5-a0fb-49326d92ff15 does not exist
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b630b609-8d42-4cde-b815-ea552b9e6aff does not exist
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.288 252257 DEBUG nova.virt.libvirt.host [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.288 252257 DEBUG nova.virt.libvirt.host [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0d35ace5-3fd7-4fe6-8ad8-abc3b75b324c does not exist
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.291 252257 DEBUG nova.virt.libvirt.host [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.292 252257 DEBUG nova.virt.libvirt.host [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.294 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.295 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.295 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.295 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.296 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.296 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.296 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.296 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.296 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.297 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.297 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.297 252257 DEBUG nova.virt.hardware [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.300 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:49:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 336 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Nov 29 02:49:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239262292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.734 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.758 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:49:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:43 np0005539563 nova_compute[252253]: 2025-11-29 07:49:43.762 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:49:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:49:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630575762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.205 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.209 252257 DEBUG nova.objects.instance [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'pci_devices' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.234 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <uuid>d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</uuid>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <name>instance-0000001b</name>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1579333464</nova:name>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:49:43</nova:creationTime>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:user uuid="ed57e094b4c4441c8ffbfb96ecb62afc">tempest-UnshelveToHostMultiNodesTest-155692188-project-member</nova:user>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <nova:project uuid="cf226b9a5bb945c3a8f54976b5736fe3">tempest-UnshelveToHostMultiNodesTest-155692188</nova:project>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <entry name="serial">d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</entry>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <entry name="uuid">d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</entry>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/console.log" append="off"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:49:44 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:49:44 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:49:44 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:49:44 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:49:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.286 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.286 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.287 252257 INFO nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Using config drive#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.307 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.551 252257 INFO nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating config drive at /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.563 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp55127y15 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.694 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp55127y15" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.726 252257 DEBUG nova.storage.rbd_utils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.730 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.925 252257 DEBUG oslo_concurrency.processutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:49:44 np0005539563 nova_compute[252253]: 2025-11-29 07:49:44.927 252257 INFO nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deleting local config drive /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config because it was imported into RBD.#033[00m
Nov 29 02:49:44 np0005539563 systemd-machined[213024]: New machine qemu-12-instance-0000001b.
Nov 29 02:49:45 np0005539563 systemd[1]: Started Virtual Machine qemu-12-instance-0000001b.
Nov 29 02:49:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:45.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.243 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.360 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402585.3594234, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.360 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.363 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.364 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.369 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance spawned successfully.#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.370 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.418 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.418 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.419 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.419 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.420 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.420 252257 DEBUG nova.virt.libvirt.driver [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.428 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.432 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.461 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.462 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402585.3596087, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.462 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Started (Lifecycle Event)#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.483 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.490 252257 INFO nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Took 3.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.490 252257 DEBUG nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.493 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:49:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 380 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 575 KiB/s rd, 4.9 MiB/s wr, 146 op/s
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.566 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.589 252257 INFO nova.compute.manager [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Took 4.53 seconds to build instance.#033[00m
Nov 29 02:49:45 np0005539563 nova_compute[252253]: 2025-11-29 07:49:45.610 252257 DEBUG oslo_concurrency.lockutils [None req-c1d4b9aa-931f-4625-b85f-f696b17d1549 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:49:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:45.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:47.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 408 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 5.7 MiB/s wr, 148 op/s
Nov 29 02:49:47 np0005539563 podman[272901]: 2025-11-29 07:49:47.519155173 +0000 UTC m=+0.068458779 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:49:47 np0005539563 podman[272902]: 2025-11-29 07:49:47.531919555 +0000 UTC m=+0.081024405 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:49:47 np0005539563 podman[272903]: 2025-11-29 07:49:47.572979858 +0000 UTC m=+0.108676208 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:49:47 np0005539563 nova_compute[252253]: 2025-11-29 07:49:47.675 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:49:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:47.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:48 np0005539563 nova_compute[252253]: 2025-11-29 07:49:48.794 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:49:48 np0005539563 nova_compute[252253]: 2025-11-29 07:49:48.795 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:49:48 np0005539563 nova_compute[252253]: 2025-11-29 07:49:48.796 252257 INFO nova.compute.manager [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Shelving
Nov 29 02:49:48 np0005539563 nova_compute[252253]: 2025-11-29 07:49:48.823 252257 DEBUG nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.098862) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589098957, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2073, "num_deletes": 253, "total_data_size": 3588299, "memory_usage": 3650472, "flush_reason": "Manual Compaction"}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589162156, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3511905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23034, "largest_seqno": 25106, "table_properties": {"data_size": 3502473, "index_size": 5862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20700, "raw_average_key_size": 20, "raw_value_size": 3483220, "raw_average_value_size": 3507, "num_data_blocks": 259, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402379, "oldest_key_time": 1764402379, "file_creation_time": 1764402589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 63334 microseconds, and 7638 cpu microseconds.
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.162232) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3511905 bytes OK
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.162264) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.165296) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.165328) EVENT_LOG_v1 {"time_micros": 1764402589165321, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.165372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3579619, prev total WAL file size 3579619, number of live WAL files 2.
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.166589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3429KB)], [53(8821KB)]
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589166966, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 12545467, "oldest_snapshot_seqno": -1}
Nov 29 02:49:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:49.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5467 keys, 10467190 bytes, temperature: kUnknown
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589336412, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10467190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10429131, "index_size": 23293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 139341, "raw_average_key_size": 25, "raw_value_size": 10329025, "raw_average_value_size": 1889, "num_data_blocks": 953, "num_entries": 5467, "num_filter_entries": 5467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402589, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.336808) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10467190 bytes
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338767) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.0 rd, 61.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5991, records dropped: 524 output_compression: NoCompression
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.338805) EVENT_LOG_v1 {"time_micros": 1764402589338786, "job": 28, "event": "compaction_finished", "compaction_time_micros": 169535, "compaction_time_cpu_micros": 45374, "output_level": 6, "num_output_files": 1, "total_output_size": 10467190, "num_input_records": 5991, "num_output_records": 5467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589340804, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402589344411, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.166337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.344496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.344504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.344505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.344507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:49:49.344508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:49:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 409 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.3 MiB/s wr, 204 op/s
Nov 29 02:49:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:49:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:49.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:49:50 np0005539563 nova_compute[252253]: 2025-11-29 07:49:50.245 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:51.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Nov 29 02:49:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:52 np0005539563 nova_compute[252253]: 2025-11-29 07:49:52.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 206 op/s
Nov 29 02:49:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:55.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:55 np0005539563 nova_compute[252253]: 2025-11-29 07:49:55.248 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.5 MiB/s wr, 189 op/s
Nov 29 02:49:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:55.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:49:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:57.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 409 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 41 KiB/s wr, 190 op/s
Nov 29 02:49:57 np0005539563 nova_compute[252253]: 2025-11-29 07:49:57.678 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:49:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:57.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:49:58 np0005539563 nova_compute[252253]: 2025-11-29 07:49:58.868 252257 DEBUG nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:49:59 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 02:49:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:49:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:49:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:49:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 411 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 448 KiB/s wr, 159 op/s
Nov 29 02:49:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:49:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:49:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:49:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:50:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 29 02:50:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 02:50:00 np0005539563 nova_compute[252253]: 2025-11-29 07:50:00.250 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 02:50:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 29 02:50:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 449 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Nov 29 02:50:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:02 np0005539563 nova_compute[252253]: 2025-11-29 07:50:02.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 449 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 165 op/s
Nov 29 02:50:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:03.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:04.894 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:04.895 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:04.895 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:05 np0005539563 nova_compute[252253]: 2025-11-29 07:50:05.252 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 487 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 184 op/s
Nov 29 02:50:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:05.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 488 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 188 op/s
Nov 29 02:50:07 np0005539563 nova_compute[252253]: 2025-11-29 07:50:07.705 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:08.008 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:50:08 np0005539563 nova_compute[252253]: 2025-11-29 07:50:08.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:08.008 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:50:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:08.009 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:50:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 499 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Nov 29 02:50:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:09.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:09 np0005539563 nova_compute[252253]: 2025-11-29 07:50:09.917 252257 DEBUG nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:50:10 np0005539563 nova_compute[252253]: 2025-11-29 07:50:10.254 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:11.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 29 02:50:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 02:50:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 29 02:50:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 530 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Nov 29 02:50:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:11.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:12 np0005539563 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 02:50:12 np0005539563 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001b.scope: Consumed 14.424s CPU time.
Nov 29 02:50:12 np0005539563 systemd-machined[213024]: Machine qemu-12-instance-0000001b terminated.
Nov 29 02:50:12 np0005539563 nova_compute[252253]: 2025-11-29 07:50:12.707 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:50:12
Nov 29 02:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'vms']
Nov 29 02:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:50:12 np0005539563 nova_compute[252253]: 2025-11-29 07:50:12.930 252257 INFO nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance shutdown successfully after 24 seconds.#033[00m
Nov 29 02:50:12 np0005539563 nova_compute[252253]: 2025-11-29 07:50:12.936 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.#033[00m
Nov 29 02:50:12 np0005539563 nova_compute[252253]: 2025-11-29 07:50:12.936 252257 DEBUG nova.objects.instance [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'numa_topology' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:13.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:13 np0005539563 nova_compute[252253]: 2025-11-29 07:50:13.269 252257 INFO nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Beginning cold snapshot process#033[00m
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 530 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Nov 29 02:50:13 np0005539563 nova_compute[252253]: 2025-11-29 07:50:13.664 252257 DEBUG nova.virt.libvirt.imagebackend [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:50:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:50:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:13.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:13 np0005539563 nova_compute[252253]: 2025-11-29 07:50:13.919 252257 DEBUG nova.storage.rbd_utils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] creating snapshot(036cbe876e744d64bfe945505bf7a8ae) on rbd image(d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:50:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 29 02:50:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 02:50:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 29 02:50:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:15.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:15 np0005539563 nova_compute[252253]: 2025-11-29 07:50:15.257 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 534 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.7 MiB/s wr, 282 op/s
Nov 29 02:50:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:15.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:16 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:16Z|00089|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:50:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:17.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 534 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 268 op/s
Nov 29 02:50:17 np0005539563 nova_compute[252253]: 2025-11-29 07:50:17.554 252257 DEBUG nova.storage.rbd_utils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] cloning vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk@036cbe876e744d64bfe945505bf7a8ae to images/73295910-5010-42bf-ac24-9ee2d7bb4671 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:50:17 np0005539563 nova_compute[252253]: 2025-11-29 07:50:17.726 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:17 np0005539563 podman[273110]: 2025-11-29 07:50:17.734577496 +0000 UTC m=+0.077780788 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 02:50:17 np0005539563 podman[273109]: 2025-11-29 07:50:17.751282933 +0000 UTC m=+0.096909280 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 02:50:17 np0005539563 podman[273111]: 2025-11-29 07:50:17.752966319 +0000 UTC m=+0.097428555 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:50:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:18 np0005539563 nova_compute[252253]: 2025-11-29 07:50:18.430 252257 DEBUG nova.storage.rbd_utils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] flattening images/73295910-5010-42bf-ac24-9ee2d7bb4671 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:50:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:19.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 534 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 537 KiB/s wr, 192 op/s
Nov 29 02:50:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:19.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:20 np0005539563 nova_compute[252253]: 2025-11-29 07:50:20.118 252257 DEBUG nova.storage.rbd_utils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] removing snapshot(036cbe876e744d64bfe945505bf7a8ae) on rbd image(d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:50:20 np0005539563 nova_compute[252253]: 2025-11-29 07:50:20.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 29 02:50:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 02:50:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 29 02:50:21 np0005539563 nova_compute[252253]: 2025-11-29 07:50:21.120 252257 DEBUG nova.storage.rbd_utils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] creating snapshot(snap) on rbd image(73295910-5010-42bf-ac24-9ee2d7bb4671) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:50:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:21.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 595 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.5 MiB/s wr, 275 op/s
Nov 29 02:50:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 29 02:50:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 02:50:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 29 02:50:22 np0005539563 nova_compute[252253]: 2025-11-29 07:50:22.728 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012997750616508468 of space, bias 1.0, pg target 3.8993251849525405 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003606719942304113 of space, bias 1.0, pg target 1.0711958228643217 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Nov 29 02:50:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 595 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.0 MiB/s wr, 126 op/s
Nov 29 02:50:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:23.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.043 252257 INFO nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Snapshot image upload complete#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.044 252257 DEBUG nova.compute.manager [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.099 252257 INFO nova.compute.manager [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Shelve offloading#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.106 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.107 252257 DEBUG nova.compute.manager [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.110 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.111 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquired lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.111 252257 DEBUG nova.network.neutron [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.598 252257 DEBUG nova.network.neutron [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.904 252257 DEBUG nova.network.neutron [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.921 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Releasing lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.929 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.#033[00m
Nov 29 02:50:24 np0005539563 nova_compute[252253]: 2025-11-29 07:50:24.930 252257 DEBUG nova.objects.instance [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'resources' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.260 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:25.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.382 252257 INFO nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deleting instance files /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_del#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.383 252257 INFO nova.virt.libvirt.driver [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deletion of /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_del complete#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.463 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.464 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.477 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.480 252257 INFO nova.scheduler.client.report [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Deleted allocations for instance d278aa2a-e5e7-4f89-8b5c-b6dca172b57d#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.530 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.530 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.547 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 622 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 11 MiB/s wr, 353 op/s
Nov 29 02:50:25 np0005539563 nova_compute[252253]: 2025-11-29 07:50:25.591 252257 DEBUG oslo_concurrency.processutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:25.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480715856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.078 252257 DEBUG oslo_concurrency.processutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.085 252257 DEBUG nova.compute.provider_tree [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.120 252257 DEBUG nova.scheduler.client.report [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.166 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.171 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.182 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.183 252257 INFO nova.compute.claims [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.255 252257 DEBUG oslo_concurrency.lockutils [None req-3bbd4e37-9ef1-4d8c-b265-5521e7dcce92 ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 37.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.423 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167011889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.889 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.895 252257 DEBUG nova.compute.provider_tree [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.968 252257 DEBUG nova.scheduler.client.report [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.994 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:26 np0005539563 nova_compute[252253]: 2025-11-29 07:50:26.995 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.084 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.084 252257 DEBUG nova.network.neutron [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.173 252257 INFO nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.236 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:50:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:27.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.429 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.431 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.432 252257 INFO nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Creating image(s)#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.469 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.524 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 622 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.8 MiB/s wr, 302 op/s
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.559 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.563 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.585 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402612.5399468, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.586 252257 INFO nova.compute.manager [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.628 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.630 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.630 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.630 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.657 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.661 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf aca637ac-6ef0-42f8-aacf-e022e990aeba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.692 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.694 252257 DEBUG nova.compute.manager [None req-48c0b38e-2499-464b-8aed-acf531a76c73 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.717 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.717 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.718 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.718 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.718 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:27.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:27 np0005539563 nova_compute[252253]: 2025-11-29 07:50:27.875 252257 DEBUG nova.policy [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37531d9f927d40ecadd246429b5b598d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:50:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745027761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.227 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.227 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.228 252257 INFO nova.compute.manager [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Unshelving#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.247 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.352 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf aca637ac-6ef0-42f8-aacf-e022e990aeba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.425 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.425 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.433 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] resizing rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.475 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'pci_requests' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.478 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.479 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.490 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.502 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.502 252257 INFO nova.compute.claims [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.549 252257 DEBUG nova.objects.instance [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lazy-loading 'migration_context' on Instance uuid aca637ac-6ef0-42f8-aacf-e022e990aeba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.567 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.568 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Ensure instance console log exists: /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.568 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.569 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.569 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.643 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.727 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.728 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4431MB free_disk=20.704662322998047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:50:28 np0005539563 nova_compute[252253]: 2025-11-29 07:50:28.728 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721227256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.080 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.085 252257 DEBUG nova.compute.provider_tree [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.105 252257 DEBUG nova.scheduler.client.report [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.140 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.146 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:29.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.457 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.458 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance aca637ac-6ef0-42f8-aacf-e022e990aeba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.458 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance d278aa2a-e5e7-4f89-8b5c-b6dca172b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.458 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.458 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.537 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 608 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.2 MiB/s wr, 286 op/s
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.646 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.646 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquired lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.647 252257 DEBUG nova.network.neutron [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.826 252257 DEBUG nova.network.neutron [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:29.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.856 252257 DEBUG nova.network.neutron [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Successfully updated port: e347928a-5a81-4fdb-a7df-4ac039bb8bb3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.874 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.874 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquired lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.874 252257 DEBUG nova.network.neutron [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.944 252257 DEBUG nova.compute.manager [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-changed-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.945 252257 DEBUG nova.compute.manager [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Refreshing instance network info cache due to event network-changed-e347928a-5a81-4fdb-a7df-4ac039bb8bb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.945 252257 DEBUG oslo_concurrency.lockutils [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045105352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.991 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:29 np0005539563 nova_compute[252253]: 2025-11-29 07:50:29.997 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.029 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.059 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.060 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.118 252257 DEBUG nova.network.neutron [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.168 252257 DEBUG nova.network.neutron [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.181 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Releasing lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.182 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.183 252257 INFO nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating image(s)#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.211 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.215 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.257 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.287 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.290 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "5e7fc7eb463b4860538b717738ed49b0905bd1be" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.291 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "5e7fc7eb463b4860538b717738ed49b0905bd1be" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.295 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.547 252257 DEBUG nova.virt.libvirt.imagebackend [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/73295910-5010-42bf-ac24-9ee2d7bb4671/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/73295910-5010-42bf-ac24-9ee2d7bb4671/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.605 252257 DEBUG nova.virt.libvirt.imagebackend [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/73295910-5010-42bf-ac24-9ee2d7bb4671/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.605 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] cloning images/73295910-5010-42bf-ac24-9ee2d7bb4671@snap to None/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.730 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "5e7fc7eb463b4860538b717738ed49b0905bd1be" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.906 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'migration_context' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:30 np0005539563 nova_compute[252253]: 2025-11-29 07:50:30.969 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] flattening vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.045 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.046 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.046 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.089 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.095 252257 DEBUG nova.network.neutron [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.131 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Releasing lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.131 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Instance network_info: |[{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.132 252257 DEBUG oslo_concurrency.lockutils [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.133 252257 DEBUG nova.network.neutron [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Refreshing network info cache for port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.139 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Start _get_guest_xml network_info=[{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.145 252257 WARNING nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.151 252257 DEBUG nova.virt.libvirt.host [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.152 252257 DEBUG nova.virt.libvirt.host [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.162 252257 DEBUG nova.virt.libvirt.host [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.163 252257 DEBUG nova.virt.libvirt.host [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.165 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.165 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.166 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.167 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.167 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.168 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.168 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.169 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.170 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.170 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.171 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.171 252257 DEBUG nova.virt.hardware [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.174 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:31.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.377 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.378 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.378 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.379 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.444 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Image rbd:vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.445 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.445 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Ensure instance console log exists: /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.446 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.446 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.447 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.448 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T07:49:48Z,direct_url=<?>,disk_format='raw',id=73295910-5010-42bf-ac24-9ee2d7bb4671,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1579333464-shelved',owner='cf226b9a5bb945c3a8f54976b5736fe3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T07:50:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.453 252257 WARNING nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.457 252257 DEBUG nova.virt.libvirt.host [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.457 252257 DEBUG nova.virt.libvirt.host [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.461 252257 DEBUG nova.virt.libvirt.host [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.461 252257 DEBUG nova.virt.libvirt.host [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.462 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.462 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T07:49:48Z,direct_url=<?>,disk_format='raw',id=73295910-5010-42bf-ac24-9ee2d7bb4671,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1579333464-shelved',owner='cf226b9a5bb945c3a8f54976b5736fe3',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T07:50:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.463 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.463 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.464 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.464 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.464 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.464 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.465 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.465 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.465 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.465 252257 DEBUG nova.virt.hardware [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.466 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.481 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 648 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 8.4 MiB/s wr, 297 op/s
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.567 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:50:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253559021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.660 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.693 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.696 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.819 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.838 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.839 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.840 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.840 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.840 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:31.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714154324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.923 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.954 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:31 np0005539563 nova_compute[252253]: 2025-11-29 07:50:31.958 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150943183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.142 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.144 252257 DEBUG nova.virt.libvirt.vif [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:50:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-267620235',display_name='tempest-LiveMigrationTest-server-267620235',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-267620235',id=30,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-e1tqaqaw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:50:27Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.144 252257 DEBUG nova.network.os_vif_util [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converting VIF {"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.145 252257 DEBUG nova.network.os_vif_util [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.146 252257 DEBUG nova.objects.instance [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lazy-loading 'pci_devices' on Instance uuid aca637ac-6ef0-42f8-aacf-e022e990aeba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.162 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <uuid>aca637ac-6ef0-42f8-aacf-e022e990aeba</uuid>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <name>instance-0000001e</name>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:name>tempest-LiveMigrationTest-server-267620235</nova:name>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:50:31</nova:creationTime>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:user uuid="37531d9f927d40ecadd246429b5b598d">tempest-LiveMigrationTest-561693451-project-member</nova:user>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:project uuid="73f3d0f2c9aa4ba29984fc9e6a7ed869">tempest-LiveMigrationTest-561693451</nova:project>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:port uuid="e347928a-5a81-4fdb-a7df-4ac039bb8bb3">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="serial">aca637ac-6ef0-42f8-aacf-e022e990aeba</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="uuid">aca637ac-6ef0-42f8-aacf-e022e990aeba</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/aca637ac-6ef0-42f8-aacf-e022e990aeba_disk">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/aca637ac-6ef0-42f8-aacf-e022e990aeba_disk.config">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:93:42:48"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <target dev="tape347928a-5a"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/console.log" append="off"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:50:32 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:50:32 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.163 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Preparing to wait for external event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.163 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.164 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.164 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.165 252257 DEBUG nova.virt.libvirt.vif [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:50:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-267620235',display_name='tempest-LiveMigrationTest-server-267620235',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-267620235',id=30,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-e1tqaqaw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:50:27Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.165 252257 DEBUG nova.network.os_vif_util [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converting VIF {"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.166 252257 DEBUG nova.network.os_vif_util [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.167 252257 DEBUG os_vif [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.167 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.168 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.169 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.173 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape347928a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.174 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape347928a-5a, col_values=(('external_ids', {'iface-id': 'e347928a-5a81-4fdb-a7df-4ac039bb8bb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:42:48', 'vm-uuid': 'aca637ac-6ef0-42f8-aacf-e022e990aeba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.186 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:32 np0005539563 NetworkManager[48981]: <info>  [1764402632.1877] manager: (tape347928a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.195 252257 INFO os_vif [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a')#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.281 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.282 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.282 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] No VIF found with MAC fa:16:3e:93:42:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.283 252257 INFO nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Using config drive#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.312 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:50:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1953835541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.425 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.426 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.440 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <uuid>d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</uuid>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <name>instance-0000001b</name>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1579333464</nova:name>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:50:31</nova:creationTime>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:user uuid="ed57e094b4c4441c8ffbfb96ecb62afc">tempest-UnshelveToHostMultiNodesTest-155692188-project-member</nova:user>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <nova:project uuid="cf226b9a5bb945c3a8f54976b5736fe3">tempest-UnshelveToHostMultiNodesTest-155692188</nova:project>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="73295910-5010-42bf-ac24-9ee2d7bb4671"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="serial">d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="uuid">d278aa2a-e5e7-4f89-8b5c-b6dca172b57d</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/console.log" append="off"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:50:32 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:50:32 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:50:32 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:50:32 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.657 252257 DEBUG nova.network.neutron [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updated VIF entry in instance network info cache for port e347928a-5a81-4fdb-a7df-4ac039bb8bb3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.657 252257 DEBUG nova.network.neutron [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.670 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.670 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.671 252257 INFO nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Using config drive#033[00m
Nov 29 02:50:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.878 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.886 252257 INFO nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Creating config drive at /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/disk.config#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.891 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbs77vxd1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.921 252257 DEBUG oslo_concurrency.lockutils [req-42f6dc9a-ffed-4728-bd96-42d2fec2a9a5 req-eca45f2a-6051-4d58-aa1b-9eb57e89de75 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.944 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:32 np0005539563 nova_compute[252253]: 2025-11-29 07:50:32.995 252257 DEBUG nova.objects.instance [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lazy-loading 'keypairs' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.037 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbs77vxd1" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.080 252257 DEBUG nova.storage.rbd_utils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] rbd image aca637ac-6ef0-42f8-aacf-e022e990aeba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.085 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/disk.config aca637ac-6ef0-42f8-aacf-e022e990aeba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.151 252257 INFO nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Creating config drive at /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.163 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3n6dtjh7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 02:50:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 29 02:50:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:33.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.312 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3n6dtjh7" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.345 252257 DEBUG nova.storage.rbd_utils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] rbd image d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.350 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.425 252257 DEBUG oslo_concurrency.processutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/disk.config aca637ac-6ef0-42f8-aacf-e022e990aeba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.427 252257 INFO nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Deleting local config drive /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.469 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:33 np0005539563 kernel: tape347928a-5a: entered promiscuous mode
Nov 29 02:50:33 np0005539563 NetworkManager[48981]: <info>  [1764402633.4784] manager: (tape347928a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.478 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00090|binding|INFO|Claiming lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for this chassis.
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00091|binding|INFO|e347928a-5a81-4fdb-a7df-4ac039bb8bb3: Claiming fa:16:3e:93:42:48 10.100.0.9
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00092|binding|INFO|Claiming lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b for this chassis.
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00093|binding|INFO|9ecac803-0ffe-4cdf-a724-cbb61954b01b: Claiming fa:16:3e:f4:69:f6 19.80.0.160
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.488 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.495 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:42:48 10.100.0.9'], port_security=['fa:16:3e:93:42:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1284960005', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aca637ac-6ef0-42f8-aacf-e022e990aeba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1284960005', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e347928a-5a81-4fdb-a7df-4ac039bb8bb3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.497 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:69:f6 19.80.0.160'], port_security=['fa:16:3e:f4:69:f6 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e347928a-5a81-4fdb-a7df-4ac039bb8bb3'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1926672861', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1926672861', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c4847c33-f725-4948-8187-3e41c1ea344f, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9ecac803-0ffe-4cdf-a724-cbb61954b01b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.498 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 in datapath b746034c-0143-4024-986c-673efea114a3 bound to our chassis#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.500 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b746034c-0143-4024-986c-673efea114a3#033[00m
Nov 29 02:50:33 np0005539563 systemd-udevd[274050]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.511 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f96bca9e-f135-4c79-a504-7c389dd8995d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 systemd-machined[213024]: New machine qemu-13-instance-0000001e.
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.513 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb746034c-01 in ovnmeta-b746034c-0143-4024-986c-673efea114a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.516 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb746034c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.516 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dd17019e-cf7d-470b-a186-f40335b10a90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.517 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4793972d-ae0a-42ff-bcf5-2ed399375e4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 NetworkManager[48981]: <info>  [1764402633.5253] device (tape347928a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:50:33 np0005539563 NetworkManager[48981]: <info>  [1764402633.5265] device (tape347928a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:50:33 np0005539563 systemd[1]: Started Virtual Machine qemu-13-instance-0000001e.
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.529 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8be429-0c97-4d04-b91f-c669bf857bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.553 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4af254be-d3f0-48fc-9852-1634e2f36e95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 648 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 216 KiB/s rd, 3.6 MiB/s wr, 106 op/s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.593 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5076d96d-cb53-4c50-835b-80556da05a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 NetworkManager[48981]: <info>  [1764402633.6035] manager: (tapb746034c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.602 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f362dfca-bd21-438d-b19f-61638aa45e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.607 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00094|binding|INFO|Setting lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 ovn-installed in OVS
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00095|binding|INFO|Setting lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 up in Southbound
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00096|binding|INFO|Setting lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b up in Southbound
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.613 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.635 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9229ad25-e542-44ef-a784-91b7bb1e94f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.638 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2237c4-216f-46a1-9a58-3c2fc2675f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 NetworkManager[48981]: <info>  [1764402633.6569] device (tapb746034c-00): carrier: link connected
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.661 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ab84a32f-4a57-4d88-bc1a-4b25cd77a4e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.674 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a82a15cc-6d59-4d66-88c2-046a430b59c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560143, 'reachable_time': 26453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274083, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.687 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9c92a152-2f21-4989-aea1-91b615bc10b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:8cc1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560143, 'tstamp': 560143}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274084, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.700 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[825fefdf-3b5f-4a5f-97e8-61712a1e5a20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb746034c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:8c:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560143, 'reachable_time': 26453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274085, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.728 252257 DEBUG oslo_concurrency.processutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.729 252257 INFO nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deleting local config drive /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d/disk.config because it was imported into RBD.#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.742 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a48f6da3-776f-4e23-99ec-fc9c14910958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 systemd-machined[213024]: New machine qemu-14-instance-0000001b.
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.806 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6564327b-9285-4776-81dc-958f8f23f1d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.808 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.808 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.809 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb746034c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.811 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 NetworkManager[48981]: <info>  [1764402633.8115] manager: (tapb746034c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 02:50:33 np0005539563 kernel: tapb746034c-00: entered promiscuous mode
Nov 29 02:50:33 np0005539563 systemd[1]: Started Virtual Machine qemu-14-instance-0000001b.
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.817 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb746034c-00, col_values=(('external_ids', {'iface-id': '193f2fed-77bd-4c35-9dcd-f198bbb1915e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.818 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:33Z|00097|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 02:50:33 np0005539563 nova_compute[252253]: 2025-11-29 07:50:33.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.838 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.839 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c2289266-6cb3-4aaf-987e-4c89a0d856f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:33.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.840 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-b746034c-0143-4024-986c-673efea114a3
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/b746034c-0143-4024-986c-673efea114a3.pid.haproxy
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID b746034c-0143-4024-986c-673efea114a3
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:50:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:33.849 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'env', 'PROCESS_TAG=haproxy-b746034c-0143-4024-986c-673efea114a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b746034c-0143-4024-986c-673efea114a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.190 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402634.1899648, aca637ac-6ef0-42f8-aacf-e022e990aeba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.192 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:34 np0005539563 podman[274174]: 2025-11-29 07:50:34.236505343 +0000 UTC m=+0.058732878 container create 5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.256 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.262 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402634.191098, aca637ac-6ef0-42f8-aacf-e022e990aeba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.263 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:50:34 np0005539563 podman[274174]: 2025-11-29 07:50:34.201226496 +0000 UTC m=+0.023454051 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.317 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539563 systemd[1]: Started libpod-conmon-5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3.scope.
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.321 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.342 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf245be613a2254cdbe1ecb869a7a0d9ef54ab2c92a58bd5d23074c4406ade61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.439 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402634.4385164, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.439 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.441 252257 DEBUG nova.compute.manager [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.442 252257 DEBUG nova.virt.libvirt.driver [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.444 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance spawned successfully.#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.460 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.464 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.494 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.495 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402634.4409053, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.495 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Started (Lifecycle Event)#033[00m
Nov 29 02:50:34 np0005539563 podman[274174]: 2025-11-29 07:50:34.501789543 +0000 UTC m=+0.324017058 container init 5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 02:50:34 np0005539563 podman[274174]: 2025-11-29 07:50:34.513034844 +0000 UTC m=+0.335262339 container start 5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.522 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.526 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:34 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [NOTICE]   (274234) : New worker (274237) forked
Nov 29 02:50:34 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [NOTICE]   (274234) : Loading success.
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.546 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.589 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecac803-0ffe-4cdf-a724-cbb61954b01b in datapath 5791158c-7fc4-4c56-891c-c8aa0c79ed59 unbound from our chassis#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.592 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5791158c-7fc4-4c56-891c-c8aa0c79ed59#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.607 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[46ade698-dfc3-404d-9cb3-1bbf41121882]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.608 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5791158c-71 in ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.610 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5791158c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.610 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0aadd1-f2d6-409a-bb4a-2c48a3451f4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.610 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec577ab-1d59-47d3-b6ce-2913f8e126b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.628 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[cf876be4-34b1-472a-8783-1f73978d9345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.643 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8c460c7d-97df-4e5a-9461-a2bf24c9884c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.677 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c3775bc4-5ad9-4d7d-bd0f-77b4dc77f64f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 NetworkManager[48981]: <info>  [1764402634.6876] manager: (tap5791158c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.688 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[402e7684-3543-46fe-9cad-beb1991d3704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 systemd-udevd[274069]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.733 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[56f27de1-67c8-4c5d-99f7-9a584156303d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.736 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba6590-7ec4-4a4b-8517-74fd8cd3cf1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 NetworkManager[48981]: <info>  [1764402634.7610] device (tap5791158c-70): carrier: link connected
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.768 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7c6a80-2c11-41c4-9267-96d50c735954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.792 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[50b4fdee-f47b-474a-bb42-ada3b229c1fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5791158c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:cd:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560253, 'reachable_time': 29192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274259, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.806 252257 DEBUG nova.compute.manager [req-76e068dd-5df9-4d8a-a345-d2045492de22 req-67f18b80-0665-4a2b-9e7b-5816dcc24461 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.807 252257 DEBUG oslo_concurrency.lockutils [req-76e068dd-5df9-4d8a-a345-d2045492de22 req-67f18b80-0665-4a2b-9e7b-5816dcc24461 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.808 252257 DEBUG oslo_concurrency.lockutils [req-76e068dd-5df9-4d8a-a345-d2045492de22 req-67f18b80-0665-4a2b-9e7b-5816dcc24461 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.808 252257 DEBUG oslo_concurrency.lockutils [req-76e068dd-5df9-4d8a-a345-d2045492de22 req-67f18b80-0665-4a2b-9e7b-5816dcc24461 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.808 252257 DEBUG nova.compute.manager [req-76e068dd-5df9-4d8a-a345-d2045492de22 req-67f18b80-0665-4a2b-9e7b-5816dcc24461 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Processing event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.809 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.813 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402634.813153, aca637ac-6ef0-42f8-aacf-e022e990aeba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.813 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.815 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.820 252257 INFO nova.virt.libvirt.driver [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Instance spawned successfully.#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.820 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.825 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e80e3038-ed90-48cf-8a00-5eb44de37ce7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:cdab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560253, 'tstamp': 560253}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274261, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.839 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.848 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.853 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.852 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[60f53548-59d5-4f54-8932-8d9c6cd78124]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5791158c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:cd:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560253, 'reachable_time': 29192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274262, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.854 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.855 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.855 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.856 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.857 252257 DEBUG nova.virt.libvirt.driver [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.868 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.897 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b9564876-07ff-458a-847a-0b0bd9dcf059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.914 252257 INFO nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Took 7.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.916 252257 DEBUG nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.968 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[02ab9ac9-5cd1-4a3d-beb9-72819b87dc4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.969 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5791158c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.969 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.970 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5791158c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:34 np0005539563 NetworkManager[48981]: <info>  [1764402634.9721] manager: (tap5791158c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:34 np0005539563 kernel: tap5791158c-70: entered promiscuous mode
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.974 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5791158c-70, col_values=(('external_ids', {'iface-id': '898f98e2-e0cf-47a4-905a-1825318afc76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:34 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:34Z|00098|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.977 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.977 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5791158c-7fc4-4c56-891c-c8aa0c79ed59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5791158c-7fc4-4c56-891c-c8aa0c79ed59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.978 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbb3267-bd28-48b6-82ae-573a869a2c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.978 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-5791158c-7fc4-4c56-891c-c8aa0c79ed59
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/5791158c-7fc4-4c56-891c-c8aa0c79ed59.pid.haproxy
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 5791158c-7fc4-4c56-891c-c8aa0c79ed59
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:50:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:34.979 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'env', 'PROCESS_TAG=haproxy-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5791158c-7fc4-4c56-891c-c8aa0c79ed59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.992 252257 INFO nova.compute.manager [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Took 9.46 seconds to build instance.#033[00m
Nov 29 02:50:34 np0005539563 nova_compute[252253]: 2025-11-29 07:50:34.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:35 np0005539563 nova_compute[252253]: 2025-11-29 07:50:35.014 252257 DEBUG oslo_concurrency.lockutils [None req-c3f978e2-4271-4354-ab57-a9f7b9859894 37531d9f927d40ecadd246429b5b598d 73f3d0f2c9aa4ba29984fc9e6a7ed869 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 29 02:50:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 02:50:35 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 29 02:50:35 np0005539563 nova_compute[252253]: 2025-11-29 07:50:35.263 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:35.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:35 np0005539563 podman[274295]: 2025-11-29 07:50:35.340851541 +0000 UTC m=+0.027729236 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:50:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 728 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.5 MiB/s wr, 330 op/s
Nov 29 02:50:35 np0005539563 podman[274295]: 2025-11-29 07:50:35.655484695 +0000 UTC m=+0.342362360 container create a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:50:35 np0005539563 systemd[1]: Started libpod-conmon-a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6.scope.
Nov 29 02:50:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95330f8a55a6438958520d5fea1e88be0a8e62828c07ccdf87617e940943ab14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:35 np0005539563 podman[274295]: 2025-11-29 07:50:35.794293 +0000 UTC m=+0.481170695 container init a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:50:35 np0005539563 podman[274295]: 2025-11-29 07:50:35.800043914 +0000 UTC m=+0.486921579 container start a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:50:35 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [NOTICE]   (274314) : New worker (274316) forked
Nov 29 02:50:35 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [NOTICE]   (274314) : Loading success.
Nov 29 02:50:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:35.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.621 252257 DEBUG nova.compute.manager [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.713 252257 DEBUG oslo_concurrency.lockutils [None req-9fb76f56-55bc-4e76-9dd2-17bcec562266 e7f285d2d2554e7d9845ab13aecc53db ef548e12a785446885c4b410d36a85c2 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.886 252257 DEBUG nova.compute.manager [req-31bbd891-4a1b-4146-a52e-32ead51d12ad req-1da5b38e-33f9-49db-be41-39fd73e85228 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.887 252257 DEBUG oslo_concurrency.lockutils [req-31bbd891-4a1b-4146-a52e-32ead51d12ad req-1da5b38e-33f9-49db-be41-39fd73e85228 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.887 252257 DEBUG oslo_concurrency.lockutils [req-31bbd891-4a1b-4146-a52e-32ead51d12ad req-1da5b38e-33f9-49db-be41-39fd73e85228 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.888 252257 DEBUG oslo_concurrency.lockutils [req-31bbd891-4a1b-4146-a52e-32ead51d12ad req-1da5b38e-33f9-49db-be41-39fd73e85228 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.888 252257 DEBUG nova.compute.manager [req-31bbd891-4a1b-4146-a52e-32ead51d12ad req-1da5b38e-33f9-49db-be41-39fd73e85228 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:36 np0005539563 nova_compute[252253]: 2025-11-29 07:50:36.888 252257 WARNING nova.compute.manager [req-31bbd891-4a1b-4146-a52e-32ead51d12ad req-1da5b38e-33f9-49db-be41-39fd73e85228 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received unexpected event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:50:37 np0005539563 nova_compute[252253]: 2025-11-29 07:50:37.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:37.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 728 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.6 MiB/s wr, 289 op/s
Nov 29 02:50:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:37.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:38 np0005539563 nova_compute[252253]: 2025-11-29 07:50:38.188 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:38 np0005539563 nova_compute[252253]: 2025-11-29 07:50:38.190 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:38 np0005539563 nova_compute[252253]: 2025-11-29 07:50:38.191 252257 INFO nova.compute.manager [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Shelving#033[00m
Nov 29 02:50:38 np0005539563 nova_compute[252253]: 2025-11-29 07:50:38.215 252257 DEBUG nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:50:38 np0005539563 nova_compute[252253]: 2025-11-29 07:50:38.763 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Check if temp file /var/lib/nova/instances/tmp01fw4h8b exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 29 02:50:38 np0005539563 nova_compute[252253]: 2025-11-29 07:50:38.764 252257 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aca637ac-6ef0-42f8-aacf-e022e990aeba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 29 02:50:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:39.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 707 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 5.9 MiB/s wr, 317 op/s
Nov 29 02:50:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:39.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 29 02:50:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 02:50:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 29 02:50:40 np0005539563 nova_compute[252253]: 2025-11-29 07:50:40.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:41.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 648 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 5.9 MiB/s wr, 524 op/s
Nov 29 02:50:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:41.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:41 np0005539563 nova_compute[252253]: 2025-11-29 07:50:41.967 252257 DEBUG nova.compute.manager [req-ec8151c9-d042-4249-aa51-506f6303e6c2 req-d8c7812a-e7e0-4f0a-b9a8-c1285e9f6c01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:41 np0005539563 nova_compute[252253]: 2025-11-29 07:50:41.968 252257 DEBUG oslo_concurrency.lockutils [req-ec8151c9-d042-4249-aa51-506f6303e6c2 req-d8c7812a-e7e0-4f0a-b9a8-c1285e9f6c01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:41 np0005539563 nova_compute[252253]: 2025-11-29 07:50:41.968 252257 DEBUG oslo_concurrency.lockutils [req-ec8151c9-d042-4249-aa51-506f6303e6c2 req-d8c7812a-e7e0-4f0a-b9a8-c1285e9f6c01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:41 np0005539563 nova_compute[252253]: 2025-11-29 07:50:41.968 252257 DEBUG oslo_concurrency.lockutils [req-ec8151c9-d042-4249-aa51-506f6303e6c2 req-d8c7812a-e7e0-4f0a-b9a8-c1285e9f6c01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:41 np0005539563 nova_compute[252253]: 2025-11-29 07:50:41.969 252257 DEBUG nova.compute.manager [req-ec8151c9-d042-4249-aa51-506f6303e6c2 req-d8c7812a-e7e0-4f0a-b9a8-c1285e9f6c01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:41 np0005539563 nova_compute[252253]: 2025-11-29 07:50:41.969 252257 DEBUG nova.compute.manager [req-ec8151c9-d042-4249-aa51-506f6303e6c2 req-d8c7812a-e7e0-4f0a-b9a8-c1285e9f6c01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:42 np0005539563 nova_compute[252253]: 2025-11-29 07:50:42.189 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:43.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 648 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 2.5 MiB/s wr, 400 op/s
Nov 29 02:50:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:43 np0005539563 nova_compute[252253]: 2025-11-29 07:50:43.975 252257 INFO nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Took 4.45 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 02:50:43 np0005539563 nova_compute[252253]: 2025-11-29 07:50:43.976 252257 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.010 252257 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp01fw4h8b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='aca637ac-6ef0-42f8-aacf-e022e990aeba',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(72ea9a4e-b79a-4ba4-a793-db44aa18be1d),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.013 252257 DEBUG nova.objects.instance [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lazy-loading 'migration_context' on Instance uuid aca637ac-6ef0-42f8-aacf-e022e990aeba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.014 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.015 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.015 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.036 252257 DEBUG nova.virt.libvirt.vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:50:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-267620235',display_name='tempest-LiveMigrationTest-server-267620235',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-267620235',id=30,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-e1tqaqaw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:34Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.037 252257 DEBUG nova.network.os_vif_util [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.038 252257 DEBUG nova.network.os_vif_util [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.039 252257 DEBUG nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:50:44 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:93:42:48"/>
Nov 29 02:50:44 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 02:50:44 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:50:44 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 02:50:44 np0005539563 nova_compute[252253]:  <target dev="tape347928a-5a"/>
Nov 29 02:50:44 np0005539563 nova_compute[252253]: </interface>
Nov 29 02:50:44 np0005539563 nova_compute[252253]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.039 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.083 252257 DEBUG nova.compute.manager [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.084 252257 DEBUG oslo_concurrency.lockutils [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.085 252257 DEBUG oslo_concurrency.lockutils [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.085 252257 DEBUG oslo_concurrency.lockutils [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.085 252257 DEBUG nova.compute.manager [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.085 252257 WARNING nova.compute.manager [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received unexpected event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with vm_state active and task_state migrating.
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.085 252257 DEBUG nova.compute.manager [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-changed-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.086 252257 DEBUG nova.compute.manager [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Refreshing instance network info cache due to event network-changed-e347928a-5a81-4fdb-a7df-4ac039bb8bb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.086 252257 DEBUG oslo_concurrency.lockutils [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.086 252257 DEBUG oslo_concurrency.lockutils [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.086 252257 DEBUG nova.network.neutron [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Refreshing network info cache for port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:50:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:50:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 2400.1 total, 600.0 interval
                                              Cumulative writes: 14K writes, 58K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                              Cumulative WAL: 14K writes, 3924 syncs, 3.71 writes per sync, written: 0.05 GB, 0.02 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 5187 writes, 22K keys, 5187 commit groups, 1.0 writes per commit group, ingest: 24.35 MB, 0.04 MB/s
                                              Interval WAL: 5187 writes, 1818 syncs, 2.85 writes per sync, written: 0.02 GB, 0.04 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.518 252257 DEBUG nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.518 252257 INFO nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Increasing downtime to 50 ms after 0 sec elapsed time
Nov 29 02:50:44 np0005539563 nova_compute[252253]: 2025-11-29 07:50:44.602 252257 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 42ba0ca4-9664-487f-9dca-d21cab4b108c does not exist
Nov 29 02:50:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cbf1a4e9-eecf-421f-b59a-0196cb403eb2 does not exist
Nov 29 02:50:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4cd822a3-4ceb-4082-b38e-74d54c80c433 does not exist
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.105 252257 DEBUG nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.105 252257 DEBUG nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.267 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:45.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.388445325 +0000 UTC m=+0.027370516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 569 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.9 KiB/s wr, 366 op/s
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.590280432 +0000 UTC m=+0.229205563 container create f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.610 252257 DEBUG nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.611 252257 DEBUG nova.virt.libvirt.migration [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.627 252257 DEBUG nova.network.neutron [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updated VIF entry in instance network info cache for port e347928a-5a81-4fdb-a7df-4ac039bb8bb3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.628 252257 DEBUG nova.network.neutron [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Updating instance_info_cache with network_info: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:45 np0005539563 systemd[1]: Started libpod-conmon-f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac.scope.
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.649 252257 DEBUG oslo_concurrency.lockutils [req-75f0601c-ab52-4705-8b03-7a412f808a64 req-455d1153-1554-4737-8d00-586f3448390b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-aca637ac-6ef0-42f8-aacf-e022e990aeba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:50:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.693595155 +0000 UTC m=+0.332520316 container init f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.701930549 +0000 UTC m=+0.340855690 container start f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:50:45 np0005539563 elastic_aryabhata[274668]: 167 167
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.711085394 +0000 UTC m=+0.350010545 container attach f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.711 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402645.7104356, aca637ac-6ef0-42f8-aacf-e022e990aeba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:50:45 np0005539563 systemd[1]: libpod-f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac.scope: Deactivated successfully.
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.712467822 +0000 UTC m=+0.351392953 container died f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.712 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Paused (Lifecycle Event)
Nov 29 02:50:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1400b9ac9a34cc95ccd97c772c2e73a411ab8d69289fd3c93d1b093abd442d99-merged.mount: Deactivated successfully.
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.747 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.751 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:50:45 np0005539563 podman[274651]: 2025-11-29 07:50:45.766027419 +0000 UTC m=+0.404952550 container remove f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.772 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] During sync_power_state the instance has a pending task (migrating). Skip.
Nov 29 02:50:45 np0005539563 systemd[1]: libpod-conmon-f6be03c39d0e1611872e9738b05d3da770b8abc2f2e5001150053edf152abbac.scope: Deactivated successfully.
Nov 29 02:50:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:50:45 np0005539563 kernel: tape347928a-5a (unregistering): left promiscuous mode
Nov 29 02:50:45 np0005539563 NetworkManager[48981]: <info>  [1764402645.8552] device (tape347928a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:50:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00099|binding|INFO|Releasing lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 from this chassis (sb_readonly=0)
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00100|binding|INFO|Setting lport e347928a-5a81-4fdb-a7df-4ac039bb8bb3 down in Southbound
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00101|binding|INFO|Releasing lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b from this chassis (sb_readonly=0)
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00102|binding|INFO|Setting lport 9ecac803-0ffe-4cdf-a724-cbb61954b01b down in Southbound
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00103|binding|INFO|Removing iface tape347928a-5a ovn-installed in OVS
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.870 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00104|binding|INFO|Releasing lport 193f2fed-77bd-4c35-9dcd-f198bbb1915e from this chassis (sb_readonly=0)
Nov 29 02:50:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:50:45Z|00105|binding|INFO|Releasing lport 898f98e2-e0cf-47a4-905a-1825318afc76 from this chassis (sb_readonly=0)
Nov 29 02:50:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:45.876 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:42:48 10.100.0.9'], port_security=['fa:16:3e:93:42:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'c8abfd39-a629-4854-b6ed-e2d68f35f5fb'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1284960005', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aca637ac-6ef0-42f8-aacf-e022e990aeba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b746034c-0143-4024-986c-673efea114a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1284960005', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e550898e-f197-49d8-b2f0-71b93775fb71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e347928a-5a81-4fdb-a7df-4ac039bb8bb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:50:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:45.878 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:69:f6 19.80.0.160'], port_security=['fa:16:3e:f4:69:f6 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e347928a-5a81-4fdb-a7df-4ac039bb8bb3'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1926672861', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1926672861', 'neutron:project_id': '73f3d0f2c9aa4ba29984fc9e6a7ed869', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'b8e2487b-c1a1-47ed-b1d0-0dfc2829d236', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c4847c33-f725-4948-8187-3e41c1ea344f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9ecac803-0ffe-4cdf-a724-cbb61954b01b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:50:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:45.879 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 in datapath b746034c-0143-4024-986c-673efea114a3 unbound from our chassis
Nov 29 02:50:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:45.880 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b746034c-0143-4024-986c-673efea114a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 02:50:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:45.881 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d247d-a835-4fb7-9934-a6a01cdbb4e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:50:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:45.882 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b746034c-0143-4024-986c-673efea114a3 namespace which is not needed anymore
Nov 29 02:50:45 np0005539563 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.907 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:45 np0005539563 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001e.scope: Consumed 11.763s CPU time.
Nov 29 02:50:45 np0005539563 systemd-machined[213024]: Machine qemu-13-instance-0000001e terminated.
Nov 29 02:50:45 np0005539563 podman[274701]: 2025-11-29 07:50:45.952532234 +0000 UTC m=+0.040258281 container create 1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:50:45 np0005539563 nova_compute[252253]: 2025-11-29 07:50:45.997 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:46 np0005539563 virtqemud[251807]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/aca637ac-6ef0-42f8-aacf-e022e990aeba_disk: No such file or directory
Nov 29 02:50:46 np0005539563 virtqemud[251807]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/aca637ac-6ef0-42f8-aacf-e022e990aeba_disk: No such file or directory
Nov 29 02:50:46 np0005539563 systemd[1]: Started libpod-conmon-1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e.scope.
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [NOTICE]   (274234) : haproxy version is 2.8.14-c23fe91
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [NOTICE]   (274234) : path to executable is /usr/sbin/haproxy
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [WARNING]  (274234) : Exiting Master process...
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [ALERT]    (274234) : Current worker (274237) exited with code 143 (Terminated)
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3[274225]: [WARNING]  (274234) : All workers exited. Exiting... (0)
Nov 29 02:50:46 np0005539563 systemd[1]: libpod-5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.028 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:46 np0005539563 podman[274731]: 2025-11-29 07:50:46.033577529 +0000 UTC m=+0.049087568 container died 5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:50:46 np0005539563 podman[274701]: 2025-11-29 07:50:45.936817913 +0000 UTC m=+0.024543980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.034 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a913bfc442f4da56887743898caf70c5dc735544903438e5debe04d49172a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a913bfc442f4da56887743898caf70c5dc735544903438e5debe04d49172a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a913bfc442f4da56887743898caf70c5dc735544903438e5debe04d49172a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a913bfc442f4da56887743898caf70c5dc735544903438e5debe04d49172a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a913bfc442f4da56887743898caf70c5dc735544903438e5debe04d49172a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.052 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.053 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.053 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:50:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3-userdata-shm.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539563 podman[274701]: 2025-11-29 07:50:46.077322363 +0000 UTC m=+0.165048440 container init 1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:50:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cf245be613a2254cdbe1ecb869a7a0d9ef54ab2c92a58bd5d23074c4406ade61-merged.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539563 podman[274731]: 2025-11-29 07:50:46.090495137 +0000 UTC m=+0.106005176 container cleanup 5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:50:46 np0005539563 podman[274701]: 2025-11-29 07:50:46.091424182 +0000 UTC m=+0.179150229 container start 1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:50:46 np0005539563 podman[274701]: 2025-11-29 07:50:46.095286935 +0000 UTC m=+0.183012972 container attach 1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:46 np0005539563 systemd[1]: libpod-conmon-5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.113 252257 DEBUG nova.virt.libvirt.guest [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'aca637ac-6ef0-42f8-aacf-e022e990aeba' (instance-0000001e) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.114 252257 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migration operation has completed#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.114 252257 INFO nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] _post_live_migration() is started..#033[00m
Nov 29 02:50:46 np0005539563 podman[274779]: 2025-11-29 07:50:46.155038689 +0000 UTC m=+0.037070476 container remove 5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.159 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d9645799-b988-4d4b-8180-d377f5238fd0]: (4, ('Sat Nov 29 07:50:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3 (5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3)\n5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3\nSat Nov 29 07:50:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b746034c-0143-4024-986c-673efea114a3 (5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3)\n5abc4fa16b246dbcb237ed88fcbdf3e2075b558c24efd86431f5e296340b69f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.161 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[56f60e03-a641-43af-b780-5ffba651d887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.162 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb746034c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.163 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539563 kernel: tapb746034c-00: left promiscuous mode
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.179 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.189 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d60e6542-ccbc-414e-9e97-493bab2b656b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.202 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4e84bece-f8d3-4db9-aafb-00306b73fb8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.203 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[204bf6cb-66de-4456-af23-ec6e2d5b4781]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.218 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[14ebd498-02f3-4c25-8b85-304836633062]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560136, 'reachable_time': 25427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274798, 'error': None, 'target': 'ovnmeta-b746034c-0143-4024-986c-673efea114a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.221 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b746034c-0143-4024-986c-673efea114a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.221 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[543edab4-f4a7-4b00-883b-0d5924433c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.222 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 9ecac803-0ffe-4cdf-a724-cbb61954b01b in datapath 5791158c-7fc4-4c56-891c-c8aa0c79ed59 unbound from our chassis#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.224 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5791158c-7fc4-4c56-891c-c8aa0c79ed59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.225 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[45bc7b8b-56ce-4383-854a-10ef374d5820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.226 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 namespace which is not needed anymore#033[00m
Nov 29 02:50:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [NOTICE]   (274314) : haproxy version is 2.8.14-c23fe91
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [NOTICE]   (274314) : path to executable is /usr/sbin/haproxy
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [WARNING]  (274314) : Exiting Master process...
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [ALERT]    (274314) : Current worker (274316) exited with code 143 (Terminated)
Nov 29 02:50:46 np0005539563 neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59[274310]: [WARNING]  (274314) : All workers exited. Exiting... (0)
Nov 29 02:50:46 np0005539563 systemd[1]: libpod-a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539563 podman[274817]: 2025-11-29 07:50:46.349307163 +0000 UTC m=+0.047956918 container died a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:46 np0005539563 podman[274817]: 2025-11-29 07:50:46.457123526 +0000 UTC m=+0.155773261 container cleanup a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:50:46 np0005539563 systemd[1]: libpod-conmon-a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-95330f8a55a6438958520d5fea1e88be0a8e62828c07ccdf87617e940943ab14-merged.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6-userdata-shm.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539563 systemd[1]: run-netns-ovnmeta\x2db746034c\x2d0143\x2d4024\x2d986c\x2d673efea114a3.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539563 podman[274848]: 2025-11-29 07:50:46.627463097 +0000 UTC m=+0.137624214 container remove a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.632 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f5457b-bd88-474d-bb9e-6af6cd367623]: (4, ('Sat Nov 29 07:50:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 (a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6)\na25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6\nSat Nov 29 07:50:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 (a25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6)\na25714b2bf1cdc3116ba092959ee3af927664b2655dfb3b9c3b8184175ea68a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.634 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0386b2e6-bb24-4c01-8636-c18d777aec63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.636 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5791158c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.638 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539563 kernel: tap5791158c-70: left promiscuous mode
Nov 29 02:50:46 np0005539563 nova_compute[252253]: 2025-11-29 07:50:46.662 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.667 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fb178c56-e16a-4d87-b728-1c86a020fef9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.694 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c13c2cf6-db20-489b-9042-eb22fe3b9e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.695 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fb64dea3-8373-47e2-8e1c-d946eb128e4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.712 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[46253c33-fb7e-4cde-8289-444234625563]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560244, 'reachable_time': 40485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274865, 'error': None, 'target': 'ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.715 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5791158c-7fc4-4c56-891c-c8aa0c79ed59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:50:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:50:46.715 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0d34182e-a780-420e-8d22-f5b8935f9312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:46 np0005539563 systemd[1]: run-netns-ovnmeta\x2d5791158c\x2d7fc4\x2d4c56\x2d891c\x2dc8aa0c79ed59.mount: Deactivated successfully.
Nov 29 02:50:46 np0005539563 musing_jemison[274747]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:50:46 np0005539563 musing_jemison[274747]: --> relative data size: 1.0
Nov 29 02:50:46 np0005539563 musing_jemison[274747]: --> All data devices are unavailable
Nov 29 02:50:46 np0005539563 systemd[1]: libpod-1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e.scope: Deactivated successfully.
Nov 29 02:50:46 np0005539563 podman[274701]: 2025-11-29 07:50:46.963909597 +0000 UTC m=+1.051635664 container died 1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:50:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-67a913bfc442f4da56887743898caf70c5dc735544903438e5debe04d49172a9-merged.mount: Deactivated successfully.
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.171 252257 DEBUG nova.compute.manager [req-4375cb17-b04a-4603-a6d8-31c464852ed6 req-80403397-273a-4592-ba22-94e08bf2ef39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.172 252257 DEBUG oslo_concurrency.lockutils [req-4375cb17-b04a-4603-a6d8-31c464852ed6 req-80403397-273a-4592-ba22-94e08bf2ef39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.172 252257 DEBUG oslo_concurrency.lockutils [req-4375cb17-b04a-4603-a6d8-31c464852ed6 req-80403397-273a-4592-ba22-94e08bf2ef39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.172 252257 DEBUG oslo_concurrency.lockutils [req-4375cb17-b04a-4603-a6d8-31c464852ed6 req-80403397-273a-4592-ba22-94e08bf2ef39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.172 252257 DEBUG nova.compute.manager [req-4375cb17-b04a-4603-a6d8-31c464852ed6 req-80403397-273a-4592-ba22-94e08bf2ef39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.172 252257 DEBUG nova.compute.manager [req-4375cb17-b04a-4603-a6d8-31c464852ed6 req-80403397-273a-4592-ba22-94e08bf2ef39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-unplugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.192 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:47 np0005539563 podman[274701]: 2025-11-29 07:50:47.199948322 +0000 UTC m=+1.287674409 container remove 1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:50:47 np0005539563 systemd[1]: libpod-conmon-1cf3311b2c8e8207fc23e247d54c7daa139f90d4e8020a39854b5c22d2ba3f2e.scope: Deactivated successfully.
Nov 29 02:50:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:47.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1980099757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.536 252257 DEBUG nova.network.neutron [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Activated binding for port e347928a-5a81-4fdb-a7df-4ac039bb8bb3 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.538 252257 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.539 252257 DEBUG nova.virt.libvirt.vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:50:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-267620235',display_name='tempest-LiveMigrationTest-server-267620235',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-267620235',id=30,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='73f3d0f2c9aa4ba29984fc9e6a7ed869',ramdisk_id='',reservation_id='r-e1tqaqaw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-561693451',owner_user_name='tempest-LiveMigrationTest-561693451-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:38Z,user_data=None,user_id='37531d9f927d40ecadd246429b5b598d',uuid=aca637ac-6ef0-42f8-aacf-e022e990aeba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.540 252257 DEBUG nova.network.os_vif_util [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converting VIF {"id": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "address": "fa:16:3e:93:42:48", "network": {"id": "b746034c-0143-4024-986c-673efea114a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1792671164-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "73f3d0f2c9aa4ba29984fc9e6a7ed869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape347928a-5a", "ovs_interfaceid": "e347928a-5a81-4fdb-a7df-4ac039bb8bb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.541 252257 DEBUG nova.network.os_vif_util [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.542 252257 DEBUG os_vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.544 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.545 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape347928a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 569 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.9 KiB/s wr, 366 op/s
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.564 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.566 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.569 252257 INFO os_vif [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:42:48,bridge_name='br-int',has_traffic_filtering=True,id=e347928a-5a81-4fdb-a7df-4ac039bb8bb3,network=Network(b746034c-0143-4024-986c-673efea114a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape347928a-5a')#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.570 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.570 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.571 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.571 252257 DEBUG nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.572 252257 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Deleting instance files /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba_del#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.573 252257 INFO nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Deletion of /var/lib/nova/instances/aca637ac-6ef0-42f8-aacf-e022e990aeba_del complete#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.793 252257 DEBUG nova.compute.manager [req-0ae4e475-8213-4565-9b14-ea7bd68101b9 req-88c973a8-c194-4dbb-a980-c53a7a6c6ba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.794 252257 DEBUG oslo_concurrency.lockutils [req-0ae4e475-8213-4565-9b14-ea7bd68101b9 req-88c973a8-c194-4dbb-a980-c53a7a6c6ba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.794 252257 DEBUG oslo_concurrency.lockutils [req-0ae4e475-8213-4565-9b14-ea7bd68101b9 req-88c973a8-c194-4dbb-a980-c53a7a6c6ba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.794 252257 DEBUG oslo_concurrency.lockutils [req-0ae4e475-8213-4565-9b14-ea7bd68101b9 req-88c973a8-c194-4dbb-a980-c53a7a6c6ba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.795 252257 DEBUG nova.compute.manager [req-0ae4e475-8213-4565-9b14-ea7bd68101b9 req-88c973a8-c194-4dbb-a980-c53a7a6c6ba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:47 np0005539563 nova_compute[252253]: 2025-11-29 07:50:47.795 252257 WARNING nova.compute.manager [req-0ae4e475-8213-4565-9b14-ea7bd68101b9 req-88c973a8-c194-4dbb-a980-c53a7a6c6ba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received unexpected event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:50:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:47.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 02:50:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 29 02:50:47 np0005539563 podman[275031]: 2025-11-29 07:50:47.939210912 +0000 UTC m=+0.044193287 container create e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:50:47 np0005539563 systemd[1]: Started libpod-conmon-e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2.scope.
Nov 29 02:50:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:48 np0005539563 podman[275031]: 2025-11-29 07:50:47.918116666 +0000 UTC m=+0.023099041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:48 np0005539563 podman[275031]: 2025-11-29 07:50:48.075322935 +0000 UTC m=+0.180305320 container init e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:50:48 np0005539563 podman[275046]: 2025-11-29 07:50:48.077191605 +0000 UTC m=+0.099052940 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:50:48 np0005539563 podman[275048]: 2025-11-29 07:50:48.078482089 +0000 UTC m=+0.098377151 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 02:50:48 np0005539563 podman[275045]: 2025-11-29 07:50:48.082086877 +0000 UTC m=+0.109032947 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:50:48 np0005539563 podman[275031]: 2025-11-29 07:50:48.082884698 +0000 UTC m=+0.187867053 container start e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:50:48 np0005539563 systemd[1]: libpod-e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2.scope: Deactivated successfully.
Nov 29 02:50:48 np0005539563 priceless_keldysh[275054]: 167 167
Nov 29 02:50:48 np0005539563 conmon[275054]: conmon e5ce02e839ff011694f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2.scope/container/memory.events
Nov 29 02:50:48 np0005539563 podman[275031]: 2025-11-29 07:50:48.105975527 +0000 UTC m=+0.210957912 container attach e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:50:48 np0005539563 podman[275031]: 2025-11-29 07:50:48.109525712 +0000 UTC m=+0.214508077 container died e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:50:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f4edc20049441f159b3d81ccfc7b61bda6d1f950249a2ddbf5c876999d55b039-merged.mount: Deactivated successfully.
Nov 29 02:50:48 np0005539563 podman[275031]: 2025-11-29 07:50:48.230828929 +0000 UTC m=+0.335811284 container remove e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:50:48 np0005539563 systemd[1]: libpod-conmon-e5ce02e839ff011694f463d388487cb6024b081a35cf944d42d285116a4c3cd2.scope: Deactivated successfully.
Nov 29 02:50:48 np0005539563 nova_compute[252253]: 2025-11-29 07:50:48.269 252257 DEBUG nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:50:48 np0005539563 podman[275134]: 2025-11-29 07:50:48.450160825 +0000 UTC m=+0.081770816 container create 47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:48 np0005539563 podman[275134]: 2025-11-29 07:50:48.398621462 +0000 UTC m=+0.030231463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:48 np0005539563 systemd[1]: Started libpod-conmon-47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22.scope.
Nov 29 02:50:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505aaf326a496fe085459a63f9a72ece2b75381307810b39969caa931c451b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505aaf326a496fe085459a63f9a72ece2b75381307810b39969caa931c451b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505aaf326a496fe085459a63f9a72ece2b75381307810b39969caa931c451b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7505aaf326a496fe085459a63f9a72ece2b75381307810b39969caa931c451b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:48 np0005539563 podman[275134]: 2025-11-29 07:50:48.581382117 +0000 UTC m=+0.212992138 container init 47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:50:48 np0005539563 podman[275134]: 2025-11-29 07:50:48.591706783 +0000 UTC m=+0.223316754 container start 47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:50:48 np0005539563 podman[275134]: 2025-11-29 07:50:48.692317573 +0000 UTC m=+0.323927584 container attach 47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:50:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]: {
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:    "0": [
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:        {
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "devices": [
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "/dev/loop3"
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            ],
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "lv_name": "ceph_lv0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "lv_size": "7511998464",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "name": "ceph_lv0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "tags": {
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.cluster_name": "ceph",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.crush_device_class": "",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.encrypted": "0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.osd_id": "0",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.type": "block",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:                "ceph.vdo": "0"
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            },
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "type": "block",
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:            "vg_name": "ceph_vg0"
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:        }
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]:    ]
Nov 29 02:50:49 np0005539563 silly_meninsky[275150]: }
Nov 29 02:50:49 np0005539563 systemd[1]: libpod-47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22.scope: Deactivated successfully.
Nov 29 02:50:49 np0005539563 podman[275134]: 2025-11-29 07:50:49.447633164 +0000 UTC m=+1.079243145 container died 47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:50:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 554 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 698 KiB/s wr, 348 op/s
Nov 29 02:50:49 np0005539563 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:50:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:49.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.027 252257 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.028 252257 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.028 252257 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.028 252257 DEBUG oslo_concurrency.lockutils [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.028 252257 DEBUG nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.028 252257 WARNING nova.compute.manager [req-1d8b6de6-f8ef-4af8-9b45-ab60d3e99b98 req-4c76fb64-6681-4967-8f1b-a7a9d5e64e26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received unexpected event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:50:50 np0005539563 nova_compute[252253]: 2025-11-29 07:50:50.269 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7505aaf326a496fe085459a63f9a72ece2b75381307810b39969caa931c451b6-merged.mount: Deactivated successfully.
Nov 29 02:50:50 np0005539563 podman[275134]: 2025-11-29 07:50:50.932625779 +0000 UTC m=+2.564235800 container remove 47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 02:50:51 np0005539563 systemd[1]: libpod-conmon-47701630a2924f8dc281d7354727044b07ab9cb2b105f0577b792c720a00db22.scope: Deactivated successfully.
Nov 29 02:50:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:51.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 459 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 299 op/s
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.619498473 +0000 UTC m=+0.054393901 container create 96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.587928466 +0000 UTC m=+0.022823914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:51 np0005539563 systemd[1]: Started libpod-conmon-96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276.scope.
Nov 29 02:50:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.787267745 +0000 UTC m=+0.222163193 container init 96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.794701745 +0000 UTC m=+0.229597173 container start 96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.799877054 +0000 UTC m=+0.234772482 container attach 96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:50:51 np0005539563 focused_dewdney[275333]: 167 167
Nov 29 02:50:51 np0005539563 systemd[1]: libpod-96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276.scope: Deactivated successfully.
Nov 29 02:50:51 np0005539563 conmon[275333]: conmon 96a04d40ef1440ea5821 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276.scope/container/memory.events
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.804030505 +0000 UTC m=+0.238925963 container died 96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:50:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b810d3a2e8b17c404eedd9cfdf90fb8d7caf3babdf849b92c15ff51483b254aa-merged.mount: Deactivated successfully.
Nov 29 02:50:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:51 np0005539563 podman[275316]: 2025-11-29 07:50:51.88913928 +0000 UTC m=+0.324034718 container remove 96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:50:51 np0005539563 systemd[1]: libpod-conmon-96a04d40ef1440ea582158b9317d8d11e43adccf6967d2f00d81a75a076fb276.scope: Deactivated successfully.
Nov 29 02:50:52 np0005539563 podman[275358]: 2025-11-29 07:50:52.072256594 +0000 UTC m=+0.055119071 container create 594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:50:52 np0005539563 systemd[1]: Started libpod-conmon-594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56.scope.
Nov 29 02:50:52 np0005539563 podman[275358]: 2025-11-29 07:50:52.050708085 +0000 UTC m=+0.033570582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:50:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdffa21e5dc213b3bf9ad0c58bb024696e9c5947199afbc5b3cc98d488299ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdffa21e5dc213b3bf9ad0c58bb024696e9c5947199afbc5b3cc98d488299ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdffa21e5dc213b3bf9ad0c58bb024696e9c5947199afbc5b3cc98d488299ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bdffa21e5dc213b3bf9ad0c58bb024696e9c5947199afbc5b3cc98d488299ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:52 np0005539563 podman[275358]: 2025-11-29 07:50:52.163798531 +0000 UTC m=+0.146661028 container init 594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:50:52 np0005539563 podman[275358]: 2025-11-29 07:50:52.181520936 +0000 UTC m=+0.164383413 container start 594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:52 np0005539563 podman[275358]: 2025-11-29 07:50:52.185992107 +0000 UTC m=+0.168854584 container attach 594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.218 252257 DEBUG nova.compute.manager [req-7264bcc3-258b-4c5d-ba84-e2962fa7b5fb req-dfb16a2c-5f3f-4ac4-9f66-bda661c0bc21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.218 252257 DEBUG oslo_concurrency.lockutils [req-7264bcc3-258b-4c5d-ba84-e2962fa7b5fb req-dfb16a2c-5f3f-4ac4-9f66-bda661c0bc21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.219 252257 DEBUG oslo_concurrency.lockutils [req-7264bcc3-258b-4c5d-ba84-e2962fa7b5fb req-dfb16a2c-5f3f-4ac4-9f66-bda661c0bc21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.219 252257 DEBUG oslo_concurrency.lockutils [req-7264bcc3-258b-4c5d-ba84-e2962fa7b5fb req-dfb16a2c-5f3f-4ac4-9f66-bda661c0bc21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.219 252257 DEBUG nova.compute.manager [req-7264bcc3-258b-4c5d-ba84-e2962fa7b5fb req-dfb16a2c-5f3f-4ac4-9f66-bda661c0bc21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] No waiting events found dispatching network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.219 252257 WARNING nova.compute.manager [req-7264bcc3-258b-4c5d-ba84-e2962fa7b5fb req-dfb16a2c-5f3f-4ac4-9f66-bda661c0bc21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Received unexpected event network-vif-plugged-e347928a-5a81-4fdb-a7df-4ac039bb8bb3 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:50:52 np0005539563 nova_compute[252253]: 2025-11-29 07:50:52.564 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:52 np0005539563 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 02:50:52 np0005539563 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Consumed 13.928s CPU time.
Nov 29 02:50:52 np0005539563 systemd-machined[213024]: Machine qemu-14-instance-0000001b terminated.
Nov 29 02:50:53 np0005539563 angry_saha[275375]: {
Nov 29 02:50:53 np0005539563 angry_saha[275375]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:50:53 np0005539563 angry_saha[275375]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:50:53 np0005539563 angry_saha[275375]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:50:53 np0005539563 angry_saha[275375]:        "osd_id": 0,
Nov 29 02:50:53 np0005539563 angry_saha[275375]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:50:53 np0005539563 angry_saha[275375]:        "type": "bluestore"
Nov 29 02:50:53 np0005539563 angry_saha[275375]:    }
Nov 29 02:50:53 np0005539563 angry_saha[275375]: }
Nov 29 02:50:53 np0005539563 systemd[1]: libpod-594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56.scope: Deactivated successfully.
Nov 29 02:50:53 np0005539563 podman[275358]: 2025-11-29 07:50:53.087917822 +0000 UTC m=+1.070780319 container died 594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:50:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8bdffa21e5dc213b3bf9ad0c58bb024696e9c5947199afbc5b3cc98d488299ae-merged.mount: Deactivated successfully.
Nov 29 02:50:53 np0005539563 podman[275358]: 2025-11-29 07:50:53.158177048 +0000 UTC m=+1.141039525 container remove 594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:50:53 np0005539563 systemd[1]: libpod-conmon-594771ec1feffefddbf4f1b8b7cafa9c98abe99f1d7895f7cb8fea4851017f56.scope: Deactivated successfully.
Nov 29 02:50:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:50:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:50:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 446160b0-7577-46f5-ab09-520e744bb0a8 does not exist
Nov 29 02:50:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 378852c7-33ff-4366-b00b-e91c953643e9 does not exist
Nov 29 02:50:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f9b68639-aa73-4030-bd5a-f3ed488b0f16 does not exist
Nov 29 02:50:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.293 252257 INFO nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance shutdown successfully after 15 seconds.#033[00m
Nov 29 02:50:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:53.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.299 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.299 252257 DEBUG nova.objects.instance [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'numa_topology' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.499 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.501 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.501 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "aca637ac-6ef0-42f8-aacf-e022e990aeba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.523 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.524 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.525 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.526 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.527 252257 DEBUG oslo_concurrency.processutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 459 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 299 op/s
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.575 252257 INFO nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Beginning cold snapshot process#033[00m
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.771 252257 DEBUG nova.virt.libvirt.imagebackend [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 02:50:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:53 np0005539563 nova_compute[252253]: 2025-11-29 07:50:53.972 252257 DEBUG nova.storage.rbd_utils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] creating snapshot(7f5d65852803432385ac2502f06588c8) on rbd image(d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.045 252257 DEBUG oslo_concurrency.processutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.141 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.142 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.145 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.146 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.304 252257 WARNING nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.305 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4472MB free_disk=20.76129150390625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": 
"0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.305 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.306 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 29 02:50:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.415 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Migration for instance aca637ac-6ef0-42f8-aacf-e022e990aeba refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.442 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.469 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Instance 3efe6bb4-36be-4a30-832d-8da05e5baa50 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.469 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Instance d278aa2a-e5e7-4f89-8b5c-b6dca172b57d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.469 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Migration 72ea9a4e-b79a-4ba4-a793-db44aa18be1d is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.469 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.469 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:50:54 np0005539563 nova_compute[252253]: 2025-11-29 07:50:54.525 252257 DEBUG oslo_concurrency.processutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:50:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 02:50:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 29 02:50:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374294681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.018 252257 DEBUG oslo_concurrency.processutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.023 252257 DEBUG nova.compute.provider_tree [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.043 252257 DEBUG nova.scheduler.client.report [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.068 252257 DEBUG nova.compute.resource_tracker [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.069 252257 DEBUG oslo_concurrency.lockutils [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.077 252257 INFO nova.compute.manager [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.184 252257 DEBUG nova.storage.rbd_utils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] cloning vms/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk@7f5d65852803432385ac2502f06588c8 to images/33939db1-a4ae-4fac-9a69-88ed807d304b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:50:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.235 252257 INFO nova.scheduler.client.report [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] Deleted allocation for migration 72ea9a4e-b79a-4ba4-a793-db44aa18be1d#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.235 252257 DEBUG nova.virt.libvirt.driver [None req-fc650d3e-a296-41d5-921c-279e818971c8 749bb74010574cbb8b7b62a42729cb71 784d8fc21d3f412f83d45f20b61ecd85 - - default default] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.269 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:55.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 364 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 311 op/s
Nov 29 02:50:55 np0005539563 nova_compute[252253]: 2025-11-29 07:50:55.769 252257 DEBUG nova.storage.rbd_utils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] flattening images/33939db1-a4ae-4fac-9a69-88ed807d304b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:50:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:55.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.160 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.161 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.162 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "3efe6bb4-36be-4a30-832d-8da05e5baa50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.162 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.163 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.164 252257 INFO nova.compute.manager [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Terminating instance
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.166 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.166 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquired lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.167 252257 DEBUG nova.network.neutron [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:50:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.436 252257 DEBUG nova.network.neutron [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.787 252257 DEBUG nova.network.neutron [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.831 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Releasing lock "refresh_cache-3efe6bb4-36be-4a30-832d-8da05e5baa50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:50:56 np0005539563 nova_compute[252253]: 2025-11-29 07:50:56.831 252257 DEBUG nova.compute.manager [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:50:57 np0005539563 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 02:50:57 np0005539563 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000014.scope: Consumed 19.379s CPU time.
Nov 29 02:50:57 np0005539563 systemd-machined[213024]: Machine qemu-11-instance-00000014 terminated.
Nov 29 02:50:57 np0005539563 nova_compute[252253]: 2025-11-29 07:50:57.048 252257 INFO nova.virt.libvirt.driver [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance destroyed successfully.
Nov 29 02:50:57 np0005539563 nova_compute[252253]: 2025-11-29 07:50:57.048 252257 DEBUG nova.objects.instance [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lazy-loading 'resources' on Instance uuid 3efe6bb4-36be-4a30-832d-8da05e5baa50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:50:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:57.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 364 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 258 op/s
Nov 29 02:50:57 np0005539563 nova_compute[252253]: 2025-11-29 07:50:57.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:57 np0005539563 nova_compute[252253]: 2025-11-29 07:50:57.751 252257 DEBUG nova.storage.rbd_utils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] removing snapshot(7f5d65852803432385ac2502f06588c8) on rbd image(d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 02:50:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:57.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:58 np0005539563 nova_compute[252253]: 2025-11-29 07:50:58.985 252257 INFO nova.virt.libvirt.driver [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Deleting instance files /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50_del
Nov 29 02:50:58 np0005539563 nova_compute[252253]: 2025-11-29 07:50:58.986 252257 INFO nova.virt.libvirt.driver [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Deletion of /var/lib/nova/instances/3efe6bb4-36be-4a30-832d-8da05e5baa50_del complete
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.041 252257 INFO nova.compute.manager [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Took 2.21 seconds to destroy the instance on the hypervisor.
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.042 252257 DEBUG oslo.service.loopingcall [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.043 252257 DEBUG nova.compute.manager [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.043 252257 DEBUG nova.network.neutron [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:50:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.172 252257 DEBUG nova.network.neutron [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.186 252257 DEBUG nova.network.neutron [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:50:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.202 252257 INFO nova.compute.manager [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Took 0.16 seconds to deallocate network for instance.
Nov 29 02:50:59 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.263 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.264 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:59.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.372 252257 DEBUG oslo_concurrency.processutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:50:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 373 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 142 op/s
Nov 29 02:50:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:50:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843583701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.794 252257 DEBUG oslo_concurrency.processutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.800 252257 DEBUG nova.compute.provider_tree [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.823 252257 DEBUG nova.scheduler.client.report [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.853 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.879 252257 INFO nova.scheduler.client.report [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Deleted allocations for instance 3efe6bb4-36be-4a30-832d-8da05e5baa50
Nov 29 02:50:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:50:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:59.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:59 np0005539563 nova_compute[252253]: 2025-11-29 07:50:59.998 252257 DEBUG oslo_concurrency.lockutils [None req-eb2ed1fd-b2a0-4271-b5a0-60e463a27fec e1c26cd8138e4114b4801d377b39933a f7e8ae9fdefb4049959228954fb4250e - - default default] Lock "3efe6bb4-36be-4a30-832d-8da05e5baa50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:00 np0005539563 nova_compute[252253]: 2025-11-29 07:51:00.271 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:00 np0005539563 nova_compute[252253]: 2025-11-29 07:51:00.454 252257 DEBUG nova.storage.rbd_utils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] creating snapshot(snap) on rbd image(33939db1-a4ae-4fac-9a69-88ed807d304b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 02:51:01 np0005539563 nova_compute[252253]: 2025-11-29 07:51:01.050 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402646.0482006, aca637ac-6ef0-42f8-aacf-e022e990aeba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:51:01 np0005539563 nova_compute[252253]: 2025-11-29 07:51:01.051 252257 INFO nova.compute.manager [-] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] VM Stopped (Lifecycle Event)
Nov 29 02:51:01 np0005539563 nova_compute[252253]: 2025-11-29 07:51:01.096 252257 DEBUG nova.compute.manager [None req-b6db3250-5ff7-44f8-8f08-addc4f09cc4f - - - - - -] [instance: aca637ac-6ef0-42f8-aacf-e022e990aeba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:01.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 364 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.0 MiB/s wr, 247 op/s
Nov 29 02:51:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 29 02:51:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:01.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 02:51:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 29 02:51:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:51:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725795968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:51:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:51:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725795968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:51:02 np0005539563 nova_compute[252253]: 2025-11-29 07:51:02.617 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 364 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.8 MiB/s wr, 153 op/s
Nov 29 02:51:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:03.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:51:04.895 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:51:04.896 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:51:04.896 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.151 252257 INFO nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Snapshot image upload complete
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.152 252257 DEBUG nova.compute.manager [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.491 252257 INFO nova.compute.manager [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Shelve offloading
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.499 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.500 252257 DEBUG nova.compute.manager [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.502 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.502 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquired lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:51:05 np0005539563 nova_compute[252253]: 2025-11-29 07:51:05.502 252257 DEBUG nova.network.neutron [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:51:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 283 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 226 op/s
Nov 29 02:51:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:05.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:06 np0005539563 nova_compute[252253]: 2025-11-29 07:51:06.021 252257 DEBUG nova.network.neutron [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:51:06 np0005539563 nova_compute[252253]: 2025-11-29 07:51:06.249 252257 DEBUG nova.network.neutron [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:51:06 np0005539563 nova_compute[252253]: 2025-11-29 07:51:06.280 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Releasing lock "refresh_cache-d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:51:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 29 02:51:06 np0005539563 nova_compute[252253]: 2025-11-29 07:51:06.290 252257 INFO nova.virt.libvirt.driver [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Instance destroyed successfully.
Nov 29 02:51:06 np0005539563 nova_compute[252253]: 2025-11-29 07:51:06.291 252257 DEBUG nova.objects.instance [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lazy-loading 'resources' on Instance uuid d278aa2a-e5e7-4f89-8b5c-b6dca172b57d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:51:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 02:51:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 29 02:51:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:07.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 283 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.7 MiB/s wr, 177 op/s
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.620 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.831 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402652.8303072, d278aa2a-e5e7-4f89-8b5c-b6dca172b57d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.831 252257 INFO nova.compute.manager [-] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] VM Stopped (Lifecycle Event)
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.847 252257 INFO nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deleting instance files /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_del
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.848 252257 INFO nova.virt.libvirt.driver [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Deletion of /var/lib/nova/instances/d278aa2a-e5e7-4f89-8b5c-b6dca172b57d_del complete
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.862 252257 DEBUG nova.compute.manager [None req-84275e67-2f07-4fbf-933d-4374e3f25736 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.868 252257 DEBUG nova.compute.manager [None req-84275e67-2f07-4fbf-933d-4374e3f25736 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:51:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:07.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:07 np0005539563 nova_compute[252253]: 2025-11-29 07:51:07.934 252257 INFO nova.compute.manager [None req-84275e67-2f07-4fbf-933d-4374e3f25736 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.013 252257 INFO nova.scheduler.client.report [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Deleted allocations for instance d278aa2a-e5e7-4f89-8b5c-b6dca172b57d
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.124 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.125 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.168 252257 DEBUG oslo_concurrency.processutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:51:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157936712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.610 252257 DEBUG oslo_concurrency.processutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.620 252257 DEBUG nova.compute.provider_tree [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.644 252257 DEBUG nova.scheduler.client.report [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.680 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:08 np0005539563 nova_compute[252253]: 2025-11-29 07:51:08.766 252257 DEBUG oslo_concurrency.lockutils [None req-b10ee844-7213-48f2-98ba-6aced804af8a ed57e094b4c4441c8ffbfb96ecb62afc cf226b9a5bb945c3a8f54976b5736fe3 - - default default] Lock "d278aa2a-e5e7-4f89-8b5c-b6dca172b57d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 30.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:09.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 268 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Nov 29 02:51:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:10 np0005539563 nova_compute[252253]: 2025-11-29 07:51:10.275 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 148 op/s
Nov 29 02:51:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:11.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:12 np0005539563 nova_compute[252253]: 2025-11-29 07:51:12.047 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402657.0461485, 3efe6bb4-36be-4a30-832d-8da05e5baa50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:51:12 np0005539563 nova_compute[252253]: 2025-11-29 07:51:12.048 252257 INFO nova.compute.manager [-] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] VM Stopped (Lifecycle Event)
Nov 29 02:51:12 np0005539563 nova_compute[252253]: 2025-11-29 07:51:12.077 252257 DEBUG nova.compute.manager [None req-34198a8d-2e11-4f48-8231-d08aab3aa39e - - - - - -] [instance: 3efe6bb4-36be-4a30-832d-8da05e5baa50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:12 np0005539563 nova_compute[252253]: 2025-11-29 07:51:12.622 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:51:12
Nov 29 02:51:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:51:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:51:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Nov 29 02:51:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:51:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:51:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:13.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:15 np0005539563 nova_compute[252253]: 2025-11-29 07:51:15.277 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:15.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 02:51:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:15.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Nov 29 02:51:17 np0005539563 nova_compute[252253]: 2025-11-29 07:51:17.625 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:17.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:18 np0005539563 podman[275818]: 2025-11-29 07:51:18.284721258 +0000 UTC m=+0.069790714 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 02:51:18 np0005539563 podman[275819]: 2025-11-29 07:51:18.290632437 +0000 UTC m=+0.075329423 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:51:18 np0005539563 podman[275820]: 2025-11-29 07:51:18.320244552 +0000 UTC m=+0.103113518 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 02:51:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 02:51:19 np0005539563 nova_compute[252253]: 2025-11-29 07:51:19.644 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:51:19.646 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:51:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:51:19.647 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:51:19 np0005539563 nova_compute[252253]: 2025-11-29 07:51:19.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:51:19 np0005539563 nova_compute[252253]: 2025-11-29 07:51:19.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 02:51:19 np0005539563 nova_compute[252253]: 2025-11-29 07:51:19.720 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 02:51:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:19.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:20 np0005539563 nova_compute[252253]: 2025-11-29 07:51:20.280 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:21.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Nov 29 02:51:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:21.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:22 np0005539563 nova_compute[252253]: 2025-11-29 07:51:22.627 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00216759433742788 of space, bias 1.0, pg target 0.650278301228364 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00406710200307091 of space, bias 1.0, pg target 1.2201306009212731 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 02:51:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail
Nov 29 02:51:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:23.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:25 np0005539563 nova_compute[252253]: 2025-11-29 07:51:25.282 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 29 02:51:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:25.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:27.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 248 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 27 op/s
Nov 29 02:51:27 np0005539563 nova_compute[252253]: 2025-11-29 07:51:27.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:27 np0005539563 nova_compute[252253]: 2025-11-29 07:51:27.721 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:51:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.734 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: d278aa2a-e5e7-4f89-8b5c-b6dca172b57d] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.735 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.736 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.736 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.736 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.808 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.809 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.809 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.809 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:51:28 np0005539563 nova_compute[252253]: 2025-11-29 07:51:28.810 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165337111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.260 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:29.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.417 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.419 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4680MB free_disk=20.94277572631836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.419 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.420 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 257 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 240 KiB/s wr, 49 op/s
Nov 29 02:51:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:51:29.649 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:29.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.962 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:51:29 np0005539563 nova_compute[252253]: 2025-11-29 07:51:29.963 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:51:30 np0005539563 nova_compute[252253]: 2025-11-29 07:51:30.283 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:30 np0005539563 nova_compute[252253]: 2025-11-29 07:51:30.573 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507629536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:31 np0005539563 nova_compute[252253]: 2025-11-29 07:51:31.272 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:31 np0005539563 nova_compute[252253]: 2025-11-29 07:51:31.283 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 02:51:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:31.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:32 np0005539563 nova_compute[252253]: 2025-11-29 07:51:32.659 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:32 np0005539563 nova_compute[252253]: 2025-11-29 07:51:32.842 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:51:32 np0005539563 nova_compute[252253]: 2025-11-29 07:51:32.997 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:51:32 np0005539563 nova_compute[252253]: 2025-11-29 07:51:32.997 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:33.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 02:51:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:34 np0005539563 nova_compute[252253]: 2025-11-29 07:51:34.940 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:34 np0005539563 nova_compute[252253]: 2025-11-29 07:51:34.940 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:34 np0005539563 nova_compute[252253]: 2025-11-29 07:51:34.941 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:34 np0005539563 nova_compute[252253]: 2025-11-29 07:51:34.941 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 29 02:51:35 np0005539563 nova_compute[252253]: 2025-11-29 07:51:35.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Nov 29 02:51:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 02:51:35 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 29 02:51:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:36 np0005539563 nova_compute[252253]: 2025-11-29 07:51:36.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 329 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 157 op/s
Nov 29 02:51:37 np0005539563 nova_compute[252253]: 2025-11-29 07:51:37.661 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:37.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:38 np0005539563 nova_compute[252253]: 2025-11-29 07:51:38.715 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:38 np0005539563 nova_compute[252253]: 2025-11-29 07:51:38.715 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:51:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 297 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.4 MiB/s wr, 136 op/s
Nov 29 02:51:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:39.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:40 np0005539563 nova_compute[252253]: 2025-11-29 07:51:40.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.549 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "23067680-c030-45ee-94ec-441a4e8dfdd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.550 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "23067680-c030-45ee-94ec-441a4e8dfdd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 248 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 104 op/s
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.613 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.749 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.749 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.766 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:51:41 np0005539563 nova_compute[252253]: 2025-11-29 07:51:41.766 252257 INFO nova.compute.claims [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:51:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:42 np0005539563 nova_compute[252253]: 2025-11-29 07:51:42.530 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:42 np0005539563 nova_compute[252253]: 2025-11-29 07:51:42.663 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876318502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.091 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.099 252257 DEBUG nova.compute.provider_tree [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.116 252257 DEBUG nova.scheduler.client.report [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.211 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.213 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.318 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.319 252257 DEBUG nova.network.neutron [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:51:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.367 252257 INFO nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.456 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:51:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 248 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 104 op/s
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.771 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.773 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.774 252257 INFO nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Creating image(s)#033[00m
Nov 29 02:51:43 np0005539563 nova_compute[252253]: 2025-11-29 07:51:43.867 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:43.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.081 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.129 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.136 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.230 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.231 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.232 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.233 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.280 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.286 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 23067680-c030-45ee-94ec-441a4e8dfdd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.826 252257 DEBUG nova.network.neutron [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:51:44 np0005539563 nova_compute[252253]: 2025-11-29 07:51:44.826 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:51:45 np0005539563 nova_compute[252253]: 2025-11-29 07:51:45.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:45.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 275 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 29 02:51:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.114 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 23067680-c030-45ee-94ec-441a4e8dfdd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.828s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.186 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] resizing rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:51:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.325 252257 DEBUG nova.objects.instance [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lazy-loading 'migration_context' on Instance uuid 23067680-c030-45ee-94ec-441a4e8dfdd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.371 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.372 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Ensure instance console log exists: /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.373 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.373 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.374 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.376 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:51:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 02:51:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.383 252257 WARNING nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.393 252257 DEBUG nova.virt.libvirt.host [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.394 252257 DEBUG nova.virt.libvirt.host [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.398 252257 DEBUG nova.virt.libvirt.host [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.399 252257 DEBUG nova.virt.libvirt.host [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.401 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.401 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.402 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.402 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.402 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.403 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.403 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.403 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.404 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.404 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.404 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.405 252257 DEBUG nova.virt.hardware [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.408 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:51:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580282389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.835 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.876 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:46 np0005539563 nova_compute[252253]: 2025-11-29 07:51:46.882 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:51:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/327708464' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.331 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.334 252257 DEBUG nova.objects.instance [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23067680-c030-45ee-94ec-441a4e8dfdd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.352 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <uuid>23067680-c030-45ee-94ec-441a4e8dfdd3</uuid>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <name>instance-00000020</name>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerExternalEventsTest-server-733468418</nova:name>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:51:46</nova:creationTime>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:user uuid="e325d4df8788423e83d0a5eced9320b8">tempest-ServerExternalEventsTest-612840473-project-member</nova:user>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <nova:project uuid="61c43a276b0446a187c8aaf55b42ff96">tempest-ServerExternalEventsTest-612840473</nova:project>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <entry name="serial">23067680-c030-45ee-94ec-441a4e8dfdd3</entry>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <entry name="uuid">23067680-c030-45ee-94ec-441a4e8dfdd3</entry>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/23067680-c030-45ee-94ec-441a4e8dfdd3_disk">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/23067680-c030-45ee-94ec-441a4e8dfdd3_disk.config">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/console.log" append="off"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:51:47 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:51:47 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:51:47 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:51:47 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:51:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:47.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.430 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.431 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.432 252257 INFO nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Using config drive#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.470 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 275 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.730 252257 INFO nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Creating config drive at /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/disk.config#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.734 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwsvfxdpg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.874 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwsvfxdpg" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.923 252257 DEBUG nova.storage.rbd_utils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] rbd image 23067680-c030-45ee-94ec-441a4e8dfdd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:47 np0005539563 nova_compute[252253]: 2025-11-29 07:51:47.927 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/disk.config 23067680-c030-45ee-94ec-441a4e8dfdd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:48 np0005539563 nova_compute[252253]: 2025-11-29 07:51:48.226 252257 DEBUG oslo_concurrency.processutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/disk.config 23067680-c030-45ee-94ec-441a4e8dfdd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:48 np0005539563 nova_compute[252253]: 2025-11-29 07:51:48.227 252257 INFO nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Deleting local config drive /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3/disk.config because it was imported into RBD.#033[00m
Nov 29 02:51:48 np0005539563 systemd-machined[213024]: New machine qemu-15-instance-00000020.
Nov 29 02:51:48 np0005539563 systemd[1]: Started Virtual Machine qemu-15-instance-00000020.
Nov 29 02:51:48 np0005539563 podman[276332]: 2025-11-29 07:51:48.396778258 +0000 UTC m=+0.072427844 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:51:48 np0005539563 podman[276333]: 2025-11-29 07:51:48.426427904 +0000 UTC m=+0.079770562 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:51:48 np0005539563 podman[276339]: 2025-11-29 07:51:48.45945787 +0000 UTC m=+0.107759912 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:51:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 285 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 983 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.668 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402709.668027, 23067680-c030-45ee-94ec-441a4e8dfdd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.669 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.671 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.671 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.676 252257 INFO nova.virt.libvirt.driver [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance spawned successfully.#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.676 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.701 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.707 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.708 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.708 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.708 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.709 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.709 252257 DEBUG nova.virt.libvirt.driver [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.714 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.766 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.767 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402709.6712594, 23067680-c030-45ee-94ec-441a4e8dfdd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.767 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] VM Started (Lifecycle Event)#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.799 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.803 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.810 252257 INFO nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Took 6.04 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.811 252257 DEBUG nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.851 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.947 252257 INFO nova.compute.manager [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Took 8.25 seconds to build instance.#033[00m
Nov 29 02:51:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:49 np0005539563 nova_compute[252253]: 2025-11-29 07:51:49.979 252257 DEBUG oslo_concurrency.lockutils [None req-6131cb5a-178d-4f14-8e1e-f3c26f2cdc7a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "23067680-c030-45ee-94ec-441a4e8dfdd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:50 np0005539563 nova_compute[252253]: 2025-11-29 07:51:50.341 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:51.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 02:51:51 np0005539563 nova_compute[252253]: 2025-11-29 07:51:51.836 252257 DEBUG nova.compute.manager [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:51:51 np0005539563 nova_compute[252253]: 2025-11-29 07:51:51.836 252257 DEBUG nova.compute.manager [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:51:51 np0005539563 nova_compute[252253]: 2025-11-29 07:51:51.836 252257 DEBUG oslo_concurrency.lockutils [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] Acquiring lock "refresh_cache-23067680-c030-45ee-94ec-441a4e8dfdd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:51 np0005539563 nova_compute[252253]: 2025-11-29 07:51:51.837 252257 DEBUG oslo_concurrency.lockutils [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] Acquired lock "refresh_cache-23067680-c030-45ee-94ec-441a4e8dfdd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:51 np0005539563 nova_compute[252253]: 2025-11-29 07:51:51.837 252257 DEBUG nova.network.neutron [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:51:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:51.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.116 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "23067680-c030-45ee-94ec-441a4e8dfdd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.116 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "23067680-c030-45ee-94ec-441a4e8dfdd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.116 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "23067680-c030-45ee-94ec-441a4e8dfdd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.117 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "23067680-c030-45ee-94ec-441a4e8dfdd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.117 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "23067680-c030-45ee-94ec-441a4e8dfdd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.118 252257 INFO nova.compute.manager [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Terminating instance#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.118 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "refresh_cache-23067680-c030-45ee-94ec-441a4e8dfdd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.142 252257 DEBUG nova.network.neutron [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.669 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.714 252257 DEBUG nova.network.neutron [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.731 252257 DEBUG oslo_concurrency.lockutils [None req-88a6f367-919b-4dc2-855f-b25feb6b0b5a 21f304a1765f4908abfe7fb15189ee95 0766d13dbd694c6fa58cd3c7073153c8 - - default default] Releasing lock "refresh_cache-23067680-c030-45ee-94ec-441a4e8dfdd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.732 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquired lock "refresh_cache-23067680-c030-45ee-94ec-441a4e8dfdd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:52 np0005539563 nova_compute[252253]: 2025-11-29 07:51:52.733 252257 DEBUG nova.network.neutron [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:51:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 02:51:53 np0005539563 nova_compute[252253]: 2025-11-29 07:51:53.886 252257 DEBUG nova.network.neutron [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:51:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8f613081-3e18-4e8e-8588-ef04a9e695d0 does not exist
Nov 29 02:51:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ffb4b112-fa9b-452c-ae93-dbd3c8b327f6 does not exist
Nov 29 02:51:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3c3a67f3-3904-464e-8555-dc7fb453cd08 does not exist
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:51:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:51:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:51:55 np0005539563 nova_compute[252253]: 2025-11-29 07:51:55.341 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.2 MiB/s wr, 210 op/s
Nov 29 02:51:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:51:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:51:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.742759358 +0000 UTC m=+0.052975843 container create a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:51:55 np0005539563 systemd[1]: Started libpod-conmon-a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db.scope.
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.718048594 +0000 UTC m=+0.028265099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.874890393 +0000 UTC m=+0.185106868 container init a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.883064973 +0000 UTC m=+0.193281428 container start a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.887595445 +0000 UTC m=+0.197811900 container attach a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:51:55 np0005539563 friendly_hopper[276732]: 167 167
Nov 29 02:51:55 np0005539563 systemd[1]: libpod-a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db.scope: Deactivated successfully.
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.890247376 +0000 UTC m=+0.200463831 container died a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:51:55 np0005539563 nova_compute[252253]: 2025-11-29 07:51:55.902 252257 DEBUG nova.network.neutron [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f4e0f7c5a62c679df4499e98e9701c0a3cfae039a73cf2a6217ce40af37c3662-merged.mount: Deactivated successfully.
Nov 29 02:51:55 np0005539563 podman[276716]: 2025-11-29 07:51:55.957870651 +0000 UTC m=+0.268087136 container remove a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:55 np0005539563 systemd[1]: libpod-conmon-a95bb09e6086793c61bd1ecb6ae6a6601de5d2fe900c36c03af4c377444259db.scope: Deactivated successfully.
Nov 29 02:51:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:56 np0005539563 nova_compute[252253]: 2025-11-29 07:51:56.090 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Releasing lock "refresh_cache-23067680-c030-45ee-94ec-441a4e8dfdd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:51:56 np0005539563 nova_compute[252253]: 2025-11-29 07:51:56.092 252257 DEBUG nova.compute.manager [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:51:56 np0005539563 podman[276757]: 2025-11-29 07:51:56.140780019 +0000 UTC m=+0.057553675 container create 742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_herschel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:51:56 np0005539563 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 29 02:51:56 np0005539563 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000020.scope: Consumed 7.978s CPU time.
Nov 29 02:51:56 np0005539563 systemd-machined[213024]: Machine qemu-15-instance-00000020 terminated.
Nov 29 02:51:56 np0005539563 systemd[1]: Started libpod-conmon-742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da.scope.
Nov 29 02:51:56 np0005539563 podman[276757]: 2025-11-29 07:51:56.115254464 +0000 UTC m=+0.032028140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:51:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cb0acf3d574b1d2142475f258521dd20e3d7b6c0c1a031fd591ae41b16eee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cb0acf3d574b1d2142475f258521dd20e3d7b6c0c1a031fd591ae41b16eee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cb0acf3d574b1d2142475f258521dd20e3d7b6c0c1a031fd591ae41b16eee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cb0acf3d574b1d2142475f258521dd20e3d7b6c0c1a031fd591ae41b16eee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cb0acf3d574b1d2142475f258521dd20e3d7b6c0c1a031fd591ae41b16eee0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:56 np0005539563 podman[276757]: 2025-11-29 07:51:56.239681044 +0000 UTC m=+0.156454730 container init 742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:51:56 np0005539563 podman[276757]: 2025-11-29 07:51:56.253707171 +0000 UTC m=+0.170480827 container start 742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_herschel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:51:56 np0005539563 podman[276757]: 2025-11-29 07:51:56.268923099 +0000 UTC m=+0.185696745 container attach 742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_herschel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:56 np0005539563 nova_compute[252253]: 2025-11-29 07:51:56.323 252257 INFO nova.virt.libvirt.driver [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance destroyed successfully.#033[00m
Nov 29 02:51:56 np0005539563 nova_compute[252253]: 2025-11-29 07:51:56.325 252257 DEBUG nova.objects.instance [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lazy-loading 'resources' on Instance uuid 23067680-c030-45ee-94ec-441a4e8dfdd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:57 np0005539563 great_herschel[276774]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:51:57 np0005539563 great_herschel[276774]: --> relative data size: 1.0
Nov 29 02:51:57 np0005539563 great_herschel[276774]: --> All data devices are unavailable
Nov 29 02:51:57 np0005539563 systemd[1]: libpod-742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da.scope: Deactivated successfully.
Nov 29 02:51:57 np0005539563 podman[276757]: 2025-11-29 07:51:57.153872319 +0000 UTC m=+1.070645965 container died 742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:51:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-25cb0acf3d574b1d2142475f258521dd20e3d7b6c0c1a031fd591ae41b16eee0-merged.mount: Deactivated successfully.
Nov 29 02:51:57 np0005539563 podman[276757]: 2025-11-29 07:51:57.221004611 +0000 UTC m=+1.137778247 container remove 742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:57 np0005539563 systemd[1]: libpod-conmon-742dc8bc7745701c272953ead3a14116c0216b33199a809fb404cc143ec882da.scope: Deactivated successfully.
Nov 29 02:51:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 214 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.0 MiB/s wr, 187 op/s
Nov 29 02:51:57 np0005539563 nova_compute[252253]: 2025-11-29 07:51:57.672 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.135885213 +0000 UTC m=+0.070358719 container create da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.101334376 +0000 UTC m=+0.035807922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:58 np0005539563 systemd[1]: Started libpod-conmon-da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd.scope.
Nov 29 02:51:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.300669066 +0000 UTC m=+0.235142582 container init da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.311277871 +0000 UTC m=+0.245751337 container start da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.316353787 +0000 UTC m=+0.250827263 container attach da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:58 np0005539563 vigilant_newton[276982]: 167 167
Nov 29 02:51:58 np0005539563 systemd[1]: libpod-da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd.scope: Deactivated successfully.
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.321220338 +0000 UTC m=+0.255693804 container died da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:51:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c0327974d669bf96cd3cc98d17279f429d348fd5c41a0717830dfc96ba0c4627-merged.mount: Deactivated successfully.
Nov 29 02:51:58 np0005539563 podman[276966]: 2025-11-29 07:51:58.370297605 +0000 UTC m=+0.304771071 container remove da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:58 np0005539563 systemd[1]: libpod-conmon-da10c698ae9eb3c661d32b2cabdffcf2e0c64a373e0690e2129b60006f3390cd.scope: Deactivated successfully.
Nov 29 02:51:58 np0005539563 podman[277019]: 2025-11-29 07:51:58.564674371 +0000 UTC m=+0.045695097 container create a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:51:58 np0005539563 systemd[1]: Started libpod-conmon-a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953.scope.
Nov 29 02:51:58 np0005539563 podman[277019]: 2025-11-29 07:51:58.545475266 +0000 UTC m=+0.026496022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:51:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c189ba04d6945095d532feca1aa541687a888be6aa9457db627aa6f7e59530d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c189ba04d6945095d532feca1aa541687a888be6aa9457db627aa6f7e59530d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c189ba04d6945095d532feca1aa541687a888be6aa9457db627aa6f7e59530d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c189ba04d6945095d532feca1aa541687a888be6aa9457db627aa6f7e59530d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:58 np0005539563 podman[277019]: 2025-11-29 07:51:58.67416934 +0000 UTC m=+0.155190096 container init a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:51:58 np0005539563 podman[277019]: 2025-11-29 07:51:58.684190659 +0000 UTC m=+0.165211395 container start a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:51:58 np0005539563 podman[277019]: 2025-11-29 07:51:58.688190987 +0000 UTC m=+0.169211743 container attach a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:51:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]: {
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:    "0": [
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:        {
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "devices": [
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "/dev/loop3"
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            ],
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "lv_name": "ceph_lv0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "lv_size": "7511998464",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "name": "ceph_lv0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "tags": {
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.cluster_name": "ceph",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.crush_device_class": "",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.encrypted": "0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.osd_id": "0",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.type": "block",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:                "ceph.vdo": "0"
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            },
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "type": "block",
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:            "vg_name": "ceph_vg0"
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:        }
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]:    ]
Nov 29 02:51:59 np0005539563 romantic_noyce[277071]: }
Nov 29 02:51:59 np0005539563 systemd[1]: libpod-a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953.scope: Deactivated successfully.
Nov 29 02:51:59 np0005539563 podman[277019]: 2025-11-29 07:51:59.589828924 +0000 UTC m=+1.070849670 container died a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 214 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 182 op/s
Nov 29 02:51:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c189ba04d6945095d532feca1aa541687a888be6aa9457db627aa6f7e59530d9-merged.mount: Deactivated successfully.
Nov 29 02:51:59 np0005539563 podman[277019]: 2025-11-29 07:51:59.660288545 +0000 UTC m=+1.141309271 container remove a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_noyce, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:51:59 np0005539563 systemd[1]: libpod-conmon-a4b8d016ea612517e8743e967308921f1d1d0348861809c8e7fbda111cbf2953.scope: Deactivated successfully.
Nov 29 02:51:59 np0005539563 nova_compute[252253]: 2025-11-29 07:51:59.877 252257 INFO nova.virt.libvirt.driver [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Deleting instance files /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3_del#033[00m
Nov 29 02:51:59 np0005539563 nova_compute[252253]: 2025-11-29 07:51:59.879 252257 INFO nova.virt.libvirt.driver [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Deletion of /var/lib/nova/instances/23067680-c030-45ee-94ec-441a4e8dfdd3_del complete#033[00m
Nov 29 02:51:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:51:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:59.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.097 252257 INFO nova.compute.manager [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Took 4.00 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.098 252257 DEBUG oslo.service.loopingcall [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.099 252257 DEBUG nova.compute.manager [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.099 252257 DEBUG nova.network.neutron [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.403 252257 DEBUG nova.network.neutron [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.417481246 +0000 UTC m=+0.053484016 container create 4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:52:00 np0005539563 systemd[1]: Started libpod-conmon-4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b.scope.
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.395310401 +0000 UTC m=+0.031313201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.531046744 +0000 UTC m=+0.167049554 container init 4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goodall, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.541462723 +0000 UTC m=+0.177465503 container start 4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.545529353 +0000 UTC m=+0.181532143 container attach 4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goodall, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:52:00 np0005539563 youthful_goodall[277255]: 167 167
Nov 29 02:52:00 np0005539563 systemd[1]: libpod-4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b.scope: Deactivated successfully.
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.548602506 +0000 UTC m=+0.184605286 container died 4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goodall, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:52:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-96649162af58b431d4c7830002c4463a64d21decf1eb1e45aebc0be00a799c04-merged.mount: Deactivated successfully.
Nov 29 02:52:00 np0005539563 podman[277239]: 2025-11-29 07:52:00.589628366 +0000 UTC m=+0.225631166 container remove 4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:52:00 np0005539563 systemd[1]: libpod-conmon-4d4b446776db15ed2c51761db571afed965c22427f76a520bdf440a4b77b144b.scope: Deactivated successfully.
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.627 252257 DEBUG nova.network.neutron [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:00 np0005539563 podman[277277]: 2025-11-29 07:52:00.797374491 +0000 UTC m=+0.049128689 container create 64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:52:00 np0005539563 systemd[1]: Started libpod-conmon-64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1.scope.
Nov 29 02:52:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:52:00 np0005539563 podman[277277]: 2025-11-29 07:52:00.775540206 +0000 UTC m=+0.027294454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dd5579d09f8dbc15d4111a2c72804f19688fb063d5b0204a674248e24e6a00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dd5579d09f8dbc15d4111a2c72804f19688fb063d5b0204a674248e24e6a00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dd5579d09f8dbc15d4111a2c72804f19688fb063d5b0204a674248e24e6a00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dd5579d09f8dbc15d4111a2c72804f19688fb063d5b0204a674248e24e6a00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:00 np0005539563 podman[277277]: 2025-11-29 07:52:00.891512158 +0000 UTC m=+0.143266376 container init 64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:00 np0005539563 podman[277277]: 2025-11-29 07:52:00.911800742 +0000 UTC m=+0.163554940 container start 64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:52:00 np0005539563 podman[277277]: 2025-11-29 07:52:00.915716228 +0000 UTC m=+0.167470456 container attach 64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:52:00 np0005539563 nova_compute[252253]: 2025-11-29 07:52:00.983 252257 INFO nova.compute.manager [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Took 0.88 seconds to deallocate network for instance.#033[00m
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.374 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.375 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.471 252257 DEBUG oslo_concurrency.processutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 205 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]: {
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:        "osd_id": 0,
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:        "type": "bluestore"
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]:    }
Nov 29 02:52:01 np0005539563 hopeful_dubinsky[277295]: }
Nov 29 02:52:01 np0005539563 systemd[1]: libpod-64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1.scope: Deactivated successfully.
Nov 29 02:52:01 np0005539563 podman[277277]: 2025-11-29 07:52:01.857344609 +0000 UTC m=+1.109098877 container died 64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524987151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a8dd5579d09f8dbc15d4111a2c72804f19688fb063d5b0204a674248e24e6a00-merged.mount: Deactivated successfully.
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.907 252257 DEBUG oslo_concurrency.processutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.915 252257 DEBUG nova.compute.provider_tree [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:52:01 np0005539563 podman[277277]: 2025-11-29 07:52:01.920867774 +0000 UTC m=+1.172621972 container remove 64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:52:01 np0005539563 systemd[1]: libpod-conmon-64f9f8297261423ce50d0294442555a6f1c88a61381ad16d8eb078b3f721bbe1.scope: Deactivated successfully.
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.934 252257 DEBUG nova.scheduler.client.report [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:52:01 np0005539563 nova_compute[252253]: 2025-11-29 07:52:01.965 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:52:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:52:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:01.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a1d4bc1d-9faf-451e-8bd4-696d12dc1c78 does not exist
Nov 29 02:52:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bb248fb0-d711-4c22-bfd2-b2389f6cc745 does not exist
Nov 29 02:52:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 525d1afd-6045-460e-8b00-d5dafca3f566 does not exist
Nov 29 02:52:02 np0005539563 nova_compute[252253]: 2025-11-29 07:52:02.067 252257 INFO nova.scheduler.client.report [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Deleted allocations for instance 23067680-c030-45ee-94ec-441a4e8dfdd3
Nov 29 02:52:02 np0005539563 nova_compute[252253]: 2025-11-29 07:52:02.160 252257 DEBUG oslo_concurrency.lockutils [None req-5d7faa01-be57-4b44-8b27-f407be8f951a e325d4df8788423e83d0a5eced9320b8 61c43a276b0446a187c8aaf55b42ff96 - - default default] Lock "23067680-c030-45ee-94ec-441a4e8dfdd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:02 np0005539563 nova_compute[252253]: 2025-11-29 07:52:02.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:52:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:52:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:03.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 205 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 29 02:52:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:03.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:04.897 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:04.898 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:04.898 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:05 np0005539563 nova_compute[252253]: 2025-11-29 07:52:05.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:05.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 200 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Nov 29 02:52:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:06.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 200 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 594 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 02:52:07 np0005539563 nova_compute[252253]: 2025-11-29 07:52:07.713 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:08.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:08 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:08Z|00106|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:52:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:09.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.481 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.481 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.511 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:52:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 200 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.667 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.668 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.675 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.675 252257 INFO nova.compute.claims [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:52:09 np0005539563 nova_compute[252253]: 2025-11-29 07:52:09.874 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:10.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770395025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:10 np0005539563 nova_compute[252253]: 2025-11-29 07:52:10.350 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:10 np0005539563 nova_compute[252253]: 2025-11-29 07:52:10.358 252257 DEBUG nova.compute.provider_tree [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:52:10 np0005539563 nova_compute[252253]: 2025-11-29 07:52:10.391 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:11 np0005539563 nova_compute[252253]: 2025-11-29 07:52:11.322 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402716.3201969, 23067680-c030-45ee-94ec-441a4e8dfdd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:52:11 np0005539563 nova_compute[252253]: 2025-11-29 07:52:11.322 252257 INFO nova.compute.manager [-] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] VM Stopped (Lifecycle Event)
Nov 29 02:52:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:11.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 1.9 MiB/s wr, 96 op/s
Nov 29 02:52:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:12.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:52:12
Nov 29 02:52:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:52:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:52:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'backups']
Nov 29 02:52:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:52:12 np0005539563 nova_compute[252253]: 2025-11-29 07:52:12.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:12 np0005539563 nova_compute[252253]: 2025-11-29 07:52:12.965 252257 DEBUG nova.scheduler.client.report [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:13.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 317 KiB/s wr, 47 op/s
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:52:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:52:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:14.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.393 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:15.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.421 252257 DEBUG nova.compute.manager [None req-599dd50d-a40f-41b2-a2a7-dee95dbd6c90 - - - - - -] [instance: 23067680-c030-45ee-94ec-441a4e8dfdd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.423 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 5.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.425 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.564 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.565 252257 DEBUG nova.network.neutron [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.591 252257 INFO nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:52:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 317 KiB/s wr, 47 op/s
Nov 29 02:52:15 np0005539563 nova_compute[252253]: 2025-11-29 07:52:15.628 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:52:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:16.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 02:52:17 np0005539563 nova_compute[252253]: 2025-11-29 07:52:17.778 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:18 np0005539563 podman[277430]: 2025-11-29 07:52:18.510628583 +0000 UTC m=+0.061524042 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 02:52:18 np0005539563 podman[277431]: 2025-11-29 07:52:18.542646332 +0000 UTC m=+0.088661931 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 29 02:52:18 np0005539563 podman[277470]: 2025-11-29 07:52:18.654520895 +0000 UTC m=+0.108562875 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:52:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 02:52:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:20 np0005539563 nova_compute[252253]: 2025-11-29 07:52:20.395 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:21.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 02:52:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:22.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:22 np0005539563 nova_compute[252253]: 2025-11-29 07:52:22.841 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021746827307835472 of space, bias 1.0, pg target 0.6524048192350642 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021608694514237857 of space, bias 1.0, pg target 0.6482608354271358 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:52:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:23.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.437 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.439 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.439 252257 INFO nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Creating image(s)#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.465 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.492 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.522 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.525 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.548 252257 DEBUG nova.policy [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e6a7e8a80384d83b5debf4c717f6e09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b1a31b637613411eaeda132dc499537b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.585 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.586 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.586 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.587 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 200 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.613 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.617 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:23 np0005539563 nova_compute[252253]: 2025-11-29 07:52:23.998 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:24.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.094 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] resizing rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.212 252257 DEBUG nova.objects.instance [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lazy-loading 'migration_context' on Instance uuid 5338f516-8664-4303-aed1-b1d4e5b8e7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.237 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.238 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Ensure instance console log exists: /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.239 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.239 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.240 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:24.647 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:24 np0005539563 nova_compute[252253]: 2025-11-29 07:52:24.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:24.647 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:52:25 np0005539563 nova_compute[252253]: 2025-11-29 07:52:25.396 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 240 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 5.6 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Nov 29 02:52:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:26.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:27 np0005539563 nova_compute[252253]: 2025-11-29 07:52:27.200 252257 DEBUG nova.network.neutron [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Successfully created port: c606e5a0-f859-492d-827c-6449a1b0dbe4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:52:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 02:52:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 02:52:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 240 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 5.6 KiB/s rd, 1.6 MiB/s wr, 11 op/s
Nov 29 02:52:27 np0005539563 nova_compute[252253]: 2025-11-29 07:52:27.883 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:28.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:28 np0005539563 nova_compute[252253]: 2025-11-29 07:52:28.705 252257 DEBUG nova.network.neutron [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Successfully updated port: c606e5a0-f859-492d-827c-6449a1b0dbe4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:52:28 np0005539563 nova_compute[252253]: 2025-11-29 07:52:28.741 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:28 np0005539563 nova_compute[252253]: 2025-11-29 07:52:28.742 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:28 np0005539563 nova_compute[252253]: 2025-11-29 07:52:28.742 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:28 np0005539563 nova_compute[252253]: 2025-11-29 07:52:28.743 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:52:28 np0005539563 nova_compute[252253]: 2025-11-29 07:52:28.743 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:29 np0005539563 nova_compute[252253]: 2025-11-29 07:52:29.013 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:29 np0005539563 nova_compute[252253]: 2025-11-29 07:52:29.013 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquired lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:29 np0005539563 nova_compute[252253]: 2025-11-29 07:52:29.014 252257 DEBUG nova.network.neutron [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:52:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:29.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 246 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:52:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:30.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.345 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.345 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.346 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.346 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.346 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.369 252257 DEBUG nova.compute.manager [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.370 252257 DEBUG nova.compute.manager [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing instance network info cache due to event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.371 252257 DEBUG oslo_concurrency.lockutils [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.398 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:30.649 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162617546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.766 252257 DEBUG nova.network.neutron [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.787 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.956 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.958 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4744MB free_disk=20.921886444091797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.959 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:30 np0005539563 nova_compute[252253]: 2025-11-29 07:52:30.959 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.488 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5338f516-8664-4303-aed1-b1d4e5b8e7e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.488 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.489 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.504 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.528 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.528 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.554 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.579 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:52:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 246 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:52:31 np0005539563 nova_compute[252253]: 2025-11-29 07:52:31.638 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992865510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:32 np0005539563 nova_compute[252253]: 2025-11-29 07:52:32.098 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:32 np0005539563 nova_compute[252253]: 2025-11-29 07:52:32.103 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:52:32 np0005539563 nova_compute[252253]: 2025-11-29 07:52:32.118 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:52:32 np0005539563 nova_compute[252253]: 2025-11-29 07:52:32.146 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:52:32 np0005539563 nova_compute[252253]: 2025-11-29 07:52:32.147 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:32 np0005539563 nova_compute[252253]: 2025-11-29 07:52:32.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.066 252257 DEBUG nova.network.neutron [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.092 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Releasing lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.093 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Instance network_info: |[{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.094 252257 DEBUG oslo_concurrency.lockutils [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.095 252257 DEBUG nova.network.neutron [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.101 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Start _get_guest_xml network_info=[{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.108 252257 WARNING nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.114 252257 DEBUG nova.virt.libvirt.host [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.115 252257 DEBUG nova.virt.libvirt.host [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.119 252257 DEBUG nova.virt.libvirt.host [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.120 252257 DEBUG nova.virt.libvirt.host [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.122 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.122 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.123 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.123 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.124 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.124 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.125 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.125 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.126 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.126 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.126 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.127 252257 DEBUG nova.virt.hardware [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.131 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711008745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.585 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 246 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.622 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:33 np0005539563 nova_compute[252253]: 2025-11-29 07:52:33.626 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.083 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.084 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.084 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:52:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289566660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.104 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.105 252257 DEBUG nova.virt.libvirt.vif [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-325866224',display_name='tempest-FloatingIPsAssociationTestJSON-server-325866224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-325866224',id=33,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1a31b637613411eaeda132dc499537b',ramdisk_id='',reservation_id='r-2ee57xly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-120353870',owner_user_name='tempest-FloatingIPsAssociationTestJSON-120353870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:15Z,user_data=None,user_id='2e6a7e8a80384d83b5debf4c717f6e09',uuid=5338f516-8664-4303-aed1-b1d4e5b8e7e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.106 252257 DEBUG nova.network.os_vif_util [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converting VIF {"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.107 252257 DEBUG nova.network.os_vif_util [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.108 252257 DEBUG nova.objects.instance [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5338f516-8664-4303-aed1-b1d4e5b8e7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.119 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.119 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.120 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.120 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.120 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.127 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <uuid>5338f516-8664-4303-aed1-b1d4e5b8e7e1</uuid>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <name>instance-00000021</name>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-325866224</nova:name>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:52:33</nova:creationTime>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:user uuid="2e6a7e8a80384d83b5debf4c717f6e09">tempest-FloatingIPsAssociationTestJSON-120353870-project-member</nova:user>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:project uuid="b1a31b637613411eaeda132dc499537b">tempest-FloatingIPsAssociationTestJSON-120353870</nova:project>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <nova:port uuid="c606e5a0-f859-492d-827c-6449a1b0dbe4">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <entry name="serial">5338f516-8664-4303-aed1-b1d4e5b8e7e1</entry>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <entry name="uuid">5338f516-8664-4303-aed1-b1d4e5b8e7e1</entry>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk.config">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:d1:7c:50"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <target dev="tapc606e5a0-f8"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/console.log" append="off"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:52:34 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:52:34 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:52:34 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:52:34 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.128 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Preparing to wait for external event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.128 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.129 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.129 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.130 252257 DEBUG nova.virt.libvirt.vif [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-325866224',display_name='tempest-FloatingIPsAssociationTestJSON-server-325866224',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-325866224',id=33,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b1a31b637613411eaeda132dc499537b',ramdisk_id='',reservation_id='r-2ee57xly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-120353870',owner_user_name='tempest-FloatingIPsAssociationTestJSON-120353870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:15Z,user_data=None,user_id='2e6a7e8a80384d83b5debf4c717f6e09',uuid=5338f516-8664-4303-aed1-b1d4e5b8e7e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.130 252257 DEBUG nova.network.os_vif_util [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converting VIF {"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.131 252257 DEBUG nova.network.os_vif_util [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.131 252257 DEBUG os_vif [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.132 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.133 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.133 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.137 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.141 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc606e5a0-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.142 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc606e5a0-f8, col_values=(('external_ids', {'iface-id': 'c606e5a0-f859-492d-827c-6449a1b0dbe4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:7c:50', 'vm-uuid': '5338f516-8664-4303-aed1-b1d4e5b8e7e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:34 np0005539563 NetworkManager[48981]: <info>  [1764402754.1445] manager: (tapc606e5a0-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.147 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.150 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.151 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.152 252257 INFO os_vif [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8')#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.210 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.210 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.211 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] No VIF found with MAC fa:16:3e:d1:7c:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.211 252257 INFO nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Using config drive#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.234 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.711 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.931 252257 INFO nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Creating config drive at /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/disk.config#033[00m
Nov 29 02:52:34 np0005539563 nova_compute[252253]: 2025-11-29 07:52:34.937 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdfeclyg7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.080 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdfeclyg7" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.108 252257 DEBUG nova.storage.rbd_utils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] rbd image 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.111 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/disk.config 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.247 252257 DEBUG nova.network.neutron [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updated VIF entry in instance network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.248 252257 DEBUG nova.network.neutron [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.280 252257 DEBUG oslo_concurrency.processutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/disk.config 5338f516-8664-4303-aed1-b1d4e5b8e7e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.281 252257 INFO nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Deleting local config drive /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1/disk.config because it was imported into RBD.#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.284 252257 DEBUG oslo_concurrency.lockutils [req-2f409210-a825-46ff-9d08-182dbd00fcb0 req-3da73f4c-5853-4b10-b749-a325cfc39999 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:35 np0005539563 kernel: tapc606e5a0-f8: entered promiscuous mode
Nov 29 02:52:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:35Z|00107|binding|INFO|Claiming lport c606e5a0-f859-492d-827c-6449a1b0dbe4 for this chassis.
Nov 29 02:52:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:35Z|00108|binding|INFO|c606e5a0-f859-492d-827c-6449a1b0dbe4: Claiming fa:16:3e:d1:7c:50 10.100.0.14
Nov 29 02:52:35 np0005539563 NetworkManager[48981]: <info>  [1764402755.3355] manager: (tapc606e5a0-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.339 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 systemd-udevd[277902]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:52:35 np0005539563 systemd-machined[213024]: New machine qemu-16-instance-00000021.
Nov 29 02:52:35 np0005539563 NetworkManager[48981]: <info>  [1764402755.3833] device (tapc606e5a0-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:52:35 np0005539563 NetworkManager[48981]: <info>  [1764402755.3844] device (tapc606e5a0-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.389 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:7c:50 10.100.0.14'], port_security=['fa:16:3e:d1:7c:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5338f516-8664-4303-aed1-b1d4e5b8e7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be6e4a03-649a-413f-8a81-4fef5b740489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1a31b637613411eaeda132dc499537b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6df00dc5-dca9-4705-acff-a62440113d04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b522ee0-0950-4570-a9e8-0fc3c27b4f72, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c606e5a0-f859-492d-827c-6449a1b0dbe4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.390 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c606e5a0-f859-492d-827c-6449a1b0dbe4 in datapath be6e4a03-649a-413f-8a81-4fef5b740489 bound to our chassis#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.391 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be6e4a03-649a-413f-8a81-4fef5b740489#033[00m
Nov 29 02:52:35 np0005539563 systemd[1]: Started Virtual Machine qemu-16-instance-00000021.
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.405 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8c63670f-b164-4ed5-9e39-5997d794c91d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.406 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe6e4a03-61 in ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.408 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe6e4a03-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.408 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2533b1d9-9672-443b-b81e-c10ca3a045d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.409 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[85bf0980-ac8c-4f22-a5f7-f157fd073948]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.413 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.416 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:35Z|00109|binding|INFO|Setting lport c606e5a0-f859-492d-827c-6449a1b0dbe4 ovn-installed in OVS
Nov 29 02:52:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:35Z|00110|binding|INFO|Setting lport c606e5a0-f859-492d-827c-6449a1b0dbe4 up in Southbound
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.422 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.423 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[6982e584-6685-47c2-8615-8b7d7d2c8670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:35.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.436 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ce882bed-f141-4cd5-8337-0c6535c4799d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.465 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c31cb103-617d-468c-bf0f-79b081d2f327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.472 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7d642d35-3b11-49b5-8128-d42e6077fa7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 NetworkManager[48981]: <info>  [1764402755.4744] manager: (tapbe6e4a03-60): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.505 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d39c592a-9f0e-49ec-adbf-1880895176b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.508 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc38194-8605-4341-bfe7-d8e211326697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 NetworkManager[48981]: <info>  [1764402755.5306] device (tapbe6e4a03-60): carrier: link connected
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.536 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f37d9ea2-fbad-4577-8761-fe68bf76d480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.551 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ee006b7e-25ef-4382-9fd1-2445c85a4fdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe6e4a03-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:ec:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572330, 'reachable_time': 42429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277935, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.564 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a92deffe-c2c2-4b95-b0b3-801560d5027f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:ec9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572330, 'tstamp': 572330}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277936, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.578 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9d92c5-91b6-481c-9ea2-2e954c5ea8dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe6e4a03-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:ec:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572330, 'reachable_time': 42429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277937, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 281 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.9 MiB/s wr, 45 op/s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.620 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b787807c-e412-48c7-af3b-0e8b888e0580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.679 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[78b0441c-34c2-4b77-83a8-ffda7bfba24a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.681 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6e4a03-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.682 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.683 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe6e4a03-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:35 np0005539563 kernel: tapbe6e4a03-60: entered promiscuous mode
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 NetworkManager[48981]: <info>  [1764402755.6863] manager: (tapbe6e4a03-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.693 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe6e4a03-60, col_values=(('external_ids', {'iface-id': '49d1fa79-68ff-4b00-b078-6119e837e4c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.694 252257 DEBUG nova.compute.manager [req-f280c166-5a26-47a6-8840-aed0f1fc367b req-fc50aff5-23b6-4f5d-98c3-97eeea7c506f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.694 252257 DEBUG oslo_concurrency.lockutils [req-f280c166-5a26-47a6-8840-aed0f1fc367b req-fc50aff5-23b6-4f5d-98c3-97eeea7c506f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.695 252257 DEBUG oslo_concurrency.lockutils [req-f280c166-5a26-47a6-8840-aed0f1fc367b req-fc50aff5-23b6-4f5d-98c3-97eeea7c506f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.695 252257 DEBUG oslo_concurrency.lockutils [req-f280c166-5a26-47a6-8840-aed0f1fc367b req-fc50aff5-23b6-4f5d-98c3-97eeea7c506f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.696 252257 DEBUG nova.compute.manager [req-f280c166-5a26-47a6-8840-aed0f1fc367b req-fc50aff5-23b6-4f5d-98c3-97eeea7c506f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Processing event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:52:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:35Z|00111|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.698 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be6e4a03-649a-413f-8a81-4fef5b740489.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be6e4a03-649a-413f-8a81-4fef5b740489.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.699 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc639b33-9144-4be4-84e1-1da108e254a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.700 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-be6e4a03-649a-413f-8a81-4fef5b740489
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/be6e4a03-649a-413f-8a81-4fef5b740489.pid.haproxy
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID be6e4a03-649a-413f-8a81-4fef5b740489
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:52:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:52:35.700 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'env', 'PROCESS_TAG=haproxy-be6e4a03-649a-413f-8a81-4fef5b740489', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be6e4a03-649a-413f-8a81-4fef5b740489.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:52:35 np0005539563 nova_compute[252253]: 2025-11-29 07:52:35.709 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:36.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:36 np0005539563 podman[278005]: 2025-11-29 07:52:36.175177599 +0000 UTC m=+0.038187235 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.315 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402756.3149202, 5338f516-8664-4303-aed1-b1d4e5b8e7e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.315 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] VM Started (Lifecycle Event)#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.318 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.321 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.326 252257 INFO nova.virt.libvirt.driver [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Instance spawned successfully.#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.326 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:52:36 np0005539563 podman[278005]: 2025-11-29 07:52:36.36447975 +0000 UTC m=+0.227489296 container create 5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.391 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.395 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.396 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.396 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.397 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.397 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.398 252257 DEBUG nova.virt.libvirt.driver [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.402 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:36 np0005539563 systemd[1]: Started libpod-conmon-5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482.scope.
Nov 29 02:52:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:52:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ccef2379cd65cf976ac8a689f754186a3ff3ccab18d072069a4a944a9412d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:36 np0005539563 podman[278005]: 2025-11-29 07:52:36.583392905 +0000 UTC m=+0.446402471 container init 5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:52:36 np0005539563 podman[278005]: 2025-11-29 07:52:36.58992549 +0000 UTC m=+0.452935036 container start 5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:52:36 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [NOTICE]   (278030) : New worker (278032) forked
Nov 29 02:52:36 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [NOTICE]   (278030) : Loading success.
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.700 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.701 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402756.316174, 5338f516-8664-4303-aed1-b1d4e5b8e7e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.701 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.883 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.889 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402756.3203979, 5338f516-8664-4303-aed1-b1d4e5b8e7e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.889 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.922 252257 INFO nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Took 13.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:52:36 np0005539563 nova_compute[252253]: 2025-11-29 07:52:36.923 252257 DEBUG nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.139 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.143 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.185 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.206 252257 INFO nova.compute.manager [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Took 27.57 seconds to build instance.#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.227 252257 DEBUG oslo_concurrency.lockutils [None req-8652b678-fbe9-421d-8f45-b3ee07b26f26 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:37.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 281 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.788 252257 DEBUG nova.compute.manager [req-338e5c3c-387e-4d59-8fb7-fe0fdb3d70b9 req-1ff92b21-c356-45a3-a5c2-15ce0c28c848 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.788 252257 DEBUG oslo_concurrency.lockutils [req-338e5c3c-387e-4d59-8fb7-fe0fdb3d70b9 req-1ff92b21-c356-45a3-a5c2-15ce0c28c848 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.789 252257 DEBUG oslo_concurrency.lockutils [req-338e5c3c-387e-4d59-8fb7-fe0fdb3d70b9 req-1ff92b21-c356-45a3-a5c2-15ce0c28c848 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.789 252257 DEBUG oslo_concurrency.lockutils [req-338e5c3c-387e-4d59-8fb7-fe0fdb3d70b9 req-1ff92b21-c356-45a3-a5c2-15ce0c28c848 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.789 252257 DEBUG nova.compute.manager [req-338e5c3c-387e-4d59-8fb7-fe0fdb3d70b9 req-1ff92b21-c356-45a3-a5c2-15ce0c28c848 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] No waiting events found dispatching network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:52:37 np0005539563 nova_compute[252253]: 2025-11-29 07:52:37.790 252257 WARNING nova.compute.manager [req-338e5c3c-387e-4d59-8fb7-fe0fdb3d70b9 req-1ff92b21-c356-45a3-a5c2-15ce0c28c848 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received unexpected event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:52:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:39 np0005539563 nova_compute[252253]: 2025-11-29 07:52:39.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 293 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Nov 29 02:52:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:40 np0005539563 nova_compute[252253]: 2025-11-29 07:52:40.417 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Nov 29 02:52:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:42.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:43.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Nov 29 02:52:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:44 np0005539563 nova_compute[252253]: 2025-11-29 07:52:44.148 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:45 np0005539563 nova_compute[252253]: 2025-11-29 07:52:45.421 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:45.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 29 02:52:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 293 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 714 KiB/s wr, 156 op/s
Nov 29 02:52:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:48.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:49 np0005539563 nova_compute[252253]: 2025-11-29 07:52:49.154 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:49.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:49 np0005539563 podman[278100]: 2025-11-29 07:52:49.578120015 +0000 UTC m=+0.105034425 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:52:49 np0005539563 podman[278099]: 2025-11-29 07:52:49.598955611 +0000 UTC m=+0.129516110 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:52:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 308 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 173 op/s
Nov 29 02:52:49 np0005539563 podman[278101]: 2025-11-29 07:52:49.642414692 +0000 UTC m=+0.163350470 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:52:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:50 np0005539563 nova_compute[252253]: 2025-11-29 07:52:50.450 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:51 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:51Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:7c:50 10.100.0.14
Nov 29 02:52:51 np0005539563 ovn_controller[148841]: 2025-11-29T07:52:51Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:7c:50 10.100.0.14
Nov 29 02:52:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:51.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 357 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 213 op/s
Nov 29 02:52:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:52 np0005539563 nova_compute[252253]: 2025-11-29 07:52:52.929 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:52 np0005539563 nova_compute[252253]: 2025-11-29 07:52:52.930 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:52 np0005539563 nova_compute[252253]: 2025-11-29 07:52:52.968 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.083 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.084 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.092 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.092 252257 INFO nova.compute.claims [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.254 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:53.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 357 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Nov 29 02:52:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80401445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.700 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.707 252257 DEBUG nova.compute.provider_tree [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.782 252257 DEBUG nova.scheduler.client.report [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.809 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.810 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.861 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.861 252257 DEBUG nova.network.neutron [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.879 252257 INFO nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:52:53 np0005539563 nova_compute[252253]: 2025-11-29 07:52:53.919 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.029 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.030 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.030 252257 INFO nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Creating image(s)#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.054 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:54.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.098 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.122 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.125 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.203 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.204 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.205 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.205 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.233 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.236 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 17cfda51-18c0-426a-93a5-45208fdb5da2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.681 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 17cfda51-18c0-426a-93a5-45208fdb5da2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.718 252257 DEBUG nova.policy [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8551065d65214410b616d2a71729df0a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '800c0f050e95457384eee582d6da0afa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.761 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] resizing rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.872 252257 DEBUG nova.objects.instance [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'migration_context' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.886 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.886 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Ensure instance console log exists: /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.886 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.887 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:54 np0005539563 nova_compute[252253]: 2025-11-29 07:52:54.887 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:55.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:55 np0005539563 nova_compute[252253]: 2025-11-29 07:52:55.453 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 436 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.2 MiB/s wr, 422 op/s
Nov 29 02:52:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:57 np0005539563 nova_compute[252253]: 2025-11-29 07:52:57.273 252257 DEBUG nova.network.neutron [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Successfully created port: 08cb43c0-b482-46af-8fdc-a825b402c6e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:52:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:57.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 436 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 861 KiB/s rd, 7.2 MiB/s wr, 367 op/s
Nov 29 02:52:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.332 252257 DEBUG nova.network.neutron [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Successfully updated port: 08cb43c0-b482-46af-8fdc-a825b402c6e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.348 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.348 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquired lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.349 252257 DEBUG nova.network.neutron [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:52:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:52:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:59.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.497 252257 DEBUG nova.network.neutron [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:52:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 448 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 993 KiB/s rd, 8.6 MiB/s wr, 405 op/s
Nov 29 02:52:59 np0005539563 nova_compute[252253]: 2025-11-29 07:52:59.901 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9021] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9028] device (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9042] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9046] device (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9059] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9066] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9071] device (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:52:59 np0005539563 NetworkManager[48981]: <info>  [1764402779.9075] device (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:53:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.145 252257 DEBUG nova.compute.manager [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-changed-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.146 252257 DEBUG nova.compute.manager [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Refreshing instance network info cache due to event network-changed-08cb43c0-b482-46af-8fdc-a825b402c6e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.146 252257 DEBUG oslo_concurrency.lockutils [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:00 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:00Z|00112|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:53:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4055466643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:53:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:53:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4055466643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.595 252257 DEBUG nova.compute.manager [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.595 252257 DEBUG nova.compute.manager [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing instance network info cache due to event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.595 252257 DEBUG oslo_concurrency.lockutils [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.595 252257 DEBUG oslo_concurrency.lockutils [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.595 252257 DEBUG nova.network.neutron [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.790 252257 DEBUG nova.network.neutron [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updating instance_info_cache with network_info: [{"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.824 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Releasing lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.824 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Instance network_info: |[{"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.825 252257 DEBUG oslo_concurrency.lockutils [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.825 252257 DEBUG nova.network.neutron [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Refreshing network info cache for port 08cb43c0-b482-46af-8fdc-a825b402c6e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.827 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Start _get_guest_xml network_info=[{"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.831 252257 WARNING nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.838 252257 DEBUG nova.virt.libvirt.host [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.839 252257 DEBUG nova.virt.libvirt.host [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.854 252257 DEBUG nova.virt.libvirt.host [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.855 252257 DEBUG nova.virt.libvirt.host [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.857 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.858 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.858 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.859 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.859 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.860 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.860 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.860 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.861 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.861 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.862 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.862 252257 DEBUG nova.virt.hardware [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:53:00 np0005539563 nova_compute[252253]: 2025-11-29 07:53:00.867 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4146737388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.326 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.351 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.354 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.443446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781443538, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2462, "num_deletes": 514, "total_data_size": 3616860, "memory_usage": 3680672, "flush_reason": "Manual Compaction"}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 29 02:53:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:01.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781464308, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2410991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25107, "largest_seqno": 27568, "table_properties": {"data_size": 2402493, "index_size": 4352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 24255, "raw_average_key_size": 20, "raw_value_size": 2381918, "raw_average_value_size": 1973, "num_data_blocks": 192, "num_entries": 1207, "num_filter_entries": 1207, "num_deletions": 514, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402590, "oldest_key_time": 1764402590, "file_creation_time": 1764402781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 20970 microseconds, and 6613 cpu microseconds.
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.464439) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2410991 bytes OK
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.464489) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.467507) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.467537) EVENT_LOG_v1 {"time_micros": 1764402781467528, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.467609) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3605700, prev total WAL file size 3605700, number of live WAL files 2.
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.468693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353036' seq:72057594037927935, type:22 .. '6C6F676D00373630' seq:0, type:0; will stop at (end)
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2354KB)], [56(10221KB)]
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781468823, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12878181, "oldest_snapshot_seqno": -1}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5707 keys, 10029210 bytes, temperature: kUnknown
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781603110, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10029210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9990633, "index_size": 23235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 146320, "raw_average_key_size": 25, "raw_value_size": 9887584, "raw_average_value_size": 1732, "num_data_blocks": 942, "num_entries": 5707, "num_filter_entries": 5707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.603377) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10029210 bytes
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.605681) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.8 rd, 74.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 6674, records dropped: 967 output_compression: NoCompression
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.605708) EVENT_LOG_v1 {"time_micros": 1764402781605695, "job": 30, "event": "compaction_finished", "compaction_time_micros": 134370, "compaction_time_cpu_micros": 23516, "output_level": 6, "num_output_files": 1, "total_output_size": 10029210, "num_input_records": 6674, "num_output_records": 5707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781606234, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402781607977, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.468504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.608013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.608017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.608019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.608020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:01.608031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 388 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.1 MiB/s wr, 503 op/s
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:53:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320301644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.779 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.782 252257 DEBUG nova.virt.libvirt.vif [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1113332245',display_name='tempest-VolumesAdminNegativeTest-server-1113332245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1113332245',id=36,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDQB63kCaQSwlw6cFguUzraE1hRm0I6I5y2+cnPYbsRok7wWB0w+8k4zrs0KRS+bkEJXOqxIBWOxb8lETWXFg5Vkpio2sZRy/K2dDYH17tDqll4pgElZwqq1RcDM+Oyvmg==',key_name='tempest-keypair-605237032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='800c0f050e95457384eee582d6da0afa',ramdisk_id='',reservation_id='r-xocnody7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1765250615',owner_user_name='tempest-VolumesAdminNegativeTest-1765250615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8551065d65214410b616d2a71729df0a',uuid=17cfda51-18c0-426a-93a5-45208fdb5da2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.783 252257 DEBUG nova.network.os_vif_util [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converting VIF {"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.784 252257 DEBUG nova.network.os_vif_util [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.786 252257 DEBUG nova.objects.instance [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'pci_devices' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.800 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <uuid>17cfda51-18c0-426a-93a5-45208fdb5da2</uuid>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <name>instance-00000024</name>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:name>tempest-VolumesAdminNegativeTest-server-1113332245</nova:name>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:53:00</nova:creationTime>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:user uuid="8551065d65214410b616d2a71729df0a">tempest-VolumesAdminNegativeTest-1765250615-project-member</nova:user>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:project uuid="800c0f050e95457384eee582d6da0afa">tempest-VolumesAdminNegativeTest-1765250615</nova:project>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <nova:port uuid="08cb43c0-b482-46af-8fdc-a825b402c6e9">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <entry name="serial">17cfda51-18c0-426a-93a5-45208fdb5da2</entry>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <entry name="uuid">17cfda51-18c0-426a-93a5-45208fdb5da2</entry>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/17cfda51-18c0-426a-93a5-45208fdb5da2_disk">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/17cfda51-18c0-426a-93a5-45208fdb5da2_disk.config">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:17:5e:80"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <target dev="tap08cb43c0-b4"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/console.log" append="off"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:53:01 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:53:01 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:53:01 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:53:01 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.802 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Preparing to wait for external event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.802 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.802 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.802 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.803 252257 DEBUG nova.virt.libvirt.vif [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1113332245',display_name='tempest-VolumesAdminNegativeTest-server-1113332245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1113332245',id=36,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDQB63kCaQSwlw6cFguUzraE1hRm0I6I5y2+cnPYbsRok7wWB0w+8k4zrs0KRS+bkEJXOqxIBWOxb8lETWXFg5Vkpio2sZRy/K2dDYH17tDqll4pgElZwqq1RcDM+Oyvmg==',key_name='tempest-keypair-605237032',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='800c0f050e95457384eee582d6da0afa',ramdisk_id='',reservation_id='r-xocnody7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1765250615',owner_user_name='tempest-VolumesAdminNegativeTest-1765250615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8551065d65214410b616d2a71729df0a',uuid=17cfda51-18c0-426a-93a5-45208fdb5da2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.803 252257 DEBUG nova.network.os_vif_util [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converting VIF {"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.804 252257 DEBUG nova.network.os_vif_util [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.804 252257 DEBUG os_vif [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.805 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.806 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.809 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08cb43c0-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.810 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08cb43c0-b4, col_values=(('external_ids', {'iface-id': '08cb43c0-b482-46af-8fdc-a825b402c6e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:5e:80', 'vm-uuid': '17cfda51-18c0-426a-93a5-45208fdb5da2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.811 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:01 np0005539563 NetworkManager[48981]: <info>  [1764402781.8126] manager: (tap08cb43c0-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.814 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.818 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.819 252257 INFO os_vif [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4')#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.874 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.875 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.875 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No VIF found with MAC fa:16:3e:17:5e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.876 252257 INFO nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Using config drive#033[00m
Nov 29 02:53:01 np0005539563 nova_compute[252253]: 2025-11-29 07:53:01.901 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:02.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.368 252257 INFO nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Creating config drive at /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/disk.config#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.376 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7v7i4l0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.461057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782461102, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 273, "num_deletes": 251, "total_data_size": 47999, "memory_usage": 54192, "flush_reason": "Manual Compaction"}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782463447, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 47910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27569, "largest_seqno": 27841, "table_properties": {"data_size": 46026, "index_size": 113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4910, "raw_average_key_size": 18, "raw_value_size": 42336, "raw_average_value_size": 158, "num_data_blocks": 5, "num_entries": 267, "num_filter_entries": 267, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402781, "oldest_key_time": 1764402781, "file_creation_time": 1764402782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 2406 microseconds, and 669 cpu microseconds.
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.463472) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 47910 bytes OK
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.463485) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.464384) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.464396) EVENT_LOG_v1 {"time_micros": 1764402782464392, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.464405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 45921, prev total WAL file size 45921, number of live WAL files 2.
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.464653) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(46KB)], [59(9794KB)]
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782464700, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10077120, "oldest_snapshot_seqno": -1}
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.510 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7v7i4l0" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5464 keys, 7732474 bytes, temperature: kUnknown
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782528884, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 7732474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7697482, "index_size": 20226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141976, "raw_average_key_size": 25, "raw_value_size": 7600581, "raw_average_value_size": 1391, "num_data_blocks": 808, "num_entries": 5464, "num_filter_entries": 5464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.529093) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7732474 bytes
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.530813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(371.7) write-amplify(161.4) OK, records in: 5974, records dropped: 510 output_compression: NoCompression
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.530834) EVENT_LOG_v1 {"time_micros": 1764402782530824, "job": 32, "event": "compaction_finished", "compaction_time_micros": 64246, "compaction_time_cpu_micros": 16344, "output_level": 6, "num_output_files": 1, "total_output_size": 7732474, "num_input_records": 5974, "num_output_records": 5464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782530957, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402782533052, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.464580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.533108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.533114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.533116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.533118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:53:02.533119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.541 252257 DEBUG nova.storage.rbd_utils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] rbd image 17cfda51-18c0-426a-93a5-45208fdb5da2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.546 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/disk.config 17cfda51-18c0-426a-93a5-45208fdb5da2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.570 252257 DEBUG nova.network.neutron [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updated VIF entry in instance network info cache for port 08cb43c0-b482-46af-8fdc-a825b402c6e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.571 252257 DEBUG nova.network.neutron [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updating instance_info_cache with network_info: [{"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.595 252257 DEBUG oslo_concurrency.lockutils [req-4f215a02-8481-4e03-a332-541300e43cec req-dc920236-0116-4da3-9527-9ce53879184e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.694 252257 DEBUG nova.network.neutron [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updated VIF entry in instance network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.694 252257 DEBUG nova.network.neutron [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.712 252257 DEBUG oslo_concurrency.processutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/disk.config 17cfda51-18c0-426a-93a5-45208fdb5da2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.712 252257 INFO nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Deleting local config drive /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2/disk.config because it was imported into RBD.#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.715 252257 DEBUG oslo_concurrency.lockutils [req-2f8432e5-3a79-4982-915d-590feaead302 req-74933b75-10c6-43ee-9b07-570cff7a849a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:02 np0005539563 kernel: tap08cb43c0-b4: entered promiscuous mode
Nov 29 02:53:02 np0005539563 NetworkManager[48981]: <info>  [1764402782.7597] manager: (tap08cb43c0-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.761 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:02Z|00113|binding|INFO|Claiming lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 for this chassis.
Nov 29 02:53:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:02Z|00114|binding|INFO|08cb43c0-b482-46af-8fdc-a825b402c6e9: Claiming fa:16:3e:17:5e:80 10.100.0.13
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.768 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5e:80 10.100.0.13'], port_security=['fa:16:3e:17:5e:80 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17cfda51-18c0-426a-93a5-45208fdb5da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbef03be-c0f0-4708-987b-5002a6990bb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '800c0f050e95457384eee582d6da0afa', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed6d3fed-23e0-4b6a-91e0-86a4c89e0306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0e8f2a3-aeb4-4405-bc4f-8521ba3c2988, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=08cb43c0-b482-46af-8fdc-a825b402c6e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.769 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 08cb43c0-b482-46af-8fdc-a825b402c6e9 in datapath bbef03be-c0f0-4708-987b-5002a6990bb1 bound to our chassis#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.771 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bbef03be-c0f0-4708-987b-5002a6990bb1#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.782 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[de359c53-8ffd-4a0e-a9b0-9ca4692fa3f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.783 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.783 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbbef03be-c1 in ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:02Z|00115|binding|INFO|Setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 ovn-installed in OVS
Nov 29 02:53:02 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:02Z|00116|binding|INFO|Setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 up in Southbound
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.784 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.785 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbbef03be-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.785 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[663b4186-addc-4fda-a833-47f92da6de9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.786 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf8ad6a-38d4-40ef-b215-9a95c219cd60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 nova_compute[252253]: 2025-11-29 07:53:02.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:02 np0005539563 systemd-udevd[278646]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.797 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[88fc1aa4-d201-420e-b5b0-17671fe263cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 systemd-machined[213024]: New machine qemu-17-instance-00000024.
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.811 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e92c96-7068-4fb2-800d-1117f2f09cdf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 systemd[1]: Started Virtual Machine qemu-17-instance-00000024.
Nov 29 02:53:02 np0005539563 NetworkManager[48981]: <info>  [1764402782.8231] device (tap08cb43c0-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:53:02 np0005539563 NetworkManager[48981]: <info>  [1764402782.8242] device (tap08cb43c0-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.850 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[99074175-2820-4cfe-b892-4d179471e7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.855 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4a31a396-d85e-4523-8b11-634aee67b493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 NetworkManager[48981]: <info>  [1764402782.8561] manager: (tapbbef03be-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 02:53:02 np0005539563 systemd-udevd[278651]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.889 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a46f9a-59d9-49c9-95ae-12f1ff3ab8ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.894 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bb76a8ec-2792-4044-af35-b230c7717aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 NetworkManager[48981]: <info>  [1764402782.9186] device (tapbbef03be-c0): carrier: link connected
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.923 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[058f3eed-6c95-4768-8f6e-8d6842a531fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.944 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3764f950-7af6-4f3d-ac91-6bb73a7f167f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbbef03be-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:3d:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575069, 'reachable_time': 44225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278694, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.959 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[07061779-7a98-4098-92d8-94e7b237e892]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:3deb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575069, 'tstamp': 575069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278697, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:02.974 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[502c016a-4ca8-4bc4-a9ca-962190a4e3a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbbef03be-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:3d:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575069, 'reachable_time': 44225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278700, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.004 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7036d052-7de8-492a-ab5e-cc25e76061fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.068 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[39e136fe-92c7-466f-ab5b-015e785c4730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.070 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbef03be-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.070 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.071 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbef03be-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.073 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:03 np0005539563 NetworkManager[48981]: <info>  [1764402783.0740] manager: (tapbbef03be-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 02:53:03 np0005539563 kernel: tapbbef03be-c0: entered promiscuous mode
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.076 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.077 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbbef03be-c0, col_values=(('external_ids', {'iface-id': 'cf0990aa-411e-4273-b97e-3c36b1dfaef8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:03 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:03Z|00117|binding|INFO|Releasing lport cf0990aa-411e-4273-b97e-3c36b1dfaef8 from this chassis (sb_readonly=0)
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.080 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bbef03be-c0f0-4708-987b-5002a6990bb1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bbef03be-c0f0-4708-987b-5002a6990bb1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.081 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2a6077-1cef-4a3b-b75e-aa4940a8b1b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.082 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-bbef03be-c0f0-4708-987b-5002a6990bb1
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/bbef03be-c0f0-4708-987b-5002a6990bb1.pid.haproxy
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID bbef03be-c0f0-4708-987b-5002a6990bb1
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:53:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:03.083 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'env', 'PROCESS_TAG=haproxy-bbef03be-c0f0-4708-987b-5002a6990bb1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bbef03be-c0f0-4708-987b-5002a6990bb1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.430 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402783.429299, 17cfda51-18c0-426a-93a5-45208fdb5da2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.431 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] VM Started (Lifecycle Event)#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.453 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:03.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:03 np0005539563 podman[278827]: 2025-11-29 07:53:03.457265697 +0000 UTC m=+0.053458024 container create c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.458 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402783.4294572, 17cfda51-18c0-426a-93a5-45208fdb5da2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.458 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.484 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.488 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:03 np0005539563 systemd[1]: Started libpod-conmon-c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70.scope.
Nov 29 02:53:03 np0005539563 nova_compute[252253]: 2025-11-29 07:53:03.504 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:53:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:03 np0005539563 podman[278827]: 2025-11-29 07:53:03.430211731 +0000 UTC m=+0.026403948 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476ed4c04517bbd9a78ea4712cb67119d4576cfde32ffe25cea277d13cb668ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:03 np0005539563 podman[278827]: 2025-11-29 07:53:03.537847946 +0000 UTC m=+0.134040173 container init c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 02:53:03 np0005539563 podman[278827]: 2025-11-29 07:53:03.543400297 +0000 UTC m=+0.139592484 container start c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:53:03 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [NOTICE]   (278892) : New worker (278894) forked
Nov 29 02:53:03 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [NOTICE]   (278892) : Loading success.
Nov 29 02:53:03 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:53:03 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:53:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 388 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 454 op/s
Nov 29 02:53:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:53:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.839 252257 DEBUG nova.compute.manager [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.840 252257 DEBUG oslo_concurrency.lockutils [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.840 252257 DEBUG oslo_concurrency.lockutils [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.840 252257 DEBUG oslo_concurrency.lockutils [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.840 252257 DEBUG nova.compute.manager [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Processing event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.841 252257 DEBUG nova.compute.manager [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.841 252257 DEBUG oslo_concurrency.lockutils [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.841 252257 DEBUG oslo_concurrency.lockutils [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.841 252257 DEBUG oslo_concurrency.lockutils [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.841 252257 DEBUG nova.compute.manager [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.841 252257 WARNING nova.compute.manager [req-59bbd6af-68ef-4794-adb7-a11c3c682db1 req-7c7bb514-d5da-4a04-9050-a8302bad47a7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received unexpected event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with vm_state building and task_state spawning.
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.842 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.845 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402784.8452828, 17cfda51-18c0-426a-93a5-45208fdb5da2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.845 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] VM Resumed (Lifecycle Event)
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.847 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.849 252257 INFO nova.virt.libvirt.driver [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Instance spawned successfully.
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.849 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:53:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:04.898 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:04.900 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:04.901 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.904 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.910 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.913 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.913 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.913 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.914 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.914 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.914 252257 DEBUG nova.virt.libvirt.driver [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:53:04 np0005539563 nova_compute[252253]: 2025-11-29 07:53:04.951 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:53:05 np0005539563 nova_compute[252253]: 2025-11-29 07:53:05.030 252257 INFO nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Took 11.00 seconds to spawn the instance on the hypervisor.
Nov 29 02:53:05 np0005539563 nova_compute[252253]: 2025-11-29 07:53:05.030 252257 DEBUG nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:05 np0005539563 nova_compute[252253]: 2025-11-29 07:53:05.115 252257 INFO nova.compute.manager [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Took 12.09 seconds to build instance.
Nov 29 02:53:05 np0005539563 nova_compute[252253]: 2025-11-29 07:53:05.132 252257 DEBUG oslo_concurrency.lockutils [None req-3c78ab1c-9a8f-4055-9b0d-32cc464e5306 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:05 np0005539563 nova_compute[252253]: 2025-11-29 07:53:05.457 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:53:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:53:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:53:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:53:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 260 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.0 MiB/s wr, 579 op/s
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a9604848-759b-442f-806f-25f4d47e0a71 does not exist
Nov 29 02:53:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 450ffafb-7f3f-48cb-9f3a-6c26400c3539 does not exist
Nov 29 02:53:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3ecacf4d-e848-400a-89f8-6a8dcc88c21f does not exist
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:53:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:06.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:06 np0005539563 podman[279075]: 2025-11-29 07:53:06.684669953 +0000 UTC m=+0.025238147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:06 np0005539563 nova_compute[252253]: 2025-11-29 07:53:06.813 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:06 np0005539563 nova_compute[252253]: 2025-11-29 07:53:06.968 252257 DEBUG nova.compute.manager [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:06 np0005539563 nova_compute[252253]: 2025-11-29 07:53:06.968 252257 DEBUG nova.compute.manager [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing instance network info cache due to event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:53:06 np0005539563 nova_compute[252253]: 2025-11-29 07:53:06.969 252257 DEBUG oslo_concurrency.lockutils [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:06 np0005539563 nova_compute[252253]: 2025-11-29 07:53:06.969 252257 DEBUG oslo_concurrency.lockutils [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:06 np0005539563 nova_compute[252253]: 2025-11-29 07:53:06.969 252257 DEBUG nova.network.neutron [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:53:07 np0005539563 podman[279075]: 2025-11-29 07:53:07.044004105 +0000 UTC m=+0.384572269 container create bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:53:07 np0005539563 systemd[1]: Started libpod-conmon-bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e.scope.
Nov 29 02:53:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:07 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:07 np0005539563 podman[279075]: 2025-11-29 07:53:07.164047387 +0000 UTC m=+0.504615551 container init bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:53:07 np0005539563 podman[279075]: 2025-11-29 07:53:07.173278637 +0000 UTC m=+0.513846801 container start bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:53:07 np0005539563 inspiring_perlman[279092]: 167 167
Nov 29 02:53:07 np0005539563 systemd[1]: libpod-bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e.scope: Deactivated successfully.
Nov 29 02:53:07 np0005539563 podman[279075]: 2025-11-29 07:53:07.186211759 +0000 UTC m=+0.526780033 container attach bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:07 np0005539563 podman[279075]: 2025-11-29 07:53:07.187660759 +0000 UTC m=+0.528228923 container died bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:53:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-acfbcf8f98609254553e832618abb49ba475fbd93e504fb8b55ccab223e24e9d-merged.mount: Deactivated successfully.
Nov 29 02:53:07 np0005539563 podman[279075]: 2025-11-29 07:53:07.317352072 +0000 UTC m=+0.657920236 container remove bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 02:53:07 np0005539563 systemd[1]: libpod-conmon-bd8b92d940f51dc9f91d1f2d9827c77279b1cb7e4021444cb3b169d844b86f4e.scope: Deactivated successfully.
Nov 29 02:53:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:07.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:07 np0005539563 podman[279116]: 2025-11-29 07:53:07.488081711 +0000 UTC m=+0.041858079 container create 45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:07 np0005539563 systemd[1]: Started libpod-conmon-45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0.scope.
Nov 29 02:53:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:07 np0005539563 podman[279116]: 2025-11-29 07:53:07.466847033 +0000 UTC m=+0.020623411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221224e819e5d3f9ea550f0eac71efc78cd58b4f541c69b529811724eb140070/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221224e819e5d3f9ea550f0eac71efc78cd58b4f541c69b529811724eb140070/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221224e819e5d3f9ea550f0eac71efc78cd58b4f541c69b529811724eb140070/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221224e819e5d3f9ea550f0eac71efc78cd58b4f541c69b529811724eb140070/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221224e819e5d3f9ea550f0eac71efc78cd58b4f541c69b529811724eb140070/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:07 np0005539563 podman[279116]: 2025-11-29 07:53:07.604358939 +0000 UTC m=+0.158135317 container init 45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:53:07 np0005539563 podman[279116]: 2025-11-29 07:53:07.610238239 +0000 UTC m=+0.164014607 container start 45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:53:07 np0005539563 podman[279116]: 2025-11-29 07:53:07.613598581 +0000 UTC m=+0.167374939 container attach 45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:53:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 260 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.5 MiB/s wr, 277 op/s
Nov 29 02:53:07 np0005539563 nova_compute[252253]: 2025-11-29 07:53:07.823 252257 DEBUG nova.compute.manager [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-changed-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:07 np0005539563 nova_compute[252253]: 2025-11-29 07:53:07.824 252257 DEBUG nova.compute.manager [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Refreshing instance network info cache due to event network-changed-08cb43c0-b482-46af-8fdc-a825b402c6e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:53:07 np0005539563 nova_compute[252253]: 2025-11-29 07:53:07.825 252257 DEBUG oslo_concurrency.lockutils [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:07 np0005539563 nova_compute[252253]: 2025-11-29 07:53:07.826 252257 DEBUG oslo_concurrency.lockutils [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:07 np0005539563 nova_compute[252253]: 2025-11-29 07:53:07.826 252257 DEBUG nova.network.neutron [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Refreshing network info cache for port 08cb43c0-b482-46af-8fdc-a825b402c6e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:53:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:08 np0005539563 competent_chatelet[279132]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:53:08 np0005539563 competent_chatelet[279132]: --> relative data size: 1.0
Nov 29 02:53:08 np0005539563 competent_chatelet[279132]: --> All data devices are unavailable
Nov 29 02:53:08 np0005539563 systemd[1]: libpod-45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0.scope: Deactivated successfully.
Nov 29 02:53:08 np0005539563 podman[279116]: 2025-11-29 07:53:08.395307549 +0000 UTC m=+0.949083907 container died 45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:53:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-221224e819e5d3f9ea550f0eac71efc78cd58b4f541c69b529811724eb140070-merged.mount: Deactivated successfully.
Nov 29 02:53:08 np0005539563 podman[279116]: 2025-11-29 07:53:08.871156428 +0000 UTC m=+1.424932786 container remove 45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:08 np0005539563 systemd[1]: libpod-conmon-45b69b647adcf020e888c0f5f56230815433a4021184f2e1482830b31e9687c0.scope: Deactivated successfully.
Nov 29 02:53:09 np0005539563 nova_compute[252253]: 2025-11-29 07:53:09.237 252257 DEBUG nova.network.neutron [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updated VIF entry in instance network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:09 np0005539563 nova_compute[252253]: 2025-11-29 07:53:09.238 252257 DEBUG nova.network.neutron [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:09 np0005539563 nova_compute[252253]: 2025-11-29 07:53:09.256 252257 DEBUG oslo_concurrency.lockutils [req-bc41ed25-fc99-4c32-b2f3-5d728616f922 req-1784a6d4-31c6-4ba8-a36b-a29532a4dbab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:09.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.460113729 +0000 UTC m=+0.065658355 container create e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.418254032 +0000 UTC m=+0.023798678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:09 np0005539563 systemd[1]: Started libpod-conmon-e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71.scope.
Nov 29 02:53:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:09 np0005539563 nova_compute[252253]: 2025-11-29 07:53:09.629 252257 DEBUG nova.network.neutron [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updated VIF entry in instance network info cache for port 08cb43c0-b482-46af-8fdc-a825b402c6e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:09 np0005539563 nova_compute[252253]: 2025-11-29 07:53:09.630 252257 DEBUG nova.network.neutron [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updating instance_info_cache with network_info: [{"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 260 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 302 op/s
Nov 29 02:53:09 np0005539563 nova_compute[252253]: 2025-11-29 07:53:09.651 252257 DEBUG oslo_concurrency.lockutils [req-ec334ca4-aa61-45d4-80de-186ec2231d29 req-1daef2b1-5485-49d0-8b2d-31cd6bbec3b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.652373692 +0000 UTC m=+0.257918348 container init e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.659191338 +0000 UTC m=+0.264735954 container start e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.662370644 +0000 UTC m=+0.267915260 container attach e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:53:09 np0005539563 zealous_leavitt[279313]: 167 167
Nov 29 02:53:09 np0005539563 systemd[1]: libpod-e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71.scope: Deactivated successfully.
Nov 29 02:53:09 np0005539563 conmon[279313]: conmon e18a6f38c48668a19a19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71.scope/container/memory.events
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.665768156 +0000 UTC m=+0.271312782 container died e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:53:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3938824db5c37bea0ead52a19c1dd608464d40c8572d31b00d8336df14b12881-merged.mount: Deactivated successfully.
Nov 29 02:53:09 np0005539563 podman[279297]: 2025-11-29 07:53:09.797991329 +0000 UTC m=+0.403535955 container remove e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:53:09 np0005539563 systemd[1]: libpod-conmon-e18a6f38c48668a19a1959ffaa1bff2d841dcbf52804ce1c2f0768d58b615d71.scope: Deactivated successfully.
Nov 29 02:53:09 np0005539563 podman[279338]: 2025-11-29 07:53:09.965651664 +0000 UTC m=+0.038371703 container create ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:53:10 np0005539563 systemd[1]: Started libpod-conmon-ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319.scope.
Nov 29 02:53:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92cf127bba49325128fcb3412a7ae9db970790257ee325b248860a1395139b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92cf127bba49325128fcb3412a7ae9db970790257ee325b248860a1395139b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92cf127bba49325128fcb3412a7ae9db970790257ee325b248860a1395139b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92cf127bba49325128fcb3412a7ae9db970790257ee325b248860a1395139b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:10 np0005539563 podman[279338]: 2025-11-29 07:53:09.947747638 +0000 UTC m=+0.020467697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:10 np0005539563 podman[279338]: 2025-11-29 07:53:10.051960829 +0000 UTC m=+0.124680898 container init ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:53:10 np0005539563 podman[279338]: 2025-11-29 07:53:10.063317508 +0000 UTC m=+0.136037587 container start ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:53:10 np0005539563 podman[279338]: 2025-11-29 07:53:10.066650848 +0000 UTC m=+0.139370877 container attach ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:10 np0005539563 nova_compute[252253]: 2025-11-29 07:53:10.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]: {
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:    "0": [
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:        {
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "devices": [
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "/dev/loop3"
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            ],
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "lv_name": "ceph_lv0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "lv_size": "7511998464",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "name": "ceph_lv0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "tags": {
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.cluster_name": "ceph",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.crush_device_class": "",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.encrypted": "0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.osd_id": "0",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.type": "block",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:                "ceph.vdo": "0"
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            },
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "type": "block",
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:            "vg_name": "ceph_vg0"
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:        }
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]:    ]
Nov 29 02:53:10 np0005539563 gracious_pasteur[279355]: }
Nov 29 02:53:11 np0005539563 systemd[1]: libpod-ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319.scope: Deactivated successfully.
Nov 29 02:53:11 np0005539563 conmon[279355]: conmon ab2b23bd112636352287 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319.scope/container/memory.events
Nov 29 02:53:11 np0005539563 podman[279338]: 2025-11-29 07:53:11.02146218 +0000 UTC m=+1.094182229 container died ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:53:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f92cf127bba49325128fcb3412a7ae9db970790257ee325b248860a1395139b8-merged.mount: Deactivated successfully.
Nov 29 02:53:11 np0005539563 podman[279338]: 2025-11-29 07:53:11.081388627 +0000 UTC m=+1.154108666 container remove ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pasteur, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:53:11 np0005539563 systemd[1]: libpod-conmon-ab2b23bd112636352287f9a02cb1aa95566acd4e34072371a2eb93b1a0e64319.scope: Deactivated successfully.
Nov 29 02:53:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:11.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:11 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:11Z|00118|binding|INFO|Releasing lport cf0990aa-411e-4273-b97e-3c36b1dfaef8 from this chassis (sb_readonly=0)
Nov 29 02:53:11 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:11Z|00119|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 02:53:11 np0005539563 nova_compute[252253]: 2025-11-29 07:53:11.601 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 284 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 342 op/s
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.726986248 +0000 UTC m=+0.037824919 container create 1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:53:11 np0005539563 systemd[1]: Started libpod-conmon-1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e.scope.
Nov 29 02:53:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.710603293 +0000 UTC m=+0.021441964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.808309728 +0000 UTC m=+0.119148419 container init 1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.813763756 +0000 UTC m=+0.124602427 container start 1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:53:11 np0005539563 nova_compute[252253]: 2025-11-29 07:53:11.815 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:11 np0005539563 gifted_jennings[279528]: 167 167
Nov 29 02:53:11 np0005539563 systemd[1]: libpod-1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e.scope: Deactivated successfully.
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.818565727 +0000 UTC m=+0.129404418 container attach 1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.81979214 +0000 UTC m=+0.130630811 container died 1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:53:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-637b6fd289247eb8e4f7b056720a6f48fb3a03c2424a4a5bc08fc21dc784375d-merged.mount: Deactivated successfully.
Nov 29 02:53:11 np0005539563 podman[279512]: 2025-11-29 07:53:11.863920539 +0000 UTC m=+0.174759210 container remove 1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:53:11 np0005539563 systemd[1]: libpod-conmon-1623ef04e100259492fa8c39dbe6e683d838d72d305c374b0c7edcc817d4a38e.scope: Deactivated successfully.
Nov 29 02:53:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:12 np0005539563 podman[279552]: 2025-11-29 07:53:12.037657699 +0000 UTC m=+0.024157018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:12 np0005539563 podman[279552]: 2025-11-29 07:53:12.188113707 +0000 UTC m=+0.174613026 container create bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:12 np0005539563 systemd[1]: Started libpod-conmon-bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75.scope.
Nov 29 02:53:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:53:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b8dc71e382f835c9e0e4bbbd6a095f80bd4cf4e77c09a12ef5b1738498d660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b8dc71e382f835c9e0e4bbbd6a095f80bd4cf4e77c09a12ef5b1738498d660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b8dc71e382f835c9e0e4bbbd6a095f80bd4cf4e77c09a12ef5b1738498d660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b8dc71e382f835c9e0e4bbbd6a095f80bd4cf4e77c09a12ef5b1738498d660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:53:12
Nov 29 02:53:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:53:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:53:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', '.rgw.root', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 02:53:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:53:12 np0005539563 podman[279552]: 2025-11-29 07:53:12.889690688 +0000 UTC m=+0.876190107 container init bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:53:12 np0005539563 podman[279552]: 2025-11-29 07:53:12.898380934 +0000 UTC m=+0.884880243 container start bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:53:12 np0005539563 podman[279552]: 2025-11-29 07:53:12.905142098 +0000 UTC m=+0.891641517 container attach bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:13.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 284 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 227 op/s
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:53:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]: {
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:        "osd_id": 0,
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:        "type": "bluestore"
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]:    }
Nov 29 02:53:13 np0005539563 elated_hamilton[279568]: }
Nov 29 02:53:13 np0005539563 systemd[1]: libpod-bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75.scope: Deactivated successfully.
Nov 29 02:53:13 np0005539563 systemd[1]: libpod-bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75.scope: Consumed 1.010s CPU time.
Nov 29 02:53:13 np0005539563 podman[279552]: 2025-11-29 07:53:13.912498447 +0000 UTC m=+1.898997766 container died bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:53:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-45b8dc71e382f835c9e0e4bbbd6a095f80bd4cf4e77c09a12ef5b1738498d660-merged.mount: Deactivated successfully.
Nov 29 02:53:13 np0005539563 podman[279552]: 2025-11-29 07:53:13.968811207 +0000 UTC m=+1.955310526 container remove bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:13 np0005539563 systemd[1]: libpod-conmon-bfcc193f55946448badecd3e517fb5ce78ce63deacbdde8d14dc6f61f10e5b75.scope: Deactivated successfully.
Nov 29 02:53:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:53:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:53:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fbc76df1-b894-404d-97c9-c99e80fc7612 does not exist
Nov 29 02:53:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8f22d751-d034-497d-a6bf-8f1362edb915 does not exist
Nov 29 02:53:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 62ca0a6f-6141-4ba3-8efc-de38ceb3faca does not exist
Nov 29 02:53:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:14.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:15 np0005539563 nova_compute[252253]: 2025-11-29 07:53:15.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:15.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 318 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 290 op/s
Nov 29 02:53:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:53:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:16 np0005539563 nova_compute[252253]: 2025-11-29 07:53:16.817 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:17.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 318 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 166 op/s
Nov 29 02:53:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:18Z|00120|binding|INFO|Releasing lport cf0990aa-411e-4273-b97e-3c36b1dfaef8 from this chassis (sb_readonly=0)
Nov 29 02:53:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:18Z|00121|binding|INFO|Releasing lport 49d1fa79-68ff-4b00-b078-6119e837e4c3 from this chassis (sb_readonly=0)
Nov 29 02:53:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:18.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:18 np0005539563 nova_compute[252253]: 2025-11-29 07:53:18.160 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:19 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:19Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:5e:80 10.100.0.13
Nov 29 02:53:19 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:19Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:5e:80 10.100.0.13
Nov 29 02:53:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:19.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:19 np0005539563 nova_compute[252253]: 2025-11-29 07:53:19.470 252257 DEBUG nova.compute.manager [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:19 np0005539563 nova_compute[252253]: 2025-11-29 07:53:19.472 252257 DEBUG nova.compute.manager [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing instance network info cache due to event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:19 np0005539563 nova_compute[252253]: 2025-11-29 07:53:19.472 252257 DEBUG oslo_concurrency.lockutils [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:19 np0005539563 nova_compute[252253]: 2025-11-29 07:53:19.473 252257 DEBUG oslo_concurrency.lockutils [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:19 np0005539563 nova_compute[252253]: 2025-11-29 07:53:19.473 252257 DEBUG nova.network.neutron [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:53:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 310 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Nov 29 02:53:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:20.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:20 np0005539563 nova_compute[252253]: 2025-11-29 07:53:20.465 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:20 np0005539563 podman[279706]: 2025-11-29 07:53:20.529652939 +0000 UTC m=+0.076557441 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:20 np0005539563 podman[279707]: 2025-11-29 07:53:20.548569273 +0000 UTC m=+0.083133580 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 02:53:20 np0005539563 podman[279708]: 2025-11-29 07:53:20.577751876 +0000 UTC m=+0.107709147 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:53:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:21.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 278 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.2 MiB/s wr, 249 op/s
Nov 29 02:53:21 np0005539563 nova_compute[252253]: 2025-11-29 07:53:21.732 252257 DEBUG nova.network.neutron [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updated VIF entry in instance network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:21 np0005539563 nova_compute[252253]: 2025-11-29 07:53:21.732 252257 DEBUG nova.network.neutron [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:21 np0005539563 nova_compute[252253]: 2025-11-29 07:53:21.755 252257 DEBUG oslo_concurrency.lockutils [req-4fd15b1d-1a3a-44e0-b8e1-1411f09a6963 req-48bcf7c0-06ba-4421-8714-4773851c0876 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:21 np0005539563 nova_compute[252253]: 2025-11-29 07:53:21.818 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:22.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006492786560115392 of space, bias 1.0, pg target 1.9478359680346176 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 02:53:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:53:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:23.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:53:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 278 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 4.8 MiB/s wr, 171 op/s
Nov 29 02:53:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:24.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:25 np0005539563 nova_compute[252253]: 2025-11-29 07:53:25.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 781 KiB/s rd, 6.6 MiB/s wr, 233 op/s
Nov 29 02:53:25 np0005539563 nova_compute[252253]: 2025-11-29 07:53:25.660 252257 DEBUG nova.compute.manager [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:25 np0005539563 nova_compute[252253]: 2025-11-29 07:53:25.661 252257 DEBUG nova.compute.manager [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing instance network info cache due to event network-changed-c606e5a0-f859-492d-827c-6449a1b0dbe4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:53:25 np0005539563 nova_compute[252253]: 2025-11-29 07:53:25.661 252257 DEBUG oslo_concurrency.lockutils [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:25 np0005539563 nova_compute[252253]: 2025-11-29 07:53:25.661 252257 DEBUG oslo_concurrency.lockutils [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:25 np0005539563 nova_compute[252253]: 2025-11-29 07:53:25.661 252257 DEBUG nova.network.neutron [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Refreshing network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:53:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:26.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:26 np0005539563 nova_compute[252253]: 2025-11-29 07:53:26.820 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:27 np0005539563 nova_compute[252253]: 2025-11-29 07:53:27.272 252257 DEBUG nova.network.neutron [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updated VIF entry in instance network info cache for port c606e5a0-f859-492d-827c-6449a1b0dbe4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:53:27 np0005539563 nova_compute[252253]: 2025-11-29 07:53:27.272 252257 DEBUG nova.network.neutron [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [{"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:27 np0005539563 nova_compute[252253]: 2025-11-29 07:53:27.295 252257 DEBUG oslo_concurrency.lockutils [req-1f7836c8-23fc-44a5-aec3-d027a69640d6 req-c78d1621-0771-4589-a0db-9a37d93e8e2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5338f516-8664-4303-aed1-b1d4e5b8e7e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 462 KiB/s rd, 4.0 MiB/s wr, 170 op/s
Nov 29 02:53:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:28.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.403 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.404 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.404 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.405 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.405 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.407 252257 INFO nova.compute.manager [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Terminating instance#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.409 252257 DEBUG nova.compute.manager [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:53:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:29.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:29 np0005539563 kernel: tapc606e5a0-f8 (unregistering): left promiscuous mode
Nov 29 02:53:29 np0005539563 NetworkManager[48981]: <info>  [1764402809.5478] device (tapc606e5a0-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.557 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:29Z|00122|binding|INFO|Releasing lport c606e5a0-f859-492d-827c-6449a1b0dbe4 from this chassis (sb_readonly=0)
Nov 29 02:53:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:29Z|00123|binding|INFO|Setting lport c606e5a0-f859-492d-827c-6449a1b0dbe4 down in Southbound
Nov 29 02:53:29 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:29Z|00124|binding|INFO|Removing iface tapc606e5a0-f8 ovn-installed in OVS
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.583 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.606 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:7c:50 10.100.0.14'], port_security=['fa:16:3e:d1:7c:50 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5338f516-8664-4303-aed1-b1d4e5b8e7e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be6e4a03-649a-413f-8a81-4fef5b740489', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1a31b637613411eaeda132dc499537b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6df00dc5-dca9-4705-acff-a62440113d04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b522ee0-0950-4570-a9e8-0fc3c27b4f72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c606e5a0-f859-492d-827c-6449a1b0dbe4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.609 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c606e5a0-f859-492d-827c-6449a1b0dbe4 in datapath be6e4a03-649a-413f-8a81-4fef5b740489 unbound from our chassis#033[00m
Nov 29 02:53:29 np0005539563 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 29 02:53:29 np0005539563 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000021.scope: Consumed 14.858s CPU time.
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.612 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be6e4a03-649a-413f-8a81-4fef5b740489, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.613 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3676e2-9c46-4cff-aacb-708b21065cdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.614 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 namespace which is not needed anymore#033[00m
Nov 29 02:53:29 np0005539563 systemd-machined[213024]: Machine qemu-16-instance-00000021 terminated.
Nov 29 02:53:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 876 KiB/s rd, 4.0 MiB/s wr, 185 op/s
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.655 252257 INFO nova.virt.libvirt.driver [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Instance destroyed successfully.#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.657 252257 DEBUG nova.objects.instance [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lazy-loading 'resources' on Instance uuid 5338f516-8664-4303-aed1-b1d4e5b8e7e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.778 252257 DEBUG nova.virt.libvirt.vif [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:52:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-325866224',display_name='tempest-FloatingIPsAssociationTestJSON-server-325866224',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-325866224',id=33,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:52:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1a31b637613411eaeda132dc499537b',ramdisk_id='',reservation_id='r-2ee57xly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-120353870',owner_user_name='tempest-FloatingIPsAssociationTestJSON-120353870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:37Z,user_data=None,user_id='2e6a7e8a80384d83b5debf4c717f6e09',uuid=5338f516-8664-4303-aed1-b1d4e5b8e7e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.779 252257 DEBUG nova.network.os_vif_util [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converting VIF {"id": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "address": "fa:16:3e:d1:7c:50", "network": {"id": "be6e4a03-649a-413f-8a81-4fef5b740489", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-438100519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1a31b637613411eaeda132dc499537b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606e5a0-f8", "ovs_interfaceid": "c606e5a0-f859-492d-827c-6449a1b0dbe4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.780 252257 DEBUG nova.network.os_vif_util [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.781 252257 DEBUG os_vif [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.787 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.788 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc606e5a0-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.797 252257 INFO os_vif [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7c:50,bridge_name='br-int',has_traffic_filtering=True,id=c606e5a0-f859-492d-827c-6449a1b0dbe4,network=Network(be6e4a03-649a-413f-8a81-4fef5b740489),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606e5a0-f8')#033[00m
Nov 29 02:53:29 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [NOTICE]   (278030) : haproxy version is 2.8.14-c23fe91
Nov 29 02:53:29 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [NOTICE]   (278030) : path to executable is /usr/sbin/haproxy
Nov 29 02:53:29 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [WARNING]  (278030) : Exiting Master process...
Nov 29 02:53:29 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [ALERT]    (278030) : Current worker (278032) exited with code 143 (Terminated)
Nov 29 02:53:29 np0005539563 neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489[278026]: [WARNING]  (278030) : All workers exited. Exiting... (0)
Nov 29 02:53:29 np0005539563 systemd[1]: libpod-5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482.scope: Deactivated successfully.
Nov 29 02:53:29 np0005539563 podman[279808]: 2025-11-29 07:53:29.839920632 +0000 UTC m=+0.112896179 container died 5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.868 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.869 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.869 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.869 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.869 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482-userdata-shm.mount: Deactivated successfully.
Nov 29 02:53:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-27ccef2379cd65cf976ac8a689f754186a3ff3ccab18d072069a4a944a9412d7-merged.mount: Deactivated successfully.
Nov 29 02:53:29 np0005539563 podman[279808]: 2025-11-29 07:53:29.889005505 +0000 UTC m=+0.161981082 container cleanup 5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:53:29 np0005539563 systemd[1]: libpod-conmon-5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482.scope: Deactivated successfully.
Nov 29 02:53:29 np0005539563 podman[279857]: 2025-11-29 07:53:29.974088797 +0000 UTC m=+0.054942124 container remove 5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.981 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[213c4503-d5a5-48df-a4f7-93d9994a240d]: (4, ('Sat Nov 29 07:53:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 (5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482)\n5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482\nSat Nov 29 07:53:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 (5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482)\n5516deaf520ef584e00495f1da6459ec9e873b8ef0b3ffa784b58033540ee482\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.983 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[20f728b0-2ef1-4350-b5ae-cfdcb6c380c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:29.984 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe6e4a03-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:29 np0005539563 nova_compute[252253]: 2025-11-29 07:53:29.987 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:29 np0005539563 kernel: tapbe6e4a03-60: left promiscuous mode
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.007 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:30.008 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8d181d-6853-4f04-bb65-9dc17e99ccd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.012 252257 DEBUG nova.compute.manager [req-dc23d612-bfcb-4013-a1fb-ff095ee50293 req-3bb80442-65f6-424c-ad66-4ce083062880 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-vif-unplugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.012 252257 DEBUG oslo_concurrency.lockutils [req-dc23d612-bfcb-4013-a1fb-ff095ee50293 req-3bb80442-65f6-424c-ad66-4ce083062880 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.013 252257 DEBUG oslo_concurrency.lockutils [req-dc23d612-bfcb-4013-a1fb-ff095ee50293 req-3bb80442-65f6-424c-ad66-4ce083062880 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.013 252257 DEBUG oslo_concurrency.lockutils [req-dc23d612-bfcb-4013-a1fb-ff095ee50293 req-3bb80442-65f6-424c-ad66-4ce083062880 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.013 252257 DEBUG nova.compute.manager [req-dc23d612-bfcb-4013-a1fb-ff095ee50293 req-3bb80442-65f6-424c-ad66-4ce083062880 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] No waiting events found dispatching network-vif-unplugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.013 252257 DEBUG nova.compute.manager [req-dc23d612-bfcb-4013-a1fb-ff095ee50293 req-3bb80442-65f6-424c-ad66-4ce083062880 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-vif-unplugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:30.031 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6513ce-08c4-431f-8790-3cc47f055a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:30.033 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b02e07-0375-4274-8467-8087ff469b96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:30.054 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bc09bf60-f334-4ff8-8c16-6534ff7ae5cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572323, 'reachable_time': 32067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279876, 'error': None, 'target': 'ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:30 np0005539563 systemd[1]: run-netns-ovnmeta\x2dbe6e4a03\x2d649a\x2d413f\x2d8a81\x2d4fef5b740489.mount: Deactivated successfully.
Nov 29 02:53:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:30.060 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be6e4a03-649a-413f-8a81-4fef5b740489 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:30.062 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[15788205-e33e-4fb5-8859-900e59a9c968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379002420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.384 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.462 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.463 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.472 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.473 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.509 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.521 252257 INFO nova.virt.libvirt.driver [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Deleting instance files /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1_del#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.522 252257 INFO nova.virt.libvirt.driver [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Deletion of /var/lib/nova/instances/5338f516-8664-4303-aed1-b1d4e5b8e7e1_del complete#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.577 252257 INFO nova.compute.manager [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.577 252257 DEBUG oslo.service.loopingcall [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.578 252257 DEBUG nova.compute.manager [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.579 252257 DEBUG nova.network.neutron [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.750 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.751 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=20.876323699951172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.752 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.752 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.968 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5338f516-8664-4303-aed1-b1d4e5b8e7e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.969 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 17cfda51-18c0-426a-93a5-45208fdb5da2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.969 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:53:30 np0005539563 nova_compute[252253]: 2025-11-29 07:53:30.970 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.039 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:31.068 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:31.069 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.077 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:31.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479510454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.526 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.534 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.557 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:53:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.588 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.589 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 168 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 250 op/s
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.672 252257 DEBUG nova.network.neutron [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.698 252257 INFO nova.compute.manager [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Took 1.12 seconds to deallocate network for instance.
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.747 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.747 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.782 252257 DEBUG nova.compute.manager [req-8800f6c9-8ce6-45f5-9384-056ad78cfa40 req-b8fd86a8-27e5-4839-b750-3db2045a93cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-vif-deleted-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:31 np0005539563 nova_compute[252253]: 2025-11-29 07:53:31.809 252257 DEBUG oslo_concurrency.processutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.123 252257 DEBUG nova.compute.manager [req-554c67c4-93ec-42a3-bddf-7651eb99bc9e req-43883e30-f6cf-4d54-8fba-081201bbcba3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.124 252257 DEBUG oslo_concurrency.lockutils [req-554c67c4-93ec-42a3-bddf-7651eb99bc9e req-43883e30-f6cf-4d54-8fba-081201bbcba3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.124 252257 DEBUG oslo_concurrency.lockutils [req-554c67c4-93ec-42a3-bddf-7651eb99bc9e req-43883e30-f6cf-4d54-8fba-081201bbcba3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.125 252257 DEBUG oslo_concurrency.lockutils [req-554c67c4-93ec-42a3-bddf-7651eb99bc9e req-43883e30-f6cf-4d54-8fba-081201bbcba3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.125 252257 DEBUG nova.compute.manager [req-554c67c4-93ec-42a3-bddf-7651eb99bc9e req-43883e30-f6cf-4d54-8fba-081201bbcba3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] No waiting events found dispatching network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.125 252257 WARNING nova.compute.manager [req-554c67c4-93ec-42a3-bddf-7651eb99bc9e req-43883e30-f6cf-4d54-8fba-081201bbcba3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Received unexpected event network-vif-plugged-c606e5a0-f859-492d-827c-6449a1b0dbe4 for instance with vm_state deleted and task_state None.
Nov 29 02:53:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2698841310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.312 252257 DEBUG oslo_concurrency.processutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.319 252257 DEBUG nova.compute.provider_tree [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.343 252257 DEBUG nova.scheduler.client.report [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.383 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.415 252257 INFO nova.scheduler.client.report [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Deleted allocations for instance 5338f516-8664-4303-aed1-b1d4e5b8e7e1
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.493 252257 DEBUG oslo_concurrency.lockutils [None req-4630f09e-ed1b-466f-bd99-e82d720dd6d3 2e6a7e8a80384d83b5debf4c717f6e09 b1a31b637613411eaeda132dc499537b - - default default] Lock "5338f516-8664-4303-aed1-b1d4e5b8e7e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.585 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.586 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.586 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:53:32 np0005539563 nova_compute[252253]: 2025-11-29 07:53:32.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:53:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3474148776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 168 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 169 op/s
Nov 29 02:53:33 np0005539563 nova_compute[252253]: 2025-11-29 07:53:33.818 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:33 np0005539563 nova_compute[252253]: 2025-11-29 07:53:33.818 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:33 np0005539563 nova_compute[252253]: 2025-11-29 07:53:33.819 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:53:33 np0005539563 nova_compute[252253]: 2025-11-29 07:53:33.819 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:34.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:34 np0005539563 nova_compute[252253]: 2025-11-29 07:53:34.791 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:53:35.072 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:53:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:35.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:35 np0005539563 nova_compute[252253]: 2025-11-29 07:53:35.511 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 279 op/s
Nov 29 02:53:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:36 np0005539563 nova_compute[252253]: 2025-11-29 07:53:36.985 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updating instance_info_cache with network_info: [{"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:53:37 np0005539563 nova_compute[252253]: 2025-11-29 07:53:37.005 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-17cfda51-18c0-426a-93a5-45208fdb5da2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:53:37 np0005539563 nova_compute[252253]: 2025-11-29 07:53:37.006 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:53:37 np0005539563 nova_compute[252253]: 2025-11-29 07:53:37.006 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:37 np0005539563 nova_compute[252253]: 2025-11-29 07:53:37.007 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:37 np0005539563 nova_compute[252253]: 2025-11-29 07:53:37.007 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Nov 29 02:53:37 np0005539563 ovn_controller[148841]: 2025-11-29T07:53:37Z|00125|binding|INFO|Releasing lport cf0990aa-411e-4273-b97e-3c36b1dfaef8 from this chassis (sb_readonly=0)
Nov 29 02:53:38 np0005539563 nova_compute[252253]: 2025-11-29 07:53:38.013 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:38.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 167 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 224 op/s
Nov 29 02:53:39 np0005539563 nova_compute[252253]: 2025-11-29 07:53:39.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:40.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:40 np0005539563 nova_compute[252253]: 2025-11-29 07:53:40.512 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 197 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.8 MiB/s wr, 233 op/s
Nov 29 02:53:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.535 252257 DEBUG oslo_concurrency.lockutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.535 252257 DEBUG oslo_concurrency.lockutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.553 252257 DEBUG nova.objects.instance [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'flavor' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.598 252257 DEBUG oslo_concurrency.lockutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.926 252257 DEBUG oslo_concurrency.lockutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.927 252257 DEBUG oslo_concurrency.lockutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:42 np0005539563 nova_compute[252253]: 2025-11-29 07:53:42.928 252257 INFO nova.compute.manager [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Attaching volume a6563453-509a-48d9-9889-41f32f7b7ce4 to /dev/vdb
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.161 252257 DEBUG os_brick.utils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.164 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.180 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.181 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[25325023-8ecc-48dd-a083-12053b0bfa47]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.182 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.192 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.192 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[61b76b6b-5213-426b-8b18-22bae5dd2a7c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.194 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.210 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.211 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[135784a1-0135-46f5-8f1b-aeae40a69e0c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.213 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[f22bdef0-db97-45d6-8326-8f15be9d0f5a]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.214 252257 DEBUG oslo_concurrency.processutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.243 252257 DEBUG oslo_concurrency.processutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.247 252257 DEBUG os_brick.initiator.connectors.lightos [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.247 252257 DEBUG os_brick.initiator.connectors.lightos [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.248 252257 DEBUG os_brick.initiator.connectors.lightos [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.248 252257 DEBUG os_brick.utils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 02:53:43 np0005539563 nova_compute[252253]: 2025-11-29 07:53:43.248 252257 DEBUG nova.virt.block_device [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updating existing volume attachment record: cbda0dce-a2e1-4c0f-b024-7e17a581c64e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 02:53:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 197 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 141 op/s
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.067 252257 DEBUG nova.objects.instance [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'flavor' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.095 252257 DEBUG nova.virt.libvirt.driver [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Attempting to attach volume a6563453-509a-48d9-9889-41f32f7b7ce4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.098 252257 DEBUG nova.virt.libvirt.guest [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-a6563453-509a-48d9-9889-41f32f7b7ce4">
Nov 29 02:53:44 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  </source>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 02:53:44 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  </auth>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:53:44 np0005539563 nova_compute[252253]:  <serial>a6563453-509a-48d9-9889-41f32f7b7ce4</serial>
Nov 29 02:53:44 np0005539563 nova_compute[252253]: </disk>
Nov 29 02:53:44 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 02:53:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.236 252257 DEBUG nova.virt.libvirt.driver [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.237 252257 DEBUG nova.virt.libvirt.driver [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.237 252257 DEBUG nova.virt.libvirt.driver [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.237 252257 DEBUG nova.virt.libvirt.driver [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] No VIF found with MAC fa:16:3e:17:5e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.546 252257 DEBUG oslo_concurrency.lockutils [None req-3062d97b-16c8-4745-9dbd-da972c739ee4 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.653 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402809.6529927, 5338f516-8664-4303-aed1-b1d4e5b8e7e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.654 252257 INFO nova.compute.manager [-] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] VM Stopped (Lifecycle Event)
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.795 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:44 np0005539563 nova_compute[252253]: 2025-11-29 07:53:44.976 252257 DEBUG nova.compute.manager [None req-a4ed16b0-3ab0-4d30-9680-ff53a3c19d8b - - - - - -] [instance: 5338f516-8664-4303-aed1-b1d4e5b8e7e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:53:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:45 np0005539563 nova_compute[252253]: 2025-11-29 07:53:45.515 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 218 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Nov 29 02:53:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.297 252257 DEBUG oslo_concurrency.lockutils [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.297 252257 DEBUG oslo_concurrency.lockutils [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.310 252257 INFO nova.compute.manager [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Detaching volume a6563453-509a-48d9-9889-41f32f7b7ce4
Nov 29 02:53:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.645 252257 INFO nova.virt.block_device [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Attempting to driver detach volume a6563453-509a-48d9-9889-41f32f7b7ce4 from mountpoint /dev/vdb
Nov 29 02:53:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 218 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 119 op/s
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.657 252257 DEBUG nova.virt.libvirt.driver [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Attempting to detach device vdb from instance 17cfda51-18c0-426a-93a5-45208fdb5da2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.658 252257 DEBUG nova.virt.libvirt.guest [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-a6563453-509a-48d9-9889-41f32f7b7ce4">
Nov 29 02:53:47 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  </source>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <serial>a6563453-509a-48d9-9889-41f32f7b7ce4</serial>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]: </disk>
Nov 29 02:53:47 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.766 252257 INFO nova.virt.libvirt.driver [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Successfully detached device vdb from instance 17cfda51-18c0-426a-93a5-45208fdb5da2 from the persistent domain config.
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.769 252257 DEBUG nova.virt.libvirt.driver [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 17cfda51-18c0-426a-93a5-45208fdb5da2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.770 252257 DEBUG nova.virt.libvirt.guest [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-a6563453-509a-48d9-9889-41f32f7b7ce4">
Nov 29 02:53:47 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  </source>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <serial>a6563453-509a-48d9-9889-41f32f7b7ce4</serial>
Nov 29 02:53:47 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:53:47 np0005539563 nova_compute[252253]: </disk>
Nov 29 02:53:47 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.912 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764402827.9124255, 17cfda51-18c0-426a-93a5-45208fdb5da2 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.916 252257 DEBUG nova.virt.libvirt.driver [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 17cfda51-18c0-426a-93a5-45208fdb5da2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 02:53:47 np0005539563 nova_compute[252253]: 2025-11-29 07:53:47.918 252257 INFO nova.virt.libvirt.driver [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Successfully detached device vdb from instance 17cfda51-18c0-426a-93a5-45208fdb5da2 from the live domain config.
Nov 29 02:53:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:48 np0005539563 nova_compute[252253]: 2025-11-29 07:53:48.307 252257 DEBUG nova.objects.instance [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'flavor' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:48 np0005539563 nova_compute[252253]: 2025-11-29 07:53:48.352 252257 DEBUG oslo_concurrency.lockutils [None req-517eae75-c6e1-43b0-8429-288174d2d754 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 220 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Nov 29 02:53:49 np0005539563 nova_compute[252253]: 2025-11-29 07:53:49.798 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:50.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:50 np0005539563 nova_compute[252253]: 2025-11-29 07:53:50.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:51 np0005539563 podman[280033]: 2025-11-29 07:53:51.541944196 +0000 UTC m=+0.060784633 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:53:51 np0005539563 podman[280034]: 2025-11-29 07:53:51.550015585 +0000 UTC m=+0.091529107 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 29 02:53:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:51 np0005539563 podman[280035]: 2025-11-29 07:53:51.603327723 +0000 UTC m=+0.145951366 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:53:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 200 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 191 op/s
Nov 29 02:53:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:52.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:52 np0005539563 nova_compute[252253]: 2025-11-29 07:53:52.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 200 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 166 op/s
Nov 29 02:53:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:54.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:54 np0005539563 nova_compute[252253]: 2025-11-29 07:53:54.799 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:55 np0005539563 nova_compute[252253]: 2025-11-29 07:53:55.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 147 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 189 op/s
Nov 29 02:53:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:56.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 147 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 2.8 MiB/s wr, 101 op/s
Nov 29 02:53:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:58.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:58 np0005539563 nova_compute[252253]: 2025-11-29 07:53:58.680 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:59 np0005539563 nova_compute[252253]: 2025-11-29 07:53:59.043 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:53:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 167 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 3.4 MiB/s wr, 134 op/s
Nov 29 02:53:59 np0005539563 nova_compute[252253]: 2025-11-29 07:53:59.801 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:00 np0005539563 nova_compute[252253]: 2025-11-29 07:54:00.564 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:01.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 29 02:54:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:02 np0005539563 nova_compute[252253]: 2025-11-29 07:54:02.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 02:54:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539563 nova_compute[252253]: 2025-11-29 07:54:04.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:04.899 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:04.899 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:04.900 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:05.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:05 np0005539563 nova_compute[252253]: 2025-11-29 07:54:05.567 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 29 02:54:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:06.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 707 KiB/s rd, 607 KiB/s wr, 67 op/s
Nov 29 02:54:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:08.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:09 np0005539563 nova_compute[252253]: 2025-11-29 07:54:09.512 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:09.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 182 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Nov 29 02:54:09 np0005539563 nova_compute[252253]: 2025-11-29 07:54:09.804 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:10.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:54:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805871239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:54:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:54:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3805871239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:54:10 np0005539563 nova_compute[252253]: 2025-11-29 07:54:10.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:11.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 213 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 29 02:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:12.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:54:12
Nov 29 02:54:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:54:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:54:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.rgw.root', 'images', 'backups', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Nov 29 02:54:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:54:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 213 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Nov 29 02:54:13 np0005539563 nova_compute[252253]: 2025-11-29 07:54:13.691 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:54:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:54:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:14.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539563 nova_compute[252253]: 2025-11-29 07:54:14.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1de9df78-d233-4d42-b724-798e7feeb915 does not exist
Nov 29 02:54:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 23af5dcd-290a-4050-9021-147631ba16a5 does not exist
Nov 29 02:54:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 751c3ca2-5cb9-46e1-afdf-3b5ffae56933 does not exist
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:54:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:15.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:15 np0005539563 nova_compute[252253]: 2025-11-29 07:54:15.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Nov 29 02:54:15 np0005539563 ceph-mgr[74636]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.065970435 +0000 UTC m=+0.061432500 container create aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:54:16 np0005539563 systemd[1]: Started libpod-conmon-aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192.scope.
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.036523065 +0000 UTC m=+0.031985180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:54:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:16.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.273034011 +0000 UTC m=+0.268496176 container init aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.281640995 +0000 UTC m=+0.277103100 container start aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:54:16 np0005539563 upbeat_shannon[280450]: 167 167
Nov 29 02:54:16 np0005539563 systemd[1]: libpod-aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192.scope: Deactivated successfully.
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.338308514 +0000 UTC m=+0.333770619 container attach aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shannon, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.33962563 +0000 UTC m=+0.335087765 container died aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:54:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-acfb2ce46e8ef2c35118d0748be4e3dfa9261c5b5a84562c21d1a2e8d1721eae-merged.mount: Deactivated successfully.
Nov 29 02:54:16 np0005539563 podman[280434]: 2025-11-29 07:54:16.388782306 +0000 UTC m=+0.384244371 container remove aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:54:16 np0005539563 systemd[1]: libpod-conmon-aaf9acd4effe9121c73fc305765a3e099f4992b608a6fdc8d7dcebbfa2659192.scope: Deactivated successfully.
Nov 29 02:54:16 np0005539563 podman[280474]: 2025-11-29 07:54:16.585023007 +0000 UTC m=+0.028595717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:17 np0005539563 podman[280474]: 2025-11-29 07:54:17.02678801 +0000 UTC m=+0.470360720 container create 18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:54:17 np0005539563 systemd[1]: Started libpod-conmon-18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c.scope.
Nov 29 02:54:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:54:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aacbc77cce51dd70f3d87daa8a4e6d31484c725d85701ef6c77c9d9d2f47406/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aacbc77cce51dd70f3d87daa8a4e6d31484c725d85701ef6c77c9d9d2f47406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aacbc77cce51dd70f3d87daa8a4e6d31484c725d85701ef6c77c9d9d2f47406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aacbc77cce51dd70f3d87daa8a4e6d31484c725d85701ef6c77c9d9d2f47406/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aacbc77cce51dd70f3d87daa8a4e6d31484c725d85701ef6c77c9d9d2f47406/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:17 np0005539563 podman[280474]: 2025-11-29 07:54:17.129285384 +0000 UTC m=+0.572858074 container init 18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hellman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:54:17 np0005539563 podman[280474]: 2025-11-29 07:54:17.141126237 +0000 UTC m=+0.584698897 container start 18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:54:17 np0005539563 podman[280474]: 2025-11-29 07:54:17.150661115 +0000 UTC m=+0.594233775 container attach 18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:54:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:17.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 167 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 29 02:54:17 np0005539563 jovial_hellman[280492]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:54:17 np0005539563 jovial_hellman[280492]: --> relative data size: 1.0
Nov 29 02:54:17 np0005539563 jovial_hellman[280492]: --> All data devices are unavailable
Nov 29 02:54:17 np0005539563 systemd[1]: libpod-18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c.scope: Deactivated successfully.
Nov 29 02:54:17 np0005539563 podman[280474]: 2025-11-29 07:54:17.972210836 +0000 UTC m=+1.415783496 container died 18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:54:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5aacbc77cce51dd70f3d87daa8a4e6d31484c725d85701ef6c77c9d9d2f47406-merged.mount: Deactivated successfully.
Nov 29 02:54:18 np0005539563 podman[280474]: 2025-11-29 07:54:18.030210262 +0000 UTC m=+1.473782922 container remove 18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:54:18 np0005539563 systemd[1]: libpod-conmon-18d892e3263ce3559e812bbfbd98754b7045797544f2fdb7d505b2d51844273c.scope: Deactivated successfully.
Nov 29 02:54:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.409 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.411 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.412 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.412 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.412 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.414 252257 INFO nova.compute.manager [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Terminating instance#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.415 252257 DEBUG nova.compute.manager [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:54:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 29 02:54:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 02:54:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.634619274 +0000 UTC m=+0.046332691 container create 54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:54:18 np0005539563 kernel: tap08cb43c0-b4 (unregistering): left promiscuous mode
Nov 29 02:54:18 np0005539563 NetworkManager[48981]: <info>  [1764402858.6433] device (tap08cb43c0-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00126|binding|INFO|Releasing lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 from this chassis (sb_readonly=0)
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00127|binding|INFO|Setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 down in Southbound
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00128|binding|INFO|Removing iface tap08cb43c0-b4 ovn-installed in OVS
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 systemd[1]: Started libpod-conmon-54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160.scope.
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.677 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5e:80 10.100.0.13'], port_security=['fa:16:3e:17:5e:80 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17cfda51-18c0-426a-93a5-45208fdb5da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbef03be-c0f0-4708-987b-5002a6990bb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '800c0f050e95457384eee582d6da0afa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed6d3fed-23e0-4b6a-91e0-86a4c89e0306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0e8f2a3-aeb4-4405-bc4f-8521ba3c2988, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=08cb43c0-b482-46af-8fdc-a825b402c6e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.679 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 08cb43c0-b482-46af-8fdc-a825b402c6e9 in datapath bbef03be-c0f0-4708-987b-5002a6990bb1 unbound from our chassis#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.680 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbef03be-c0f0-4708-987b-5002a6990bb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.682 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe3ac05-8bed-40f1-9d80-1e1b977048de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.683 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 namespace which is not needed anymore#033[00m
Nov 29 02:54:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:54:18 np0005539563 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000024.scope: Deactivated successfully.
Nov 29 02:54:18 np0005539563 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000024.scope: Consumed 16.942s CPU time.
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.617089317 +0000 UTC m=+0.028802754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:18 np0005539563 systemd-machined[213024]: Machine qemu-17-instance-00000024 terminated.
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.728537055 +0000 UTC m=+0.140250482 container init 54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.737833058 +0000 UTC m=+0.149546475 container start 54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.741886398 +0000 UTC m=+0.153599845 container attach 54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:54:18 np0005539563 nice_kare[280680]: 167 167
Nov 29 02:54:18 np0005539563 systemd[1]: libpod-54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160.scope: Deactivated successfully.
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.746427851 +0000 UTC m=+0.158141268 container died 54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:54:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-be23f8ba04476b5eb4cfed92e450908943ef32d477b0c1bf9cc5077a388e839d-merged.mount: Deactivated successfully.
Nov 29 02:54:18 np0005539563 podman[280661]: 2025-11-29 07:54:18.788850404 +0000 UTC m=+0.200563821 container remove 54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kare, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:54:18 np0005539563 systemd[1]: libpod-conmon-54ba865a3f076c31cd7ceebfb6a30f2ccc5b3e45244e38d9e0db4fa8af06e160.scope: Deactivated successfully.
Nov 29 02:54:18 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [NOTICE]   (278892) : haproxy version is 2.8.14-c23fe91
Nov 29 02:54:18 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [NOTICE]   (278892) : path to executable is /usr/sbin/haproxy
Nov 29 02:54:18 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [WARNING]  (278892) : Exiting Master process...
Nov 29 02:54:18 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [ALERT]    (278892) : Current worker (278894) exited with code 143 (Terminated)
Nov 29 02:54:18 np0005539563 neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1[278887]: [WARNING]  (278892) : All workers exited. Exiting... (0)
Nov 29 02:54:18 np0005539563 systemd[1]: libpod-c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70.scope: Deactivated successfully.
Nov 29 02:54:18 np0005539563 kernel: tap08cb43c0-b4: entered promiscuous mode
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.838 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00129|binding|INFO|Claiming lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 for this chassis.
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00130|binding|INFO|08cb43c0-b482-46af-8fdc-a825b402c6e9: Claiming fa:16:3e:17:5e:80 10.100.0.13
Nov 29 02:54:18 np0005539563 NetworkManager[48981]: <info>  [1764402858.8409] manager: (tap08cb43c0-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 02:54:18 np0005539563 systemd-udevd[280684]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:18 np0005539563 podman[280717]: 2025-11-29 07:54:18.842119661 +0000 UTC m=+0.045244810 container died c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:54:18 np0005539563 kernel: tap08cb43c0-b4 (unregistering): left promiscuous mode
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.865 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5e:80 10.100.0.13'], port_security=['fa:16:3e:17:5e:80 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17cfda51-18c0-426a-93a5-45208fdb5da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbef03be-c0f0-4708-987b-5002a6990bb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '800c0f050e95457384eee582d6da0afa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed6d3fed-23e0-4b6a-91e0-86a4c89e0306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0e8f2a3-aeb4-4405-bc4f-8521ba3c2988, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=08cb43c0-b482-46af-8fdc-a825b402c6e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00131|binding|INFO|Setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 ovn-installed in OVS
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00132|binding|INFO|Setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 up in Southbound
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00133|binding|INFO|Releasing lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 from this chassis (sb_readonly=1)
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00134|if_status|INFO|Not setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 down as sb is readonly
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00135|binding|INFO|Removing iface tap08cb43c0-b4 ovn-installed in OVS
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00136|binding|INFO|Releasing lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 from this chassis (sb_readonly=0)
Nov 29 02:54:18 np0005539563 ovn_controller[148841]: 2025-11-29T07:54:18Z|00137|binding|INFO|Setting lport 08cb43c0-b482-46af-8fdc-a825b402c6e9 down in Southbound
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.874 252257 INFO nova.virt.libvirt.driver [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Instance destroyed successfully.#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.874 252257 DEBUG nova.objects.instance [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lazy-loading 'resources' on Instance uuid 17cfda51-18c0-426a-93a5-45208fdb5da2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70-userdata-shm.mount: Deactivated successfully.
Nov 29 02:54:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-476ed4c04517bbd9a78ea4712cb67119d4576cfde32ffe25cea277d13cb668ad-merged.mount: Deactivated successfully.
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.883 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5e:80 10.100.0.13'], port_security=['fa:16:3e:17:5e:80 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17cfda51-18c0-426a-93a5-45208fdb5da2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbef03be-c0f0-4708-987b-5002a6990bb1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '800c0f050e95457384eee582d6da0afa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed6d3fed-23e0-4b6a-91e0-86a4c89e0306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0e8f2a3-aeb4-4405-bc4f-8521ba3c2988, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=08cb43c0-b482-46af-8fdc-a825b402c6e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.885 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 podman[280717]: 2025-11-29 07:54:18.888410888 +0000 UTC m=+0.091536027 container cleanup c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.894 252257 DEBUG nova.virt.libvirt.vif [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:52:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1113332245',display_name='tempest-VolumesAdminNegativeTest-server-1113332245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1113332245',id=36,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDQB63kCaQSwlw6cFguUzraE1hRm0I6I5y2+cnPYbsRok7wWB0w+8k4zrs0KRS+bkEJXOqxIBWOxb8lETWXFg5Vkpio2sZRy/K2dDYH17tDqll4pgElZwqq1RcDM+Oyvmg==',key_name='tempest-keypair-605237032',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='800c0f050e95457384eee582d6da0afa',ramdisk_id='',reservation_id='r-xocnody7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1765250615',owner_user_name='tempest-VolumesAdminNegativeTest-1765250615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8551065d65214410b616d2a71729df0a',uuid=17cfda51-18c0-426a-93a5-45208fdb5da2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.894 252257 DEBUG nova.network.os_vif_util [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converting VIF {"id": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "address": "fa:16:3e:17:5e:80", "network": {"id": "bbef03be-c0f0-4708-987b-5002a6990bb1", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1189386436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "800c0f050e95457384eee582d6da0afa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08cb43c0-b4", "ovs_interfaceid": "08cb43c0-b482-46af-8fdc-a825b402c6e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.895 252257 DEBUG nova.network.os_vif_util [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.895 252257 DEBUG os_vif [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:54:18 np0005539563 systemd[1]: libpod-conmon-c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70.scope: Deactivated successfully.
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.898 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.898 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08cb43c0-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.899 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.900 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.903 252257 INFO os_vif [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:5e:80,bridge_name='br-int',has_traffic_filtering=True,id=08cb43c0-b482-46af-8fdc-a825b402c6e9,network=Network(bbef03be-c0f0-4708-987b-5002a6990bb1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08cb43c0-b4')#033[00m
Nov 29 02:54:18 np0005539563 podman[280758]: 2025-11-29 07:54:18.962770959 +0000 UTC m=+0.049925498 container remove c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.968 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[faddc006-b485-4073-9957-4ff67fbd3f14]: (4, ('Sat Nov 29 07:54:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 (c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70)\nc84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70\nSat Nov 29 07:54:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 (c84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70)\nc84c6ed6be31021015c3c76b3dbf3639d6a861b3d98737e1fba2eac53dcfae70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.969 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[843e39e5-241d-42c0-b841-40ac2a82df8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.970 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbef03be-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:18 np0005539563 kernel: tapbbef03be-c0: left promiscuous mode
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.975 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.978 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b3283613-b304-48cd-a955-d3059075a166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.995 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc559f8-2919-45bd-b551-db591f7d8eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:18 np0005539563 nova_compute[252253]: 2025-11-29 07:54:18.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:18.997 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc3ed87-75ca-4b09-b52e-1d9b6d30e0a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:18 np0005539563 podman[280760]: 2025-11-29 07:54:18.997137703 +0000 UTC m=+0.075165364 container create 0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.016 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e37b3c-5af8-45bd-9e6d-ff92c3b35ae0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575061, 'reachable_time': 33202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280806, 'error': None, 'target': 'ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.019 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bbef03be-c0f0-4708-987b-5002a6990bb1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.019 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[11605bbc-839b-49bf-877c-89e15cb80a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.019 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 08cb43c0-b482-46af-8fdc-a825b402c6e9 in datapath bbef03be-c0f0-4708-987b-5002a6990bb1 unbound from our chassis#033[00m
Nov 29 02:54:19 np0005539563 systemd[1]: run-netns-ovnmeta\x2dbbef03be\x2dc0f0\x2d4708\x2d987b\x2d5002a6990bb1.mount: Deactivated successfully.
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.021 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbef03be-c0f0-4708-987b-5002a6990bb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.021 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[99b51448-fefc-4955-8d77-5753bc0778af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.022 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 08cb43c0-b482-46af-8fdc-a825b402c6e9 in datapath bbef03be-c0f0-4708-987b-5002a6990bb1 unbound from our chassis#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.023 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbef03be-c0f0-4708-987b-5002a6990bb1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:54:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:19.023 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[54d9d1c1-b943-46b9-b069-5c795c99deb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:19 np0005539563 systemd[1]: Started libpod-conmon-0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c.scope.
Nov 29 02:54:19 np0005539563 podman[280760]: 2025-11-29 07:54:18.965844062 +0000 UTC m=+0.043871763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:54:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4e5bb1c08dedccfd4656be9c579f5b8efa537ceb496b7e36c3e423c2d3c9c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4e5bb1c08dedccfd4656be9c579f5b8efa537ceb496b7e36c3e423c2d3c9c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4e5bb1c08dedccfd4656be9c579f5b8efa537ceb496b7e36c3e423c2d3c9c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4e5bb1c08dedccfd4656be9c579f5b8efa537ceb496b7e36c3e423c2d3c9c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:19 np0005539563 nova_compute[252253]: 2025-11-29 07:54:19.135 252257 DEBUG nova.compute.manager [req-796f7a79-6770-47ce-a624-2e7232547727 req-c9f8de6b-1179-4811-8c0c-d1c174417281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-unplugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:19 np0005539563 nova_compute[252253]: 2025-11-29 07:54:19.136 252257 DEBUG oslo_concurrency.lockutils [req-796f7a79-6770-47ce-a624-2e7232547727 req-c9f8de6b-1179-4811-8c0c-d1c174417281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:19 np0005539563 nova_compute[252253]: 2025-11-29 07:54:19.137 252257 DEBUG oslo_concurrency.lockutils [req-796f7a79-6770-47ce-a624-2e7232547727 req-c9f8de6b-1179-4811-8c0c-d1c174417281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:19 np0005539563 nova_compute[252253]: 2025-11-29 07:54:19.138 252257 DEBUG oslo_concurrency.lockutils [req-796f7a79-6770-47ce-a624-2e7232547727 req-c9f8de6b-1179-4811-8c0c-d1c174417281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:19 np0005539563 nova_compute[252253]: 2025-11-29 07:54:19.139 252257 DEBUG nova.compute.manager [req-796f7a79-6770-47ce-a624-2e7232547727 req-c9f8de6b-1179-4811-8c0c-d1c174417281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-unplugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:19 np0005539563 nova_compute[252253]: 2025-11-29 07:54:19.140 252257 DEBUG nova.compute.manager [req-796f7a79-6770-47ce-a624-2e7232547727 req-c9f8de6b-1179-4811-8c0c-d1c174417281 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-unplugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:54:19 np0005539563 podman[280760]: 2025-11-29 07:54:19.24809176 +0000 UTC m=+0.326119451 container init 0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:54:19 np0005539563 podman[280760]: 2025-11-29 07:54:19.25585474 +0000 UTC m=+0.333882401 container start 0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:54:19 np0005539563 podman[280760]: 2025-11-29 07:54:19.259748837 +0000 UTC m=+0.337776548 container attach 0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:54:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 29 02:54:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 175 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 02:54:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 02:54:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]: {
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:    "0": [
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:        {
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "devices": [
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "/dev/loop3"
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            ],
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "lv_name": "ceph_lv0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "lv_size": "7511998464",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "name": "ceph_lv0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "tags": {
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.cluster_name": "ceph",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.crush_device_class": "",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.encrypted": "0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.osd_id": "0",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.type": "block",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:                "ceph.vdo": "0"
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            },
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "type": "block",
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:            "vg_name": "ceph_vg0"
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:        }
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]:    ]
Nov 29 02:54:20 np0005539563 flamboyant_nobel[280809]: }
Nov 29 02:54:20 np0005539563 systemd[1]: libpod-0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c.scope: Deactivated successfully.
Nov 29 02:54:20 np0005539563 conmon[280809]: conmon 0d2da8f12fa2af6e64d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c.scope/container/memory.events
Nov 29 02:54:20 np0005539563 podman[280760]: 2025-11-29 07:54:20.072669024 +0000 UTC m=+1.150696705 container died 0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:54:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-eb4e5bb1c08dedccfd4656be9c579f5b8efa537ceb496b7e36c3e423c2d3c9c6-merged.mount: Deactivated successfully.
Nov 29 02:54:20 np0005539563 podman[280760]: 2025-11-29 07:54:20.129331843 +0000 UTC m=+1.207359504 container remove 0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:54:20 np0005539563 systemd[1]: libpod-conmon-0d2da8f12fa2af6e64d1da35f1d61923addf30ad8b07e3f3bc9d5ecf4943545c.scope: Deactivated successfully.
Nov 29 02:54:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.335 252257 INFO nova.virt.libvirt.driver [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Deleting instance files /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2_del#033[00m
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.337 252257 INFO nova.virt.libvirt.driver [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Deletion of /var/lib/nova/instances/17cfda51-18c0-426a-93a5-45208fdb5da2_del complete#033[00m
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.469 252257 INFO nova.compute.manager [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Took 2.05 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.469 252257 DEBUG oslo.service.loopingcall [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.470 252257 DEBUG nova.compute.manager [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.470 252257 DEBUG nova.network.neutron [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:54:20 np0005539563 nova_compute[252253]: 2025-11-29 07:54:20.607 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.757768888 +0000 UTC m=+0.048935651 container create 88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:54:20 np0005539563 systemd[1]: Started libpod-conmon-88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048.scope.
Nov 29 02:54:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.733534179 +0000 UTC m=+0.024700952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.841096651 +0000 UTC m=+0.132263444 container init 88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.848165114 +0000 UTC m=+0.139331877 container start 88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lumiere, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:54:20 np0005539563 systemd[1]: libpod-88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048.scope: Deactivated successfully.
Nov 29 02:54:20 np0005539563 bold_lumiere[281038]: 167 167
Nov 29 02:54:20 np0005539563 conmon[281038]: conmon 88c46f9cb31663086c7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048.scope/container/memory.events
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.87822014 +0000 UTC m=+0.169386933 container attach 88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lumiere, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.880514992 +0000 UTC m=+0.171681765 container died 88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:54:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-757f80c9d43fc8865951a6e206dee42af7c135ac937f6c85aedb1b6e44c7057d-merged.mount: Deactivated successfully.
Nov 29 02:54:20 np0005539563 podman[281022]: 2025-11-29 07:54:20.978136994 +0000 UTC m=+0.269303757 container remove 88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:54:20 np0005539563 systemd[1]: libpod-conmon-88c46f9cb31663086c7ddb6c5649ff4963b8f26a1ecbbfa1f0c295f7b5849048.scope: Deactivated successfully.
Nov 29 02:54:21 np0005539563 podman[281065]: 2025-11-29 07:54:21.150325253 +0000 UTC m=+0.050097362 container create b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:54:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 29 02:54:21 np0005539563 systemd[1]: Started libpod-conmon-b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad.scope.
Nov 29 02:54:21 np0005539563 podman[281065]: 2025-11-29 07:54:21.122836726 +0000 UTC m=+0.022608855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:54:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887f991e2a303ee8f47e1fced85904442824526c2434259251c3521547b7eea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887f991e2a303ee8f47e1fced85904442824526c2434259251c3521547b7eea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887f991e2a303ee8f47e1fced85904442824526c2434259251c3521547b7eea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b887f991e2a303ee8f47e1fced85904442824526c2434259251c3521547b7eea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.361 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.362 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.363 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.363 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.363 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.364 252257 WARNING nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received unexpected event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with vm_state active and task_state deleting.
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.364 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.364 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.365 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.365 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.366 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.366 252257 WARNING nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received unexpected event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with vm_state active and task_state deleting.
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.366 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.367 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.367 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.367 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.368 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.368 252257 WARNING nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received unexpected event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with vm_state active and task_state deleting.
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.369 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-unplugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.369 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.369 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.370 252257 DEBUG oslo_concurrency.lockutils [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.370 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-unplugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:54:21 np0005539563 nova_compute[252253]: 2025-11-29 07:54:21.370 252257 DEBUG nova.compute.manager [req-9fc8a7c8-2a8f-4213-be31-6ef0dbd95801 req-8b9e9f50-f239-4104-ac11-930298dcd235 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-unplugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:54:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 02:54:21 np0005539563 podman[281065]: 2025-11-29 07:54:21.435569833 +0000 UTC m=+0.335341962 container init b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:54:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 29 02:54:21 np0005539563 podman[281065]: 2025-11-29 07:54:21.445992856 +0000 UTC m=+0.345764965 container start b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 02:54:21 np0005539563 podman[281065]: 2025-11-29 07:54:21.479021973 +0000 UTC m=+0.378794082 container attach b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:54:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 134 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.5 MiB/s wr, 253 op/s
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:22.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]: {
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:        "osd_id": 0,
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:        "type": "bluestore"
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]:    }
Nov 29 02:54:22 np0005539563 unruffled_mccarthy[281081]: }
Nov 29 02:54:22 np0005539563 systemd[1]: libpod-b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad.scope: Deactivated successfully.
Nov 29 02:54:22 np0005539563 podman[281065]: 2025-11-29 07:54:22.298148638 +0000 UTC m=+1.197920747 container died b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:54:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b887f991e2a303ee8f47e1fced85904442824526c2434259251c3521547b7eea-merged.mount: Deactivated successfully.
Nov 29 02:54:22 np0005539563 podman[281065]: 2025-11-29 07:54:22.403971133 +0000 UTC m=+1.303743242 container remove b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:54:22 np0005539563 nova_compute[252253]: 2025-11-29 07:54:22.407 252257 DEBUG nova.network.neutron [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:54:22 np0005539563 systemd[1]: libpod-conmon-b78d1b9e1ffecb5b7ba513d8e3cc8c8d1fa18367cee1219f579bae8eb13615ad.scope: Deactivated successfully.
Nov 29 02:54:22 np0005539563 nova_compute[252253]: 2025-11-29 07:54:22.428 252257 INFO nova.compute.manager [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Took 1.96 seconds to deallocate network for instance.
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:54:22 np0005539563 podman[281110]: 2025-11-29 07:54:22.453039637 +0000 UTC m=+0.129566951 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:54:22 np0005539563 podman[281103]: 2025-11-29 07:54:22.453201661 +0000 UTC m=+0.123929688 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:22 np0005539563 nova_compute[252253]: 2025-11-29 07:54:22.495 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:22 np0005539563 nova_compute[252253]: 2025-11-29 07:54:22.495 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev efb1df09-f843-4f0e-b97e-1d538661a949 does not exist
Nov 29 02:54:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ba77a931-6418-49f1-8472-109d97d2e71d does not exist
Nov 29 02:54:22 np0005539563 podman[281111]: 2025-11-29 07:54:22.518542376 +0000 UTC m=+0.188542443 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:54:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b3289ba8-7cda-4992-b70c-e3ca8a21f416 does not exist
Nov 29 02:54:22 np0005539563 nova_compute[252253]: 2025-11-29 07:54:22.561 252257 DEBUG oslo_concurrency.processutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449062815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.010 252257 DEBUG oslo_concurrency.processutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.022 252257 DEBUG nova.compute.provider_tree [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.046 252257 DEBUG nova.scheduler.client.report [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.071 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.093 252257 INFO nova.scheduler.client.report [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Deleted allocations for instance 17cfda51-18c0-426a-93a5-45208fdb5da2#033[00m
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016548672180346175 of space, bias 1.0, pg target 0.49646016541038523 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028933367648427324 of space, bias 1.0, pg target 0.8680010294528198 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.168 252257 DEBUG oslo_concurrency.lockutils [None req-b1545391-27b8-4487-be4a-d1792d8636d9 8551065d65214410b616d2a71729df0a 800c0f050e95457384eee582d6da0afa - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:54:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:23.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.543 252257 DEBUG nova.compute.manager [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.543 252257 DEBUG oslo_concurrency.lockutils [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.543 252257 DEBUG oslo_concurrency.lockutils [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.544 252257 DEBUG oslo_concurrency.lockutils [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "17cfda51-18c0-426a-93a5-45208fdb5da2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.544 252257 DEBUG nova.compute.manager [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] No waiting events found dispatching network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.544 252257 WARNING nova.compute.manager [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received unexpected event network-vif-plugged-08cb43c0-b482-46af-8fdc-a825b402c6e9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.545 252257 DEBUG nova.compute.manager [req-cd0cc885-cadf-47cd-9996-79c63a498e5d req-b29ff920-549b-49e2-a396-0a3d1586d1ed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Received event network-vif-deleted-08cb43c0-b482-46af-8fdc-a825b402c6e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 134 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 9.1 MiB/s rd, 4.2 MiB/s wr, 301 op/s
Nov 29 02:54:23 np0005539563 nova_compute[252253]: 2025-11-29 07:54:23.901 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:24.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:25.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:25 np0005539563 nova_compute[252253]: 2025-11-29 07:54:25.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 155 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 5.6 MiB/s wr, 310 op/s
Nov 29 02:54:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:26.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 29 02:54:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 155 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.3 MiB/s wr, 115 op/s
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2288473742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:54:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2288473742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:54:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:28.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:28 np0005539563 nova_compute[252253]: 2025-11-29 07:54:28.905 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:29.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 155 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 107 op/s
Nov 29 02:54:29 np0005539563 nova_compute[252253]: 2025-11-29 07:54:29.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:29 np0005539563 nova_compute[252253]: 2025-11-29 07:54:29.860 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:29 np0005539563 nova_compute[252253]: 2025-11-29 07:54:29.861 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:29 np0005539563 nova_compute[252253]: 2025-11-29 07:54:29.861 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:29 np0005539563 nova_compute[252253]: 2025-11-29 07:54:29.861 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:29 np0005539563 nova_compute[252253]: 2025-11-29 07:54:29.861 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:30.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 29 02:54:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507080374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.537 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.561 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.723 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.724 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4684MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.724 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.724 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.825 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.936 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.937 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:54:30 np0005539563 nova_compute[252253]: 2025-11-29 07:54:30.966 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:31.588 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:31 np0005539563 nova_compute[252253]: 2025-11-29 07:54:31.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:31.589 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 171 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.6 MiB/s wr, 120 op/s
Nov 29 02:54:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:32.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 02:54:32 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.523 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "3d7bd8de-d133-49c5-b770-69f71695bf9f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.524 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "3d7bd8de-d133-49c5-b770-69f71695bf9f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.541 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.612 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.814 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.848s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.819 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.836 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.889 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.890 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.890 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.898 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:54:32 np0005539563 nova_compute[252253]: 2025-11-29 07:54:32.898 252257 INFO nova.compute.claims [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.040 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089465281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.641 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.648 252257 DEBUG nova.compute.provider_tree [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 171 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 MiB/s wr, 48 op/s
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.675 252257 DEBUG nova.scheduler.client.report [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.704 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.705 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.767 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.767 252257 DEBUG nova.network.neutron [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.786 252257 INFO nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.803 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.865 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402858.860984, 17cfda51-18c0-426a-93a5-45208fdb5da2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.865 252257 INFO nova.compute.manager [-] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.885 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.886 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.886 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.886 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.907 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.913 252257 DEBUG nova.compute.manager [None req-26233af7-1b1b-4afb-8693-cc1310477549 - - - - - -] [instance: 17cfda51-18c0-426a-93a5-45208fdb5da2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.923 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.923 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.923 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.923 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.924 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.924 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.924 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.929 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.930 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.930 252257 INFO nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Creating image(s)#033[00m
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.954 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2393006790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:33 np0005539563 nova_compute[252253]: 2025-11-29 07:54:33.978 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.002 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.006 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.036 252257 DEBUG nova.network.neutron [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.036 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.077 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.078 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.078 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.079 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.105 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:34 np0005539563 nova_compute[252253]: 2025-11-29 07:54:34.109 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:34.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 176 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 100 op/s
Nov 29 02:54:35 np0005539563 nova_compute[252253]: 2025-11-29 07:54:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:35 np0005539563 nova_compute[252253]: 2025-11-29 07:54:35.813 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:36.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:54:36.590 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 29 02:54:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:37.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 176 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 84 op/s
Nov 29 02:54:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:38.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 02:54:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 29 02:54:38 np0005539563 nova_compute[252253]: 2025-11-29 07:54:38.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:38 np0005539563 nova_compute[252253]: 2025-11-29 07:54:38.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:38 np0005539563 nova_compute[252253]: 2025-11-29 07:54:38.722 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:38 np0005539563 nova_compute[252253]: 2025-11-29 07:54:38.852 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] resizing rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:54:38 np0005539563 nova_compute[252253]: 2025-11-29 07:54:38.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:39.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.661 252257 DEBUG nova.objects.instance [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lazy-loading 'migration_context' on Instance uuid 3d7bd8de-d133-49c5-b770-69f71695bf9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 229 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 112 op/s
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.730 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.731 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Ensure instance console log exists: /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.731 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.732 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.732 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.734 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.738 252257 WARNING nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.747 252257 DEBUG nova.virt.libvirt.host [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.748 252257 DEBUG nova.virt.libvirt.host [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.751 252257 DEBUG nova.virt.libvirt.host [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.752 252257 DEBUG nova.virt.libvirt.host [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.753 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.753 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.754 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.754 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.754 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.754 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.755 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.755 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.755 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.755 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.755 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.756 252257 DEBUG nova.virt.hardware [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:54:39 np0005539563 nova_compute[252253]: 2025-11-29 07:54:39.758 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007932061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.222 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.248 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.252 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956393232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.682 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.684 252257 DEBUG nova.objects.instance [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d7bd8de-d133-49c5-b770-69f71695bf9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.816 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <uuid>3d7bd8de-d133-49c5-b770-69f71695bf9f</uuid>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <name>instance-0000002a</name>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1777788869</nova:name>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:54:39</nova:creationTime>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:user uuid="5b13aa58723747fb9b6bfab86b1ca7b3">tempest-ServerDiagnosticsNegativeTest-977033600-project-member</nova:user>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <nova:project uuid="08368fe371b944caa7d0d51bb539528a">tempest-ServerDiagnosticsNegativeTest-977033600</nova:project>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <entry name="serial">3d7bd8de-d133-49c5-b770-69f71695bf9f</entry>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <entry name="uuid">3d7bd8de-d133-49c5-b770-69f71695bf9f</entry>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3d7bd8de-d133-49c5-b770-69f71695bf9f_disk">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3d7bd8de-d133-49c5-b770-69f71695bf9f_disk.config">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/console.log" append="off"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:54:40 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:54:40 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:54:40 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:54:40 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.819 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.894 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.895 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.896 252257 INFO nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Using config drive
Nov 29 02:54:40 np0005539563 nova_compute[252253]: 2025-11-29 07:54:40.927 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:54:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:41.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 241 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 117 op/s
Nov 29 02:54:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:43.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 29 02:54:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 02:54:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 29 02:54:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 241 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 5.4 MiB/s wr, 77 op/s
Nov 29 02:54:43 np0005539563 nova_compute[252253]: 2025-11-29 07:54:43.843 252257 INFO nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Creating config drive at /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/disk.config
Nov 29 02:54:43 np0005539563 nova_compute[252253]: 2025-11-29 07:54:43.849 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp502lcgtv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:43 np0005539563 nova_compute[252253]: 2025-11-29 07:54:43.984 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp502lcgtv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:44 np0005539563 nova_compute[252253]: 2025-11-29 07:54:44.016 252257 DEBUG nova.storage.rbd_utils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] rbd image 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:54:44 np0005539563 nova_compute[252253]: 2025-11-29 07:54:44.021 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/disk.config 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:44 np0005539563 nova_compute[252253]: 2025-11-29 07:54:44.047 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:44 np0005539563 nova_compute[252253]: 2025-11-29 07:54:44.581 252257 DEBUG oslo_concurrency.processutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/disk.config 3d7bd8de-d133-49c5-b770-69f71695bf9f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:44 np0005539563 nova_compute[252253]: 2025-11-29 07:54:44.583 252257 INFO nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Deleting local config drive /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f/disk.config because it was imported into RBD.
Nov 29 02:54:44 np0005539563 systemd-machined[213024]: New machine qemu-18-instance-0000002a.
Nov 29 02:54:44 np0005539563 systemd[1]: Started Virtual Machine qemu-18-instance-0000002a.
Nov 29 02:54:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 275 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.6 MiB/s wr, 204 op/s
Nov 29 02:54:45 np0005539563 nova_compute[252253]: 2025-11-29 07:54:45.852 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.231 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402886.231167, 3d7bd8de-d133-49c5-b770-69f71695bf9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.232 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] VM Resumed (Lifecycle Event)
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.234 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.235 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.240 252257 INFO nova.virt.libvirt.driver [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Instance spawned successfully.
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.240 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.259 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.267 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.272 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.272 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.273 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.273 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.274 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.274 252257 DEBUG nova.virt.libvirt.driver [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:46.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.286 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.287 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402886.232204, 3d7bd8de-d133-49c5-b770-69f71695bf9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.287 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.313 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.318 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.497 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.511 252257 INFO nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Took 12.58 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.511 252257 DEBUG nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.586 252257 INFO nova.compute.manager [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Took 13.99 seconds to build instance.#033[00m
Nov 29 02:54:46 np0005539563 nova_compute[252253]: 2025-11-29 07:54:46.606 252257 DEBUG oslo_concurrency.lockutils [None req-d7ba8fac-5929-4003-9150-a24dfd278523 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "3d7bd8de-d133-49c5-b770-69f71695bf9f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 29 02:54:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 02:54:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 29 02:54:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 275 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 149 op/s
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.937 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "3d7bd8de-d133-49c5-b770-69f71695bf9f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.937 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "3d7bd8de-d133-49c5-b770-69f71695bf9f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.938 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "3d7bd8de-d133-49c5-b770-69f71695bf9f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.938 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "3d7bd8de-d133-49c5-b770-69f71695bf9f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.938 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "3d7bd8de-d133-49c5-b770-69f71695bf9f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.939 252257 INFO nova.compute.manager [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Terminating instance#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.940 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "refresh_cache-3d7bd8de-d133-49c5-b770-69f71695bf9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.941 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquired lock "refresh_cache-3d7bd8de-d133-49c5-b770-69f71695bf9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:47 np0005539563 nova_compute[252253]: 2025-11-29 07:54:47.941 252257 DEBUG nova.network.neutron [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 29 02:54:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:48.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 02:54:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 29 02:54:48 np0005539563 nova_compute[252253]: 2025-11-29 07:54:48.827 252257 DEBUG nova.network.neutron [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:49 np0005539563 nova_compute[252253]: 2025-11-29 07:54:49.049 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:49 np0005539563 nova_compute[252253]: 2025-11-29 07:54:49.197 252257 DEBUG nova.network.neutron [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:49 np0005539563 nova_compute[252253]: 2025-11-29 07:54:49.216 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Releasing lock "refresh_cache-3d7bd8de-d133-49c5-b770-69f71695bf9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:49 np0005539563 nova_compute[252253]: 2025-11-29 07:54:49.217 252257 DEBUG nova.compute.manager [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:54:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:49.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 313 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 311 op/s
Nov 29 02:54:50 np0005539563 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Nov 29 02:54:50 np0005539563 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002a.scope: Consumed 3.472s CPU time.
Nov 29 02:54:50 np0005539563 systemd-machined[213024]: Machine qemu-18-instance-0000002a terminated.
Nov 29 02:54:50 np0005539563 nova_compute[252253]: 2025-11-29 07:54:50.248 252257 INFO nova.virt.libvirt.driver [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Instance destroyed successfully.#033[00m
Nov 29 02:54:50 np0005539563 nova_compute[252253]: 2025-11-29 07:54:50.248 252257 DEBUG nova.objects.instance [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lazy-loading 'resources' on Instance uuid 3d7bd8de-d133-49c5-b770-69f71695bf9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:50.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:50 np0005539563 nova_compute[252253]: 2025-11-29 07:54:50.853 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 313 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 359 op/s
Nov 29 02:54:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:52.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.112 252257 INFO nova.virt.libvirt.driver [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Deleting instance files /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f_del#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.113 252257 INFO nova.virt.libvirt.driver [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Deletion of /var/lib/nova/instances/3d7bd8de-d133-49c5-b770-69f71695bf9f_del complete#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.173 252257 INFO nova.compute.manager [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Took 3.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.173 252257 DEBUG oslo.service.loopingcall [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.174 252257 DEBUG nova.compute.manager [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.174 252257 DEBUG nova.network.neutron [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:54:53 np0005539563 podman[281749]: 2025-11-29 07:54:53.524373478 +0000 UTC m=+0.079449669 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:54:53 np0005539563 podman[281750]: 2025-11-29 07:54:53.534107253 +0000 UTC m=+0.083414238 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:53 np0005539563 podman[281751]: 2025-11-29 07:54:53.566913833 +0000 UTC m=+0.110827771 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:54:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 313 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.7 MiB/s wr, 232 op/s
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.836 252257 DEBUG nova.network.neutron [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.853 252257 DEBUG nova.network.neutron [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.866 252257 INFO nova.compute.manager [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Took 0.69 seconds to deallocate network for instance.#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.922 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.922 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 29 02:54:53 np0005539563 nova_compute[252253]: 2025-11-29 07:54:53.972 252257 DEBUG oslo_concurrency.processutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.062 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:54.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59936033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.458 252257 DEBUG oslo_concurrency.processutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.466 252257 DEBUG nova.compute.provider_tree [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.526 252257 DEBUG nova.scheduler.client.report [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.550 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.576 252257 INFO nova.scheduler.client.report [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Deleted allocations for instance 3d7bd8de-d133-49c5-b770-69f71695bf9f#033[00m
Nov 29 02:54:54 np0005539563 nova_compute[252253]: 2025-11-29 07:54:54.638 252257 DEBUG oslo_concurrency.lockutils [None req-fcd5e35a-ab71-4d2e-a91b-812d9fab99f2 5b13aa58723747fb9b6bfab86b1ca7b3 08368fe371b944caa7d0d51bb539528a - - default default] Lock "3d7bd8de-d133-49c5-b770-69f71695bf9f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.5 MiB/s wr, 320 op/s
Nov 29 02:54:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 02:54:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 29 02:54:55 np0005539563 nova_compute[252253]: 2025-11-29 07:54:55.855 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:56.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 29 02:54:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.3 MiB/s wr, 289 op/s
Nov 29 02:54:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 02:54:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 29 02:54:59 np0005539563 nova_compute[252253]: 2025-11-29 07:54:59.066 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:54:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 914 KiB/s rd, 6.2 KiB/s wr, 111 op/s
Nov 29 02:55:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:00.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:00 np0005539563 nova_compute[252253]: 2025-11-29 07:55:00.858 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 245 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 948 KiB/s rd, 220 KiB/s wr, 134 op/s
Nov 29 02:55:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 29 02:55:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 02:55:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 29 02:55:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:03.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 245 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 219 KiB/s wr, 29 op/s
Nov 29 02:55:04 np0005539563 nova_compute[252253]: 2025-11-29 07:55:04.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:55:04.900 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:55:04.901 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:55:04.901 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:05 np0005539563 nova_compute[252253]: 2025-11-29 07:55:05.235 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402890.2345963, 3d7bd8de-d133-49c5-b770-69f71695bf9f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:05 np0005539563 nova_compute[252253]: 2025-11-29 07:55:05.235 252257 INFO nova.compute.manager [-] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:05 np0005539563 nova_compute[252253]: 2025-11-29 07:55:05.261 252257 DEBUG nova.compute.manager [None req-97d0bd22-e2f8-4e77-8c57-c8668e308948 - - - - - -] [instance: 3d7bd8de-d133-49c5-b770-69f71695bf9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:05.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 139 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 3.1 MiB/s wr, 169 op/s
Nov 29 02:55:05 np0005539563 nova_compute[252253]: 2025-11-29 07:55:05.902 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.473 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "c648c560-a045-4d01-a499-b82e0654bbe4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.474 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "c648c560-a045-4d01-a499-b82e0654bbe4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.496 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:55:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.590 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.590 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.596 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.597 252257 INFO nova.compute.claims [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:55:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 02:55:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 29 02:55:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 139 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 596 KiB/s rd, 3.1 MiB/s wr, 163 op/s
Nov 29 02:55:07 np0005539563 nova_compute[252253]: 2025-11-29 07:55:07.693 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:08.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2007226899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.517 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.824s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.526 252257 DEBUG nova.compute.provider_tree [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.544 252257 DEBUG nova.scheduler.client.report [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.580 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.581 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.645 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.645 252257 DEBUG nova.network.neutron [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.678 252257 INFO nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.699 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.799 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.801 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:55:08 np0005539563 nova_compute[252253]: 2025-11-29 07:55:08.801 252257 INFO nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Creating image(s)#033[00m
Nov 29 02:55:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:09.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 138 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 3.0 MiB/s wr, 165 op/s
Nov 29 02:55:09 np0005539563 nova_compute[252253]: 2025-11-29 07:55:09.900 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.059 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.092 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.097 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.181 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.182 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.183 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.183 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.214 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.218 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c648c560-a045-4d01-a499-b82e0654bbe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.890 252257 DEBUG nova.network.neutron [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.891 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.903 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.912 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c648c560-a045-4d01-a499-b82e0654bbe4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:10 np0005539563 nova_compute[252253]: 2025-11-29 07:55:10.990 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] resizing rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:55:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:11.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 149 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 581 KiB/s rd, 3.0 MiB/s wr, 153 op/s
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.828 252257 DEBUG nova.objects.instance [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lazy-loading 'migration_context' on Instance uuid c648c560-a045-4d01-a499-b82e0654bbe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.855 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.856 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Ensure instance console log exists: /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.856 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.856 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.857 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.858 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.864 252257 WARNING nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.872 252257 DEBUG nova.virt.libvirt.host [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.873 252257 DEBUG nova.virt.libvirt.host [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.877 252257 DEBUG nova.virt.libvirt.host [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.878 252257 DEBUG nova.virt.libvirt.host [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.879 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.880 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.880 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.881 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.881 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.881 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.882 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.882 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.882 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.883 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.883 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.883 252257 DEBUG nova.virt.hardware [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:55:11 np0005539563 nova_compute[252253]: 2025-11-29 07:55:11.887 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870973595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:12 np0005539563 nova_compute[252253]: 2025-11-29 07:55:12.333 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:12 np0005539563 nova_compute[252253]: 2025-11-29 07:55:12.373 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:12 np0005539563 nova_compute[252253]: 2025-11-29 07:55:12.380 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:55:12
Nov 29 02:55:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:55:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:55:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.log']
Nov 29 02:55:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:55:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910136033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:12 np0005539563 nova_compute[252253]: 2025-11-29 07:55:12.916 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:12 np0005539563 nova_compute[252253]: 2025-11-29 07:55:12.920 252257 DEBUG nova.objects.instance [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lazy-loading 'pci_devices' on Instance uuid c648c560-a045-4d01-a499-b82e0654bbe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:13 np0005539563 nova_compute[252253]: 2025-11-29 07:55:13.100 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <uuid>c648c560-a045-4d01-a499-b82e0654bbe4</uuid>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <name>instance-0000002b</name>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-167940943</nova:name>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:55:11</nova:creationTime>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:user uuid="db06e8f865ef4c7fbacd588b0c473e37">tempest-ServersAdminNegativeTestJSON-1455232210-project-member</nova:user>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <nova:project uuid="4a3681cf294441768c28547476705844">tempest-ServersAdminNegativeTestJSON-1455232210</nova:project>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <entry name="serial">c648c560-a045-4d01-a499-b82e0654bbe4</entry>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <entry name="uuid">c648c560-a045-4d01-a499-b82e0654bbe4</entry>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c648c560-a045-4d01-a499-b82e0654bbe4_disk">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c648c560-a045-4d01-a499-b82e0654bbe4_disk.config">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/console.log" append="off"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:55:13 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:55:13 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:55:13 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:55:13 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:55:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:13.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 149 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 2.7 MiB/s wr, 140 op/s
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:55:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:55:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.122 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.143 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.144 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.145 252257 INFO nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Using config drive#033[00m
Nov 29 02:55:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 187 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.795 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.905 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.948 252257 INFO nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Creating config drive at /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/disk.config#033[00m
Nov 29 02:55:15 np0005539563 nova_compute[252253]: 2025-11-29 07:55:15.952 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rnbngal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:16 np0005539563 nova_compute[252253]: 2025-11-29 07:55:16.079 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9rnbngal" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:16 np0005539563 nova_compute[252253]: 2025-11-29 07:55:16.138 252257 DEBUG nova.storage.rbd_utils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] rbd image c648c560-a045-4d01-a499-b82e0654bbe4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:16 np0005539563 nova_compute[252253]: 2025-11-29 07:55:16.143 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/disk.config c648c560-a045-4d01-a499-b82e0654bbe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:16.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 187 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Nov 29 02:55:17 np0005539563 nova_compute[252253]: 2025-11-29 07:55:17.775 252257 DEBUG oslo_concurrency.processutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/disk.config c648c560-a045-4d01-a499-b82e0654bbe4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:17 np0005539563 nova_compute[252253]: 2025-11-29 07:55:17.777 252257 INFO nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Deleting local config drive /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4/disk.config because it was imported into RBD.#033[00m
Nov 29 02:55:17 np0005539563 systemd-machined[213024]: New machine qemu-19-instance-0000002b.
Nov 29 02:55:17 np0005539563 systemd[1]: Started Virtual Machine qemu-19-instance-0000002b.
Nov 29 02:55:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 188 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 1.9 MiB/s wr, 53 op/s
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.969 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402919.9688115, c648c560-a045-4d01-a499-b82e0654bbe4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.970 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.975 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.975 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.980 252257 INFO nova.virt.libvirt.driver [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance spawned successfully.#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.981 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.994 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:19 np0005539563 nova_compute[252253]: 2025-11-29 07:55:19.999 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.009 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.009 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.010 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.010 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.011 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.012 252257 DEBUG nova.virt.libvirt.driver [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.021 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.021 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402919.970496, c648c560-a045-4d01-a499-b82e0654bbe4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.022 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] VM Started (Lifecycle Event)#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.049 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.053 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.082 252257 INFO nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Took 11.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.082 252257 DEBUG nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.083 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.143 252257 INFO nova.compute.manager [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Took 12.58 seconds to build instance.#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.170 252257 DEBUG oslo_concurrency.lockutils [None req-844d4f44-13c6-474d-8185-ac812f7f3f02 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "c648c560-a045-4d01-a499-b82e0654bbe4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:55:20.220 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.221 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:55:20.223 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:55:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:20 np0005539563 nova_compute[252253]: 2025-11-29 07:55:20.908 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:21.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 188 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 29 02:55:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:22.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031665125395495997 of space, bias 1.0, pg target 0.9499537618648799 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:55:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 188 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:55:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:55:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:24.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:55:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:55:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:55:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:55:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:55:24 np0005539563 podman[282444]: 2025-11-29 07:55:24.502297562 +0000 UTC m=+0.055822647 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 02:55:24 np0005539563 podman[282443]: 2025-11-29 07:55:24.50294101 +0000 UTC m=+0.056380593 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 02:55:24 np0005539563 podman[282445]: 2025-11-29 07:55:24.537884509 +0000 UTC m=+0.089287717 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:25 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1c05ff25-f95b-4c0b-a5f1-06d088af0299 does not exist
Nov 29 02:55:25 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6a9f4292-94b2-4a7b-8b85-fd2f7411b282 does not exist
Nov 29 02:55:25 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2e622b92-fb90-4dbe-9346-a04d90aa7eaa does not exist
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:55:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:55:25 np0005539563 nova_compute[252253]: 2025-11-29 07:55:25.142 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.676793893 +0000 UTC m=+0.090833929 container create 7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_fermat, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:55:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 226 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 125 op/s
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.611447547 +0000 UTC m=+0.025487593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:25 np0005539563 systemd[1]: Started libpod-conmon-7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a.scope.
Nov 29 02:55:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.769686797 +0000 UTC m=+0.183726853 container init 7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.778592799 +0000 UTC m=+0.192632835 container start 7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_fermat, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.783261356 +0000 UTC m=+0.197301422 container attach 7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:55:25 np0005539563 ecstatic_fermat[282662]: 167 167
Nov 29 02:55:25 np0005539563 systemd[1]: libpod-7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a.scope: Deactivated successfully.
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.787007497 +0000 UTC m=+0.201047553 container died 7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_fermat, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b2b0336dcdc1ab12e628e5b94c81913913b4d07225f325e4a93a00c36be67973-merged.mount: Deactivated successfully.
Nov 29 02:55:25 np0005539563 podman[282645]: 2025-11-29 07:55:25.837882649 +0000 UTC m=+0.251922685 container remove 7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:55:25 np0005539563 systemd[1]: libpod-conmon-7c2e6de045f82fb128689dc3b8b50965820ac7792eb1b3879d433c8883b97c4a.scope: Deactivated successfully.
Nov 29 02:55:25 np0005539563 nova_compute[252253]: 2025-11-29 07:55:25.910 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539563 podman[282685]: 2025-11-29 07:55:25.973397481 +0000 UTC m=+0.023372426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:26.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:26 np0005539563 podman[282685]: 2025-11-29 07:55:26.956913292 +0000 UTC m=+1.006888207 container create d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:55:27 np0005539563 systemd[1]: Started libpod-conmon-d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248.scope.
Nov 29 02:55:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9339810214e8452fee135ec566e43e9e36f1aceb64af325732d6669725885bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9339810214e8452fee135ec566e43e9e36f1aceb64af325732d6669725885bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9339810214e8452fee135ec566e43e9e36f1aceb64af325732d6669725885bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9339810214e8452fee135ec566e43e9e36f1aceb64af325732d6669725885bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9339810214e8452fee135ec566e43e9e36f1aceb64af325732d6669725885bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:27 np0005539563 podman[282685]: 2025-11-29 07:55:27.425781841 +0000 UTC m=+1.475756756 container init d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yalow, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:27 np0005539563 podman[282685]: 2025-11-29 07:55:27.432810132 +0000 UTC m=+1.482785047 container start d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 226 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 98 op/s
Nov 29 02:55:27 np0005539563 podman[282685]: 2025-11-29 07:55:27.74227399 +0000 UTC m=+1.792248925 container attach d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yalow, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4256619282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:55:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4256619282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:55:28 np0005539563 pedantic_yalow[282702]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:55:28 np0005539563 pedantic_yalow[282702]: --> relative data size: 1.0
Nov 29 02:55:28 np0005539563 pedantic_yalow[282702]: --> All data devices are unavailable
Nov 29 02:55:28 np0005539563 systemd[1]: libpod-d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248.scope: Deactivated successfully.
Nov 29 02:55:28 np0005539563 podman[282685]: 2025-11-29 07:55:28.276444053 +0000 UTC m=+2.326418968 container died d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yalow, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:55:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:28.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:55:29.226 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 247 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 29 02:55:30 np0005539563 nova_compute[252253]: 2025-11-29 07:55:30.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e9339810214e8452fee135ec566e43e9e36f1aceb64af325732d6669725885bf-merged.mount: Deactivated successfully.
Nov 29 02:55:30 np0005539563 nova_compute[252253]: 2025-11-29 07:55:30.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:30 np0005539563 nova_compute[252253]: 2025-11-29 07:55:30.913 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 280 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 130 op/s
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.703 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:55:31 np0005539563 nova_compute[252253]: 2025-11-29 07:55:31.704 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:32 np0005539563 podman[282685]: 2025-11-29 07:55:32.964419712 +0000 UTC m=+7.014394687 container remove d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yalow, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:55:33 np0005539563 systemd[1]: libpod-conmon-d010ee7836c131a959e6942cfeda93630c92445ec80de5e25ffabc63080d5248.scope: Deactivated successfully.
Nov 29 02:55:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 280 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 123 op/s
Nov 29 02:55:33 np0005539563 podman[282893]: 2025-11-29 07:55:33.783663069 +0000 UTC m=+0.028383382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:34 np0005539563 podman[282893]: 2025-11-29 07:55:34.055640669 +0000 UTC m=+0.300360892 container create 50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendeleev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:34.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.474 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.770s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:34 np0005539563 systemd[1]: Started libpod-conmon-50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c.scope.
Nov 29 02:55:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.569 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.570 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.736 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.737 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4470MB free_disk=20.895858764648438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.738 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.738 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:34 np0005539563 podman[282893]: 2025-11-29 07:55:34.753556841 +0000 UTC m=+0.998277084 container init 50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:55:34 np0005539563 podman[282893]: 2025-11-29 07:55:34.763318266 +0000 UTC m=+1.008038489 container start 50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:55:34 np0005539563 goofy_mendeleev[282910]: 167 167
Nov 29 02:55:34 np0005539563 systemd[1]: libpod-50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c.scope: Deactivated successfully.
Nov 29 02:55:34 np0005539563 podman[282893]: 2025-11-29 07:55:34.788243813 +0000 UTC m=+1.032964066 container attach 50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:55:34 np0005539563 podman[282893]: 2025-11-29 07:55:34.790723501 +0000 UTC m=+1.035443724 container died 50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.835 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance c648c560-a045-4d01-a499-b82e0654bbe4 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.835 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.836 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:55:34 np0005539563 nova_compute[252253]: 2025-11-29 07:55:34.876 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.151 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1960ce0f6e5579a47f605a6b2007953c67125384e2b9c1af53200375d3fee1e6-merged.mount: Deactivated successfully.
Nov 29 02:55:35 np0005539563 podman[282893]: 2025-11-29 07:55:35.241838407 +0000 UTC m=+1.486558630 container remove 50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:55:35 np0005539563 systemd[1]: libpod-conmon-50da3e786aa2ac8760fbea6379d7ae84fe8bd8a393bed74794d1aa860af6237c.scope: Deactivated successfully.
Nov 29 02:55:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266717774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.336 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.341 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.403 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.430 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.431 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:35 np0005539563 podman[282960]: 2025-11-29 07:55:35.398197515 +0000 UTC m=+0.021780373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:35 np0005539563 podman[282960]: 2025-11-29 07:55:35.640323293 +0000 UTC m=+0.263906141 container create 317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:55:35 np0005539563 systemd[1]: Started libpod-conmon-317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7.scope.
Nov 29 02:55:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 296 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 150 op/s
Nov 29 02:55:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:55:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b36d6b4aa780617ed4ae5d26995c00936e79f0109ac11fbeb03a1e8d8fd57bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b36d6b4aa780617ed4ae5d26995c00936e79f0109ac11fbeb03a1e8d8fd57bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b36d6b4aa780617ed4ae5d26995c00936e79f0109ac11fbeb03a1e8d8fd57bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b36d6b4aa780617ed4ae5d26995c00936e79f0109ac11fbeb03a1e8d8fd57bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:35 np0005539563 nova_compute[252253]: 2025-11-29 07:55:35.914 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:36 np0005539563 podman[282960]: 2025-11-29 07:55:36.333902598 +0000 UTC m=+0.957485476 container init 317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_antonelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:55:36 np0005539563 podman[282960]: 2025-11-29 07:55:36.340790504 +0000 UTC m=+0.964373352 container start 317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:55:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:36.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:36 np0005539563 podman[282960]: 2025-11-29 07:55:36.547072819 +0000 UTC m=+1.170655697 container attach 317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]: {
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:    "0": [
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:        {
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "devices": [
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "/dev/loop3"
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            ],
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "lv_name": "ceph_lv0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "lv_size": "7511998464",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "name": "ceph_lv0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "tags": {
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.cluster_name": "ceph",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.crush_device_class": "",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.encrypted": "0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.osd_id": "0",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.type": "block",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:                "ceph.vdo": "0"
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            },
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "type": "block",
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:            "vg_name": "ceph_vg0"
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:        }
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]:    ]
Nov 29 02:55:37 np0005539563 sad_antonelli[282977]: }
Nov 29 02:55:37 np0005539563 systemd[1]: libpod-317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7.scope: Deactivated successfully.
Nov 29 02:55:37 np0005539563 podman[282960]: 2025-11-29 07:55:37.102646473 +0000 UTC m=+1.726229331 container died 317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:55:37 np0005539563 nova_compute[252253]: 2025-11-29 07:55:37.430 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:37 np0005539563 nova_compute[252253]: 2025-11-29 07:55:37.431 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:55:37 np0005539563 nova_compute[252253]: 2025-11-29 07:55:37.431 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:55:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 296 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 3.1 MiB/s wr, 61 op/s
Nov 29 02:55:38 np0005539563 nova_compute[252253]: 2025-11-29 07:55:38.028 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-c648c560-a045-4d01-a499-b82e0654bbe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:38 np0005539563 nova_compute[252253]: 2025-11-29 07:55:38.030 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-c648c560-a045-4d01-a499-b82e0654bbe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:38 np0005539563 nova_compute[252253]: 2025-11-29 07:55:38.030 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:55:38 np0005539563 nova_compute[252253]: 2025-11-29 07:55:38.031 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c648c560-a045-4d01-a499-b82e0654bbe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:38 np0005539563 nova_compute[252253]: 2025-11-29 07:55:38.293 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:55:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:38.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0b36d6b4aa780617ed4ae5d26995c00936e79f0109ac11fbeb03a1e8d8fd57bc-merged.mount: Deactivated successfully.
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.209 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.226 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-c648c560-a045-4d01-a499-b82e0654bbe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.226 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.227 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.227 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.228 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.228 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:39 np0005539563 nova_compute[252253]: 2025-11-29 07:55:39.228 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:55:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 320 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 4.6 MiB/s wr, 88 op/s
Nov 29 02:55:40 np0005539563 nova_compute[252253]: 2025-11-29 07:55:40.155 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:40 np0005539563 nova_compute[252253]: 2025-11-29 07:55:40.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:40 np0005539563 nova_compute[252253]: 2025-11-29 07:55:40.916 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:41.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:41 np0005539563 podman[282960]: 2025-11-29 07:55:41.700446221 +0000 UTC m=+6.324029109 container remove 317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 348 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 158 op/s
Nov 29 02:55:41 np0005539563 systemd[1]: libpod-conmon-317b59abfa5067d854222d19cfa52427511ddf77104a876027f5f4c44e55a3d7.scope: Deactivated successfully.
Nov 29 02:55:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:42.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:42 np0005539563 podman[283191]: 2025-11-29 07:55:42.44279323 +0000 UTC m=+0.026198562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:42 np0005539563 podman[283191]: 2025-11-29 07:55:42.674441914 +0000 UTC m=+0.257847196 container create a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:55:43 np0005539563 systemd[1]: Started libpod-conmon-a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484.scope.
Nov 29 02:55:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:43 np0005539563 podman[283191]: 2025-11-29 07:55:43.621713961 +0000 UTC m=+1.205119243 container init a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:55:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:43.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:43 np0005539563 podman[283191]: 2025-11-29 07:55:43.631996171 +0000 UTC m=+1.215401413 container start a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:43 np0005539563 dazzling_lamport[283208]: 167 167
Nov 29 02:55:43 np0005539563 systemd[1]: libpod-a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484.scope: Deactivated successfully.
Nov 29 02:55:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 348 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 135 op/s
Nov 29 02:55:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:43 np0005539563 podman[283191]: 2025-11-29 07:55:43.833131536 +0000 UTC m=+1.416536878 container attach a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:55:43 np0005539563 podman[283191]: 2025-11-29 07:55:43.833924177 +0000 UTC m=+1.417329459 container died a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:55:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0ddb86edabca3f3d554aad8cd430bf60e9ff7c2611d42303958abbce70007d08-merged.mount: Deactivated successfully.
Nov 29 02:55:44 np0005539563 podman[283191]: 2025-11-29 07:55:44.207199158 +0000 UTC m=+1.790604400 container remove a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:44 np0005539563 systemd[1]: libpod-conmon-a053fcec35d7a51a5696b7bd4f002b96f9f1b0530a0b7b2e01fefdd9e9a91484.scope: Deactivated successfully.
Nov 29 02:55:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:44 np0005539563 podman[283234]: 2025-11-29 07:55:44.382293636 +0000 UTC m=+0.050382230 container create aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:55:44 np0005539563 systemd[1]: Started libpod-conmon-aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf.scope.
Nov 29 02:55:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:55:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f747528ef59b75269cf39b3018c43b0782c5a76b7f8487adf017c76f229cb597/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f747528ef59b75269cf39b3018c43b0782c5a76b7f8487adf017c76f229cb597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f747528ef59b75269cf39b3018c43b0782c5a76b7f8487adf017c76f229cb597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f747528ef59b75269cf39b3018c43b0782c5a76b7f8487adf017c76f229cb597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:44 np0005539563 podman[283234]: 2025-11-29 07:55:44.358431047 +0000 UTC m=+0.026519641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:44 np0005539563 podman[283234]: 2025-11-29 07:55:44.525815155 +0000 UTC m=+0.193903779 container init aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:44 np0005539563 podman[283234]: 2025-11-29 07:55:44.536936807 +0000 UTC m=+0.205025401 container start aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:55:44 np0005539563 podman[283234]: 2025-11-29 07:55:44.540832363 +0000 UTC m=+0.208920977 container attach aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:55:45 np0005539563 nova_compute[252253]: 2025-11-29 07:55:45.200 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]: {
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:        "osd_id": 0,
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:        "type": "bluestore"
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]:    }
Nov 29 02:55:45 np0005539563 kind_lichterman[283250]: }
Nov 29 02:55:45 np0005539563 systemd[1]: libpod-aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf.scope: Deactivated successfully.
Nov 29 02:55:45 np0005539563 podman[283234]: 2025-11-29 07:55:45.478900569 +0000 UTC m=+1.146989163 container died aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:55:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f747528ef59b75269cf39b3018c43b0782c5a76b7f8487adf017c76f229cb597-merged.mount: Deactivated successfully.
Nov 29 02:55:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:45.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 276 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 242 op/s
Nov 29 02:55:45 np0005539563 podman[283234]: 2025-11-29 07:55:45.717206134 +0000 UTC m=+1.385294718 container remove aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:55:45 np0005539563 systemd[1]: libpod-conmon-aafbbe408a3c5cd81f7b07b2611a6c02bbc7357d549a948d2eaeb66abad555cf.scope: Deactivated successfully.
Nov 29 02:55:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:55:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:55:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 074d3963-c1b5-45de-8dca-7ebff920ead7 does not exist
Nov 29 02:55:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6e1c0232-f1db-4e49-b747-b32dee95ed8e does not exist
Nov 29 02:55:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d1e89273-102a-4b02-a4b3-eb3e70fbd355 does not exist
Nov 29 02:55:45 np0005539563 nova_compute[252253]: 2025-11-29 07:55:45.917 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:55:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.709 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "c648c560-a045-4d01-a499-b82e0654bbe4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.710 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "c648c560-a045-4d01-a499-b82e0654bbe4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.710 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "c648c560-a045-4d01-a499-b82e0654bbe4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.710 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "c648c560-a045-4d01-a499-b82e0654bbe4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.710 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "c648c560-a045-4d01-a499-b82e0654bbe4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.712 252257 INFO nova.compute.manager [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Terminating instance#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.713 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "refresh_cache-c648c560-a045-4d01-a499-b82e0654bbe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.713 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquired lock "refresh_cache-c648c560-a045-4d01-a499-b82e0654bbe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.713 252257 DEBUG nova.network.neutron [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:55:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 276 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Nov 29 02:55:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:47.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:47 np0005539563 nova_compute[252253]: 2025-11-29 07:55:47.880 252257 DEBUG nova.network.neutron [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:55:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:48.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:48 np0005539563 nova_compute[252253]: 2025-11-29 07:55:48.402 252257 DEBUG nova.network.neutron [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:48 np0005539563 nova_compute[252253]: 2025-11-29 07:55:48.417 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Releasing lock "refresh_cache-c648c560-a045-4d01-a499-b82e0654bbe4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:48 np0005539563 nova_compute[252253]: 2025-11-29 07:55:48.418 252257 DEBUG nova.compute.manager [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:55:48 np0005539563 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 29 02:55:48 np0005539563 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002b.scope: Consumed 14.906s CPU time.
Nov 29 02:55:48 np0005539563 systemd-machined[213024]: Machine qemu-19-instance-0000002b terminated.
Nov 29 02:55:48 np0005539563 nova_compute[252253]: 2025-11-29 07:55:48.643 252257 INFO nova.virt.libvirt.driver [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance destroyed successfully.#033[00m
Nov 29 02:55:48 np0005539563 nova_compute[252253]: 2025-11-29 07:55:48.643 252257 DEBUG nova.objects.instance [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lazy-loading 'resources' on Instance uuid c648c560-a045-4d01-a499-b82e0654bbe4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.079 252257 INFO nova.virt.libvirt.driver [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Deleting instance files /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4_del#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.080 252257 INFO nova.virt.libvirt.driver [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Deletion of /var/lib/nova/instances/c648c560-a045-4d01-a499-b82e0654bbe4_del complete#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.137 252257 INFO nova.compute.manager [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.137 252257 DEBUG oslo.service.loopingcall [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.138 252257 DEBUG nova.compute.manager [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.138 252257 DEBUG nova.network.neutron [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.361 252257 DEBUG nova.network.neutron [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.387 252257 DEBUG nova.network.neutron [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.415 252257 INFO nova.compute.manager [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Took 0.28 seconds to deallocate network for instance.#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.484 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.484 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.554 252257 DEBUG oslo_concurrency.processutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 205 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 287 op/s
Nov 29 02:55:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:49.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146170637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.981 252257 DEBUG oslo_concurrency.processutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:49 np0005539563 nova_compute[252253]: 2025-11-29 07:55:49.986 252257 DEBUG nova.compute.provider_tree [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:50 np0005539563 nova_compute[252253]: 2025-11-29 07:55:50.007 252257 DEBUG nova.scheduler.client.report [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:50 np0005539563 nova_compute[252253]: 2025-11-29 07:55:50.033 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:50 np0005539563 nova_compute[252253]: 2025-11-29 07:55:50.072 252257 INFO nova.scheduler.client.report [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Deleted allocations for instance c648c560-a045-4d01-a499-b82e0654bbe4
Nov 29 02:55:50 np0005539563 nova_compute[252253]: 2025-11-29 07:55:50.143 252257 DEBUG oslo_concurrency.lockutils [None req-7bd223b9-a6e5-4e1c-bfc7-1024a68e1a87 db06e8f865ef4c7fbacd588b0c473e37 4a3681cf294441768c28547476705844 - - default default] Lock "c648c560-a045-4d01-a499-b82e0654bbe4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:50 np0005539563 nova_compute[252253]: 2025-11-29 07:55:50.204 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:50.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 29 02:55:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 02:55:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 29 02:55:50 np0005539563 nova_compute[252253]: 2025-11-29 07:55:50.920 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 155 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 128 KiB/s wr, 289 op/s
Nov 29 02:55:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:51.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:52.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 147 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 129 KiB/s wr, 290 op/s
Nov 29 02:55:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:53.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:54.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:55 np0005539563 nova_compute[252253]: 2025-11-29 07:55:55.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:55 np0005539563 podman[283382]: 2025-11-29 07:55:55.509266826 +0000 UTC m=+0.062424288 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:55:55 np0005539563 podman[283381]: 2025-11-29 07:55:55.528777856 +0000 UTC m=+0.084019934 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 02:55:55 np0005539563 podman[283383]: 2025-11-29 07:55:55.53221274 +0000 UTC m=+0.080423197 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 02:55:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 134 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 61 KiB/s wr, 222 op/s
Nov 29 02:55:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:55.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:55 np0005539563 nova_compute[252253]: 2025-11-29 07:55:55.922 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 134 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 61 KiB/s wr, 222 op/s
Nov 29 02:55:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:57.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 29 02:55:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 02:55:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.694 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.695 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.745 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:55:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.914 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.915 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.923 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:55:58 np0005539563 nova_compute[252253]: 2025-11-29 07:55:58.923 252257 INFO nova.compute.claims [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.080 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 02:55:59 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 29 02:55:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535849423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.580 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.586 252257 DEBUG nova.compute.provider_tree [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.610 252257 DEBUG nova.scheduler.client.report [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.635 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.636 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.697 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.698 252257 DEBUG nova.network.neutron [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:55:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 147 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 1.3 MiB/s wr, 126 op/s
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.722 252257 INFO nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.760 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:55:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:55:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:59.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.849 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.850 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.851 252257 INFO nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Creating image(s)
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.874 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.900 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.927 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:59 np0005539563 nova_compute[252253]: 2025-11-29 07:55:59.932 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.013 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.014 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.015 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.015 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.039 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.042 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.115 252257 DEBUG nova.network.neutron [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.116 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:56:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.262 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:00.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:00 np0005539563 nova_compute[252253]: 2025-11-29 07:56:00.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:01.428 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:56:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:01.429 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:56:01 np0005539563 nova_compute[252253]: 2025-11-29 07:56:01.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 174 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Nov 29 02:56:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:01.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 02:56:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:02.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 29 02:56:03 np0005539563 nova_compute[252253]: 2025-11-29 07:56:03.642 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402948.641131, c648c560-a045-4d01-a499-b82e0654bbe4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:56:03 np0005539563 nova_compute[252253]: 2025-11-29 07:56:03.642 252257 INFO nova.compute.manager [-] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] VM Stopped (Lifecycle Event)
Nov 29 02:56:03 np0005539563 nova_compute[252253]: 2025-11-29 07:56:03.685 252257 DEBUG nova.compute.manager [None req-2b1a0e47-26a9-4fd8-8a8a-d4dcf56461de - - - - - -] [instance: c648c560-a045-4d01-a499-b82e0654bbe4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:56:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 188 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 199 op/s
Nov 29 02:56:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:03 np0005539563 nova_compute[252253]: 2025-11-29 07:56:03.877 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:03 np0005539563 nova_compute[252253]: 2025-11-29 07:56:03.964 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] resizing rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:56:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:04.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.590 252257 DEBUG nova.objects.instance [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lazy-loading 'migration_context' on Instance uuid ddb1f3ce-d35e-4812-b13f-ce46430c22c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.604 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.604 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Ensure instance console log exists: /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.605 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.606 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.606 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.609 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.618 252257 WARNING nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.624 252257 DEBUG nova.virt.libvirt.host [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.625 252257 DEBUG nova.virt.libvirt.host [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.630 252257 DEBUG nova.virt.libvirt.host [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.631 252257 DEBUG nova.virt.libvirt.host [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.633 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.634 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.635 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.635 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.636 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.636 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.637 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.637 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.637 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.638 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.638 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.639 252257 DEBUG nova.virt.hardware [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:56:04 np0005539563 nova_compute[252253]: 2025-11-29 07:56:04.644 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 02:56:04 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 29 02:56:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:04.902 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:04.903 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:04.903 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232760435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.174 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.202 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.205 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015177557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.669 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.672 252257 DEBUG nova.objects.instance [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddb1f3ce-d35e-4812-b13f-ce46430c22c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.697 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <uuid>ddb1f3ce-d35e-4812-b13f-ce46430c22c2</uuid>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <name>instance-0000002f</name>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:name>tempest-TenantUsagesTestJSON-server-1281539413</nova:name>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:56:04</nova:creationTime>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:user uuid="61f29c385ebd477cb1758c565c344fc8">tempest-TenantUsagesTestJSON-303450110-project-member</nova:user>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <nova:project uuid="f19eb4884f9c48bf804e00b1e85b70b3">tempest-TenantUsagesTestJSON-303450110</nova:project>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <entry name="serial">ddb1f3ce-d35e-4812-b13f-ce46430c22c2</entry>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <entry name="uuid">ddb1f3ce-d35e-4812-b13f-ce46430c22c2</entry>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk.config">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/console.log" append="off"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:56:05 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:56:05 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:56:05 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:56:05 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 273 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 11 MiB/s wr, 251 op/s
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.762 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.763 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.763 252257 INFO nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Using config drive#033[00m
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.790 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 02:56:05 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 29 02:56:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:05.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:05 np0005539563 nova_compute[252253]: 2025-11-29 07:56:05.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:06 np0005539563 nova_compute[252253]: 2025-11-29 07:56:06.040 252257 INFO nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Creating config drive at /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/disk.config#033[00m
Nov 29 02:56:06 np0005539563 nova_compute[252253]: 2025-11-29 07:56:06.049 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpty8bokf0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:06 np0005539563 nova_compute[252253]: 2025-11-29 07:56:06.187 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpty8bokf0" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:06 np0005539563 nova_compute[252253]: 2025-11-29 07:56:06.228 252257 DEBUG nova.storage.rbd_utils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] rbd image ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:06 np0005539563 nova_compute[252253]: 2025-11-29 07:56:06.232 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/disk.config ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:06.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 273 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 8.7 MiB/s wr, 176 op/s
Nov 29 02:56:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:07.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:08 np0005539563 nova_compute[252253]: 2025-11-29 07:56:08.776 252257 DEBUG oslo_concurrency.processutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/disk.config ddb1f3ce-d35e-4812-b13f-ce46430c22c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:08 np0005539563 nova_compute[252253]: 2025-11-29 07:56:08.777 252257 INFO nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Deleting local config drive /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2/disk.config because it was imported into RBD.#033[00m
Nov 29 02:56:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 29 02:56:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 02:56:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 29 02:56:08 np0005539563 systemd-machined[213024]: New machine qemu-20-instance-0000002f.
Nov 29 02:56:08 np0005539563 systemd[1]: Started Virtual Machine qemu-20-instance-0000002f.
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.270 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402969.2704096, ddb1f3ce-d35e-4812-b13f-ce46430c22c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.271 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.275 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.275 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.278 252257 INFO nova.virt.libvirt.driver [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Instance spawned successfully.#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.279 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.297 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.302 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.302 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.303 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.303 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.304 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.304 252257 DEBUG nova.virt.libvirt.driver [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.308 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.343 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.343 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402969.2741313, ddb1f3ce-d35e-4812-b13f-ce46430c22c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.343 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] VM Started (Lifecycle Event)#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.368 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.371 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.379 252257 INFO nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Took 9.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.380 252257 DEBUG nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.405 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.439 252257 INFO nova.compute.manager [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Took 10.55 seconds to build instance.#033[00m
Nov 29 02:56:09 np0005539563 nova_compute[252253]: 2025-11-29 07:56:09.469 252257 DEBUG oslo_concurrency.lockutils [None req-950ed446-3e1f-4658-b144-809fc9c45ff7 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 281 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 9.6 MiB/s wr, 247 op/s
Nov 29 02:56:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:09.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:10 np0005539563 nova_compute[252253]: 2025-11-29 07:56:10.268 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:10.431 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:10 np0005539563 nova_compute[252253]: 2025-11-29 07:56:10.926 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 232 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.5 MiB/s wr, 228 op/s
Nov 29 02:56:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:11.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.007 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.008 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.008 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.008 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.009 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.010 252257 INFO nova.compute.manager [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Terminating instance#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.011 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "refresh_cache-ddb1f3ce-d35e-4812-b13f-ce46430c22c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.011 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquired lock "refresh_cache-ddb1f3ce-d35e-4812-b13f-ce46430c22c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.011 252257 DEBUG nova.network.neutron [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.193 252257 DEBUG nova.network.neutron [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:56:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.449 252257 DEBUG nova.network.neutron [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.469 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Releasing lock "refresh_cache-ddb1f3ce-d35e-4812-b13f-ce46430c22c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.470 252257 DEBUG nova.compute.manager [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:12 np0005539563 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539563 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002f.scope: Consumed 3.647s CPU time.
Nov 29 02:56:12 np0005539563 systemd-machined[213024]: Machine qemu-20-instance-0000002f terminated.
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.691 252257 INFO nova.virt.libvirt.driver [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Instance destroyed successfully.#033[00m
Nov 29 02:56:12 np0005539563 nova_compute[252253]: 2025-11-29 07:56:12.692 252257 DEBUG nova.objects.instance [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lazy-loading 'resources' on Instance uuid ddb1f3ce-d35e-4812-b13f-ce46430c22c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:56:12
Nov 29 02:56:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:56:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:56:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'backups', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Nov 29 02:56:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.162 252257 INFO nova.virt.libvirt.driver [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Deleting instance files /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2_del
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.163 252257 INFO nova.virt.libvirt.driver [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Deletion of /var/lib/nova/instances/ddb1f3ce-d35e-4812-b13f-ce46430c22c2_del complete
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.245 252257 INFO nova.compute.manager [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.246 252257 DEBUG oslo.service.loopingcall [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.246 252257 DEBUG nova.compute.manager [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.247 252257 DEBUG nova.network.neutron [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.396 252257 DEBUG nova.network.neutron [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.410 252257 DEBUG nova.network.neutron [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.427 252257 INFO nova.compute.manager [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Took 0.18 seconds to deallocate network for instance.
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.467 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.468 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.518 252257 DEBUG oslo_concurrency.processutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 201 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 183 op/s
Nov 29 02:56:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:56:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:56:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 29 02:56:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924029452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.974 252257 DEBUG oslo_concurrency.processutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 29 02:56:13 np0005539563 nova_compute[252253]: 2025-11-29 07:56:13.983 252257 DEBUG nova.compute.provider_tree [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:56:14 np0005539563 nova_compute[252253]: 2025-11-29 07:56:14.000 252257 DEBUG nova.scheduler.client.report [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:56:14 np0005539563 nova_compute[252253]: 2025-11-29 07:56:14.019 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:14 np0005539563 nova_compute[252253]: 2025-11-29 07:56:14.051 252257 INFO nova.scheduler.client.report [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Deleted allocations for instance ddb1f3ce-d35e-4812-b13f-ce46430c22c2
Nov 29 02:56:14 np0005539563 nova_compute[252253]: 2025-11-29 07:56:14.135 252257 DEBUG oslo_concurrency.lockutils [None req-6a1e7255-9b32-4619-82c9-0a2307dfdace 61f29c385ebd477cb1758c565c344fc8 f19eb4884f9c48bf804e00b1e85b70b3 - - default default] Lock "ddb1f3ce-d35e-4812-b13f-ce46430c22c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:14.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:15 np0005539563 nova_compute[252253]: 2025-11-29 07:56:15.292 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 45 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.3 MiB/s wr, 330 op/s
Nov 29 02:56:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:15.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:15 np0005539563 nova_compute[252253]: 2025-11-29 07:56:15.929 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:16.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 45 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 560 KiB/s wr, 269 op/s
Nov 29 02:56:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:17.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.059 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "713e0825-1b56-4572-a0bd-817359261afe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.060 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.079 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.161 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.162 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.170 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.170 252257 INFO nova.compute.claims [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.306 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 67 KiB/s wr, 204 op/s
Nov 29 02:56:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:19.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982467621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.929 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.935 252257 DEBUG nova.compute.provider_tree [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:56:19 np0005539563 nova_compute[252253]: 2025-11-29 07:56:19.966 252257 DEBUG nova.scheduler.client.report [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.015 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.016 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.060 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.061 252257 DEBUG nova.network.neutron [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.078 252257 INFO nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.094 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.170 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.171 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.171 252257 INFO nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Creating image(s)
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.196 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.223 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.250 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.254 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.367 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.368 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.369 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.370 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.406 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.411 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 713e0825-1b56-4572-a0bd-817359261afe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:20.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.444 252257 DEBUG nova.policy [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dbeeaca97c3e4a1b9417ab3e996f721f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '219d722e6a2c4164be5a30e9565f13a0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.703 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 02:56:20 np0005539563 nova_compute[252253]: 2025-11-29 07:56:20.931 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:21 np0005539563 nova_compute[252253]: 2025-11-29 07:56:21.322 252257 DEBUG nova.network.neutron [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Successfully created port: 5ab522b7-2e98-40ec-90ba-cefe3a05e43c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 02:56:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 38 KiB/s wr, 141 op/s
Nov 29 02:56:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:21.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.212 252257 DEBUG nova.network.neutron [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Successfully updated port: 5ab522b7-2e98-40ec-90ba-cefe3a05e43c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.234 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.235 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquired lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.235 252257 DEBUG nova.network.neutron [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.262 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 713e0825-1b56-4572-a0bd-817359261afe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.851s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.346 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] resizing rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 02:56:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:22.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.443 252257 DEBUG nova.compute.manager [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Received event network-changed-5ab522b7-2e98-40ec-90ba-cefe3a05e43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.444 252257 DEBUG nova.compute.manager [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Refreshing instance network info cache due to event network-changed-5ab522b7-2e98-40ec-90ba-cefe3a05e43c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.444 252257 DEBUG oslo_concurrency.lockutils [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:56:22 np0005539563 nova_compute[252253]: 2025-11-29 07:56:22.666 252257 DEBUG nova.network.neutron [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:56:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 45 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 217 KiB/s wr, 132 op/s
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.761450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983761585, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2297, "num_deletes": 259, "total_data_size": 3899244, "memory_usage": 3969720, "flush_reason": "Manual Compaction"}
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.768 252257 DEBUG nova.network.neutron [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Updating instance_info_cache with network_info: [{"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.815 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Releasing lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.815 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Instance network_info: |[{"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.816 252257 DEBUG oslo_concurrency.lockutils [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.816 252257 DEBUG nova.network.neutron [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Refreshing network info cache for port 5ab522b7-2e98-40ec-90ba-cefe3a05e43c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.821 252257 DEBUG nova.objects.instance [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 713e0825-1b56-4572-a0bd-817359261afe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.838 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.839 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Ensure instance console log exists: /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.839 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.840 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.840 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.842 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Start _get_guest_xml network_info=[{"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983847416, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3805713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27842, "largest_seqno": 30138, "table_properties": {"data_size": 3795240, "index_size": 6711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22599, "raw_average_key_size": 21, "raw_value_size": 3773958, "raw_average_value_size": 3527, "num_data_blocks": 289, "num_entries": 1070, "num_filter_entries": 1070, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402783, "oldest_key_time": 1764402783, "file_creation_time": 1764402983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.847 252257 WARNING nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 90148 microseconds, and 12177 cpu microseconds.
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.854 252257 DEBUG nova.virt.libvirt.host [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.855 252257 DEBUG nova.virt.libvirt.host [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.859 252257 DEBUG nova.virt.libvirt.host [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.860 252257 DEBUG nova.virt.libvirt.host [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.861 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.862 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.862 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.863 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.863 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.863 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.864 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.864 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.864 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.865 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.865 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.865 252257 DEBUG nova.virt.hardware [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:56:23 np0005539563 nova_compute[252253]: 2025-11-29 07:56:23.870 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.847498) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3805713 bytes OK
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.851718) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.879921) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.879986) EVENT_LOG_v1 {"time_micros": 1764402983879974, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.880013) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3889662, prev total WAL file size 3889662, number of live WAL files 2.
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.881389) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3716KB)], [62(7551KB)]
Nov 29 02:56:23 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402983881540, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11538187, "oldest_snapshot_seqno": -1}
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5998 keys, 9552153 bytes, temperature: kUnknown
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402984091852, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9552153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9511992, "index_size": 24051, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 154246, "raw_average_key_size": 25, "raw_value_size": 9404047, "raw_average_value_size": 1567, "num_data_blocks": 966, "num_entries": 5998, "num_filter_entries": 5998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764402983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.092092) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9552153 bytes
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.094570) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.8 rd, 45.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 6534, records dropped: 536 output_compression: NoCompression
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.094590) EVENT_LOG_v1 {"time_micros": 1764402984094580, "job": 34, "event": "compaction_finished", "compaction_time_micros": 210381, "compaction_time_cpu_micros": 20531, "output_level": 6, "num_output_files": 1, "total_output_size": 9552153, "num_input_records": 6534, "num_output_records": 5998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402984095255, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402984096465, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:23.881223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.096515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.096519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.096521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.096522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:56:24.096523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250436712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.316 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.345 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.349 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3315717158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.784 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.786 252257 DEBUG nova.virt.libvirt.vif [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-879303292',display_name='tempest-ImagesOneServerNegativeTestJSON-server-879303292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-879303292',id=48,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-tcbdac2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:20Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=713e0825-1b56-4572-a0bd-817359261afe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.787 252257 DEBUG nova.network.os_vif_util [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.788 252257 DEBUG nova.network.os_vif_util [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.789 252257 DEBUG nova.objects.instance [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 713e0825-1b56-4572-a0bd-817359261afe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.812 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <uuid>713e0825-1b56-4572-a0bd-817359261afe</uuid>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <name>instance-00000030</name>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-879303292</nova:name>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:56:23</nova:creationTime>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:user uuid="dbeeaca97c3e4a1b9417ab3e996f721f">tempest-ImagesOneServerNegativeTestJSON-267959441-project-member</nova:user>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:project uuid="219d722e6a2c4164be5a30e9565f13a0">tempest-ImagesOneServerNegativeTestJSON-267959441</nova:project>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <nova:port uuid="5ab522b7-2e98-40ec-90ba-cefe3a05e43c">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <entry name="serial">713e0825-1b56-4572-a0bd-817359261afe</entry>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <entry name="uuid">713e0825-1b56-4572-a0bd-817359261afe</entry>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/713e0825-1b56-4572-a0bd-817359261afe_disk">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/713e0825-1b56-4572-a0bd-817359261afe_disk.config">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:97:a6:bd"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <target dev="tap5ab522b7-2e"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/console.log" append="off"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:56:24 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:56:24 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:56:24 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:56:24 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.813 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Preparing to wait for external event network-vif-plugged-5ab522b7-2e98-40ec-90ba-cefe3a05e43c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.813 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "713e0825-1b56-4572-a0bd-817359261afe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.814 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.814 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.814 252257 DEBUG nova.virt.libvirt.vif [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-879303292',display_name='tempest-ImagesOneServerNegativeTestJSON-server-879303292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-879303292',id=48,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-tcbdac2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:20Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=713e0825-1b56-4572-a0bd-817359261afe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.815 252257 DEBUG nova.network.os_vif_util [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.815 252257 DEBUG nova.network.os_vif_util [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.816 252257 DEBUG os_vif [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.817 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.817 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.820 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.820 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ab522b7-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.821 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ab522b7-2e, col_values=(('external_ids', {'iface-id': '5ab522b7-2e98-40ec-90ba-cefe3a05e43c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:a6:bd', 'vm-uuid': '713e0825-1b56-4572-a0bd-817359261afe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.884 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:24 np0005539563 NetworkManager[48981]: <info>  [1764402984.8849] manager: (tap5ab522b7-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.887 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.890 252257 INFO os_vif [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e')#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.987 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.987 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.987 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No VIF found with MAC fa:16:3e:97:a6:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:56:24 np0005539563 nova_compute[252253]: 2025-11-29 07:56:24.988 252257 INFO nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Using config drive#033[00m
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.014 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 88 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.759 252257 INFO nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Creating config drive at /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/disk.config#033[00m
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.764 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr9nemi0g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.890 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr9nemi0g" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.917 252257 DEBUG nova.storage.rbd_utils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] rbd image 713e0825-1b56-4572-a0bd-817359261afe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.921 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/disk.config 713e0825-1b56-4572-a0bd-817359261afe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:25 np0005539563 nova_compute[252253]: 2025-11-29 07:56:25.942 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.177 252257 DEBUG nova.network.neutron [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Updated VIF entry in instance network info cache for port 5ab522b7-2e98-40ec-90ba-cefe3a05e43c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.178 252257 DEBUG nova.network.neutron [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Updating instance_info_cache with network_info: [{"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.206 252257 DEBUG oslo_concurrency.lockutils [req-ec4cebe5-6b01-418e-974d-53e27b29b96d req-22ecc4fa-e93d-411b-8940-0f439458259e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.367 252257 DEBUG oslo_concurrency.processutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/disk.config 713e0825-1b56-4572-a0bd-817359261afe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.368 252257 INFO nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Deleting local config drive /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe/disk.config because it was imported into RBD.#033[00m
Nov 29 02:56:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:26 np0005539563 kernel: tap5ab522b7-2e: entered promiscuous mode
Nov 29 02:56:26 np0005539563 NetworkManager[48981]: <info>  [1764402986.4375] manager: (tap5ab522b7-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.435 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:26Z|00138|binding|INFO|Claiming lport 5ab522b7-2e98-40ec-90ba-cefe3a05e43c for this chassis.
Nov 29 02:56:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:26Z|00139|binding|INFO|5ab522b7-2e98-40ec-90ba-cefe3a05e43c: Claiming fa:16:3e:97:a6:bd 10.100.0.11
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.439 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.456 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a6:bd 10.100.0.11'], port_security=['fa:16:3e:97:a6:bd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '713e0825-1b56-4572-a0bd-817359261afe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fc634c-e12c-411b-a4cb-47f24328da03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219d722e6a2c4164be5a30e9565f13a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '402e53f5-f525-45fd-8980-0b9445b1b6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7363dfa8-cf81-4433-ae14-b180682ce437, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=5ab522b7-2e98-40ec-90ba-cefe3a05e43c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.459 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 5ab522b7-2e98-40ec-90ba-cefe3a05e43c in datapath 57fc634c-e12c-411b-a4cb-47f24328da03 bound to our chassis#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.461 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57fc634c-e12c-411b-a4cb-47f24328da03#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.475 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[19fa3129-a38e-4a00-b224-f37efdf67c00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.478 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57fc634c-e1 in ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.480 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57fc634c-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.480 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[814162a0-8b68-420e-8611-fb57d7f36657]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.481 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e620d0-ec55-4e26-9d8e-212b0e019b3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 systemd-machined[213024]: New machine qemu-21-instance-00000030.
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.496 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[696071a7-c907-48f8-a4b0-a1dff9faf4a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 systemd[1]: Started Virtual Machine qemu-21-instance-00000030.
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.512 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 systemd-udevd[284322]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.516 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e139291-47da-4443-b50d-e94dfecfb38d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:26Z|00140|binding|INFO|Setting lport 5ab522b7-2e98-40ec-90ba-cefe3a05e43c ovn-installed in OVS
Nov 29 02:56:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:26Z|00141|binding|INFO|Setting lport 5ab522b7-2e98-40ec-90ba-cefe3a05e43c up in Southbound
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.526 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 NetworkManager[48981]: <info>  [1764402986.5322] device (tap5ab522b7-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:56:26 np0005539563 NetworkManager[48981]: <info>  [1764402986.5336] device (tap5ab522b7-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.549 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[96da8cd3-9041-4cbe-b4f8-b17eec34d1b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 podman[284287]: 2025-11-29 07:56:26.555667549 +0000 UTC m=+0.096830611 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:56:26 np0005539563 systemd-udevd[284341]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:56:26 np0005539563 NetworkManager[48981]: <info>  [1764402986.5588] manager: (tap57fc634c-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.559 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5fae1c-2798-4fe5-aa25-bb05f56eb953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 podman[284291]: 2025-11-29 07:56:26.589719704 +0000 UTC m=+0.121421690 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.589 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ea49dec9-de90-4eef-84be-bd466bbe2008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 podman[284292]: 2025-11-29 07:56:26.59141347 +0000 UTC m=+0.132609883 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.593 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6715cf1a-b4c9-486d-a78c-ab0e3618d767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 NetworkManager[48981]: <info>  [1764402986.6144] device (tap57fc634c-e0): carrier: link connected
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.618 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1febe6-6a17-41da-a64a-f9aa588713d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.631 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[223b44f3-188f-4f6f-ad09-6efc5e3b9345]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fc634c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:45:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595438, 'reachable_time': 16166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284386, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.654 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb8a8c3-8d33-4545-9b91-77eccd151d0d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:458f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595438, 'tstamp': 595438}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284387, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.673 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba03839-40e9-429c-bfa9-05467408e21d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57fc634c-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:45:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595438, 'reachable_time': 16166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284388, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.711 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[20f4257e-5de6-4fc5-af40-e6376f33f2db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.795 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[03c758cb-8f54-49ed-b2c5-f0f6faf50191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.797 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fc634c-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.797 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.798 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57fc634c-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 NetworkManager[48981]: <info>  [1764402986.8010] manager: (tap57fc634c-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 02:56:26 np0005539563 kernel: tap57fc634c-e0: entered promiscuous mode
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.805 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57fc634c-e0, col_values=(('external_ids', {'iface-id': '423869da-3f5f-4bc2-bc87-d86bc5a0c7ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:26 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:26Z|00142|binding|INFO|Releasing lport 423869da-3f5f-4bc2-bc87-d86bc5a0c7ce from this chassis (sb_readonly=0)
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.806 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.808 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.809 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1fedf9b3-3b57-41ec-b39c-ee3be9e73c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.810 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-57fc634c-e12c-411b-a4cb-47f24328da03
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/57fc634c-e12c-411b-a4cb-47f24328da03.pid.haproxy
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 57fc634c-e12c-411b-a4cb-47f24328da03
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:56:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:26.813 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'env', 'PROCESS_TAG=haproxy-57fc634c-e12c-411b-a4cb-47f24328da03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57fc634c-e12c-411b-a4cb-47f24328da03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:56:26 np0005539563 nova_compute[252253]: 2025-11-29 07:56:26.822 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.113 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402987.1125157, 713e0825-1b56-4572-a0bd-817359261afe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.115 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] VM Started (Lifecycle Event)#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.145 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.149 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402987.1127024, 713e0825-1b56-4572-a0bd-817359261afe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.149 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.166 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.170 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:27 np0005539563 podman[284463]: 2025-11-29 07:56:27.183877577 +0000 UTC m=+0.056243119 container create 3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.192 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:27 np0005539563 systemd[1]: Started libpod-conmon-3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108.scope.
Nov 29 02:56:27 np0005539563 podman[284463]: 2025-11-29 07:56:27.153236945 +0000 UTC m=+0.025602537 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:56:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51ae11366d66f4d64dbcd1d56314a5210746d9421956eb85955cbd8418c9e87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:27 np0005539563 podman[284463]: 2025-11-29 07:56:27.28517617 +0000 UTC m=+0.157541712 container init 3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:27 np0005539563 podman[284463]: 2025-11-29 07:56:27.291196843 +0000 UTC m=+0.163562375 container start 3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 02:56:27 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [NOTICE]   (284482) : New worker (284484) forked
Nov 29 02:56:27 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [NOTICE]   (284482) : Loading success.
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.689 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402972.6881375, ddb1f3ce-d35e-4812-b13f-ce46430c22c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.690 252257 INFO nova.compute.manager [-] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:27 np0005539563 nova_compute[252253]: 2025-11-29 07:56:27.714 252257 DEBUG nova.compute.manager [None req-7ce65576-27b5-4bdd-b7bf-a3bec4132bf6 - - - - - -] [instance: ddb1f3ce-d35e-4812-b13f-ce46430c22c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 88 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:56:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 111 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.8 MiB/s wr, 62 op/s
Nov 29 02:56:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3107489570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:29.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:29 np0005539563 nova_compute[252253]: 2025-11-29 07:56:29.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:30.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:30 np0005539563 nova_compute[252253]: 2025-11-29 07:56:30.697 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:30 np0005539563 nova_compute[252253]: 2025-11-29 07:56:30.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 02:56:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:56:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:31.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:56:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:32.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:32 np0005539563 nova_compute[252253]: 2025-11-29 07:56:32.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.294 252257 DEBUG nova.compute.manager [req-8633ca76-f4b8-4bcc-bc65-803a12ad1737 req-6dec1289-6541-4967-a397-c442c9281b0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Received event network-vif-plugged-5ab522b7-2e98-40ec-90ba-cefe3a05e43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.295 252257 DEBUG oslo_concurrency.lockutils [req-8633ca76-f4b8-4bcc-bc65-803a12ad1737 req-6dec1289-6541-4967-a397-c442c9281b0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "713e0825-1b56-4572-a0bd-817359261afe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.295 252257 DEBUG oslo_concurrency.lockutils [req-8633ca76-f4b8-4bcc-bc65-803a12ad1737 req-6dec1289-6541-4967-a397-c442c9281b0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.296 252257 DEBUG oslo_concurrency.lockutils [req-8633ca76-f4b8-4bcc-bc65-803a12ad1737 req-6dec1289-6541-4967-a397-c442c9281b0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.296 252257 DEBUG nova.compute.manager [req-8633ca76-f4b8-4bcc-bc65-803a12ad1737 req-6dec1289-6541-4967-a397-c442c9281b0a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Processing event network-vif-plugged-5ab522b7-2e98-40ec-90ba-cefe3a05e43c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.297 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.301 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764402993.3013675, 713e0825-1b56-4572-a0bd-817359261afe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.302 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.305 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.309 252257 INFO nova.virt.libvirt.driver [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Instance spawned successfully.#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.310 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.325 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.331 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.348 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.349 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.350 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.350 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.351 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.352 252257 DEBUG nova.virt.libvirt.driver [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.357 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.418 252257 INFO nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Took 13.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.420 252257 DEBUG nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.491 252257 INFO nova.compute.manager [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Took 14.36 seconds to build instance.#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.517 252257 DEBUG oslo_concurrency.lockutils [None req-f592d4d9-4bc5-47ae-9144-f650e0a739b7 dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.709 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.711 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:56:33 np0005539563 nova_compute[252253]: 2025-11-29 07:56:33.711 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 02:56:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:33.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58463716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.176 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.250 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.250 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.407 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.408 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=20.94662857055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.408 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.408 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:34.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.622 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 713e0825-1b56-4572-a0bd-817359261afe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.623 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.623 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.721 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:34 np0005539563 nova_compute[252253]: 2025-11-29 07:56:34.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308935255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.141 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.148 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.185 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.227 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.227 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.456 252257 DEBUG nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Received event network-vif-plugged-5ab522b7-2e98-40ec-90ba-cefe3a05e43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.457 252257 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "713e0825-1b56-4572-a0bd-817359261afe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.457 252257 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.458 252257 DEBUG oslo_concurrency.lockutils [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.458 252257 DEBUG nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] No waiting events found dispatching network-vif-plugged-5ab522b7-2e98-40ec-90ba-cefe3a05e43c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.459 252257 WARNING nova.compute.manager [req-edf5c3fe-27a9-40b1-9e01-79dc6e913ca2 req-36149755-1764-48aa-b279-dc00d3b67e4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Received unexpected event network-vif-plugged-5ab522b7-2e98-40ec-90ba-cefe3a05e43c for instance with vm_state active and task_state None.#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.627 252257 DEBUG nova.compute.manager [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 111 op/s
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.733 252257 INFO nova.compute.manager [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] instance snapshotting#033[00m
Nov 29 02:56:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.938 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539563 nova_compute[252253]: 2025-11-29 07:56:35.991 252257 INFO nova.virt.libvirt.driver [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Beginning live snapshot process#033[00m
Nov 29 02:56:36 np0005539563 nova_compute[252253]: 2025-11-29 07:56:36.175 252257 DEBUG nova.virt.libvirt.imagebackend [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 02:56:36 np0005539563 nova_compute[252253]: 2025-11-29 07:56:36.363 252257 DEBUG nova.storage.rbd_utils [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] creating snapshot(3a3a342eec7e41d393bbf0dbda12d158) on rbd image(713e0825-1b56-4572-a0bd-817359261afe_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:56:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:36.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 29 02:56:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 02:56:36 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 29 02:56:36 np0005539563 nova_compute[252253]: 2025-11-29 07:56:36.928 252257 DEBUG nova.storage.rbd_utils [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] cloning vms/713e0825-1b56-4572-a0bd-817359261afe_disk@3a3a342eec7e41d393bbf0dbda12d158 to images/4997c363-307f-4918-8be4-b61ff6b9c1ff clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.114 252257 DEBUG nova.storage.rbd_utils [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] flattening images/4997c363-307f-4918-8be4-b61ff6b9c1ff flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.227 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.228 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.229 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.260 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.261 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.261 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.261 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 713e0825-1b56-4572-a0bd-817359261afe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:37 np0005539563 nova_compute[252253]: 2025-11-29 07:56:37.504 252257 DEBUG nova.storage.rbd_utils [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] removing snapshot(3a3a342eec7e41d393bbf0dbda12d158) on rbd image(713e0825-1b56-4572-a0bd-817359261afe_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:56:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 134 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Nov 29 02:56:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 29 02:56:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 02:56:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 29 02:56:38 np0005539563 nova_compute[252253]: 2025-11-29 07:56:38.086 252257 DEBUG nova.storage.rbd_utils [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] creating snapshot(snap) on rbd image(4997c363-307f-4918-8be4-b61ff6b9c1ff) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:56:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:38.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 29 02:56:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 02:56:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 4997c363-307f-4918-8be4-b61ff6b9c1ff could not be found.
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 4997c363-307f-4918-8be4-b61ff6b9c1ff
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver 
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver 
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 4997c363-307f-4918-8be4-b61ff6b9c1ff could not be found.
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.239 252257 ERROR nova.virt.libvirt.driver #033[00m
Nov 29 02:56:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 143 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.1 MiB/s wr, 334 op/s
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.745 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Updating instance_info_cache with network_info: [{"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.773 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-713e0825-1b56-4572-a0bd-817359261afe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.774 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.775 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.775 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.775 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.776 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:39 np0005539563 nova_compute[252253]: 2025-11-29 07:56:39.893 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:40 np0005539563 nova_compute[252253]: 2025-11-29 07:56:40.431 252257 DEBUG nova.storage.rbd_utils [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] removing snapshot(snap) on rbd image(4997c363-307f-4918-8be4-b61ff6b9c1ff) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:56:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:40.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:40 np0005539563 nova_compute[252253]: 2025-11-29 07:56:40.702 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:40 np0005539563 nova_compute[252253]: 2025-11-29 07:56:40.703 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:56:40 np0005539563 nova_compute[252253]: 2025-11-29 07:56:40.941 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:41 np0005539563 nova_compute[252253]: 2025-11-29 07:56:41.690 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 139 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.5 MiB/s wr, 311 op/s
Nov 29 02:56:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 29 02:56:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 02:56:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 29 02:56:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:42 np0005539563 nova_compute[252253]: 2025-11-29 07:56:42.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:42 np0005539563 nova_compute[252253]: 2025-11-29 07:56:42.861 252257 WARNING nova.compute.manager [None req-562be1f7-359b-40fe-9014-90623053be2c dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Image not found during snapshot: nova.exception.ImageNotFound: Image 4997c363-307f-4918-8be4-b61ff6b9c1ff could not be found.#033[00m
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 130 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.6 MiB/s wr, 321 op/s
Nov 29 02:56:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:43.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.549 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "713e0825-1b56-4572-a0bd-817359261afe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.550 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.550 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "713e0825-1b56-4572-a0bd-817359261afe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.551 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.551 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.552 252257 INFO nova.compute.manager [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Terminating instance#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.553 252257 DEBUG nova.compute.manager [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:44 np0005539563 nova_compute[252253]: 2025-11-29 07:56:44.897 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 kernel: tap5ab522b7-2e (unregistering): left promiscuous mode
Nov 29 02:56:45 np0005539563 NetworkManager[48981]: <info>  [1764403005.2291] device (tap5ab522b7-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:45Z|00143|binding|INFO|Releasing lport 5ab522b7-2e98-40ec-90ba-cefe3a05e43c from this chassis (sb_readonly=0)
Nov 29 02:56:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:45Z|00144|binding|INFO|Setting lport 5ab522b7-2e98-40ec-90ba-cefe3a05e43c down in Southbound
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.242 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 ovn_controller[148841]: 2025-11-29T07:56:45Z|00145|binding|INFO|Removing iface tap5ab522b7-2e ovn-installed in OVS
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.246 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.250 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:a6:bd 10.100.0.11'], port_security=['fa:16:3e:97:a6:bd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '713e0825-1b56-4572-a0bd-817359261afe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57fc634c-e12c-411b-a4cb-47f24328da03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '219d722e6a2c4164be5a30e9565f13a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '402e53f5-f525-45fd-8980-0b9445b1b6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7363dfa8-cf81-4433-ae14-b180682ce437, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=5ab522b7-2e98-40ec-90ba-cefe3a05e43c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.252 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 5ab522b7-2e98-40ec-90ba-cefe3a05e43c in datapath 57fc634c-e12c-411b-a4cb-47f24328da03 unbound from our chassis#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.253 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57fc634c-e12c-411b-a4cb-47f24328da03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.255 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[365d7c1c-23ea-4f31-a314-8a9a92fbbc93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.256 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 namespace which is not needed anymore#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.276 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 29 02:56:45 np0005539563 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000030.scope: Consumed 11.788s CPU time.
Nov 29 02:56:45 np0005539563 systemd-machined[213024]: Machine qemu-21-instance-00000030 terminated.
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.371 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.376 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.385 252257 INFO nova.virt.libvirt.driver [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Instance destroyed successfully.#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.386 252257 DEBUG nova.objects.instance [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lazy-loading 'resources' on Instance uuid 713e0825-1b56-4572-a0bd-817359261afe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.401 252257 DEBUG nova.virt.libvirt.vif [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:56:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-879303292',display_name='tempest-ImagesOneServerNegativeTestJSON-server-879303292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-879303292',id=48,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:56:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='219d722e6a2c4164be5a30e9565f13a0',ramdisk_id='',reservation_id='r-tcbdac2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-267959441',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-267959441-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:56:42Z,user_data=None,user_id='dbeeaca97c3e4a1b9417ab3e996f721f',uuid=713e0825-1b56-4572-a0bd-817359261afe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:45 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [NOTICE]   (284482) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:45 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [NOTICE]   (284482) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:45 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [WARNING]  (284482) : Exiting Master process...
Nov 29 02:56:45 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [ALERT]    (284482) : Current worker (284484) exited with code 143 (Terminated)
Nov 29 02:56:45 np0005539563 neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03[284478]: [WARNING]  (284482) : All workers exited. Exiting... (0)
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.403 252257 DEBUG nova.network.os_vif_util [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converting VIF {"id": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "address": "fa:16:3e:97:a6:bd", "network": {"id": "57fc634c-e12c-411b-a4cb-47f24328da03", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1913611615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "219d722e6a2c4164be5a30e9565f13a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ab522b7-2e", "ovs_interfaceid": "5ab522b7-2e98-40ec-90ba-cefe3a05e43c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.405 252257 DEBUG nova.network.os_vif_util [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.405 252257 DEBUG os_vif [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:45 np0005539563 systemd[1]: libpod-3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108.scope: Deactivated successfully.
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.408 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.408 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ab522b7-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:45 np0005539563 podman[284801]: 2025-11-29 07:56:45.413316208 +0000 UTC m=+0.060986169 container died 3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.451 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.452 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.455 252257 INFO os_vif [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:a6:bd,bridge_name='br-int',has_traffic_filtering=True,id=5ab522b7-2e98-40ec-90ba-cefe3a05e43c,network=Network(57fc634c-e12c-411b-a4cb-47f24328da03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ab522b7-2e')#033[00m
Nov 29 02:56:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c51ae11366d66f4d64dbcd1d56314a5210746d9421956eb85955cbd8418c9e87-merged.mount: Deactivated successfully.
Nov 29 02:56:45 np0005539563 podman[284801]: 2025-11-29 07:56:45.491025758 +0000 UTC m=+0.138695729 container cleanup 3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:45 np0005539563 systemd[1]: libpod-conmon-3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108.scope: Deactivated successfully.
Nov 29 02:56:45 np0005539563 podman[284858]: 2025-11-29 07:56:45.601684785 +0000 UTC m=+0.089496892 container remove 3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.611 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae69761-7c1b-4465-9600-731ef0bd351e]: (4, ('Sat Nov 29 07:56:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 (3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108)\n3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108\nSat Nov 29 07:56:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 (3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108)\n3585127ed4eaa631bbc4b969399e33a19dcf4e5ac85f3e6e438195eb364d2108\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.613 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d667c5c6-689b-4a80-95c2-ef1fc420ba23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.614 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57fc634c-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:45 np0005539563 kernel: tap57fc634c-e0: left promiscuous mode
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.640 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.642 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.643 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ec1e77-8831-4af2-8486-a555f62418cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.660 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c68dccf2-f832-4e6c-a131-9555490a5cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.661 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e64c0a-fc2e-49df-8fcf-5633379facb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.678 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f56f4d-2b8b-40c3-9b96-2bd638f84531]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595431, 'reachable_time': 40073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284873, 'error': None, 'target': 'ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 systemd[1]: run-netns-ovnmeta\x2d57fc634c\x2de12c\x2d411b\x2da4cb\x2d47f24328da03.mount: Deactivated successfully.
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.682 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57fc634c-e12c-411b-a4cb-47f24328da03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:45.683 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0e36ceea-d0b3-4137-9460-9a9fd410ef9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.4 MiB/s wr, 218 op/s
Nov 29 02:56:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.943 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.995 252257 INFO nova.virt.libvirt.driver [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Deleting instance files /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe_del#033[00m
Nov 29 02:56:45 np0005539563 nova_compute[252253]: 2025-11-29 07:56:45.996 252257 INFO nova.virt.libvirt.driver [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Deletion of /var/lib/nova/instances/713e0825-1b56-4572-a0bd-817359261afe_del complete#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.053 252257 INFO nova.compute.manager [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Took 1.50 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.054 252257 DEBUG oslo.service.loopingcall [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.054 252257 DEBUG nova.compute.manager [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.054 252257 DEBUG nova.network.neutron [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.757 252257 DEBUG nova.network.neutron [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.787 252257 INFO nova.compute.manager [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.835 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.836 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.861 252257 DEBUG nova.compute.manager [req-7ad72c70-ed90-4e18-bcd4-db0ec1ff1220 req-0cb3a7af-c53a-4bd8-adfd-4385365c42e0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Received event network-vif-deleted-5ab522b7-2e98-40ec-90ba-cefe3a05e43c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.883 252257 DEBUG oslo_concurrency.processutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:46.903 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:46.904 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:56:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:56:46.905 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:46 np0005539563 nova_compute[252253]: 2025-11-29 07:56:46.910 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481685325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:47 np0005539563 nova_compute[252253]: 2025-11-29 07:56:47.352 252257 DEBUG oslo_concurrency.processutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:47 np0005539563 nova_compute[252253]: 2025-11-29 07:56:47.360 252257 DEBUG nova.compute.provider_tree [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:47 np0005539563 nova_compute[252253]: 2025-11-29 07:56:47.390 252257 DEBUG nova.scheduler.client.report [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:47 np0005539563 nova_compute[252253]: 2025-11-29 07:56:47.418 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:47 np0005539563 nova_compute[252253]: 2025-11-29 07:56:47.451 252257 INFO nova.scheduler.client.report [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Deleted allocations for instance 713e0825-1b56-4572-a0bd-817359261afe#033[00m
Nov 29 02:56:47 np0005539563 nova_compute[252253]: 2025-11-29 07:56:47.510 252257 DEBUG oslo_concurrency.lockutils [None req-b8f9b0c7-8319-4edb-b49b-fe0aec1ce18f dbeeaca97c3e4a1b9417ab3e996f721f 219d722e6a2c4164be5a30e9565f13a0 - - default default] Lock "713e0825-1b56-4572-a0bd-817359261afe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 88 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.0 MiB/s wr, 102 op/s
Nov 29 02:56:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:48.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c07b638e-e0d9-41af-9b7e-66895b0f8a67 does not exist
Nov 29 02:56:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2fc7af2a-43eb-4eb3-ac03-74a9533e9be3 does not exist
Nov 29 02:56:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fa023209-1cd8-4433-be78-99e01758c210 does not exist
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:48 np0005539563 podman[285172]: 2025-11-29 07:56:48.572361195 +0000 UTC m=+0.034467757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 29 02:56:49 np0005539563 podman[285172]: 2025-11-29 07:56:49.295355949 +0000 UTC m=+0.757462541 container create eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:56:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 02:56:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 29 02:56:49 np0005539563 systemd[1]: Started libpod-conmon-eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5.scope.
Nov 29 02:56:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:49 np0005539563 podman[285172]: 2025-11-29 07:56:49.479135882 +0000 UTC m=+0.941242454 container init eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:56:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:49 np0005539563 podman[285172]: 2025-11-29 07:56:49.489326589 +0000 UTC m=+0.951433131 container start eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:56:49 np0005539563 podman[285172]: 2025-11-29 07:56:49.494679135 +0000 UTC m=+0.956785697 container attach eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:56:49 np0005539563 nostalgic_ishizaka[285189]: 167 167
Nov 29 02:56:49 np0005539563 systemd[1]: libpod-eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5.scope: Deactivated successfully.
Nov 29 02:56:49 np0005539563 podman[285172]: 2025-11-29 07:56:49.496566485 +0000 UTC m=+0.958673027 container died eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:56:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1aa1c0e52396e851a7f584e18f1325a2e36989a4db28564fc26e7d6119ee9592-merged.mount: Deactivated successfully.
Nov 29 02:56:49 np0005539563 podman[285172]: 2025-11-29 07:56:49.55927828 +0000 UTC m=+1.021384822 container remove eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:56:49 np0005539563 systemd[1]: libpod-conmon-eacbf44365151ef16c7286e15c34f688e87d88088df71ea64b6bdddd180702e5.scope: Deactivated successfully.
Nov 29 02:56:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 66 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 75 op/s
Nov 29 02:56:49 np0005539563 podman[285212]: 2025-11-29 07:56:49.687112942 +0000 UTC m=+0.021303039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:50 np0005539563 nova_compute[252253]: 2025-11-29 07:56:50.452 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:50 np0005539563 podman[285212]: 2025-11-29 07:56:50.642455118 +0000 UTC m=+0.976645205 container create 230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:56:50 np0005539563 systemd[1]: Started libpod-conmon-230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e.scope.
Nov 29 02:56:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6e760ee1160b0b3ce7bd21a7eef6a7c9eda9bfb32725adcab006ab271c78b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6e760ee1160b0b3ce7bd21a7eef6a7c9eda9bfb32725adcab006ab271c78b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6e760ee1160b0b3ce7bd21a7eef6a7c9eda9bfb32725adcab006ab271c78b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6e760ee1160b0b3ce7bd21a7eef6a7c9eda9bfb32725adcab006ab271c78b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6e760ee1160b0b3ce7bd21a7eef6a7c9eda9bfb32725adcab006ab271c78b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:50 np0005539563 podman[285212]: 2025-11-29 07:56:50.767858796 +0000 UTC m=+1.102048903 container init 230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:56:50 np0005539563 podman[285212]: 2025-11-29 07:56:50.779925463 +0000 UTC m=+1.114115550 container start 230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hoover, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:56:50 np0005539563 podman[285212]: 2025-11-29 07:56:50.839981346 +0000 UTC m=+1.174171423 container attach 230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:56:50 np0005539563 nova_compute[252253]: 2025-11-29 07:56:50.945 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:51 np0005539563 heuristic_hoover[285230]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:56:51 np0005539563 heuristic_hoover[285230]: --> relative data size: 1.0
Nov 29 02:56:51 np0005539563 heuristic_hoover[285230]: --> All data devices are unavailable
Nov 29 02:56:51 np0005539563 systemd[1]: libpod-230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e.scope: Deactivated successfully.
Nov 29 02:56:51 np0005539563 podman[285212]: 2025-11-29 07:56:51.640915076 +0000 UTC m=+1.975105153 container died 230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hoover, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:56:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 3.8 KiB/s wr, 69 op/s
Nov 29 02:56:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4c6e760ee1160b0b3ce7bd21a7eef6a7c9eda9bfb32725adcab006ab271c78b0-merged.mount: Deactivated successfully.
Nov 29 02:56:52 np0005539563 podman[285212]: 2025-11-29 07:56:52.443674466 +0000 UTC m=+2.777864543 container remove 230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:52 np0005539563 systemd[1]: libpod-conmon-230439e3e2a95650309224b7ba6d4190ddb2205a83a1cdf0f676ba723e097b9e.scope: Deactivated successfully.
Nov 29 02:56:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:52.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:53.010059045 +0000 UTC m=+0.040558293 container create 4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:53 np0005539563 systemd[1]: Started libpod-conmon-4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad.scope.
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:52.991766768 +0000 UTC m=+0.022266036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:53.109663241 +0000 UTC m=+0.140162499 container init 4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:53.116347642 +0000 UTC m=+0.146846930 container start 4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:53.120579437 +0000 UTC m=+0.151078675 container attach 4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:56:53 np0005539563 strange_williams[285418]: 167 167
Nov 29 02:56:53 np0005539563 systemd[1]: libpod-4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad.scope: Deactivated successfully.
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:53.126401066 +0000 UTC m=+0.156900414 container died 4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:56:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-730f80137211a24b05393d1c9ad1884f052577509e956da300f37fa7f71ec372-merged.mount: Deactivated successfully.
Nov 29 02:56:53 np0005539563 podman[285401]: 2025-11-29 07:56:53.170320359 +0000 UTC m=+0.200819627 container remove 4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:56:53 np0005539563 systemd[1]: libpod-conmon-4c45dcfbb1ec3226a954a29f97d92be3d4283ef0318c18688cf060f308b208ad.scope: Deactivated successfully.
Nov 29 02:56:53 np0005539563 podman[285442]: 2025-11-29 07:56:53.321331352 +0000 UTC m=+0.042814005 container create ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:56:53 np0005539563 systemd[1]: Started libpod-conmon-ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd.scope.
Nov 29 02:56:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c696e10288b50b8f0b774615f6016962155f670010ac3f7d9a5bbde70875856e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c696e10288b50b8f0b774615f6016962155f670010ac3f7d9a5bbde70875856e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c696e10288b50b8f0b774615f6016962155f670010ac3f7d9a5bbde70875856e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c696e10288b50b8f0b774615f6016962155f670010ac3f7d9a5bbde70875856e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:53 np0005539563 podman[285442]: 2025-11-29 07:56:53.303023844 +0000 UTC m=+0.024506507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:53 np0005539563 podman[285442]: 2025-11-29 07:56:53.406363202 +0000 UTC m=+0.127845865 container init ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:56:53 np0005539563 podman[285442]: 2025-11-29 07:56:53.419826018 +0000 UTC m=+0.141308671 container start ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:56:53 np0005539563 podman[285442]: 2025-11-29 07:56:53.426306394 +0000 UTC m=+0.147789057 container attach ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:56:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 29 02:56:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 02:56:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 29 02:56:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.9 KiB/s wr, 50 op/s
Nov 29 02:56:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]: {
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:    "0": [
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:        {
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "devices": [
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "/dev/loop3"
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            ],
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "lv_name": "ceph_lv0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "lv_size": "7511998464",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "name": "ceph_lv0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "tags": {
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.cluster_name": "ceph",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.crush_device_class": "",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.encrypted": "0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.osd_id": "0",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.type": "block",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:                "ceph.vdo": "0"
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            },
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "type": "block",
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:            "vg_name": "ceph_vg0"
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:        }
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]:    ]
Nov 29 02:56:54 np0005539563 gifted_satoshi[285459]: }
Nov 29 02:56:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 29 02:56:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 02:56:54 np0005539563 systemd[1]: libpod-ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd.scope: Deactivated successfully.
Nov 29 02:56:54 np0005539563 podman[285442]: 2025-11-29 07:56:54.19432883 +0000 UTC m=+0.915811473 container died ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_satoshi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 29 02:56:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c696e10288b50b8f0b774615f6016962155f670010ac3f7d9a5bbde70875856e-merged.mount: Deactivated successfully.
Nov 29 02:56:54 np0005539563 podman[285442]: 2025-11-29 07:56:54.280248435 +0000 UTC m=+1.001731078 container remove ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:56:54 np0005539563 systemd[1]: libpod-conmon-ff207f68d3755d1020efb211ab6826e7760a2de516927f907200090c30f0fdfd.scope: Deactivated successfully.
Nov 29 02:56:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:54.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:54 np0005539563 podman[285621]: 2025-11-29 07:56:54.878790277 +0000 UTC m=+0.042997840 container create fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:56:54 np0005539563 systemd[1]: Started libpod-conmon-fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39.scope.
Nov 29 02:56:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:54 np0005539563 podman[285621]: 2025-11-29 07:56:54.858622298 +0000 UTC m=+0.022829941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:54 np0005539563 podman[285621]: 2025-11-29 07:56:54.955250704 +0000 UTC m=+0.119458297 container init fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:56:54 np0005539563 podman[285621]: 2025-11-29 07:56:54.965692128 +0000 UTC m=+0.129899691 container start fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:56:54 np0005539563 podman[285621]: 2025-11-29 07:56:54.96984338 +0000 UTC m=+0.134050963 container attach fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:56:54 np0005539563 compassionate_kepler[285636]: 167 167
Nov 29 02:56:54 np0005539563 systemd[1]: libpod-fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39.scope: Deactivated successfully.
Nov 29 02:56:54 np0005539563 podman[285621]: 2025-11-29 07:56:54.971823124 +0000 UTC m=+0.136030697 container died fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:56:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ee2463ad0508b459fc74423d0bcb9787a82e8140379bf5547ced1e21ec51ca56-merged.mount: Deactivated successfully.
Nov 29 02:56:55 np0005539563 podman[285621]: 2025-11-29 07:56:55.197972759 +0000 UTC m=+0.362180332 container remove fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:56:55 np0005539563 systemd[1]: libpod-conmon-fa1b197d6ca7dbbf7be7f6e9f67c8b91f7af6d7aa1c80c419203293e2b25be39.scope: Deactivated successfully.
Nov 29 02:56:55 np0005539563 podman[285662]: 2025-11-29 07:56:55.381758692 +0000 UTC m=+0.040831140 container create 3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:56:55 np0005539563 systemd[1]: Started libpod-conmon-3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b.scope.
Nov 29 02:56:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:56:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ba821e5c93228e90cd344255dd0c8becba853e9c205ac6bdf9a7569c994708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ba821e5c93228e90cd344255dd0c8becba853e9c205ac6bdf9a7569c994708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ba821e5c93228e90cd344255dd0c8becba853e9c205ac6bdf9a7569c994708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ba821e5c93228e90cd344255dd0c8becba853e9c205ac6bdf9a7569c994708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:55 np0005539563 podman[285662]: 2025-11-29 07:56:55.36364349 +0000 UTC m=+0.022715968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:55 np0005539563 nova_compute[252253]: 2025-11-29 07:56:55.455 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:55 np0005539563 podman[285662]: 2025-11-29 07:56:55.472499768 +0000 UTC m=+0.131572236 container init 3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:55 np0005539563 podman[285662]: 2025-11-29 07:56:55.479407255 +0000 UTC m=+0.138479703 container start 3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:55 np0005539563 podman[285662]: 2025-11-29 07:56:55.482893959 +0000 UTC m=+0.141966427 container attach 3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:56:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 75 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Nov 29 02:56:55 np0005539563 nova_compute[252253]: 2025-11-29 07:56:55.948 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:56.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]: {
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:        "osd_id": 0,
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:        "type": "bluestore"
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]:    }
Nov 29 02:56:56 np0005539563 dreamy_mestorf[285678]: }
Nov 29 02:56:56 np0005539563 systemd[1]: libpod-3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b.scope: Deactivated successfully.
Nov 29 02:56:56 np0005539563 podman[285662]: 2025-11-29 07:56:56.440916128 +0000 UTC m=+1.099988576 container died 3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:56:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:56.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e9ba821e5c93228e90cd344255dd0c8becba853e9c205ac6bdf9a7569c994708-merged.mount: Deactivated successfully.
Nov 29 02:56:56 np0005539563 podman[285662]: 2025-11-29 07:56:56.513921531 +0000 UTC m=+1.172993979 container remove 3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:56 np0005539563 systemd[1]: libpod-conmon-3ac932583fdb267c969ec5e6a0710f0f23010fbd2ed1be43dbcfbf6bd8d3587b.scope: Deactivated successfully.
Nov 29 02:56:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:56:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:56:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:56 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 196b65d7-f997-4521-a7b1-044e0c5c798e does not exist
Nov 29 02:56:56 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b90fb085-95ad-4ad9-9a29-f0a3c4251fdc does not exist
Nov 29 02:56:56 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f35c0572-3e2d-441d-9a10-d8b31a7e9a1a does not exist
Nov 29 02:56:56 np0005539563 podman[285735]: 2025-11-29 07:56:56.732687745 +0000 UTC m=+0.059269891 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:56 np0005539563 podman[285736]: 2025-11-29 07:56:56.736651263 +0000 UTC m=+0.063886077 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:56:56 np0005539563 podman[285737]: 2025-11-29 07:56:56.7645086 +0000 UTC m=+0.089094332 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:56:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 75 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Nov 29 02:56:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:56:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:58.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:56:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:56:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:58.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:56:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 59 op/s
Nov 29 02:57:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:00.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:00 np0005539563 nova_compute[252253]: 2025-11-29 07:57:00.384 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403005.3823938, 713e0825-1b56-4572-a0bd-817359261afe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:00 np0005539563 nova_compute[252253]: 2025-11-29 07:57:00.385 252257 INFO nova.compute.manager [-] [instance: 713e0825-1b56-4572-a0bd-817359261afe] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:57:00 np0005539563 nova_compute[252253]: 2025-11-29 07:57:00.406 252257 DEBUG nova.compute.manager [None req-d9b82a9c-b1bc-44f5-9d76-acc0b25ac998 - - - - - -] [instance: 713e0825-1b56-4572-a0bd-817359261afe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:00 np0005539563 nova_compute[252253]: 2025-11-29 07:57:00.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:00 np0005539563 nova_compute[252253]: 2025-11-29 07:57:00.950 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Nov 29 02:57:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:02.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 29 02:57:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:04.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:04.904 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:04.905 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:04.905 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:05 np0005539563 nova_compute[252253]: 2025-11-29 07:57:05.522 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 54 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 884 KiB/s rd, 1009 KiB/s wr, 89 op/s
Nov 29 02:57:05 np0005539563 nova_compute[252253]: 2025-11-29 07:57:05.952 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:06.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:06.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 54 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 843 KiB/s rd, 510 KiB/s wr, 72 op/s
Nov 29 02:57:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:08.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 843 KiB/s rd, 510 KiB/s wr, 72 op/s
Nov 29 02:57:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:10.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:10 np0005539563 nova_compute[252253]: 2025-11-29 07:57:10.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:10 np0005539563 nova_compute[252253]: 2025-11-29 07:57:10.955 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 836 KiB/s rd, 14 KiB/s wr, 63 op/s
Nov 29 02:57:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:12.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:12.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:57:12
Nov 29 02:57:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:57:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:57:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.meta', 'images']
Nov 29 02:57:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:57:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 29 02:57:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 02:57:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 906 KiB/s rd, 16 KiB/s wr, 75 op/s
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:57:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:57:14 np0005539563 nova_compute[252253]: 2025-11-29 07:57:14.024 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:14.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:14.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:15 np0005539563 nova_compute[252253]: 2025-11-29 07:57:15.643 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:57:15 np0005539563 nova_compute[252253]: 2025-11-29 07:57:15.957 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:16.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:57:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:18.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:18.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 29 02:57:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 02:57:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 29 02:57:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Nov 29 02:57:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:20.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:20 np0005539563 nova_compute[252253]: 2025-11-29 07:57:20.645 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:20 np0005539563 nova_compute[252253]: 2025-11-29 07:57:20.959 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:21 np0005539563 nova_compute[252253]: 2025-11-29 07:57:21.548 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 28 op/s
Nov 29 02:57:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:22.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:22.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019035062465103294 of space, bias 1.0, pg target 0.5710518739530989 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.303 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.303 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.321 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.603 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.604 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.620 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.620 252257 INFO nova.compute.claims [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:57:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 23 op/s
Nov 29 02:57:23 np0005539563 nova_compute[252253]: 2025-11-29 07:57:23.776 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:24.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 29 02:57:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380635584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.376 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.386 252257 DEBUG nova.compute.provider_tree [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.406 252257 DEBUG nova.scheduler.client.report [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.430 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.431 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.477 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.478 252257 DEBUG nova.network.neutron [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.509 252257 INFO nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:57:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:24.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.528 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.662 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.664 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:57:24 np0005539563 nova_compute[252253]: 2025-11-29 07:57:24.665 252257 INFO nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Creating image(s)#033[00m
Nov 29 02:57:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:24.795 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:24.796 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:57:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:24.797 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.074 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.111 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.142 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.147 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.190 252257 DEBUG nova.policy [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00183201554f47b3aa46da246e60580d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf14587bd83441a8a75c2ec83dd6a271', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.192 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.249 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.250 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.251 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.251 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.292 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.298 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5ae03c1e-4959-4743-b844-017e2de24ee2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 02:57:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 43 op/s
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.842 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5ae03c1e-4959-4743-b844-017e2de24ee2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.914 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] resizing rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:57:25 np0005539563 nova_compute[252253]: 2025-11-29 07:57:25.990 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.048 252257 DEBUG nova.objects.instance [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ae03c1e-4959-4743-b844-017e2de24ee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:26.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.069 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.069 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Ensure instance console log exists: /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.070 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.070 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.070 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:26.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 29 02:57:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 02:57:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 29 02:57:26 np0005539563 nova_compute[252253]: 2025-11-29 07:57:26.639 252257 DEBUG nova.network.neutron [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Successfully created port: 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:57:27 np0005539563 podman[286125]: 2025-11-29 07:57:27.489285052 +0000 UTC m=+0.043389596 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:57:27 np0005539563 podman[286126]: 2025-11-29 07:57:27.522689073 +0000 UTC m=+0.076543190 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:57:27 np0005539563 podman[286127]: 2025-11-29 07:57:27.546643643 +0000 UTC m=+0.098795823 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:57:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.6 KiB/s wr, 17 op/s
Nov 29 02:57:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:57:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2709245885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:57:27 np0005539563 nova_compute[252253]: 2025-11-29 07:57:27.898 252257 DEBUG nova.network.neutron [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Successfully updated port: 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:57:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:57:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2709245885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:57:27 np0005539563 nova_compute[252253]: 2025-11-29 07:57:27.921 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:27 np0005539563 nova_compute[252253]: 2025-11-29 07:57:27.922 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:27 np0005539563 nova_compute[252253]: 2025-11-29 07:57:27.922 252257 DEBUG nova.network.neutron [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:28.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:28 np0005539563 nova_compute[252253]: 2025-11-29 07:57:28.091 252257 DEBUG nova.compute.manager [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:28 np0005539563 nova_compute[252253]: 2025-11-29 07:57:28.091 252257 DEBUG nova.compute.manager [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing instance network info cache due to event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:57:28 np0005539563 nova_compute[252253]: 2025-11-29 07:57:28.092 252257 DEBUG oslo_concurrency.lockutils [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:28 np0005539563 nova_compute[252253]: 2025-11-29 07:57:28.182 252257 DEBUG nova.network.neutron [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:28.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.297 252257 DEBUG nova.network.neutron [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.322 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.323 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Instance network_info: |[{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.324 252257 DEBUG oslo_concurrency.lockutils [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.325 252257 DEBUG nova.network.neutron [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.330 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Start _get_guest_xml network_info=[{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.338 252257 WARNING nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.344 252257 DEBUG nova.virt.libvirt.host [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:57:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.345 252257 DEBUG nova.virt.libvirt.host [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.354 252257 DEBUG nova.virt.libvirt.host [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.355 252257 DEBUG nova.virt.libvirt.host [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.357 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.357 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.358 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.358 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.358 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.359 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.359 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.359 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.360 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.360 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.360 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.361 252257 DEBUG nova.virt.hardware [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.365 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 29 02:57:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 02:57:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 29 02:57:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 68 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Nov 29 02:57:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804298917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.794 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.830 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:29 np0005539563 nova_compute[252253]: 2025-11-29 07:57:29.835 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787394271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.256 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.257 252257 DEBUG nova.virt.libvirt.vif [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-578794848',display_name='tempest-AttachInterfacesUnderV243Test-server-578794848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-578794848',id=51,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9SwZMQ9yZM3JIjuPXqrbqk5c0gFUCH6Y2tJSIMoMYXdINFrxBWl0ifugy14AEU5tW4CYTXWHhYMVGrBaLKC+9wGW+ByOl9ZY24nMpMtNu41cGDvs1lBo852+nVd4SD3A==',key_name='tempest-keypair-835477071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf14587bd83441a8a75c2ec83dd6a271',ramdisk_id='',reservation_id='r-ej06r2sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1830435311',owner_user_name='tempest-AttachInterfacesUnderV243Test-1830435311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00183201554f47b3aa46da246e60580d',uuid=5ae03c1e-4959-4743-b844-017e2de24ee2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.258 252257 DEBUG nova.network.os_vif_util [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Converting VIF {"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.258 252257 DEBUG nova.network.os_vif_util [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.259 252257 DEBUG nova.objects.instance [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ae03c1e-4959-4743-b844-017e2de24ee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.308 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <uuid>5ae03c1e-4959-4743-b844-017e2de24ee2</uuid>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <name>instance-00000033</name>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-578794848</nova:name>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:57:29</nova:creationTime>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:user uuid="00183201554f47b3aa46da246e60580d">tempest-AttachInterfacesUnderV243Test-1830435311-project-member</nova:user>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:project uuid="cf14587bd83441a8a75c2ec83dd6a271">tempest-AttachInterfacesUnderV243Test-1830435311</nova:project>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <nova:port uuid="1b788e11-b7c5-4cec-a04a-9bf9fbbd686a">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <entry name="serial">5ae03c1e-4959-4743-b844-017e2de24ee2</entry>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <entry name="uuid">5ae03c1e-4959-4743-b844-017e2de24ee2</entry>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/5ae03c1e-4959-4743-b844-017e2de24ee2_disk">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/5ae03c1e-4959-4743-b844-017e2de24ee2_disk.config">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e1:0e:5f"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <target dev="tap1b788e11-b7"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/console.log" append="off"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:57:30 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:57:30 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:57:30 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:57:30 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.309 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Preparing to wait for external event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.309 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.310 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.310 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.310 252257 DEBUG nova.virt.libvirt.vif [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-578794848',display_name='tempest-AttachInterfacesUnderV243Test-server-578794848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-578794848',id=51,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9SwZMQ9yZM3JIjuPXqrbqk5c0gFUCH6Y2tJSIMoMYXdINFrxBWl0ifugy14AEU5tW4CYTXWHhYMVGrBaLKC+9wGW+ByOl9ZY24nMpMtNu41cGDvs1lBo852+nVd4SD3A==',key_name='tempest-keypair-835477071',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf14587bd83441a8a75c2ec83dd6a271',ramdisk_id='',reservation_id='r-ej06r2sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1830435311',owner_user_name='tempest-AttachInterfacesUnderV243Test-1830435311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00183201554f47b3aa46da246e60580d',uuid=5ae03c1e-4959-4743-b844-017e2de24ee2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.311 252257 DEBUG nova.network.os_vif_util [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Converting VIF {"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.311 252257 DEBUG nova.network.os_vif_util [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.312 252257 DEBUG os_vif [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.313 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.313 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.316 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.316 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b788e11-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.317 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b788e11-b7, col_values=(('external_ids', {'iface-id': '1b788e11-b7c5-4cec-a04a-9bf9fbbd686a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:0e:5f', 'vm-uuid': '5ae03c1e-4959-4743-b844-017e2de24ee2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.318 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:30 np0005539563 NetworkManager[48981]: <info>  [1764403050.3195] manager: (tap1b788e11-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.321 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.331 252257 INFO os_vif [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7')#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.446 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.447 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.447 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] No VIF found with MAC fa:16:3e:e1:0e:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.448 252257 INFO nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Using config drive#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.472 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:30.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.971 252257 DEBUG nova.network.neutron [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updated VIF entry in instance network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.972 252257 DEBUG nova.network.neutron [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:30 np0005539563 nova_compute[252253]: 2025-11-29 07:57:30.992 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.085 252257 DEBUG oslo_concurrency.lockutils [req-25d59d17-fa32-488e-8efe-9adef575d4a7 req-8002fc43-49ac-4d5e-b416-c7b8060585af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.397 252257 INFO nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Creating config drive at /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/disk.config#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.403 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw5ciclv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.535 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw5ciclv" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.562 252257 DEBUG nova.storage.rbd_utils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] rbd image 5ae03c1e-4959-4743-b844-017e2de24ee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.566 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/disk.config 5ae03c1e-4959-4743-b844-017e2de24ee2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.689 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.715 252257 DEBUG oslo_concurrency.processutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/disk.config 5ae03c1e-4959-4743-b844-017e2de24ee2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.716 252257 INFO nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Deleting local config drive /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2/disk.config because it was imported into RBD.#033[00m
Nov 29 02:57:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.4 MiB/s wr, 98 op/s
Nov 29 02:57:31 np0005539563 kernel: tap1b788e11-b7: entered promiscuous mode
Nov 29 02:57:31 np0005539563 NetworkManager[48981]: <info>  [1764403051.7667] manager: (tap1b788e11-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 02:57:31 np0005539563 systemd-udevd[286324]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:57:31 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:31Z|00146|binding|INFO|Claiming lport 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a for this chassis.
Nov 29 02:57:31 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:31Z|00147|binding|INFO|1b788e11-b7c5-4cec-a04a-9bf9fbbd686a: Claiming fa:16:3e:e1:0e:5f 10.100.0.11
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539563 NetworkManager[48981]: <info>  [1764403051.8325] device (tap1b788e11-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.829 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:0e:5f 10.100.0.11'], port_security=['fa:16:3e:e1:0e:5f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5ae03c1e-4959-4743-b844-017e2de24ee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf14587bd83441a8a75c2ec83dd6a271', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc61cd9d-5129-428b-901d-ab0ca3403925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=578c6c75-81b2-445c-889a-937faa321acb, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.831 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a in datapath 4cae663c-54d5-4e67-87eb-3a705a8ceb9e bound to our chassis#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.832 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4cae663c-54d5-4e67-87eb-3a705a8ceb9e#033[00m
Nov 29 02:57:31 np0005539563 NetworkManager[48981]: <info>  [1764403051.8345] device (tap1b788e11-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:57:31 np0005539563 systemd-machined[213024]: New machine qemu-22-instance-00000033.
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.846 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aac65e68-08bd-4340-b0a1-a91e3b09aad3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.847 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4cae663c-51 in ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.850 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4cae663c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.850 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[465c26b2-9180-4522-b6f1-517c59b41afa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.852 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8bc964-3b94-4c09-9975-00df56489d91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.864 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[88d47a17-68fe-43bd-9edc-10253c4f3b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 systemd[1]: Started Virtual Machine qemu-22-instance-00000033.
Nov 29 02:57:31 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:31Z|00148|binding|INFO|Setting lport 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a ovn-installed in OVS
Nov 29 02:57:31 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:31Z|00149|binding|INFO|Setting lport 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a up in Southbound
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.889 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[50ed71af-9fa5-4dbe-834b-db4c783f6e98]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 nova_compute[252253]: 2025-11-29 07:57:31.891 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.922 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8adddbee-edf1-45da-bb1b-f8517e8bba30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.928 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b02a3cc-d1d8-4170-a236-8a6fed40016d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 NetworkManager[48981]: <info>  [1764403051.9292] manager: (tap4cae663c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.958 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e550b9cf-64f8-405e-9b43-2dce810c0732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.962 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[56b2bef4-4889-4ef9-9613-aa4f7708140a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:31 np0005539563 NetworkManager[48981]: <info>  [1764403051.9822] device (tap4cae663c-50): carrier: link connected
Nov 29 02:57:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:31.988 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cd512619-1f48-428c-9e94-c248a0947f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.003 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c52132-1d7d-49ea-aeef-23a4228d192c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4cae663c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:e5:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601975, 'reachable_time': 36258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286360, 'error': None, 'target': 'ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.018 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c5281035-530c-432e-abae-991af9f784ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed1:e5aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601975, 'tstamp': 601975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286361, 'error': None, 'target': 'ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.033 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ad89faf8-d390-4022-8c54-ccc0a637b4f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4cae663c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d1:e5:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601975, 'reachable_time': 36258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286362, 'error': None, 'target': 'ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.066 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[27837170-5639-405e-ba1b-1e831cc42ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.123 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[de1df1b1-0aec-4f45-9a86-7f5b1b6b88fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.124 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cae663c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.124 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.125 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cae663c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:32 np0005539563 NetworkManager[48981]: <info>  [1764403052.1272] manager: (tap4cae663c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 02:57:32 np0005539563 kernel: tap4cae663c-50: entered promiscuous mode
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.133 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4cae663c-50, col_values=(('external_ids', {'iface-id': '0b4477a1-a6d7-439d-a42e-e67669914558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:32 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:32Z|00150|binding|INFO|Releasing lport 0b4477a1-a6d7-439d-a42e-e67669914558 from this chassis (sb_readonly=0)
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.136 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4cae663c-54d5-4e67-87eb-3a705a8ceb9e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4cae663c-54d5-4e67-87eb-3a705a8ceb9e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.147 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6f537b51-349d-4c36-b0c3-4d45e63b87fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.148 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.149 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-4cae663c-54d5-4e67-87eb-3a705a8ceb9e
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/4cae663c-54d5-4e67-87eb-3a705a8ceb9e.pid.haproxy
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 4cae663c-54d5-4e67-87eb-3a705a8ceb9e
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 02:57:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:32.150 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'env', 'PROCESS_TAG=haproxy-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4cae663c-54d5-4e67-87eb-3a705a8ceb9e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.307 252257 DEBUG nova.compute.manager [req-753370d1-c6ad-4b33-a936-28211e2d9250 req-acfcb57b-ca0d-464f-9cdc-b50722a28c78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.308 252257 DEBUG oslo_concurrency.lockutils [req-753370d1-c6ad-4b33-a936-28211e2d9250 req-acfcb57b-ca0d-464f-9cdc-b50722a28c78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.308 252257 DEBUG oslo_concurrency.lockutils [req-753370d1-c6ad-4b33-a936-28211e2d9250 req-acfcb57b-ca0d-464f-9cdc-b50722a28c78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.309 252257 DEBUG oslo_concurrency.lockutils [req-753370d1-c6ad-4b33-a936-28211e2d9250 req-acfcb57b-ca0d-464f-9cdc-b50722a28c78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.309 252257 DEBUG nova.compute.manager [req-753370d1-c6ad-4b33-a936-28211e2d9250 req-acfcb57b-ca0d-464f-9cdc-b50722a28c78 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Processing event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.380 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403052.380425, 5ae03c1e-4959-4743-b844-017e2de24ee2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.381 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] VM Started (Lifecycle Event)
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.383 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.386 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.389 252257 INFO nova.virt.libvirt.driver [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Instance spawned successfully.
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.389 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.412 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.418 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.423 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.423 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.424 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.424 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.425 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.425 252257 DEBUG nova.virt.libvirt.driver [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.452 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.452 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403052.3805163, 5ae03c1e-4959-4743-b844-017e2de24ee2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.452 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] VM Paused (Lifecycle Event)
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.474 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.477 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403052.3855968, 5ae03c1e-4959-4743-b844-017e2de24ee2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.477 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] VM Resumed (Lifecycle Event)
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.497 252257 INFO nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Took 7.83 seconds to spawn the instance on the hypervisor.
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.497 252257 DEBUG nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.498 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.505 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:57:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:32.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.543 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.570 252257 INFO nova.compute.manager [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Took 9.20 seconds to build instance.
Nov 29 02:57:32 np0005539563 podman[286436]: 2025-11-29 07:57:32.478503222 +0000 UTC m=+0.021368809 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:57:32 np0005539563 nova_compute[252253]: 2025-11-29 07:57:32.590 252257 DEBUG oslo_concurrency.lockutils [None req-3e53b125-5c08-4d23-9c4f-f09fa96283a5 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:33 np0005539563 podman[286436]: 2025-11-29 07:57:33.100811961 +0000 UTC m=+0.643677528 container create 4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:57:33 np0005539563 systemd[1]: Started libpod-conmon-4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043.scope.
Nov 29 02:57:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:57:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98212d2b4652d277589e413e9d5a16c56f936da366be86ead1237df421a3c87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:33 np0005539563 podman[286436]: 2025-11-29 07:57:33.244546511 +0000 UTC m=+0.787412098 container init 4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:57:33 np0005539563 podman[286436]: 2025-11-29 07:57:33.251800241 +0000 UTC m=+0.794665818 container start 4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:57:33 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [NOTICE]   (286455) : New worker (286457) forked
Nov 29 02:57:33 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [NOTICE]   (286455) : Loading success.
Nov 29 02:57:33 np0005539563 nova_compute[252253]: 2025-11-29 07:57:33.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Nov 29 02:57:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.474 252257 DEBUG nova.compute.manager [req-0a04d06d-de23-4599-9835-fcbf3d8df526 req-8931584f-742a-4015-9967-c96415a6a258 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.475 252257 DEBUG oslo_concurrency.lockutils [req-0a04d06d-de23-4599-9835-fcbf3d8df526 req-8931584f-742a-4015-9967-c96415a6a258 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.476 252257 DEBUG oslo_concurrency.lockutils [req-0a04d06d-de23-4599-9835-fcbf3d8df526 req-8931584f-742a-4015-9967-c96415a6a258 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.476 252257 DEBUG oslo_concurrency.lockutils [req-0a04d06d-de23-4599-9835-fcbf3d8df526 req-8931584f-742a-4015-9967-c96415a6a258 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.477 252257 DEBUG nova.compute.manager [req-0a04d06d-de23-4599-9835-fcbf3d8df526 req-8931584f-742a-4015-9967-c96415a6a258 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] No waiting events found dispatching network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.478 252257 WARNING nova.compute.manager [req-0a04d06d-de23-4599-9835-fcbf3d8df526 req-8931584f-742a-4015-9967-c96415a6a258 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received unexpected event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a for instance with vm_state active and task_state None.
Nov 29 02:57:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:34.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:34 np0005539563 nova_compute[252253]: 2025-11-29 07:57:34.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:57:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:35Z|00151|binding|INFO|Releasing lport 0b4477a1-a6d7-439d-a42e-e67669914558 from this chassis (sb_readonly=0)
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.083 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:35 np0005539563 NetworkManager[48981]: <info>  [1764403055.0951] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 02:57:35 np0005539563 NetworkManager[48981]: <info>  [1764403055.0963] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 02:57:35 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:35Z|00152|binding|INFO|Releasing lport 0b4477a1-a6d7-439d-a42e-e67669914558 from this chassis (sb_readonly=0)
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.144 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.153 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.319 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.544 252257 DEBUG nova.compute.manager [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.545 252257 DEBUG nova.compute.manager [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing instance network info cache due to event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.545 252257 DEBUG oslo_concurrency.lockutils [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.546 252257 DEBUG oslo_concurrency.lockutils [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.546 252257 DEBUG nova.network.neutron [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.703 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.704 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 110 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 155 op/s
Nov 29 02:57:35 np0005539563 nova_compute[252253]: 2025-11-29 07:57:35.994 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/188750448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.225 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.295 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.295 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.474 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.475 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4506MB free_disk=20.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.476 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.476 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:36.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.556 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5ae03c1e-4959-4743-b844-017e2de24ee2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.557 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.557 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.590 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.622 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.622 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.652 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.686 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:57:36 np0005539563 nova_compute[252253]: 2025-11-29 07:57:36.757 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/34004069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.182 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.189 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.206 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.229 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.230 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.257 252257 DEBUG nova.network.neutron [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updated VIF entry in instance network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.258 252257 DEBUG nova.network.neutron [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:37 np0005539563 nova_compute[252253]: 2025-11-29 07:57:37.286 252257 DEBUG oslo_concurrency.lockutils [req-54302495-2daa-478a-be21-df1c88f963e6 req-bb9211bf-ce1f-4041-a3bb-2f47fd8a47e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 110 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Nov 29 02:57:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:38.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:38 np0005539563 nova_compute[252253]: 2025-11-29 07:57:38.226 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:38 np0005539563 nova_compute[252253]: 2025-11-29 07:57:38.231 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:38 np0005539563 nova_compute[252253]: 2025-11-29 07:57:38.231 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:57:38 np0005539563 nova_compute[252253]: 2025-11-29 07:57:38.232 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:57:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:38.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:39 np0005539563 nova_compute[252253]: 2025-11-29 07:57:39.144 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:39 np0005539563 nova_compute[252253]: 2025-11-29 07:57:39.144 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:39 np0005539563 nova_compute[252253]: 2025-11-29 07:57:39.144 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:57:39 np0005539563 nova_compute[252253]: 2025-11-29 07:57:39.145 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5ae03c1e-4959-4743-b844-017e2de24ee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 169 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 198 op/s
Nov 29 02:57:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:40.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.320 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.481 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.496 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.497 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.498 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.498 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.499 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:40 np0005539563 nova_compute[252253]: 2025-11-29 07:57:40.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:41 np0005539563 nova_compute[252253]: 2025-11-29 07:57:41.043 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 205 op/s
Nov 29 02:57:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:42 np0005539563 nova_compute[252253]: 2025-11-29 07:57:42.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:42.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:43 np0005539563 nova_compute[252253]: 2025-11-29 07:57:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:57:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 197 op/s
Nov 29 02:57:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:44.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 02:57:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:44.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 02:57:45 np0005539563 nova_compute[252253]: 2025-11-29 07:57:45.323 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:45 np0005539563 nova_compute[252253]: 2025-11-29 07:57:45.490 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 190 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 259 op/s
Nov 29 02:57:45 np0005539563 nova_compute[252253]: 2025-11-29 07:57:45.998 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:57:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.316 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.317 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.337 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.426 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.426 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.433 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.434 252257 INFO nova.compute.claims [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:57:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:46.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:46 np0005539563 nova_compute[252253]: 2025-11-29 07:57:46.578 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:46Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:0e:5f 10.100.0.11
Nov 29 02:57:46 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:46Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:0e:5f 10.100.0.11
Nov 29 02:57:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 29 02:57:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 02:57:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 29 02:57:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1035584264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.424 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.846s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.430 252257 DEBUG nova.compute.provider_tree [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.446 252257 DEBUG nova.scheduler.client.report [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.471 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.472 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.518 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.519 252257 DEBUG nova.network.neutron [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.543 252257 INFO nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.574 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.654 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.655 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.656 252257 INFO nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Creating image(s)#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.683 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.719 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.749 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.754 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 190 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 236 op/s
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.822 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.823 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.824 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.824 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.848 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:47 np0005539563 nova_compute[252253]: 2025-11-29 07:57:47.852 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.154 252257 DEBUG nova.policy [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7d59bea260d4752aa29379967636c0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:57:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.220 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 02:57:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.292 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] resizing rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.386 252257 DEBUG nova.objects.instance [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.405 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.405 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Ensure instance console log exists: /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.406 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.406 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.406 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:48.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:48 np0005539563 nova_compute[252253]: 2025-11-29 07:57:48.908 252257 DEBUG nova.network.neutron [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Successfully created port: d3b0b875-c7bf-4d35-ace2-8206af34d5aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:57:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 29 02:57:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 02:57:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 29 02:57:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 259 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 7.9 MiB/s wr, 301 op/s
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.015 252257 DEBUG nova.network.neutron [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Successfully updated port: d3b0b875-c7bf-4d35-ace2-8206af34d5aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.030 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "refresh_cache-2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.030 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquired lock "refresh_cache-2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.031 252257 DEBUG nova.network.neutron [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.153 252257 DEBUG nova.compute.manager [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-changed-d3b0b875-c7bf-4d35-ace2-8206af34d5aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.154 252257 DEBUG nova.compute.manager [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Refreshing instance network info cache due to event network-changed-d3b0b875-c7bf-4d35-ace2-8206af34d5aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.154 252257 DEBUG oslo_concurrency.lockutils [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.215 252257 DEBUG nova.network.neutron [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:50 np0005539563 nova_compute[252253]: 2025-11-29 07:57:50.324 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:50.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.001 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.350 252257 DEBUG nova.network.neutron [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Updating instance_info_cache with network_info: [{"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.370 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Releasing lock "refresh_cache-2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.371 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Instance network_info: |[{"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.371 252257 DEBUG oslo_concurrency.lockutils [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.372 252257 DEBUG nova.network.neutron [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Refreshing network info cache for port d3b0b875-c7bf-4d35-ace2-8206af34d5aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.376 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Start _get_guest_xml network_info=[{"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.384 252257 WARNING nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.390 252257 DEBUG nova.virt.libvirt.host [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.391 252257 DEBUG nova.virt.libvirt.host [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.400 252257 DEBUG nova.virt.libvirt.host [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.401 252257 DEBUG nova.virt.libvirt.host [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.403 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.404 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.405 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.405 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.406 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.406 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.407 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.407 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.408 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.409 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.411 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.411 252257 DEBUG nova.virt.hardware [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.418 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 302 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 9.1 MiB/s wr, 308 op/s
Nov 29 02:57:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3223520210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.877 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.923 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:51 np0005539563 nova_compute[252253]: 2025-11-29 07:57:51.928 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:52.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012813973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.399 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.403 252257 DEBUG nova.virt.libvirt.vif [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1745320207',display_name='tempest-ImagesTestJSON-server-1745320207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1745320207',id=54,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-em5u2d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:47Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=2f014f4f-f43f-49b3-93cf-1cc4aec2e8af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.404 252257 DEBUG nova.network.os_vif_util [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.406 252257 DEBUG nova.network.os_vif_util [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.408 252257 DEBUG nova.objects.instance [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.453 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <uuid>2f014f4f-f43f-49b3-93cf-1cc4aec2e8af</uuid>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <name>instance-00000036</name>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:name>tempest-ImagesTestJSON-server-1745320207</nova:name>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:57:51</nova:creationTime>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:user uuid="f7d59bea260d4752aa29379967636c0b">tempest-ImagesTestJSON-911260095-project-member</nova:user>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:project uuid="4d8c5b7e3ca74bc1880eb616b04711f7">tempest-ImagesTestJSON-911260095</nova:project>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <nova:port uuid="d3b0b875-c7bf-4d35-ace2-8206af34d5aa">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <entry name="serial">2f014f4f-f43f-49b3-93cf-1cc4aec2e8af</entry>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <entry name="uuid">2f014f4f-f43f-49b3-93cf-1cc4aec2e8af</entry>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk.config">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:78:32:5a"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <target dev="tapd3b0b875-c7"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/console.log" append="off"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:57:52 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:57:52 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:57:52 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:57:52 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.454 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Preparing to wait for external event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.455 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.455 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.455 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.456 252257 DEBUG nova.virt.libvirt.vif [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1745320207',display_name='tempest-ImagesTestJSON-server-1745320207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1745320207',id=54,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-em5u2d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:47Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=2f014f4f-f43f-49b3-93cf-1cc4aec2e8af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.456 252257 DEBUG nova.network.os_vif_util [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.457 252257 DEBUG nova.network.os_vif_util [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.457 252257 DEBUG os_vif [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.458 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.458 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.459 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.462 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.462 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3b0b875-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.463 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3b0b875-c7, col_values=(('external_ids', {'iface-id': 'd3b0b875-c7bf-4d35-ace2-8206af34d5aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:32:5a', 'vm-uuid': '2f014f4f-f43f-49b3-93cf-1cc4aec2e8af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.464 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:52 np0005539563 NetworkManager[48981]: <info>  [1764403072.4654] manager: (tapd3b0b875-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.475 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.480 252257 INFO os_vif [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7')#033[00m
Nov 29 02:57:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:52.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.608 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.609 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.609 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No VIF found with MAC fa:16:3e:78:32:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.610 252257 INFO nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Using config drive#033[00m
Nov 29 02:57:52 np0005539563 nova_compute[252253]: 2025-11-29 07:57:52.645 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 314 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 9.8 MiB/s wr, 338 op/s
Nov 29 02:57:53 np0005539563 nova_compute[252253]: 2025-11-29 07:57:53.813 252257 INFO nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Creating config drive at /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/disk.config#033[00m
Nov 29 02:57:53 np0005539563 nova_compute[252253]: 2025-11-29 07:57:53.823 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptk0zw2hw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:53 np0005539563 nova_compute[252253]: 2025-11-29 07:57:53.974 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptk0zw2hw" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.024 252257 DEBUG nova.storage.rbd_utils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.029 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/disk.config 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:57:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:54.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.279 252257 DEBUG oslo_concurrency.processutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/disk.config 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.280 252257 INFO nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Deleting local config drive /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af/disk.config because it was imported into RBD.#033[00m
Nov 29 02:57:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:54 np0005539563 kernel: tapd3b0b875-c7: entered promiscuous mode
Nov 29 02:57:54 np0005539563 NetworkManager[48981]: <info>  [1764403074.3604] manager: (tapd3b0b875-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.360 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:54Z|00153|binding|INFO|Claiming lport d3b0b875-c7bf-4d35-ace2-8206af34d5aa for this chassis.
Nov 29 02:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:54Z|00154|binding|INFO|d3b0b875-c7bf-4d35-ace2-8206af34d5aa: Claiming fa:16:3e:78:32:5a 10.100.0.14
Nov 29 02:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:54Z|00155|binding|INFO|Setting lport d3b0b875-c7bf-4d35-ace2-8206af34d5aa ovn-installed in OVS
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.387 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.390 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:54 np0005539563 systemd-machined[213024]: New machine qemu-23-instance-00000036.
Nov 29 02:57:54 np0005539563 systemd[1]: Started Virtual Machine qemu-23-instance-00000036.
Nov 29 02:57:54 np0005539563 systemd-udevd[286897]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:57:54 np0005539563 NetworkManager[48981]: <info>  [1764403074.4463] device (tapd3b0b875-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:57:54 np0005539563 NetworkManager[48981]: <info>  [1764403074.4480] device (tapd3b0b875-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:54Z|00156|binding|INFO|Setting lport d3b0b875-c7bf-4d35-ace2-8206af34d5aa up in Southbound
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.468 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:32:5a 10.100.0.14'], port_security=['fa:16:3e:78:32:5a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2f014f4f-f43f-49b3-93cf-1cc4aec2e8af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d3b0b875-c7bf-4d35-ace2-8206af34d5aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.470 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d3b0b875-c7bf-4d35-ace2-8206af34d5aa in datapath 7471f45a-da60-4567-a888-2a87ff526609 bound to our chassis#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.473 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7471f45a-da60-4567-a888-2a87ff526609#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.489 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[22540fd1-c969-44ba-ad5f-500bfbefc042]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.490 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7471f45a-d1 in ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.493 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7471f45a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.493 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6af08fce-3c43-4cba-b1a6-8ea0e4f68e34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.494 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[007954b3-3753-44c0-807e-4ca504168d53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.516 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[59514dff-a342-4189-9e63-fc0c6db003e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.548 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec70ff4-7ec0-45a6-9c07-62a2fe578370]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.580 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4203f026-0d81-45a3-99ac-6fb87b51d0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.586 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2ceb2f74-246f-4f8d-919b-d3598d4dfb2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 NetworkManager[48981]: <info>  [1764403074.5872] manager: (tap7471f45a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 02:57:54 np0005539563 systemd-udevd[286899]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.619 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b4afbfd3-5026-455b-972f-c01c9d7eddbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.623 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3063dc24-5475-4d6e-954f-20094347643c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 NetworkManager[48981]: <info>  [1764403074.6515] device (tap7471f45a-d0): carrier: link connected
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.657 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e090e7f8-0837-4ded-9942-926b485e1333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.677 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[288f89ca-7fab-4852-a2cf-f0278c1f5a84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604242, 'reachable_time': 22230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286930, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.694 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eaf3d3-f65e-436e-9b7d-24093a02e55d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:d764'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604242, 'tstamp': 604242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286931, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.713 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6a8989-2670-4e2f-957a-2ea0c2dc2839]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604242, 'reachable_time': 22230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286932, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.743 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2a650c9b-49cb-45e4-8fdc-17a56dc6af70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.813 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d90b92-0f15-469a-af15-563e8f4dc508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.815 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.815 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.816 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7471f45a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:54 np0005539563 kernel: tap7471f45a-d0: entered promiscuous mode
Nov 29 02:57:54 np0005539563 NetworkManager[48981]: <info>  [1764403074.8201] manager: (tap7471f45a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.818 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.822 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7471f45a-d0, col_values=(('external_ids', {'iface-id': '06264566-5ffe-42a3-ad44-b3f54b7d79bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T07:57:54Z|00157|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.839 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.839 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.840 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[43b7135b-5ff3-4dd9-a33d-0e94cb0f734a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.841 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7471f45a-da60-4567-a888-2a87ff526609
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:57:54.842 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'env', 'PROCESS_TAG=haproxy-7471f45a-da60-4567-a888-2a87ff526609', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7471f45a-da60-4567-a888-2a87ff526609.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.844 252257 DEBUG nova.network.neutron [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Updated VIF entry in instance network info cache for port d3b0b875-c7bf-4d35-ace2-8206af34d5aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.844 252257 DEBUG nova.network.neutron [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Updating instance_info_cache with network_info: [{"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.882 252257 DEBUG oslo_concurrency.lockutils [req-bd406818-c911-4558-9b8a-0eead7040392 req-506162f6-3cd4-442e-a37a-1eb53a8d8b2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.956 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403074.956255, 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:54 np0005539563 nova_compute[252253]: 2025-11-29 07:57:54.957 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] VM Started (Lifecycle Event)#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.028 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.032 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403074.9585347, 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.032 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.054 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.057 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.068 252257 DEBUG nova.compute.manager [req-1a795a09-64f2-4551-8965-15f5c60de3ba req-3c241e29-5c29-4773-9e88-edd0d8404d58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.068 252257 DEBUG oslo_concurrency.lockutils [req-1a795a09-64f2-4551-8965-15f5c60de3ba req-3c241e29-5c29-4773-9e88-edd0d8404d58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.068 252257 DEBUG oslo_concurrency.lockutils [req-1a795a09-64f2-4551-8965-15f5c60de3ba req-3c241e29-5c29-4773-9e88-edd0d8404d58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.069 252257 DEBUG oslo_concurrency.lockutils [req-1a795a09-64f2-4551-8965-15f5c60de3ba req-3c241e29-5c29-4773-9e88-edd0d8404d58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.069 252257 DEBUG nova.compute.manager [req-1a795a09-64f2-4551-8965-15f5c60de3ba req-3c241e29-5c29-4773-9e88-edd0d8404d58 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Processing event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.069 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.073 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.076 252257 INFO nova.virt.libvirt.driver [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Instance spawned successfully.#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.076 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.090 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.090 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403075.072917, 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.090 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.108 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.117 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.117 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.118 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.119 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.120 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.121 252257 DEBUG nova.virt.libvirt.driver [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.124 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.189 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:57:55 np0005539563 podman[287007]: 2025-11-29 07:57:55.227133719 +0000 UTC m=+0.058437711 container create 97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:57:55 np0005539563 systemd[1]: Started libpod-conmon-97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987.scope.
Nov 29 02:57:55 np0005539563 podman[287007]: 2025-11-29 07:57:55.194051888 +0000 UTC m=+0.025355930 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.298 252257 INFO nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Took 7.64 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.299 252257 DEBUG nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:57:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a053345cacc810df6dedb36cb9b76000ffcfbd905b8f0bbea6f9c1b1708637a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:55 np0005539563 podman[287007]: 2025-11-29 07:57:55.327029092 +0000 UTC m=+0.158333114 container init 97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:57:55 np0005539563 podman[287007]: 2025-11-29 07:57:55.334012974 +0000 UTC m=+0.165316976 container start 97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 02:57:55 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [NOTICE]   (287026) : New worker (287028) forked
Nov 29 02:57:55 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [NOTICE]   (287026) : Loading success.
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.586 252257 INFO nova.compute.manager [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Took 9.19 seconds to build instance.#033[00m
Nov 29 02:57:55 np0005539563 nova_compute[252253]: 2025-11-29 07:57:55.624 252257 DEBUG oslo_concurrency.lockutils [None req-0d891879-830e-490b-85a0-4bd8c82d8c62 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 347 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 12 MiB/s wr, 372 op/s
Nov 29 02:57:56 np0005539563 nova_compute[252253]: 2025-11-29 07:57:56.003 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:56.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 29 02:57:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 02:57:56 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.262 252257 DEBUG nova.compute.manager [req-9675c14d-e5eb-42d1-845c-319d7df9e060 req-8a7f55aa-f640-4180-b9a5-9cc8c8583b4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.263 252257 DEBUG oslo_concurrency.lockutils [req-9675c14d-e5eb-42d1-845c-319d7df9e060 req-8a7f55aa-f640-4180-b9a5-9cc8c8583b4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.263 252257 DEBUG oslo_concurrency.lockutils [req-9675c14d-e5eb-42d1-845c-319d7df9e060 req-8a7f55aa-f640-4180-b9a5-9cc8c8583b4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.264 252257 DEBUG oslo_concurrency.lockutils [req-9675c14d-e5eb-42d1-845c-319d7df9e060 req-8a7f55aa-f640-4180-b9a5-9cc8c8583b4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.264 252257 DEBUG nova.compute.manager [req-9675c14d-e5eb-42d1-845c-319d7df9e060 req-8a7f55aa-f640-4180-b9a5-9cc8c8583b4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] No waiting events found dispatching network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.264 252257 WARNING nova.compute.manager [req-9675c14d-e5eb-42d1-845c-319d7df9e060 req-8a7f55aa-f640-4180-b9a5-9cc8c8583b4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received unexpected event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa for instance with vm_state active and task_state None.#033[00m
Nov 29 02:57:57 np0005539563 nova_compute[252253]: 2025-11-29 07:57:57.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 347 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.5 MiB/s wr, 231 op/s
Nov 29 02:57:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:57:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:57:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:57:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:57:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.003000081s ======
Nov 29 02:57:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:58.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:57:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ebc98cfe-7ff8-4704-995f-4992f18c2e31 does not exist
Nov 29 02:57:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e107f057-9244-4367-9f14-d6400ebc2469 does not exist
Nov 29 02:57:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7b4edfcc-e696-4e42-b035-72e393856517 does not exist
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:57:58 np0005539563 podman[287193]: 2025-11-29 07:57:58.339300886 +0000 UTC m=+0.059383687 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 02:57:58 np0005539563 podman[287194]: 2025-11-29 07:57:58.377011155 +0000 UTC m=+0.094748421 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:57:58 np0005539563 podman[287195]: 2025-11-29 07:57:58.393678454 +0000 UTC m=+0.109330513 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 02:57:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:57:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:58.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539563 podman[287372]: 2025-11-29 07:57:58.803385925 +0000 UTC m=+0.059031309 container create baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:57:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:57:58 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 02:57:58 np0005539563 systemd[1]: Started libpod-conmon-baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36.scope.
Nov 29 02:57:58 np0005539563 podman[287372]: 2025-11-29 07:57:58.766854507 +0000 UTC m=+0.022499911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:57:59 np0005539563 podman[287372]: 2025-11-29 07:57:59.610893546 +0000 UTC m=+0.866538970 container init baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:57:59 np0005539563 podman[287372]: 2025-11-29 07:57:59.624993564 +0000 UTC m=+0.880638998 container start baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:57:59 np0005539563 systemd[1]: libpod-baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36.scope: Deactivated successfully.
Nov 29 02:57:59 np0005539563 nervous_gauss[287389]: 167 167
Nov 29 02:57:59 np0005539563 conmon[287389]: conmon baf2830bc9dac48ee819 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36.scope/container/memory.events
Nov 29 02:57:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 29 02:57:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 350 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 241 op/s
Nov 29 02:58:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:00.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:00 np0005539563 podman[287372]: 2025-11-29 07:58:00.113748801 +0000 UTC m=+1.369394185 container attach baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:58:00 np0005539563 podman[287372]: 2025-11-29 07:58:00.114255655 +0000 UTC m=+1.369901039 container died baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:58:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 02:58:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 29 02:58:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7ec6e431c77c25ea06c0b551156e4bf58ec04ace5822283205afc4b280b8fd21-merged.mount: Deactivated successfully.
Nov 29 02:58:00 np0005539563 podman[287372]: 2025-11-29 07:58:00.292007734 +0000 UTC m=+1.547653118 container remove baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:58:00 np0005539563 systemd[1]: libpod-conmon-baf2830bc9dac48ee81998f9fcab901283f7d18387ef99384926f0fc8e107c36.scope: Deactivated successfully.
Nov 29 02:58:00 np0005539563 podman[287415]: 2025-11-29 07:58:00.476610421 +0000 UTC m=+0.041598048 container create d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:58:00 np0005539563 systemd[1]: Started libpod-conmon-d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d.scope.
Nov 29 02:58:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:58:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e805d2a2ae884271ae0316e7f290f185ccef419c97c3fc08e0c278dc3e93efc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e805d2a2ae884271ae0316e7f290f185ccef419c97c3fc08e0c278dc3e93efc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:00 np0005539563 podman[287415]: 2025-11-29 07:58:00.457811232 +0000 UTC m=+0.022798879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e805d2a2ae884271ae0316e7f290f185ccef419c97c3fc08e0c278dc3e93efc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e805d2a2ae884271ae0316e7f290f185ccef419c97c3fc08e0c278dc3e93efc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e805d2a2ae884271ae0316e7f290f185ccef419c97c3fc08e0c278dc3e93efc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:00 np0005539563 podman[287415]: 2025-11-29 07:58:00.578249801 +0000 UTC m=+0.143237448 container init d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:58:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:00.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:00 np0005539563 podman[287415]: 2025-11-29 07:58:00.590430486 +0000 UTC m=+0.155418153 container start d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hypatia, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:58:00 np0005539563 podman[287415]: 2025-11-29 07:58:00.597307566 +0000 UTC m=+0.162295223 container attach d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hypatia, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.618 252257 INFO nova.compute.manager [None req-dcc58e86-ef2c-4019-86ea-519000f5dfc6 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Pausing#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.620 252257 DEBUG nova.objects.instance [None req-dcc58e86-ef2c-4019-86ea-519000f5dfc6 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'flavor' on Instance uuid 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.663 252257 DEBUG nova.compute.manager [None req-dcc58e86-ef2c-4019-86ea-519000f5dfc6 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.663 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403080.6631508, 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.664 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.702 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.705 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:58:00 np0005539563 nova_compute[252253]: 2025-11-29 07:58:00.771 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 02:58:01 np0005539563 nova_compute[252253]: 2025-11-29 07:58:01.005 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:01 np0005539563 boring_hypatia[287431]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:58:01 np0005539563 boring_hypatia[287431]: --> relative data size: 1.0
Nov 29 02:58:01 np0005539563 boring_hypatia[287431]: --> All data devices are unavailable
Nov 29 02:58:01 np0005539563 systemd[1]: libpod-d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d.scope: Deactivated successfully.
Nov 29 02:58:01 np0005539563 podman[287415]: 2025-11-29 07:58:01.437516739 +0000 UTC m=+1.002504386 container died d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:58:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 29 02:58:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 388 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 7.7 MiB/s wr, 301 op/s
Nov 29 02:58:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:58:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:02.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:58:02 np0005539563 nova_compute[252253]: 2025-11-29 07:58:02.491 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 02:58:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8e805d2a2ae884271ae0316e7f290f185ccef419c97c3fc08e0c278dc3e93efc-merged.mount: Deactivated successfully.
Nov 29 02:58:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 29 02:58:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:02 np0005539563 podman[287415]: 2025-11-29 07:58:02.759445145 +0000 UTC m=+2.324432802 container remove d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hypatia, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:58:02 np0005539563 systemd[1]: libpod-conmon-d371e253042f785f6890b28b635960ed884755cf141b1a362a35c84499e9ae1d.scope: Deactivated successfully.
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.42157289 +0000 UTC m=+0.040528838 container create 2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:58:03 np0005539563 systemd[1]: Started libpod-conmon-2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e.scope.
Nov 29 02:58:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.402086273 +0000 UTC m=+0.021042241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.505843502 +0000 UTC m=+0.124799510 container init 2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.513117773 +0000 UTC m=+0.132073731 container start 2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.517132473 +0000 UTC m=+0.136088431 container attach 2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:58:03 np0005539563 pensive_darwin[287666]: 167 167
Nov 29 02:58:03 np0005539563 systemd[1]: libpod-2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e.scope: Deactivated successfully.
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.519290672 +0000 UTC m=+0.138246630 container died 2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:58:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 29 02:58:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-94a1faf27cef77f4f6ce9337040f58c55284012d55b2674c0a5548921aedee8e-merged.mount: Deactivated successfully.
Nov 29 02:58:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 02:58:03 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 29 02:58:03 np0005539563 podman[287649]: 2025-11-29 07:58:03.553199457 +0000 UTC m=+0.172155415 container remove 2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:58:03 np0005539563 systemd[1]: libpod-conmon-2314e57649539f9fc1fa56370cdf390bb2c8a1aadda7df2d12d8fc896c7c111e.scope: Deactivated successfully.
Nov 29 02:58:03 np0005539563 podman[287691]: 2025-11-29 07:58:03.717365781 +0000 UTC m=+0.035400627 container create fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:58:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 399 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.8 MiB/s wr, 338 op/s
Nov 29 02:58:03 np0005539563 systemd[1]: Started libpod-conmon-fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72.scope.
Nov 29 02:58:03 np0005539563 podman[287691]: 2025-11-29 07:58:03.703388666 +0000 UTC m=+0.021423502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:58:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743ca59cdef48f56e5a93cade052d6d248dc382ddc7c84d69fd57815d73afa96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743ca59cdef48f56e5a93cade052d6d248dc382ddc7c84d69fd57815d73afa96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743ca59cdef48f56e5a93cade052d6d248dc382ddc7c84d69fd57815d73afa96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743ca59cdef48f56e5a93cade052d6d248dc382ddc7c84d69fd57815d73afa96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:03 np0005539563 podman[287691]: 2025-11-29 07:58:03.819847434 +0000 UTC m=+0.137882270 container init fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:58:03 np0005539563 podman[287691]: 2025-11-29 07:58:03.833197002 +0000 UTC m=+0.151231848 container start fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:58:03 np0005539563 podman[287691]: 2025-11-29 07:58:03.837394227 +0000 UTC m=+0.155429043 container attach fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:58:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:04 np0005539563 clever_tesla[287708]: {
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:    "0": [
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:        {
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "devices": [
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "/dev/loop3"
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            ],
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "lv_name": "ceph_lv0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "lv_size": "7511998464",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "name": "ceph_lv0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "tags": {
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.cluster_name": "ceph",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.crush_device_class": "",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.encrypted": "0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.osd_id": "0",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.type": "block",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:                "ceph.vdo": "0"
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            },
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "type": "block",
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:            "vg_name": "ceph_vg0"
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:        }
Nov 29 02:58:04 np0005539563 clever_tesla[287708]:    ]
Nov 29 02:58:04 np0005539563 clever_tesla[287708]: }
Nov 29 02:58:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:04 np0005539563 systemd[1]: libpod-fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72.scope: Deactivated successfully.
Nov 29 02:58:04 np0005539563 podman[287691]: 2025-11-29 07:58:04.618223234 +0000 UTC m=+0.936258060 container died fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 02:58:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-743ca59cdef48f56e5a93cade052d6d248dc382ddc7c84d69fd57815d73afa96-merged.mount: Deactivated successfully.
Nov 29 02:58:04 np0005539563 podman[287691]: 2025-11-29 07:58:04.671649266 +0000 UTC m=+0.989684082 container remove fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:58:04 np0005539563 systemd[1]: libpod-conmon-fe21c2e2ac447c5c5e9d4d280eb2676cca2f98dc48974973a1cc4dfec86ebe72.scope: Deactivated successfully.
Nov 29 02:58:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:04.905 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:05 np0005539563 nova_compute[252253]: 2025-11-29 07:58:05.272 252257 DEBUG nova.compute.manager [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:05 np0005539563 nova_compute[252253]: 2025-11-29 07:58:05.330 252257 INFO nova.compute.manager [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] instance snapshotting#033[00m
Nov 29 02:58:05 np0005539563 nova_compute[252253]: 2025-11-29 07:58:05.331 252257 WARNING nova.compute.manager [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Nov 29 02:58:05 np0005539563 podman[287870]: 2025-11-29 07:58:05.30182115 +0000 UTC m=+0.030201253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:05 np0005539563 podman[287870]: 2025-11-29 07:58:05.708041684 +0000 UTC m=+0.436421687 container create 32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:58:05 np0005539563 nova_compute[252253]: 2025-11-29 07:58:05.751 252257 INFO nova.virt.libvirt.driver [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Beginning live snapshot process#033[00m
Nov 29 02:58:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 9.8 MiB/s wr, 325 op/s
Nov 29 02:58:05 np0005539563 nova_compute[252253]: 2025-11-29 07:58:05.967 252257 DEBUG nova.virt.libvirt.imagebackend [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 02:58:05 np0005539563 systemd[1]: Started libpod-conmon-32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681.scope.
Nov 29 02:58:06 np0005539563 nova_compute[252253]: 2025-11-29 07:58:06.007 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:58:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539563 podman[287870]: 2025-11-29 07:58:06.275716067 +0000 UTC m=+1.004096080 container init 32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilbur, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:58:06 np0005539563 podman[287870]: 2025-11-29 07:58:06.284648913 +0000 UTC m=+1.013028916 container start 32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:58:06 np0005539563 keen_wilbur[287920]: 167 167
Nov 29 02:58:06 np0005539563 systemd[1]: libpod-32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681.scope: Deactivated successfully.
Nov 29 02:58:06 np0005539563 podman[287870]: 2025-11-29 07:58:06.328336527 +0000 UTC m=+1.056716600 container attach 32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:58:06 np0005539563 podman[287870]: 2025-11-29 07:58:06.330453894 +0000 UTC m=+1.058833897 container died 32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilbur, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:58:06 np0005539563 nova_compute[252253]: 2025-11-29 07:58:06.331 252257 DEBUG nova.storage.rbd_utils [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(9316352859eb4bb4a890dc9d8fcfb28e) on rbd image(2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:58:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-31fad811d1cec209af815c6c333475799705f288a892eef550e7780c18926167-merged.mount: Deactivated successfully.
Nov 29 02:58:06 np0005539563 podman[287870]: 2025-11-29 07:58:06.39778484 +0000 UTC m=+1.126164843 container remove 32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilbur, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:58:06 np0005539563 systemd[1]: libpod-conmon-32bd627cd25d02b2f9c9173c34b87993944dbed0c9191784158155fe7ad3d681.scope: Deactivated successfully.
Nov 29 02:58:06 np0005539563 podman[287961]: 2025-11-29 07:58:06.585144763 +0000 UTC m=+0.047010906 container create fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:58:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:06.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:06 np0005539563 systemd[1]: Started libpod-conmon-fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a.scope.
Nov 29 02:58:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:58:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947f737b44b12610c7a42316c8999079e5c0318bec75695c73edb29fe4e06c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947f737b44b12610c7a42316c8999079e5c0318bec75695c73edb29fe4e06c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947f737b44b12610c7a42316c8999079e5c0318bec75695c73edb29fe4e06c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947f737b44b12610c7a42316c8999079e5c0318bec75695c73edb29fe4e06c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:06 np0005539563 podman[287961]: 2025-11-29 07:58:06.565776859 +0000 UTC m=+0.027643022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:06 np0005539563 podman[287961]: 2025-11-29 07:58:06.663209094 +0000 UTC m=+0.125075257 container init fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:58:06 np0005539563 podman[287961]: 2025-11-29 07:58:06.669783266 +0000 UTC m=+0.131649399 container start fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:58:06 np0005539563 podman[287961]: 2025-11-29 07:58:06.673313853 +0000 UTC m=+0.135180006 container attach fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:58:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]: {
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:        "osd_id": 0,
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:        "type": "bluestore"
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]:    }
Nov 29 02:58:07 np0005539563 goofy_solomon[287978]: }
Nov 29 02:58:07 np0005539563 systemd[1]: libpod-fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a.scope: Deactivated successfully.
Nov 29 02:58:07 np0005539563 podman[287961]: 2025-11-29 07:58:07.47562988 +0000 UTC m=+0.937496013 container died fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:58:07 np0005539563 nova_compute[252253]: 2025-11-29 07:58:07.493 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 165 op/s
Nov 29 02:58:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 02:58:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:08.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 29 02:58:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-947f737b44b12610c7a42316c8999079e5c0318bec75695c73edb29fe4e06c46-merged.mount: Deactivated successfully.
Nov 29 02:58:09 np0005539563 nova_compute[252253]: 2025-11-29 07:58:09.236 252257 DEBUG nova.objects.instance [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lazy-loading 'flavor' on Instance uuid 5ae03c1e-4959-4743-b844-017e2de24ee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:09 np0005539563 nova_compute[252253]: 2025-11-29 07:58:09.282 252257 DEBUG oslo_concurrency.lockutils [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:09 np0005539563 nova_compute[252253]: 2025-11-29 07:58:09.283 252257 DEBUG oslo_concurrency.lockutils [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 111 op/s
Nov 29 02:58:09 np0005539563 podman[287961]: 2025-11-29 07:58:09.904324884 +0000 UTC m=+3.366191017 container remove fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:58:09 np0005539563 systemd[1]: libpod-conmon-fdccf27556093a9bec8f8f29be173ba5d941cfe645861daa345f214ebdad092a.scope: Deactivated successfully.
Nov 29 02:58:09 np0005539563 nova_compute[252253]: 2025-11-29 07:58:09.948 252257 DEBUG nova.storage.rbd_utils [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] cloning vms/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk@9316352859eb4bb4a890dc9d8fcfb28e to images/49b84182-d6c5-4bdc-9e97-50145d5437cf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:58:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:58:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:10.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:58:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:58:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:58:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a4930c37-90d9-404a-9a53-88ee0c3fda39 does not exist
Nov 29 02:58:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6a6facfb-5dbb-470c-a86f-5012369ce317 does not exist
Nov 29 02:58:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 55e12870-00b9-4e16-8d4c-f029d8b3913b does not exist
Nov 29 02:58:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:10 np0005539563 nova_compute[252253]: 2025-11-29 07:58:10.124 252257 DEBUG nova.storage.rbd_utils [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] flattening images/49b84182-d6c5-4bdc-9e97-50145d5437cf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:58:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:58:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:58:11 np0005539563 nova_compute[252253]: 2025-11-29 07:58:11.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 29 02:58:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 02:58:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 29 02:58:11 np0005539563 nova_compute[252253]: 2025-11-29 07:58:11.260 252257 DEBUG nova.storage.rbd_utils [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] removing snapshot(9316352859eb4bb4a890dc9d8fcfb28e) on rbd image(2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:58:11 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:11Z|00158|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 02:58:11 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:11Z|00159|binding|INFO|Releasing lport 0b4477a1-a6d7-439d-a42e-e67669914558 from this chassis (sb_readonly=0)
Nov 29 02:58:11 np0005539563 nova_compute[252253]: 2025-11-29 07:58:11.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 440 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 725 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Nov 29 02:58:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:12.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 29 02:58:12 np0005539563 nova_compute[252253]: 2025-11-29 07:58:12.294 252257 DEBUG nova.network.neutron [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 02:58:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 29 02:58:12 np0005539563 nova_compute[252253]: 2025-11-29 07:58:12.495 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539563 nova_compute[252253]: 2025-11-29 07:58:12.532 252257 DEBUG nova.compute.manager [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:12 np0005539563 nova_compute[252253]: 2025-11-29 07:58:12.532 252257 DEBUG nova.compute.manager [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing instance network info cache due to event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:12 np0005539563 nova_compute[252253]: 2025-11-29 07:58:12.532 252257 DEBUG oslo_concurrency.lockutils [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:12.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:58:12
Nov 29 02:58:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:58:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:58:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images']
Nov 29 02:58:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:58:13 np0005539563 nova_compute[252253]: 2025-11-29 07:58:13.097 252257 DEBUG nova.storage.rbd_utils [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(snap) on rbd image(49b84182-d6c5-4bdc-9e97-50145d5437cf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 469 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.2 MiB/s wr, 102 op/s
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:58:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:58:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 29 02:58:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 02:58:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 29 02:58:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:14.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 29 02:58:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 02:58:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 29 02:58:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 559 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 17 MiB/s rd, 16 MiB/s wr, 333 op/s
Nov 29 02:58:16 np0005539563 nova_compute[252253]: 2025-11-29 07:58:16.054 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:16.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:58:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7088 writes, 31K keys, 7085 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 7088 writes, 7085 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1684 writes, 7334 keys, 1684 commit groups, 1.0 writes per commit group, ingest: 10.89 MB, 0.02 MB/s#012Interval WAL: 1684 writes, 1684 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      8.1      4.85              0.14        17    0.285       0      0       0.0       0.0#012  L6      1/0    9.11 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5     15.6     12.8     10.87              0.47        16    0.679     84K   8984       0.0       0.0#012 Sum      1/0    9.11 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     10.8     11.4     15.72              0.62        33    0.476     84K   8984       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     59.4     60.0      0.76              0.13         8    0.094     25K   2537       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     15.6     12.8     10.87              0.47        16    0.679     84K   8984       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.1      4.85              0.14        16    0.303       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.039, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.17 GB read, 0.06 MB/s read, 15.7 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 19.67 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000203 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1075,19.02 MB,6.25548%) FilterBlock(34,243.67 KB,0.0782766%) IndexBlock(34,425.33 KB,0.136631%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:58:16 np0005539563 nova_compute[252253]: 2025-11-29 07:58:16.974 252257 DEBUG nova.network.neutron [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:17 np0005539563 nova_compute[252253]: 2025-11-29 07:58:17.003 252257 DEBUG oslo_concurrency.lockutils [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:17 np0005539563 nova_compute[252253]: 2025-11-29 07:58:17.004 252257 DEBUG nova.compute.manager [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 02:58:17 np0005539563 nova_compute[252253]: 2025-11-29 07:58:17.004 252257 DEBUG nova.compute.manager [None req-8ef470a9-bbdf-436e-b5d8-e7e8866bedd8 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] network_info to inject: |[{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 02:58:17 np0005539563 nova_compute[252253]: 2025-11-29 07:58:17.008 252257 DEBUG oslo_concurrency.lockutils [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:17 np0005539563 nova_compute[252253]: 2025-11-29 07:58:17.008 252257 DEBUG nova.network.neutron [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:17 np0005539563 nova_compute[252253]: 2025-11-29 07:58:17.498 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 559 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 252 op/s
Nov 29 02:58:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:18.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:18 np0005539563 nova_compute[252253]: 2025-11-29 07:58:18.505 252257 INFO nova.virt.libvirt.driver [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Snapshot image upload complete#033[00m
Nov 29 02:58:18 np0005539563 nova_compute[252253]: 2025-11-29 07:58:18.505 252257 INFO nova.compute.manager [None req-029a6b9d-46b4-4ed5-b625-16aee168c98a f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Took 13.17 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 02:58:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:18.917 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:18 np0005539563 nova_compute[252253]: 2025-11-29 07:58:18.918 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:18.918 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:58:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:18.919 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:19 np0005539563 nova_compute[252253]: 2025-11-29 07:58:19.674 252257 DEBUG nova.objects.instance [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lazy-loading 'flavor' on Instance uuid 5ae03c1e-4959-4743-b844-017e2de24ee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:19 np0005539563 nova_compute[252253]: 2025-11-29 07:58:19.707 252257 DEBUG oslo_concurrency.lockutils [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 565 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 8.1 MiB/s wr, 201 op/s
Nov 29 02:58:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:20.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:20 np0005539563 nova_compute[252253]: 2025-11-29 07:58:20.549 252257 DEBUG nova.network.neutron [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updated VIF entry in instance network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:20 np0005539563 nova_compute[252253]: 2025-11-29 07:58:20.550 252257 DEBUG nova.network.neutron [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:20 np0005539563 nova_compute[252253]: 2025-11-29 07:58:20.607 252257 DEBUG oslo_concurrency.lockutils [req-c92b9870-b4ec-40ca-b484-7e772d03b55e req-3001823f-9c7a-4023-9081-db3a4fceb65d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:20 np0005539563 nova_compute[252253]: 2025-11-29 07:58:20.608 252257 DEBUG oslo_concurrency.lockutils [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:21 np0005539563 nova_compute[252253]: 2025-11-29 07:58:21.057 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 565 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 171 op/s
Nov 29 02:58:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:22.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:22 np0005539563 nova_compute[252253]: 2025-11-29 07:58:22.257 252257 DEBUG nova.network.neutron [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:22 np0005539563 nova_compute[252253]: 2025-11-29 07:58:22.500 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 29 02:58:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 02:58:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 29 02:58:22 np0005539563 nova_compute[252253]: 2025-11-29 07:58:22.877 252257 DEBUG nova.compute.manager [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:22 np0005539563 nova_compute[252253]: 2025-11-29 07:58:22.877 252257 DEBUG nova.compute.manager [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing instance network info cache due to event network-changed-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:22 np0005539563 nova_compute[252253]: 2025-11-29 07:58:22.878 252257 DEBUG oslo_concurrency.lockutils [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007501337707053787 of space, bias 1.0, pg target 2.250401312116136 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.008162375825888702 of space, bias 1.0, pg target 2.432387996114833 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Nov 29 02:58:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 565 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.9 MiB/s wr, 88 op/s
Nov 29 02:58:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:24.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.591 252257 DEBUG nova.network.neutron [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.625 252257 DEBUG oslo_concurrency.lockutils [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.625 252257 DEBUG nova.compute.manager [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.626 252257 DEBUG nova.compute.manager [None req-01548e2a-7242-44a5-bd5c-e6c5e6596b81 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] network_info to inject: |[{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.628 252257 DEBUG oslo_concurrency.lockutils [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:24 np0005539563 nova_compute[252253]: 2025-11-29 07:58:24.628 252257 DEBUG nova.network.neutron [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Refreshing network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.075 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.076 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.076 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.076 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.076 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.077 252257 INFO nova.compute.manager [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Terminating instance#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.078 252257 DEBUG nova.compute.manager [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.080 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.080 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.081 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.081 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.081 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.082 252257 INFO nova.compute.manager [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Terminating instance#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.083 252257 DEBUG nova.compute.manager [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:58:25 np0005539563 kernel: tapd3b0b875-c7 (unregistering): left promiscuous mode
Nov 29 02:58:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 29 02:58:25 np0005539563 NetworkManager[48981]: <info>  [1764403105.1233] device (tapd3b0b875-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:58:25 np0005539563 kernel: tap1b788e11-b7 (unregistering): left promiscuous mode
Nov 29 02:58:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 02:58:25 np0005539563 NetworkManager[48981]: <info>  [1764403105.1345] device (tap1b788e11-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:58:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:25Z|00160|binding|INFO|Releasing lport d3b0b875-c7bf-4d35-ace2-8206af34d5aa from this chassis (sb_readonly=0)
Nov 29 02:58:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:25Z|00161|binding|INFO|Setting lport d3b0b875-c7bf-4d35-ace2-8206af34d5aa down in Southbound
Nov 29 02:58:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:25Z|00162|binding|INFO|Removing iface tapd3b0b875-c7 ovn-installed in OVS
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.148 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:32:5a 10.100.0.14'], port_security=['fa:16:3e:78:32:5a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2f014f4f-f43f-49b3-93cf-1cc4aec2e8af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d3b0b875-c7bf-4d35-ace2-8206af34d5aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.150 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d3b0b875-c7bf-4d35-ace2-8206af34d5aa in datapath 7471f45a-da60-4567-a888-2a87ff526609 unbound from our chassis#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.151 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7471f45a-da60-4567-a888-2a87ff526609, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.152 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[20c54212-6bad-455f-9e4a-e914896b048c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.152 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace which is not needed anymore#033[00m
Nov 29 02:58:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:25Z|00163|binding|INFO|Releasing lport 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a from this chassis (sb_readonly=0)
Nov 29 02:58:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:25Z|00164|binding|INFO|Setting lport 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a down in Southbound
Nov 29 02:58:25 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:25Z|00165|binding|INFO|Removing iface tap1b788e11-b7 ovn-installed in OVS
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.163 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000036.scope: Deactivated successfully.
Nov 29 02:58:25 np0005539563 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000036.scope: Consumed 6.300s CPU time.
Nov 29 02:58:25 np0005539563 systemd-machined[213024]: Machine qemu-23-instance-00000036 terminated.
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.178 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:0e:5f 10.100.0.11'], port_security=['fa:16:3e:e1:0e:5f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5ae03c1e-4959-4743-b844-017e2de24ee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf14587bd83441a8a75c2ec83dd6a271', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cc61cd9d-5129-428b-901d-ab0ca3403925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=578c6c75-81b2-445c-889a-937faa321acb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 29 02:58:25 np0005539563 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000033.scope: Consumed 15.767s CPU time.
Nov 29 02:58:25 np0005539563 systemd-machined[213024]: Machine qemu-22-instance-00000033 terminated.
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [NOTICE]   (287026) : haproxy version is 2.8.14-c23fe91
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [NOTICE]   (287026) : path to executable is /usr/sbin/haproxy
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [WARNING]  (287026) : Exiting Master process...
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [ALERT]    (287026) : Current worker (287028) exited with code 143 (Terminated)
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[287022]: [WARNING]  (287026) : All workers exited. Exiting... (0)
Nov 29 02:58:25 np0005539563 systemd[1]: libpod-97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987.scope: Deactivated successfully.
Nov 29 02:58:25 np0005539563 podman[288244]: 2025-11-29 07:58:25.289331005 +0000 UTC m=+0.043478829 container died 97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.320 252257 INFO nova.virt.libvirt.driver [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Instance destroyed successfully.#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.320 252257 DEBUG nova.objects.instance [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'resources' on Instance uuid 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.322 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987-userdata-shm.mount: Deactivated successfully.
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.329 252257 INFO nova.virt.libvirt.driver [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Instance destroyed successfully.#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.329 252257 DEBUG nova.objects.instance [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lazy-loading 'resources' on Instance uuid 5ae03c1e-4959-4743-b844-017e2de24ee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8a053345cacc810df6dedb36cb9b76000ffcfbd905b8f0bbea6f9c1b1708637a-merged.mount: Deactivated successfully.
Nov 29 02:58:25 np0005539563 podman[288244]: 2025-11-29 07:58:25.347599031 +0000 UTC m=+0.101746855 container cleanup 97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:58:25 np0005539563 systemd[1]: libpod-conmon-97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987.scope: Deactivated successfully.
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.364 252257 DEBUG nova.virt.libvirt.vif [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-578794848',display_name='tempest-AttachInterfacesUnderV243Test-server-578794848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-578794848',id=51,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9SwZMQ9yZM3JIjuPXqrbqk5c0gFUCH6Y2tJSIMoMYXdINFrxBWl0ifugy14AEU5tW4CYTXWHhYMVGrBaLKC+9wGW+ByOl9ZY24nMpMtNu41cGDvs1lBo852+nVd4SD3A==',key_name='tempest-keypair-835477071',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:57:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf14587bd83441a8a75c2ec83dd6a271',ramdisk_id='',reservation_id='r-ej06r2sh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1830435311',owner_user_name='tempest-AttachInterfacesUnderV243Test-1830435311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:58:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00183201554f47b3aa46da246e60580d',uuid=5ae03c1e-4959-4743-b844-017e2de24ee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.365 252257 DEBUG nova.network.os_vif_util [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Converting VIF {"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.366 252257 DEBUG nova.network.os_vif_util [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.366 252257 DEBUG os_vif [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.368 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.368 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b788e11-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.371 252257 DEBUG nova.virt.libvirt.vif [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1745320207',display_name='tempest-ImagesTestJSON-server-1745320207',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1745320207',id=54,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:57:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-em5u2d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:58:18Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=2f014f4f-f43f-49b3-93cf-1cc4aec2e8af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.371 252257 DEBUG nova.network.os_vif_util [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "address": "fa:16:3e:78:32:5a", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b0b875-c7", "ovs_interfaceid": "d3b0b875-c7bf-4d35-ace2-8206af34d5aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.372 252257 DEBUG nova.network.os_vif_util [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.372 252257 DEBUG os_vif [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.374 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.376 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.377 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.377 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3b0b875-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.379 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.381 252257 INFO os_vif [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:0e:5f,bridge_name='br-int',has_traffic_filtering=True,id=1b788e11-b7c5-4cec-a04a-9bf9fbbd686a,network=Network(4cae663c-54d5-4e67-87eb-3a705a8ceb9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b788e11-b7')#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.397 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.400 252257 INFO os_vif [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:32:5a,bridge_name='br-int',has_traffic_filtering=True,id=d3b0b875-c7bf-4d35-ace2-8206af34d5aa,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b0b875-c7')#033[00m
Nov 29 02:58:25 np0005539563 podman[288295]: 2025-11-29 07:58:25.43609911 +0000 UTC m=+0.064885290 container remove 97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.442 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[998749d6-20b1-498e-b7ee-ff62c2f28726]: (4, ('Sat Nov 29 07:58:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987)\n97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987\nSat Nov 29 07:58:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987)\n97589929c42c1b572b75922105488af8c1417033e3e5281ba6c8e1d719941987\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.444 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[94398859-649f-45ec-b24d-0fbc23467602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.445 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:25 np0005539563 kernel: tap7471f45a-d0: left promiscuous mode
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.449 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.453 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ec02c7-5fd6-4591-8cf0-fe1c1b0fa5b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.467 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.468 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fd8b5f-0ee7-4926-801a-aed90d52b0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.469 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b9404c8c-b01b-4c90-9a38-078128151491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.483 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2627c867-4a78-42e8-bd17-f05c21a6b263]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604234, 'reachable_time': 43643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288346, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.485 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.486 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[22b15d26-443e-4419-a4bc-cf9132fb7d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.487 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a in datapath 4cae663c-54d5-4e67-87eb-3a705a8ceb9e unbound from our chassis#033[00m
Nov 29 02:58:25 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7471f45a\x2dda60\x2d4567\x2da888\x2d2a87ff526609.mount: Deactivated successfully.
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.488 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4cae663c-54d5-4e67-87eb-3a705a8ceb9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.490 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[53c3b370-8760-4ec6-ab66-27c49e25d8f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.490 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e namespace which is not needed anymore#033[00m
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [NOTICE]   (286455) : haproxy version is 2.8.14-c23fe91
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [NOTICE]   (286455) : path to executable is /usr/sbin/haproxy
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [WARNING]  (286455) : Exiting Master process...
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [ALERT]    (286455) : Current worker (286457) exited with code 143 (Terminated)
Nov 29 02:58:25 np0005539563 neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e[286451]: [WARNING]  (286455) : All workers exited. Exiting... (0)
Nov 29 02:58:25 np0005539563 systemd[1]: libpod-4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043.scope: Deactivated successfully.
Nov 29 02:58:25 np0005539563 conmon[286451]: conmon 4b51d5857e55251c6909 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043.scope/container/memory.events
Nov 29 02:58:25 np0005539563 podman[288364]: 2025-11-29 07:58:25.626082034 +0000 UTC m=+0.054689637 container died 4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:58:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043-userdata-shm.mount: Deactivated successfully.
Nov 29 02:58:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d98212d2b4652d277589e413e9d5a16c56f936da366be86ead1237df421a3c87-merged.mount: Deactivated successfully.
Nov 29 02:58:25 np0005539563 podman[288364]: 2025-11-29 07:58:25.668527353 +0000 UTC m=+0.097134956 container cleanup 4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:58:25 np0005539563 systemd[1]: libpod-conmon-4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043.scope: Deactivated successfully.
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.710 252257 DEBUG nova.compute.manager [req-e852deca-98b9-457f-b0a9-4aa8499f0d21 req-37b336b5-f271-423f-9a0d-ed9f09aced24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-vif-unplugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.710 252257 DEBUG oslo_concurrency.lockutils [req-e852deca-98b9-457f-b0a9-4aa8499f0d21 req-37b336b5-f271-423f-9a0d-ed9f09aced24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.710 252257 DEBUG oslo_concurrency.lockutils [req-e852deca-98b9-457f-b0a9-4aa8499f0d21 req-37b336b5-f271-423f-9a0d-ed9f09aced24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.711 252257 DEBUG oslo_concurrency.lockutils [req-e852deca-98b9-457f-b0a9-4aa8499f0d21 req-37b336b5-f271-423f-9a0d-ed9f09aced24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.711 252257 DEBUG nova.compute.manager [req-e852deca-98b9-457f-b0a9-4aa8499f0d21 req-37b336b5-f271-423f-9a0d-ed9f09aced24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] No waiting events found dispatching network-vif-unplugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.711 252257 DEBUG nova.compute.manager [req-e852deca-98b9-457f-b0a9-4aa8499f0d21 req-37b336b5-f271-423f-9a0d-ed9f09aced24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-vif-unplugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:58:25 np0005539563 podman[288397]: 2025-11-29 07:58:25.737205026 +0000 UTC m=+0.049826844 container remove 4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.742 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa801ac-9896-4b9b-af80-c3cd802b3721]: (4, ('Sat Nov 29 07:58:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e (4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043)\n4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043\nSat Nov 29 07:58:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e (4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043)\n4b51d5857e55251c6909b4b882fd9b9943eecdee074534dcbbbf234b79389043\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.744 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7e2d89-3f44-412a-af78-6529cddad5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.745 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cae663c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 kernel: tap4cae663c-50: left promiscuous mode
Nov 29 02:58:25 np0005539563 nova_compute[252253]: 2025-11-29 07:58:25.763 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.767 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3873fc2d-ab86-4c56-8083-a3448bdfcc35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 531 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 437 KiB/s wr, 65 op/s
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.780 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5acd73db-a449-4646-8df1-fb5f9862dcfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.782 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd5764c-8c2e-4f69-8ce4-e337788a8841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.799 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6acdd9-4fbe-456b-834d-b8a22c679d53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601969, 'reachable_time': 26824, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288410, 'error': None, 'target': 'ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.801 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4cae663c-54d5-4e67-87eb-3a705a8ceb9e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:58:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:25.802 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ae3415-da3f-4e90-99cb-16f5df0c67dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.144 252257 INFO nova.virt.libvirt.driver [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Deleting instance files /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2_del#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.145 252257 INFO nova.virt.libvirt.driver [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Deletion of /var/lib/nova/instances/5ae03c1e-4959-4743-b844-017e2de24ee2_del complete#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.195 252257 INFO nova.virt.libvirt.driver [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Deleting instance files /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_del#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.197 252257 INFO nova.virt.libvirt.driver [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Deletion of /var/lib/nova/instances/2f014f4f-f43f-49b3-93cf-1cc4aec2e8af_del complete#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.308 252257 INFO nova.compute.manager [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.309 252257 DEBUG oslo.service.loopingcall [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.310 252257 DEBUG nova.compute.manager [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.310 252257 DEBUG nova.network.neutron [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.316 252257 INFO nova.compute.manager [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.316 252257 DEBUG oslo.service.loopingcall [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.317 252257 DEBUG nova.compute.manager [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:58:26 np0005539563 nova_compute[252253]: 2025-11-29 07:58:26.317 252257 DEBUG nova.network.neutron [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:58:26 np0005539563 systemd[1]: run-netns-ovnmeta\x2d4cae663c\x2d54d5\x2d4e67\x2d87eb\x2d3a705a8ceb9e.mount: Deactivated successfully.
Nov 29 02:58:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.287 252257 DEBUG nova.network.neutron [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updated VIF entry in instance network info cache for port 1b788e11-b7c5-4cec-a04a-9bf9fbbd686a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.288 252257 DEBUG nova.network.neutron [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [{"id": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "address": "fa:16:3e:e1:0e:5f", "network": {"id": "4cae663c-54d5-4e67-87eb-3a705a8ceb9e", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-892128076-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf14587bd83441a8a75c2ec83dd6a271", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b788e11-b7", "ovs_interfaceid": "1b788e11-b7c5-4cec-a04a-9bf9fbbd686a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.318 252257 DEBUG oslo_concurrency.lockutils [req-d9ddd5a2-9b8c-49fc-b802-4303d378ce87 req-42746654-4cdb-469b-ad64-3375e80c94f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5ae03c1e-4959-4743-b844-017e2de24ee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 531 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 18 KiB/s wr, 39 op/s
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.907 252257 DEBUG nova.compute.manager [req-b7526e6b-40a3-4857-9eed-808088dad6d2 req-ca328fc8-5950-4981-9b58-bbb62ca64d2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.908 252257 DEBUG oslo_concurrency.lockutils [req-b7526e6b-40a3-4857-9eed-808088dad6d2 req-ca328fc8-5950-4981-9b58-bbb62ca64d2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.909 252257 DEBUG oslo_concurrency.lockutils [req-b7526e6b-40a3-4857-9eed-808088dad6d2 req-ca328fc8-5950-4981-9b58-bbb62ca64d2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.909 252257 DEBUG oslo_concurrency.lockutils [req-b7526e6b-40a3-4857-9eed-808088dad6d2 req-ca328fc8-5950-4981-9b58-bbb62ca64d2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.909 252257 DEBUG nova.compute.manager [req-b7526e6b-40a3-4857-9eed-808088dad6d2 req-ca328fc8-5950-4981-9b58-bbb62ca64d2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] No waiting events found dispatching network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:58:27 np0005539563 nova_compute[252253]: 2025-11-29 07:58:27.910 252257 WARNING nova.compute.manager [req-b7526e6b-40a3-4857-9eed-808088dad6d2 req-ca328fc8-5950-4981-9b58-bbb62ca64d2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received unexpected event network-vif-plugged-d3b0b875-c7bf-4d35-ace2-8206af34d5aa for instance with vm_state paused and task_state deleting.#033[00m
Nov 29 02:58:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:28.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.284 252257 DEBUG nova.network.neutron [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.474 252257 INFO nova.compute.manager [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Took 2.16 seconds to deallocate network for instance.#033[00m
Nov 29 02:58:28 np0005539563 podman[288412]: 2025-11-29 07:58:28.580528707 +0000 UTC m=+0.120003959 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.583 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.583 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:28 np0005539563 podman[288413]: 2025-11-29 07:58:28.59226932 +0000 UTC m=+0.128277997 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.619 252257 DEBUG nova.network.neutron [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539563 podman[288414]: 2025-11-29 07:58:28.650776151 +0000 UTC m=+0.183497667 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.654 252257 INFO nova.compute.manager [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Took 2.34 seconds to deallocate network for instance.#033[00m
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.716 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.729 252257 DEBUG oslo_concurrency.processutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:28 np0005539563 nova_compute[252253]: 2025-11-29 07:58:28.947 252257 DEBUG nova.compute.manager [req-bb891a3f-909c-4f80-baa9-1bc59883eae3 req-d3228919-0e93-4303-8d0d-d454d7f4ab80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Received event network-vif-deleted-d3b0b875-c7bf-4d35-ace2-8206af34d5aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195951477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.187 252257 DEBUG oslo_concurrency.processutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.193 252257 DEBUG nova.compute.provider_tree [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.231 252257 DEBUG nova.scheduler.client.report [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.442 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.446 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.451 252257 DEBUG nova.compute.manager [req-03352697-2565-40eb-be86-8a0990947a57 req-803a6a56-4406-420b-abfa-d183e9143b01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-vif-deleted-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.487 252257 INFO nova.scheduler.client.report [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Deleted allocations for instance 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.537 252257 DEBUG oslo_concurrency.processutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:29 np0005539563 nova_compute[252253]: 2025-11-29 07:58:29.581 252257 DEBUG oslo_concurrency.lockutils [None req-aedf976f-a0e1-4a7a-903f-146109f2be3d f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "2f014f4f-f43f-49b3-93cf-1cc4aec2e8af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 484 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 16 KiB/s wr, 62 op/s
Nov 29 02:58:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424215325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.112 252257 DEBUG oslo_concurrency.processutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.119 252257 DEBUG nova.compute.provider_tree [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.138 252257 DEBUG nova.scheduler.client.report [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.168 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.231 252257 INFO nova.scheduler.client.report [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Deleted allocations for instance 5ae03c1e-4959-4743-b844-017e2de24ee2#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.363 252257 DEBUG oslo_concurrency.lockutils [None req-030cb64c-76c3-4542-9a40-96e0d5c58f16 00183201554f47b3aa46da246e60580d cf14587bd83441a8a75c2ec83dd6a271 - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.379 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.607 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.608 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 02:58:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.662 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.804 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.805 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.836 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:58:30 np0005539563 nova_compute[252253]: 2025-11-29 07:58:30.837 252257 INFO nova.compute.claims [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.044 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.179 252257 DEBUG nova.compute.manager [req-60961833-c396-43a7-975d-f216dbbc028e req-f8b9e015-3d4d-49fe-9629-fa6cffac705e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.180 252257 DEBUG oslo_concurrency.lockutils [req-60961833-c396-43a7-975d-f216dbbc028e req-f8b9e015-3d4d-49fe-9629-fa6cffac705e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.181 252257 DEBUG oslo_concurrency.lockutils [req-60961833-c396-43a7-975d-f216dbbc028e req-f8b9e015-3d4d-49fe-9629-fa6cffac705e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.181 252257 DEBUG oslo_concurrency.lockutils [req-60961833-c396-43a7-975d-f216dbbc028e req-f8b9e015-3d4d-49fe-9629-fa6cffac705e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5ae03c1e-4959-4743-b844-017e2de24ee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.182 252257 DEBUG nova.compute.manager [req-60961833-c396-43a7-975d-f216dbbc028e req-f8b9e015-3d4d-49fe-9629-fa6cffac705e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] No waiting events found dispatching network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.182 252257 WARNING nova.compute.manager [req-60961833-c396-43a7-975d-f216dbbc028e req-f8b9e015-3d4d-49fe-9629-fa6cffac705e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Received unexpected event network-vif-plugged-1b788e11-b7c5-4cec-a04a-9bf9fbbd686a for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:58:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351949704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.552 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.561 252257 DEBUG nova.compute.provider_tree [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.591 252257 DEBUG nova.scheduler.client.report [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.624 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.625 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.747 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.747 252257 DEBUG nova.network.neutron [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.775 252257 INFO nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:58:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 393 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 18 KiB/s wr, 105 op/s
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.808 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.937 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.939 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.940 252257 INFO nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Creating image(s)#033[00m
Nov 29 02:58:31 np0005539563 nova_compute[252253]: 2025-11-29 07:58:31.982 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.025 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.068 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.071 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:32.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.162 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.164 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.165 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.166 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.204 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.208 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.347 252257 DEBUG nova.policy [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7d59bea260d4752aa29379967636c0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:58:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:32 np0005539563 nova_compute[252253]: 2025-11-29 07:58:32.682 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:33 np0005539563 nova_compute[252253]: 2025-11-29 07:58:33.211 252257 DEBUG nova.network.neutron [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Successfully created port: 1be99414-ee31-47a2-8f21-705afd5a21fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:58:33 np0005539563 nova_compute[252253]: 2025-11-29 07:58:33.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 393 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 14 KiB/s wr, 86 op/s
Nov 29 02:58:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:34.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539563 nova_compute[252253]: 2025-11-29 07:58:34.823 252257 DEBUG nova.network.neutron [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Successfully updated port: 1be99414-ee31-47a2-8f21-705afd5a21fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:58:34 np0005539563 nova_compute[252253]: 2025-11-29 07:58:34.841 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:34 np0005539563 nova_compute[252253]: 2025-11-29 07:58:34.842 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquired lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:34 np0005539563 nova_compute[252253]: 2025-11-29 07:58:34.842 252257 DEBUG nova.network.neutron [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:34.999 252257 DEBUG nova.compute.manager [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received event network-changed-1be99414-ee31-47a2-8f21-705afd5a21fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.000 252257 DEBUG nova.compute.manager [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Refreshing instance network info cache due to event network-changed-1be99414-ee31-47a2-8f21-705afd5a21fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.000 252257 DEBUG oslo_concurrency.lockutils [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.115 252257 DEBUG nova.network.neutron [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.381 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.708 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:58:35 np0005539563 nova_compute[252253]: 2025-11-29 07:58:35.708 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 393 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 15 KiB/s wr, 80 op/s
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.095 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:36.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247265864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.644 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.833 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.835 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=20.83083724975586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.835 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.835 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.961 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance f5073e59-05bc-46a4-8bf4-ebeb74ca389b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.962 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:58:36 np0005539563 nova_compute[252253]: 2025-11-29 07:58:36.962 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:58:37 np0005539563 nova_compute[252253]: 2025-11-29 07:58:37.021 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457876871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:37 np0005539563 nova_compute[252253]: 2025-11-29 07:58:37.498 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:37 np0005539563 nova_compute[252253]: 2025-11-29 07:58:37.505 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:37 np0005539563 nova_compute[252253]: 2025-11-29 07:58:37.690 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:37 np0005539563 nova_compute[252253]: 2025-11-29 07:58:37.759 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:58:37 np0005539563 nova_compute[252253]: 2025-11-29 07:58:37.760 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 393 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.9 KiB/s wr, 67 op/s
Nov 29 02:58:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:38.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:58:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.761 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.762 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.762 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.823 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.824 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.824 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.825 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:38 np0005539563 nova_compute[252253]: 2025-11-29 07:58:38.969 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.017 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] resizing rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.271 252257 DEBUG nova.objects.instance [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'migration_context' on Instance uuid f5073e59-05bc-46a4-8bf4-ebeb74ca389b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.289 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.289 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Ensure instance console log exists: /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.290 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.290 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.290 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 395 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 107 KiB/s wr, 78 op/s
Nov 29 02:58:39 np0005539563 nova_compute[252253]: 2025-11-29 07:58:39.822 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:40.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 29 02:58:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 02:58:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.317 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403105.315838, 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.318 252257 INFO nova.compute.manager [-] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.323 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403105.3205125, 5ae03c1e-4959-4743-b844-017e2de24ee2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.323 252257 INFO nova.compute.manager [-] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.393 252257 DEBUG nova.compute.manager [None req-e6008123-eca4-4472-9cae-83a4e4db07b3 - - - - - -] [instance: 2f014f4f-f43f-49b3-93cf-1cc4aec2e8af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 29 02:58:40 np0005539563 nova_compute[252253]: 2025-11-29 07:58:40.526 252257 DEBUG nova.compute.manager [None req-85f3f256-6d18-4e7e-ab2a-022a288f6c18 - - - - - -] [instance: 5ae03c1e-4959-4743-b844-017e2de24ee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 02:58:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.098 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.231 252257 DEBUG nova.network.neutron [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Updating instance_info_cache with network_info: [{"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.266 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Releasing lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.266 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance network_info: |[{"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.267 252257 DEBUG oslo_concurrency.lockutils [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.267 252257 DEBUG nova.network.neutron [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Refreshing network info cache for port 1be99414-ee31-47a2-8f21-705afd5a21fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.272 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Start _get_guest_xml network_info=[{"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.278 252257 WARNING nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.291 252257 DEBUG nova.virt.libvirt.host [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.292 252257 DEBUG nova.virt.libvirt.host [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.307 252257 DEBUG nova.virt.libvirt.host [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.308 252257 DEBUG nova.virt.libvirt.host [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.310 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.310 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.310 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.311 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.311 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.311 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.311 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.312 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.312 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.312 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.312 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.313 252257 DEBUG nova.virt.hardware [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.316 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 02:58:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 424 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 51 op/s
Nov 29 02:58:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665550622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.824 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.849 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:41 np0005539563 nova_compute[252253]: 2025-11-29 07:58:41.852 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:42.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803348600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.353 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.355 252257 DEBUG nova.virt.libvirt.vif [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609973919',display_name='tempest-ImagesTestJSON-server-609973919',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609973919',id=55,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-73yrhjcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:31Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=f5073e59-05bc-46a4-8bf4-ebeb74ca389b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.355 252257 DEBUG nova.network.os_vif_util [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.356 252257 DEBUG nova.network.os_vif_util [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.358 252257 DEBUG nova.objects.instance [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5073e59-05bc-46a4-8bf4-ebeb74ca389b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.385 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <uuid>f5073e59-05bc-46a4-8bf4-ebeb74ca389b</uuid>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <name>instance-00000037</name>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:name>tempest-ImagesTestJSON-server-609973919</nova:name>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 07:58:41</nova:creationTime>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:user uuid="f7d59bea260d4752aa29379967636c0b">tempest-ImagesTestJSON-911260095-project-member</nova:user>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:project uuid="4d8c5b7e3ca74bc1880eb616b04711f7">tempest-ImagesTestJSON-911260095</nova:project>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <nova:port uuid="1be99414-ee31-47a2-8f21-705afd5a21fc">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <system>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <entry name="serial">f5073e59-05bc-46a4-8bf4-ebeb74ca389b</entry>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <entry name="uuid">f5073e59-05bc-46a4-8bf4-ebeb74ca389b</entry>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </system>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <os>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </os>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <features>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </features>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </clock>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  <devices>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk.config">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </source>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      </auth>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </disk>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:fa:17:be"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <target dev="tap1be99414-ee"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </interface>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/console.log" append="off"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </serial>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <video>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </video>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </rng>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 02:58:42 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 02:58:42 np0005539563 nova_compute[252253]:  </devices>
Nov 29 02:58:42 np0005539563 nova_compute[252253]: </domain>
Nov 29 02:58:42 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.387 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Preparing to wait for external event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.387 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.388 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.388 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.388 252257 DEBUG nova.virt.libvirt.vif [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609973919',display_name='tempest-ImagesTestJSON-server-609973919',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609973919',id=55,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-73yrhjcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:58:31Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=f5073e59-05bc-46a4-8bf4-ebeb74ca389b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.389 252257 DEBUG nova.network.os_vif_util [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.389 252257 DEBUG nova.network.os_vif_util [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.389 252257 DEBUG os_vif [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.390 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.390 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.391 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.394 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.395 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1be99414-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.395 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1be99414-ee, col_values=(('external_ids', {'iface-id': '1be99414-ee31-47a2-8f21-705afd5a21fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:17:be', 'vm-uuid': 'f5073e59-05bc-46a4-8bf4-ebeb74ca389b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.397 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:42 np0005539563 NetworkManager[48981]: <info>  [1764403122.3980] manager: (tap1be99414-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.405 252257 INFO os_vif [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee')#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.491 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.491 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.492 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No VIF found with MAC fa:16:3e:fa:17:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.492 252257 INFO nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Using config drive#033[00m
Nov 29 02:58:42 np0005539563 nova_compute[252253]: 2025-11-29 07:58:42.512 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.097 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.146 252257 INFO nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Creating config drive at /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/disk.config#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.151 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy7awcy8_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.286 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy7awcy8_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.314 252257 DEBUG nova.storage.rbd_utils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.318 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/disk.config f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.509 252257 DEBUG oslo_concurrency.processutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/disk.config f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.510 252257 INFO nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Deleting local config drive /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b/disk.config because it was imported into RBD.#033[00m
Nov 29 02:58:43 np0005539563 kernel: tap1be99414-ee: entered promiscuous mode
Nov 29 02:58:43 np0005539563 NetworkManager[48981]: <info>  [1764403123.5771] manager: (tap1be99414-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 02:58:43 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:43Z|00166|binding|INFO|Claiming lport 1be99414-ee31-47a2-8f21-705afd5a21fc for this chassis.
Nov 29 02:58:43 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:43Z|00167|binding|INFO|1be99414-ee31-47a2-8f21-705afd5a21fc: Claiming fa:16:3e:fa:17:be 10.100.0.8
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.580 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.594 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:17:be 10.100.0.8'], port_security=['fa:16:3e:fa:17:be 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f5073e59-05bc-46a4-8bf4-ebeb74ca389b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1be99414-ee31-47a2-8f21-705afd5a21fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.595 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1be99414-ee31-47a2-8f21-705afd5a21fc in datapath 7471f45a-da60-4567-a888-2a87ff526609 bound to our chassis#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.597 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7471f45a-da60-4567-a888-2a87ff526609#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.611 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[43343cc3-1e92-4d58-b664-283efc5b7250]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.612 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7471f45a-d1 in ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:58:43 np0005539563 systemd-udevd[288944]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.614 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7471f45a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.614 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc64352a-73b0-47f8-8d8e-81a7cf582fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.615 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13c83528-a3dd-47d6-9794-f9875c17017c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.627 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[81970f5b-d0e5-43d9-906f-517f6751ac49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 systemd-machined[213024]: New machine qemu-24-instance-00000037.
Nov 29 02:58:43 np0005539563 NetworkManager[48981]: <info>  [1764403123.6344] device (tap1be99414-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:58:43 np0005539563 NetworkManager[48981]: <info>  [1764403123.6351] device (tap1be99414-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:58:43 np0005539563 systemd[1]: Started Virtual Machine qemu-24-instance-00000037.
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.653 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc62a9a-0373-41a0-bd54-eb88916cbf37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.687 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cf84b4-273f-4edf-9d3f-2e68d5d851bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:43Z|00168|binding|INFO|Setting lport 1be99414-ee31-47a2-8f21-705afd5a21fc ovn-installed in OVS
Nov 29 02:58:43 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:43Z|00169|binding|INFO|Setting lport 1be99414-ee31-47a2-8f21-705afd5a21fc up in Southbound
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.689 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.694 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[671856da-9898-40ce-89cd-b427f34993ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 NetworkManager[48981]: <info>  [1764403123.6958] manager: (tap7471f45a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.724 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[86d8dd09-ed27-4882-83fb-2a17ec76edef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.728 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a9eafab2-745f-4f8c-8250-8ea24da8e3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 NetworkManager[48981]: <info>  [1764403123.7528] device (tap7471f45a-d0): carrier: link connected
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.758 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9564082a-0064-4c69-bc01-f765e10bd16e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.773 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5895201d-e49c-4b05-baf7-5ca9e74830f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609152, 'reachable_time': 32045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288978, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 416 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 49 op/s
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.787 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c6608c-067c-461c-8df4-3a218974eb00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:d764'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609152, 'tstamp': 609152}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288979, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.802 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[38a3050c-e042-4de9-8be9-2bccdd86be6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609152, 'reachable_time': 32045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288980, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.838 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[38a23e2a-1016-4195-9f19-bbfa0c349dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.897 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5403c1-10a1-4825-a501-407bc0bfb093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.898 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.898 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.899 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7471f45a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.900 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 NetworkManager[48981]: <info>  [1764403123.9014] manager: (tap7471f45a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 02:58:43 np0005539563 kernel: tap7471f45a-d0: entered promiscuous mode
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.903 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7471f45a-d0, col_values=(('external_ids', {'iface-id': '06264566-5ffe-42a3-ad44-b3f54b7d79bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:43 np0005539563 ovn_controller[148841]: 2025-11-29T07:58:43Z|00170|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.906 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.906 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.910 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1935d4bc-bd37-4ca7-851a-66365df087b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.911 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7471f45a-da60-4567-a888-2a87ff526609
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:58:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:43.912 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'env', 'PROCESS_TAG=haproxy-7471f45a-da60-4567-a888-2a87ff526609', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7471f45a-da60-4567-a888-2a87ff526609.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:58:43 np0005539563 nova_compute[252253]: 2025-11-29 07:58:43.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.273 252257 DEBUG nova.network.neutron [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Updated VIF entry in instance network info cache for port 1be99414-ee31-47a2-8f21-705afd5a21fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.274 252257 DEBUG nova.network.neutron [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Updating instance_info_cache with network_info: [{"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:58:44 np0005539563 podman[289048]: 2025-11-29 07:58:44.249412969 +0000 UTC m=+0.024766743 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.343 252257 DEBUG oslo_concurrency.lockutils [req-fbf3bbda-d826-46e2-9941-90ad46357363 req-219557fe-d020-4f59-92ff-6f4697e0700c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.608 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403124.6080902, f5073e59-05bc-46a4-8bf4-ebeb74ca389b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.609 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] VM Started (Lifecycle Event)#033[00m
Nov 29 02:58:44 np0005539563 podman[289048]: 2025-11-29 07:58:44.634102131 +0000 UTC m=+0.409455885 container create 808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 02:58:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 29 02:58:44 np0005539563 systemd[1]: Started libpod-conmon-808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496.scope.
Nov 29 02:58:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.698 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.704 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403124.608199, f5073e59-05bc-46a4-8bf4-ebeb74ca389b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.704 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:58:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:58:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcea235dcfeb08e58eca8d0491a813c0982691851a0a5cdf98fa3aa48e82f63/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:44 np0005539563 podman[289048]: 2025-11-29 07:58:44.734432514 +0000 UTC m=+0.509786288 container init 808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 02:58:44 np0005539563 podman[289048]: 2025-11-29 07:58:44.741682004 +0000 UTC m=+0.517035758 container start 808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 02:58:44 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [NOTICE]   (289074) : New worker (289076) forked
Nov 29 02:58:44 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [NOTICE]   (289074) : Loading success.
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.793 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.797 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.829 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.934 252257 DEBUG nova.compute.manager [req-fdadf894-5351-47ee-88ad-7b0f20a1f0a8 req-f14ba03f-7634-4573-877a-658b1c1792a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.935 252257 DEBUG oslo_concurrency.lockutils [req-fdadf894-5351-47ee-88ad-7b0f20a1f0a8 req-f14ba03f-7634-4573-877a-658b1c1792a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.935 252257 DEBUG oslo_concurrency.lockutils [req-fdadf894-5351-47ee-88ad-7b0f20a1f0a8 req-f14ba03f-7634-4573-877a-658b1c1792a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.936 252257 DEBUG oslo_concurrency.lockutils [req-fdadf894-5351-47ee-88ad-7b0f20a1f0a8 req-f14ba03f-7634-4573-877a-658b1c1792a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.936 252257 DEBUG nova.compute.manager [req-fdadf894-5351-47ee-88ad-7b0f20a1f0a8 req-f14ba03f-7634-4573-877a-658b1c1792a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Processing event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.937 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.940 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403124.940401, f5073e59-05bc-46a4-8bf4-ebeb74ca389b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.941 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.943 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.946 252257 INFO nova.virt.libvirt.driver [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance spawned successfully.#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.946 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.981 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.986 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.989 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.989 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.990 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.990 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.991 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:44 np0005539563 nova_compute[252253]: 2025-11-29 07:58:44.991 252257 DEBUG nova.virt.libvirt.driver [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:45 np0005539563 nova_compute[252253]: 2025-11-29 07:58:45.020 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:58:45 np0005539563 nova_compute[252253]: 2025-11-29 07:58:45.063 252257 INFO nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Took 13.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:58:45 np0005539563 nova_compute[252253]: 2025-11-29 07:58:45.064 252257 DEBUG nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:45 np0005539563 nova_compute[252253]: 2025-11-29 07:58:45.145 252257 INFO nova.compute.manager [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Took 14.39 seconds to build instance.#033[00m
Nov 29 02:58:45 np0005539563 nova_compute[252253]: 2025-11-29 07:58:45.174 252257 DEBUG oslo_concurrency.lockutils [None req-a9edb76a-429d-4b2c-a9a7-54c1257822f4 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 309 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 3.4 MiB/s wr, 125 op/s
Nov 29 02:58:46 np0005539563 nova_compute[252253]: 2025-11-29 07:58:46.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 29 02:58:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.242 252257 DEBUG nova.compute.manager [req-ae9f672c-bc14-4069-b0a0-afc156807d3a req-fa55fcb4-3c57-4d8a-8640-72394c3411b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.242 252257 DEBUG oslo_concurrency.lockutils [req-ae9f672c-bc14-4069-b0a0-afc156807d3a req-fa55fcb4-3c57-4d8a-8640-72394c3411b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.243 252257 DEBUG oslo_concurrency.lockutils [req-ae9f672c-bc14-4069-b0a0-afc156807d3a req-fa55fcb4-3c57-4d8a-8640-72394c3411b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.243 252257 DEBUG oslo_concurrency.lockutils [req-ae9f672c-bc14-4069-b0a0-afc156807d3a req-fa55fcb4-3c57-4d8a-8640-72394c3411b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.243 252257 DEBUG nova.compute.manager [req-ae9f672c-bc14-4069-b0a0-afc156807d3a req-fa55fcb4-3c57-4d8a-8640-72394c3411b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] No waiting events found dispatching network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.243 252257 WARNING nova.compute.manager [req-ae9f672c-bc14-4069-b0a0-afc156807d3a req-fa55fcb4-3c57-4d8a-8640-72394c3411b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received unexpected event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc for instance with vm_state active and task_state None.
Nov 29 02:58:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 29 02:58:47 np0005539563 nova_compute[252253]: 2025-11-29 07:58:47.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 309 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 948 KiB/s wr, 83 op/s
Nov 29 02:58:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:48.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:48 np0005539563 nova_compute[252253]: 2025-11-29 07:58:48.598 252257 DEBUG oslo_concurrency.lockutils [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:48 np0005539563 nova_compute[252253]: 2025-11-29 07:58:48.599 252257 DEBUG oslo_concurrency.lockutils [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:48 np0005539563 nova_compute[252253]: 2025-11-29 07:58:48.599 252257 DEBUG nova.compute.manager [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:48 np0005539563 nova_compute[252253]: 2025-11-29 07:58:48.603 252257 DEBUG nova.compute.manager [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 02:58:48 np0005539563 nova_compute[252253]: 2025-11-29 07:58:48.603 252257 DEBUG nova.objects.instance [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'flavor' on Instance uuid f5073e59-05bc-46a4-8bf4-ebeb74ca389b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:48 np0005539563 nova_compute[252253]: 2025-11-29 07:58:48.634 252257 DEBUG nova.virt.libvirt.driver [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 02:58:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 310 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 923 KiB/s rd, 1.4 MiB/s wr, 106 op/s
Nov 29 02:58:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 29 02:58:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:50.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 02:58:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 29 02:58:51 np0005539563 nova_compute[252253]: 2025-11-29 07:58:51.103 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 304 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 268 op/s
Nov 29 02:58:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:52.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:52 np0005539563 nova_compute[252253]: 2025-11-29 07:58:52.402 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:52.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 274 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 184 op/s
Nov 29 02:58:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:54.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:58:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:54.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:58:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 157 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 232 op/s
Nov 29 02:58:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 29 02:58:56 np0005539563 nova_compute[252253]: 2025-11-29 07:58:56.105 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:56.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:56.877 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:58:56 np0005539563 nova_compute[252253]: 2025-11-29 07:58:56.878 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:56.880 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:58:57 np0005539563 nova_compute[252253]: 2025-11-29 07:58:57.403 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 157 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 29 02:58:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:58.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:58:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:58 np0005539563 nova_compute[252253]: 2025-11-29 07:58:58.677 252257 DEBUG nova.virt.libvirt.driver [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:58:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:58:58.882 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:58:59 np0005539563 podman[289095]: 2025-11-29 07:58:59.556601915 +0000 UTC m=+0.094019641 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:58:59 np0005539563 podman[289094]: 2025-11-29 07:58:59.556711298 +0000 UTC m=+0.092482239 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:58:59 np0005539563 podman[289096]: 2025-11-29 07:58:59.610587904 +0000 UTC m=+0.134981542 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:58:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Nov 29 02:59:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:59:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:00.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:59:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:01 np0005539563 nova_compute[252253]: 2025-11-29 07:59:01.108 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 135 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 496 KiB/s wr, 74 op/s
Nov 29 02:59:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:02.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:02 np0005539563 nova_compute[252253]: 2025-11-29 07:59:02.406 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:02.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 02:59:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 136 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 836 KiB/s wr, 74 op/s
Nov 29 02:59:03 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 29 02:59:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:04.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:04.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:04.906 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 147 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 02:59:06 np0005539563 nova_compute[252253]: 2025-11-29 07:59:06.112 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:06.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:06.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:07 np0005539563 nova_compute[252253]: 2025-11-29 07:59:07.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 147 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 02:59:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:08.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:09 np0005539563 nova_compute[252253]: 2025-11-29 07:59:09.728 252257 DEBUG nova.virt.libvirt.driver [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:59:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 150 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Nov 29 02:59:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:10.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:11 np0005539563 nova_compute[252253]: 2025-11-29 07:59:11.114 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 155 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Nov 29 02:59:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:12.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:12 np0005539563 nova_compute[252253]: 2025-11-29 07:59:12.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:12.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_07:59:12
Nov 29 02:59:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:59:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 02:59:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'vms', 'images']
Nov 29 02:59:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:59:13 np0005539563 podman[289388]: 2025-11-29 07:59:13.128488715 +0000 UTC m=+1.721920019 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:13 np0005539563 podman[289409]: 2025-11-29 07:59:13.390965228 +0000 UTC m=+0.160898685 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:13 np0005539563 podman[289388]: 2025-11-29 07:59:13.404535942 +0000 UTC m=+1.997967256 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 155 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 1.4 MiB/s wr, 14 op/s
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:59:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:59:14 np0005539563 ovn_controller[148841]: 2025-11-29T07:59:14Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:17:be 10.100.0.8
Nov 29 02:59:14 np0005539563 ovn_controller[148841]: 2025-11-29T07:59:14Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:17:be 10.100.0.8
Nov 29 02:59:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:14.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 29 02:59:14 np0005539563 podman[289541]: 2025-11-29 07:59:14.497610232 +0000 UTC m=+0.457691234 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:59:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:14.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:15 np0005539563 podman[289541]: 2025-11-29 07:59:15.19358374 +0000 UTC m=+1.153664712 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 02:59:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 160 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 85 op/s
Nov 29 02:59:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 02:59:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 29 02:59:16 np0005539563 nova_compute[252253]: 2025-11-29 07:59:16.117 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:16 np0005539563 podman[289608]: 2025-11-29 07:59:16.161318137 +0000 UTC m=+0.248711335 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, description=keepalived for Ceph)
Nov 29 02:59:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:16.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:16 np0005539563 podman[289608]: 2025-11-29 07:59:16.427989854 +0000 UTC m=+0.515383042 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph)
Nov 29 02:59:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:59:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:16.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:59:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:17 np0005539563 nova_compute[252253]: 2025-11-29 07:59:17.417 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 160 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 639 KiB/s wr, 97 op/s
Nov 29 02:59:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:59:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:59:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:59:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:59:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:18.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:18.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 58d06e96-beac-481d-8c6e-3dbc0ab5fc00 does not exist
Nov 29 02:59:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c79b6fa6-5bd8-4dd4-a67e-45a47ec099d9 does not exist
Nov 29 02:59:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 63db461a-a183-4d1f-b866-903762457bd8 does not exist
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 161 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 477 KiB/s wr, 134 op/s
Nov 29 02:59:19 np0005539563 podman[289915]: 2025-11-29 07:59:19.772651968 +0000 UTC m=+0.026644496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:19 np0005539563 podman[289915]: 2025-11-29 07:59:19.897724353 +0000 UTC m=+0.151716861 container create 38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:59:19 np0005539563 systemd[1]: Started libpod-conmon-38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7.scope.
Nov 29 02:59:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:59:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:20.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:20 np0005539563 podman[289915]: 2025-11-29 07:59:20.244464908 +0000 UTC m=+0.498457436 container init 38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:59:20 np0005539563 podman[289915]: 2025-11-29 07:59:20.251946134 +0000 UTC m=+0.505938642 container start 38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:20 np0005539563 sweet_dewdney[289931]: 167 167
Nov 29 02:59:20 np0005539563 systemd[1]: libpod-38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7.scope: Deactivated successfully.
Nov 29 02:59:20 np0005539563 podman[289915]: 2025-11-29 07:59:20.286082625 +0000 UTC m=+0.540075153 container attach 38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:59:20 np0005539563 podman[289915]: 2025-11-29 07:59:20.287147584 +0000 UTC m=+0.541140092 container died 38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:59:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 29 02:59:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 02:59:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-54266306db184e7c52116ba35831bede953719724bd15680b5d96f15e36f6a3a-merged.mount: Deactivated successfully.
Nov 29 02:59:20 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 29 02:59:20 np0005539563 podman[289915]: 2025-11-29 07:59:20.560520587 +0000 UTC m=+0.814513135 container remove 38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:59:20 np0005539563 systemd[1]: libpod-conmon-38f36873ef75030e3c1a05ef1c1237d2839ac78e879ba75e2b66c2632c9c9ef7.scope: Deactivated successfully.
Nov 29 02:59:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:20.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:20 np0005539563 nova_compute[252253]: 2025-11-29 07:59:20.780 252257 DEBUG nova.virt.libvirt.driver [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:59:20 np0005539563 podman[289957]: 2025-11-29 07:59:20.741092023 +0000 UTC m=+0.025283248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:21 np0005539563 podman[289957]: 2025-11-29 07:59:21.02949689 +0000 UTC m=+0.313688065 container create e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:59:21 np0005539563 nova_compute[252253]: 2025-11-29 07:59:21.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:21 np0005539563 systemd[1]: Started libpod-conmon-e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09.scope.
Nov 29 02:59:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:59:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9ffb54f30d3be9c110cf318b8aff5c5b79371fd32a9f8460bffd60dcf116b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9ffb54f30d3be9c110cf318b8aff5c5b79371fd32a9f8460bffd60dcf116b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9ffb54f30d3be9c110cf318b8aff5c5b79371fd32a9f8460bffd60dcf116b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9ffb54f30d3be9c110cf318b8aff5c5b79371fd32a9f8460bffd60dcf116b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9ffb54f30d3be9c110cf318b8aff5c5b79371fd32a9f8460bffd60dcf116b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:21 np0005539563 podman[289957]: 2025-11-29 07:59:21.196241855 +0000 UTC m=+0.480433050 container init e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:59:21 np0005539563 podman[289957]: 2025-11-29 07:59:21.203598378 +0000 UTC m=+0.487789563 container start e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leavitt, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:59:21 np0005539563 podman[289957]: 2025-11-29 07:59:21.370357702 +0000 UTC m=+0.654548867 container attach e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:59:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 230 KiB/s wr, 199 op/s
Nov 29 02:59:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:22 np0005539563 sleepy_leavitt[289975]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:59:22 np0005539563 sleepy_leavitt[289975]: --> relative data size: 1.0
Nov 29 02:59:22 np0005539563 sleepy_leavitt[289975]: --> All data devices are unavailable
Nov 29 02:59:22 np0005539563 systemd[1]: libpod-e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09.scope: Deactivated successfully.
Nov 29 02:59:22 np0005539563 podman[289957]: 2025-11-29 07:59:22.061507628 +0000 UTC m=+1.345698823 container died e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:59:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:22.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4d9ffb54f30d3be9c110cf318b8aff5c5b79371fd32a9f8460bffd60dcf116b7-merged.mount: Deactivated successfully.
Nov 29 02:59:22 np0005539563 nova_compute[252253]: 2025-11-29 07:59:22.420 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:22 np0005539563 podman[289957]: 2025-11-29 07:59:22.613669712 +0000 UTC m=+1.897860897 container remove e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:59:22 np0005539563 systemd[1]: libpod-conmon-e588886b9d2eac494f97189eba138bfd9cc54ee7dd791a62ed16f32ec1c0ff09.scope: Deactivated successfully.
Nov 29 02:59:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031574248557602828 of space, bias 1.0, pg target 0.9472274567280848 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001904415014889261 of space, bias 1.0, pg target 0.5713245044667783 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:59:23 np0005539563 podman[290198]: 2025-11-29 07:59:23.308461567 +0000 UTC m=+0.042226644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:23 np0005539563 nova_compute[252253]: 2025-11-29 07:59:23.801 252257 INFO nova.virt.libvirt.driver [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance shutdown successfully after 35 seconds.#033[00m
Nov 29 02:59:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 794 KiB/s rd, 157 KiB/s wr, 93 op/s
Nov 29 02:59:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:24.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539563 podman[290198]: 2025-11-29 07:59:24.393986679 +0000 UTC m=+1.127751716 container create 9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:24 np0005539563 systemd[1]: Started libpod-conmon-9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e.scope.
Nov 29 02:59:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:59:24 np0005539563 kernel: tap1be99414-ee (unregistering): left promiscuous mode
Nov 29 02:59:24 np0005539563 NetworkManager[48981]: <info>  [1764403164.4869] device (tap1be99414-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:59:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:59:24Z|00171|binding|INFO|Releasing lport 1be99414-ee31-47a2-8f21-705afd5a21fc from this chassis (sb_readonly=0)
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.497 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:59:24Z|00172|binding|INFO|Setting lport 1be99414-ee31-47a2-8f21-705afd5a21fc down in Southbound
Nov 29 02:59:24 np0005539563 ovn_controller[148841]: 2025-11-29T07:59:24Z|00173|binding|INFO|Removing iface tap1be99414-ee ovn-installed in OVS
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.500 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.530 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:24.533 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:17:be 10.100.0.8'], port_security=['fa:16:3e:fa:17:be 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f5073e59-05bc-46a4-8bf4-ebeb74ca389b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1be99414-ee31-47a2-8f21-705afd5a21fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:24.535 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1be99414-ee31-47a2-8f21-705afd5a21fc in datapath 7471f45a-da60-4567-a888-2a87ff526609 unbound from our chassis#033[00m
Nov 29 02:59:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:24.536 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7471f45a-da60-4567-a888-2a87ff526609, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:59:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:24.537 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[629b9243-7a6c-440b-a6b0-8aaf9e8ec6fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:24.538 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace which is not needed anymore#033[00m
Nov 29 02:59:24 np0005539563 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 29 02:59:24 np0005539563 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000037.scope: Consumed 14.749s CPU time.
Nov 29 02:59:24 np0005539563 systemd-machined[213024]: Machine qemu-24-instance-00000037 terminated.
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.631 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.637 252257 INFO nova.virt.libvirt.driver [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance destroyed successfully.#033[00m
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.637 252257 DEBUG nova.objects.instance [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5073e59-05bc-46a4-8bf4-ebeb74ca389b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.686 252257 DEBUG nova.compute.manager [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:24.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:24 np0005539563 nova_compute[252253]: 2025-11-29 07:59:24.785 252257 DEBUG oslo_concurrency.lockutils [None req-953e96d3-ff77-4c02-8186-3a269b95117c f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 36.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:25 np0005539563 podman[290198]: 2025-11-29 07:59:25.398086178 +0000 UTC m=+2.131851195 container init 9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:59:25 np0005539563 podman[290198]: 2025-11-29 07:59:25.408624699 +0000 UTC m=+2.142389676 container start 9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:59:25 np0005539563 systemd[1]: libpod-9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e.scope: Deactivated successfully.
Nov 29 02:59:25 np0005539563 crazy_goldberg[290215]: 167 167
Nov 29 02:59:25 np0005539563 conmon[290215]: conmon 9d133825d72e18c4901b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e.scope/container/memory.events
Nov 29 02:59:25 np0005539563 nova_compute[252253]: 2025-11-29 07:59:25.504 252257 DEBUG nova.compute.manager [req-3a8e96bd-55a1-4748-b7d1-16acbf815820 req-5e62c364-c8ea-4128-be3e-57fd75ea9c6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received event network-vif-unplugged-1be99414-ee31-47a2-8f21-705afd5a21fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:25 np0005539563 nova_compute[252253]: 2025-11-29 07:59:25.506 252257 DEBUG oslo_concurrency.lockutils [req-3a8e96bd-55a1-4748-b7d1-16acbf815820 req-5e62c364-c8ea-4128-be3e-57fd75ea9c6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:25 np0005539563 nova_compute[252253]: 2025-11-29 07:59:25.507 252257 DEBUG oslo_concurrency.lockutils [req-3a8e96bd-55a1-4748-b7d1-16acbf815820 req-5e62c364-c8ea-4128-be3e-57fd75ea9c6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:25 np0005539563 nova_compute[252253]: 2025-11-29 07:59:25.507 252257 DEBUG oslo_concurrency.lockutils [req-3a8e96bd-55a1-4748-b7d1-16acbf815820 req-5e62c364-c8ea-4128-be3e-57fd75ea9c6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:25 np0005539563 nova_compute[252253]: 2025-11-29 07:59:25.507 252257 DEBUG nova.compute.manager [req-3a8e96bd-55a1-4748-b7d1-16acbf815820 req-5e62c364-c8ea-4128-be3e-57fd75ea9c6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] No waiting events found dispatching network-vif-unplugged-1be99414-ee31-47a2-8f21-705afd5a21fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:25 np0005539563 nova_compute[252253]: 2025-11-29 07:59:25.508 252257 WARNING nova.compute.manager [req-3a8e96bd-55a1-4748-b7d1-16acbf815820 req-5e62c364-c8ea-4128-be3e-57fd75ea9c6a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received unexpected event network-vif-unplugged-1be99414-ee31-47a2-8f21-705afd5a21fc for instance with vm_state stopped and task_state None.#033[00m
Nov 29 02:59:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 142 KiB/s wr, 105 op/s
Nov 29 02:59:25 np0005539563 podman[290198]: 2025-11-29 07:59:25.93669779 +0000 UTC m=+2.670462797 container attach 9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:59:25 np0005539563 podman[290198]: 2025-11-29 07:59:25.938818598 +0000 UTC m=+2.672583615 container died 9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:26 np0005539563 nova_compute[252253]: 2025-11-29 07:59:26.122 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:26.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-639273ec30843386296dfcbf6f17cb8a0d86bf9a00eeda315d075f0ee9bfb5b6-merged.mount: Deactivated successfully.
Nov 29 02:59:26 np0005539563 podman[290198]: 2025-11-29 07:59:26.673963255 +0000 UTC m=+3.407728242 container remove 9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:59:26 np0005539563 systemd[1]: libpod-conmon-9d133825d72e18c4901babac6418d1d30d42b3b10cf9d42a33196db44189062e.scope: Deactivated successfully.
Nov 29 02:59:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:26.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 29 02:59:27 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [NOTICE]   (289074) : haproxy version is 2.8.14-c23fe91
Nov 29 02:59:27 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [NOTICE]   (289074) : path to executable is /usr/sbin/haproxy
Nov 29 02:59:27 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [WARNING]  (289074) : Exiting Master process...
Nov 29 02:59:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 02:59:27 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [ALERT]    (289074) : Current worker (289076) exited with code 143 (Terminated)
Nov 29 02:59:27 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[289070]: [WARNING]  (289074) : All workers exited. Exiting... (0)
Nov 29 02:59:27 np0005539563 systemd[1]: libpod-808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496.scope: Deactivated successfully.
Nov 29 02:59:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 29 02:59:27 np0005539563 podman[290269]: 2025-11-29 07:59:27.204642069 +0000 UTC m=+0.432431138 container died 808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:59:27 np0005539563 podman[290285]: 2025-11-29 07:59:27.2634999 +0000 UTC m=+0.458046963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.424 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496-userdata-shm.mount: Deactivated successfully.
Nov 29 02:59:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bdcea235dcfeb08e58eca8d0491a813c0982691851a0a5cdf98fa3aa48e82f63-merged.mount: Deactivated successfully.
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.689 252257 DEBUG nova.compute.manager [req-ec87dd33-60c2-4731-a007-2c921e1bef7e req-1999ec3c-2337-4cd9-bbc4-f8c36d0d7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.689 252257 DEBUG oslo_concurrency.lockutils [req-ec87dd33-60c2-4731-a007-2c921e1bef7e req-1999ec3c-2337-4cd9-bbc4-f8c36d0d7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.690 252257 DEBUG oslo_concurrency.lockutils [req-ec87dd33-60c2-4731-a007-2c921e1bef7e req-1999ec3c-2337-4cd9-bbc4-f8c36d0d7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.690 252257 DEBUG oslo_concurrency.lockutils [req-ec87dd33-60c2-4731-a007-2c921e1bef7e req-1999ec3c-2337-4cd9-bbc4-f8c36d0d7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.690 252257 DEBUG nova.compute.manager [req-ec87dd33-60c2-4731-a007-2c921e1bef7e req-1999ec3c-2337-4cd9-bbc4-f8c36d0d7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] No waiting events found dispatching network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.690 252257 WARNING nova.compute.manager [req-ec87dd33-60c2-4731-a007-2c921e1bef7e req-1999ec3c-2337-4cd9-bbc4-f8c36d0d7751 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received unexpected event network-vif-plugged-1be99414-ee31-47a2-8f21-705afd5a21fc for instance with vm_state stopped and task_state None.#033[00m
Nov 29 02:59:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 167 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 94 KiB/s wr, 76 op/s
Nov 29 02:59:27 np0005539563 podman[290269]: 2025-11-29 07:59:27.831256614 +0000 UTC m=+1.059045713 container cleanup 808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:59:27 np0005539563 systemd[1]: libpod-conmon-808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496.scope: Deactivated successfully.
Nov 29 02:59:27 np0005539563 podman[290322]: 2025-11-29 07:59:27.920037621 +0000 UTC m=+0.065174687 container remove 808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:59:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.926 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3227e4b3-5837-46a6-926a-7c5ddabcf0ce]: (4, ('Sat Nov 29 07:59:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496)\n808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496\nSat Nov 29 07:59:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496)\n808e1def5ddbca4f3501ed245f18178331401332011056563e17f8f5c8d3d496\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.928 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ade1f428-63ec-4598-a8b9-127d174b3176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.930 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.932 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:27 np0005539563 kernel: tap7471f45a-d0: left promiscuous mode
Nov 29 02:59:27 np0005539563 nova_compute[252253]: 2025-11-29 07:59:27.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.961 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f0128955-3ad3-4b2b-b764-47b719b13097]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.977 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc36741-0220-4b7d-8272-7be64d9ca75d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.980 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35ab2d4a-f2fd-40fb-b662-738db7c072b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:27.999 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2da5162b-4736-4293-8493-f466d802135d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609145, 'reachable_time': 15372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290340, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:28 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7471f45a\x2dda60\x2d4567\x2da888\x2d2a87ff526609.mount: Deactivated successfully.
Nov 29 02:59:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:28.003 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:59:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:28.003 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[191b5408-50fe-4353-a584-db545f459a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:59:28 np0005539563 podman[290285]: 2025-11-29 07:59:28.011702527 +0000 UTC m=+1.206249560 container create ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 29 02:59:28 np0005539563 systemd[1]: Started libpod-conmon-ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744.scope.
Nov 29 02:59:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:59:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de349e4c7312ecf72725dca76557fe27e740704fa5d61f22fabe7f7089adb04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de349e4c7312ecf72725dca76557fe27e740704fa5d61f22fabe7f7089adb04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de349e4c7312ecf72725dca76557fe27e740704fa5d61f22fabe7f7089adb04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de349e4c7312ecf72725dca76557fe27e740704fa5d61f22fabe7f7089adb04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:28 np0005539563 podman[290285]: 2025-11-29 07:59:28.096975246 +0000 UTC m=+1.291522299 container init ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:59:28 np0005539563 podman[290285]: 2025-11-29 07:59:28.105262015 +0000 UTC m=+1.299809068 container start ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:59:28 np0005539563 podman[290285]: 2025-11-29 07:59:28.109816291 +0000 UTC m=+1.304363344 container attach ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:59:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:28.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]: {
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:    "0": [
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:        {
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "devices": [
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "/dev/loop3"
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            ],
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "lv_name": "ceph_lv0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "lv_size": "7511998464",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "name": "ceph_lv0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "tags": {
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.cluster_name": "ceph",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.crush_device_class": "",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.encrypted": "0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.osd_id": "0",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.type": "block",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:                "ceph.vdo": "0"
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            },
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "type": "block",
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:            "vg_name": "ceph_vg0"
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:        }
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]:    ]
Nov 29 02:59:28 np0005539563 wonderful_wescoff[290348]: }
Nov 29 02:59:28 np0005539563 systemd[1]: libpod-ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744.scope: Deactivated successfully.
Nov 29 02:59:28 np0005539563 podman[290285]: 2025-11-29 07:59:28.94812928 +0000 UTC m=+2.142676323 container died ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:59:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1de349e4c7312ecf72725dca76557fe27e740704fa5d61f22fabe7f7089adb04-merged.mount: Deactivated successfully.
Nov 29 02:59:29 np0005539563 podman[290285]: 2025-11-29 07:59:29.006116098 +0000 UTC m=+2.200663131 container remove ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:59:29 np0005539563 systemd[1]: libpod-conmon-ff00d7a0bd8140de33491ececcc5554c808d71a3df840b91faa6c25312840744.scope: Deactivated successfully.
Nov 29 02:59:29 np0005539563 nova_compute[252253]: 2025-11-29 07:59:29.315 252257 DEBUG nova.compute.manager [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:29 np0005539563 nova_compute[252253]: 2025-11-29 07:59:29.376 252257 INFO nova.compute.manager [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] instance snapshotting#033[00m
Nov 29 02:59:29 np0005539563 nova_compute[252253]: 2025-11-29 07:59:29.376 252257 WARNING nova.compute.manager [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Nov 29 02:59:29 np0005539563 nova_compute[252253]: 2025-11-29 07:59:29.726 252257 INFO nova.virt.libvirt.driver [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Beginning cold snapshot process#033[00m
Nov 29 02:59:29 np0005539563 podman[290509]: 2025-11-29 07:59:29.659993716 +0000 UTC m=+0.024793965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 169 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 151 KiB/s wr, 45 op/s
Nov 29 02:59:29 np0005539563 nova_compute[252253]: 2025-11-29 07:59:29.869 252257 DEBUG nova.virt.libvirt.imagebackend [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 02:59:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:30 np0005539563 podman[290509]: 2025-11-29 07:59:30.280246008 +0000 UTC m=+0.645046157 container create a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:30 np0005539563 nova_compute[252253]: 2025-11-29 07:59:30.773 252257 DEBUG nova.storage.rbd_utils [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(249c21d43eaf4cf7ac410fce3da83cf9) on rbd image(f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:59:31 np0005539563 nova_compute[252253]: 2025-11-29 07:59:31.123 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:31 np0005539563 systemd[1]: Started libpod-conmon-a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd.scope.
Nov 29 02:59:31 np0005539563 podman[290557]: 2025-11-29 07:59:31.647587185 +0000 UTC m=+1.316557229 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 02:59:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:59:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 198 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 115 op/s
Nov 29 02:59:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:32.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:32 np0005539563 nova_compute[252253]: 2025-11-29 07:59:32.427 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:32 np0005539563 nova_compute[252253]: 2025-11-29 07:59:32.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 29 02:59:33 np0005539563 podman[290509]: 2025-11-29 07:59:33.021614447 +0000 UTC m=+3.386414616 container init a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:33 np0005539563 podman[290509]: 2025-11-29 07:59:33.038680607 +0000 UTC m=+3.403480786 container start a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:59:33 np0005539563 infallible_lalande[290616]: 167 167
Nov 29 02:59:33 np0005539563 systemd[1]: libpod-a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd.scope: Deactivated successfully.
Nov 29 02:59:33 np0005539563 podman[290509]: 2025-11-29 07:59:33.210718698 +0000 UTC m=+3.575518847 container attach a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:59:33 np0005539563 podman[290509]: 2025-11-29 07:59:33.211247133 +0000 UTC m=+3.576047282 container died a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:59:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 02:59:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 29 02:59:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b1bbaba89b27d3e1a426c8ba996597cbc544b231f30dd8490dfd77e35874b736-merged.mount: Deactivated successfully.
Nov 29 02:59:33 np0005539563 podman[290509]: 2025-11-29 07:59:33.331495366 +0000 UTC m=+3.696295525 container remove a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:33 np0005539563 podman[290558]: 2025-11-29 07:59:33.381052182 +0000 UTC m=+3.052692509 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:59:33 np0005539563 podman[290559]: 2025-11-29 07:59:33.400554909 +0000 UTC m=+3.069784920 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:59:33 np0005539563 systemd[1]: libpod-conmon-a3790069f0bafa1ba6be153e39d0a7ab70b34afeb7d8bb6751a954f86cb136bd.scope: Deactivated successfully.
Nov 29 02:59:33 np0005539563 nova_compute[252253]: 2025-11-29 07:59:33.474 252257 DEBUG nova.storage.rbd_utils [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] cloning vms/f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk@249c21d43eaf4cf7ac410fce3da83cf9 to images/32b89eea-531a-4341-b175-80b921fa7805 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 02:59:33 np0005539563 podman[290668]: 2025-11-29 07:59:33.507043584 +0000 UTC m=+0.056339505 container create 2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:59:33 np0005539563 systemd[1]: Started libpod-conmon-2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf.scope.
Nov 29 02:59:33 np0005539563 podman[290668]: 2025-11-29 07:59:33.478312051 +0000 UTC m=+0.027608022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 02:59:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fefb32c3b6e5505cf16f8056d98d8c019bac9896ddfb08d5c58cfda23eadf46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fefb32c3b6e5505cf16f8056d98d8c019bac9896ddfb08d5c58cfda23eadf46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fefb32c3b6e5505cf16f8056d98d8c019bac9896ddfb08d5c58cfda23eadf46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fefb32c3b6e5505cf16f8056d98d8c019bac9896ddfb08d5c58cfda23eadf46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:33 np0005539563 nova_compute[252253]: 2025-11-29 07:59:33.603 252257 DEBUG nova.storage.rbd_utils [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] flattening images/32b89eea-531a-4341-b175-80b921fa7805 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 02:59:33 np0005539563 podman[290668]: 2025-11-29 07:59:33.606846103 +0000 UTC m=+0.156142054 container init 2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cray, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:59:33 np0005539563 podman[290668]: 2025-11-29 07:59:33.61436122 +0000 UTC m=+0.163657141 container start 2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cray, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:59:33 np0005539563 podman[290668]: 2025-11-29 07:59:33.619339608 +0000 UTC m=+0.168635549 container attach 2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cray, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:59:33 np0005539563 nova_compute[252253]: 2025-11-29 07:59:33.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 131 op/s
Nov 29 02:59:34 np0005539563 nova_compute[252253]: 2025-11-29 07:59:34.082 252257 DEBUG nova.storage.rbd_utils [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] removing snapshot(249c21d43eaf4cf7ac410fce3da83cf9) on rbd image(f5073e59-05bc-46a4-8bf4-ebeb74ca389b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 02:59:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 29 02:59:34 np0005539563 nova_compute[252253]: 2025-11-29 07:59:34.317 252257 DEBUG nova.storage.rbd_utils [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] creating snapshot(snap) on rbd image(32b89eea-531a-4341-b175-80b921fa7805) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 02:59:34 np0005539563 elated_cray[290717]: {
Nov 29 02:59:34 np0005539563 elated_cray[290717]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 02:59:34 np0005539563 elated_cray[290717]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 02:59:34 np0005539563 elated_cray[290717]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:59:34 np0005539563 elated_cray[290717]:        "osd_id": 0,
Nov 29 02:59:34 np0005539563 elated_cray[290717]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 02:59:34 np0005539563 elated_cray[290717]:        "type": "bluestore"
Nov 29 02:59:34 np0005539563 elated_cray[290717]:    }
Nov 29 02:59:34 np0005539563 elated_cray[290717]: }
Nov 29 02:59:34 np0005539563 systemd[1]: libpod-2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf.scope: Deactivated successfully.
Nov 29 02:59:34 np0005539563 podman[290668]: 2025-11-29 07:59:34.562365622 +0000 UTC m=+1.111661543 container died 2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cray, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:59:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8fefb32c3b6e5505cf16f8056d98d8c019bac9896ddfb08d5c58cfda23eadf46-merged.mount: Deactivated successfully.
Nov 29 02:59:34 np0005539563 podman[290668]: 2025-11-29 07:59:34.6098133 +0000 UTC m=+1.159109211 container remove 2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:59:34 np0005539563 systemd[1]: libpod-conmon-2e1ac47c6a9effa1ba2c0dad0b1bd0aec56b4633e3c956adc7c6f6d005bbefaf.scope: Deactivated successfully.
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:59:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c05072ea-a0a0-4f54-b8a7-170e8e88865a does not exist
Nov 29 02:59:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 04368a6e-aaa4-44a5-99aa-e26c5f0763b4 does not exist
Nov 29 02:59:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 75977136-34b1-44f9-b8e3-ed236b3dba15 does not exist
Nov 29 02:59:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:34.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.716 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.716 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.717 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.718 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:59:35 np0005539563 nova_compute[252253]: 2025-11-29 07:59:35.718 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 264 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.3 MiB/s wr, 248 op/s
Nov 29 02:59:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:36.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064891767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.329 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.409 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.409 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.583 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.585 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4549MB free_disk=20.897350311279297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.586 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.586 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.701 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance f5073e59-05bc-46a4-8bf4-ebeb74ca389b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.701 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.701 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:59:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:36 np0005539563 nova_compute[252253]: 2025-11-29 07:59:36.790 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 02:59:37 np0005539563 nova_compute[252253]: 2025-11-29 07:59:37.431 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 264 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 234 op/s
Nov 29 02:59:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545253227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:38 np0005539563 nova_compute[252253]: 2025-11-29 07:59:38.012 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:38 np0005539563 nova_compute[252253]: 2025-11-29 07:59:38.021 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:38 np0005539563 nova_compute[252253]: 2025-11-29 07:59:38.041 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:38 np0005539563 nova_compute[252253]: 2025-11-29 07:59:38.077 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:59:38 np0005539563 nova_compute[252253]: 2025-11-29 07:59:38.078 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:38.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:39 np0005539563 nova_compute[252253]: 2025-11-29 07:59:39.637 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403164.6360455, f5073e59-05bc-46a4-8bf4-ebeb74ca389b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:59:39 np0005539563 nova_compute[252253]: 2025-11-29 07:59:39.638 252257 INFO nova.compute.manager [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:59:39 np0005539563 nova_compute[252253]: 2025-11-29 07:59:39.658 252257 DEBUG nova.compute.manager [None req-86121600-f643-45f7-a055-99e2bc44cf50 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:59:39 np0005539563 nova_compute[252253]: 2025-11-29 07:59:39.661 252257 DEBUG nova.compute.manager [None req-86121600-f643-45f7-a055-99e2bc44cf50 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: image_uploading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:59:39 np0005539563 nova_compute[252253]: 2025-11-29 07:59:39.690 252257 INFO nova.compute.manager [None req-86121600-f643-45f7-a055-99e2bc44cf50 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Nov 29 02:59:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 325 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 8.5 MiB/s wr, 178 op/s
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.079 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.080 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.080 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.102 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.103 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.103 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:59:40 np0005539563 nova_compute[252253]: 2025-11-29 07:59:40.104 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f5073e59-05bc-46a4-8bf4-ebeb74ca389b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 02:59:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 29 02:59:41 np0005539563 nova_compute[252253]: 2025-11-29 07:59:41.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 152 op/s
Nov 29 02:59:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:42.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.696 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Updating instance_info_cache with network_info: [{"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.740 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-f5073e59-05bc-46a4-8bf4-ebeb74ca389b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.741 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.741 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.742 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:42 np0005539563 nova_compute[252253]: 2025-11-29 07:59:42.742 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:42.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 29 02:59:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 02:59:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:43 np0005539563 nova_compute[252253]: 2025-11-29 07:59:43.559 252257 INFO nova.virt.libvirt.driver [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Snapshot image upload complete#033[00m
Nov 29 02:59:43 np0005539563 nova_compute[252253]: 2025-11-29 07:59:43.559 252257 INFO nova.compute.manager [None req-c61f24c4-d50f-4092-9958-ecf656ef0e6b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Took 14.18 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 02:59:43 np0005539563 nova_compute[252253]: 2025-11-29 07:59:43.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:59:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 36 op/s
Nov 29 02:59:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 66 op/s
Nov 29 02:59:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 29 02:59:46 np0005539563 nova_compute[252253]: 2025-11-29 07:59:46.129 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 02:59:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 29 02:59:47 np0005539563 nova_compute[252253]: 2025-11-29 07:59:47.440 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 4.4 KiB/s wr, 35 op/s
Nov 29 02:59:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:48.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 325 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 3.9 KiB/s wr, 31 op/s
Nov 29 02:59:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:50.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.542 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.543 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.543 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.544 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.544 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.546 252257 INFO nova.compute.manager [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Terminating instance#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.548 252257 DEBUG nova.compute.manager [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.559 252257 INFO nova.virt.libvirt.driver [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Instance destroyed successfully.#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.559 252257 DEBUG nova.objects.instance [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'resources' on Instance uuid f5073e59-05bc-46a4-8bf4-ebeb74ca389b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.581 252257 DEBUG nova.virt.libvirt.vif [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609973919',display_name='tempest-ImagesTestJSON-server-609973919',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609973919',id=55,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-73yrhjcs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:43Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=f5073e59-05bc-46a4-8bf4-ebeb74ca389b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.582 252257 DEBUG nova.network.os_vif_util [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "1be99414-ee31-47a2-8f21-705afd5a21fc", "address": "fa:16:3e:fa:17:be", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1be99414-ee", "ovs_interfaceid": "1be99414-ee31-47a2-8f21-705afd5a21fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.584 252257 DEBUG nova.network.os_vif_util [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.584 252257 DEBUG os_vif [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.591 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.592 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1be99414-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:50 np0005539563 nova_compute[252253]: 2025-11-29 07:59:50.600 252257 INFO os_vif [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:17:be,bridge_name='br-int',has_traffic_filtering=True,id=1be99414-ee31-47a2-8f21-705afd5a21fc,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1be99414-ee')#033[00m
Nov 29 02:59:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:50.824208) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190824324, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 267, "total_data_size": 4089708, "memory_usage": 4151648, "flush_reason": "Manual Compaction"}
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403190868347, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4002433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30139, "largest_seqno": 32544, "table_properties": {"data_size": 3991328, "index_size": 7215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23577, "raw_average_key_size": 21, "raw_value_size": 3969064, "raw_average_value_size": 3598, "num_data_blocks": 310, "num_entries": 1103, "num_filter_entries": 1103, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402984, "oldest_key_time": 1764402984, "file_creation_time": 1764403190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 44159 microseconds, and 9623 cpu microseconds.
Nov 29 02:59:50 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:51 np0005539563 nova_compute[252253]: 2025-11-29 07:59:51.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:50.868414) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4002433 bytes OK
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:50.868440) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.288827) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.288913) EVENT_LOG_v1 {"time_micros": 1764403191288898, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.288949) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4079595, prev total WAL file size 4079595, number of live WAL files 2.
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.291335) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3908KB)], [65(9328KB)]
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403191291516, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13554586, "oldest_snapshot_seqno": -1}
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6564 keys, 11703653 bytes, temperature: kUnknown
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403191659085, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 11703653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11657457, "index_size": 28676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 167407, "raw_average_key_size": 25, "raw_value_size": 11537397, "raw_average_value_size": 1757, "num_data_blocks": 1154, "num_entries": 6564, "num_filter_entries": 6564, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.659500) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11703653 bytes
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.663850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.9 rd, 31.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 7101, records dropped: 537 output_compression: NoCompression
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.663899) EVENT_LOG_v1 {"time_micros": 1764403191663878, "job": 36, "event": "compaction_finished", "compaction_time_micros": 367695, "compaction_time_cpu_micros": 53942, "output_level": 6, "num_output_files": 1, "total_output_size": 11703653, "num_input_records": 7101, "num_output_records": 6564, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403191666105, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403191670342, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.291063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.670601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.670608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.670612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.670615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-07:59:51.670619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 280 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 25 KiB/s wr, 53 op/s
Nov 29 02:59:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:52.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 224 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 23 KiB/s wr, 98 op/s
Nov 29 02:59:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:54.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:55 np0005539563 nova_compute[252253]: 2025-11-29 07:59:55.595 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 167 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 129 op/s
Nov 29 02:59:56 np0005539563 nova_compute[252253]: 2025-11-29 07:59:56.133 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:59:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:56.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:59:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.367 252257 INFO nova.virt.libvirt.driver [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Deleting instance files /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b_del#033[00m
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.368 252257 INFO nova.virt.libvirt.driver [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Deletion of /var/lib/nova/instances/f5073e59-05bc-46a4-8bf4-ebeb74ca389b_del complete#033[00m
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.447 252257 INFO nova.compute.manager [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Took 6.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.448 252257 DEBUG oslo.service.loopingcall [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.448 252257 DEBUG nova.compute.manager [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.448 252257 DEBUG nova.network.neutron [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:59:57 np0005539563 nova_compute[252253]: 2025-11-29 07:59:57.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:57.742 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:57.744 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:59:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 167 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 117 op/s
Nov 29 02:59:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 29 02:59:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 02:59:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 29 02:59:58 np0005539563 nova_compute[252253]: 2025-11-29 07:59:58.532 252257 DEBUG nova.network.neutron [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:59:58 np0005539563 nova_compute[252253]: 2025-11-29 07:59:58.555 252257 INFO nova.compute.manager [-] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Took 1.11 seconds to deallocate network for instance.#033[00m
Nov 29 02:59:58 np0005539563 nova_compute[252253]: 2025-11-29 07:59:58.601 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:59:58 np0005539563 nova_compute[252253]: 2025-11-29 07:59:58.602 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:59:58 np0005539563 nova_compute[252253]: 2025-11-29 07:59:58.649 252257 DEBUG oslo_concurrency.processutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:59:58 np0005539563 nova_compute[252253]: 2025-11-29 07:59:58.684 252257 DEBUG nova.compute.manager [req-d2da7f3e-b48b-4eff-ae03-8403eddc50f5 req-bcb704e2-7aee-45be-a5dc-a95765a756f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: f5073e59-05bc-46a4-8bf4-ebeb74ca389b] Received event network-vif-deleted-1be99414-ee31-47a2-8f21-705afd5a21fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:59:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 02:59:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:58.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616399820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:59 np0005539563 nova_compute[252253]: 2025-11-29 07:59:59.093 252257 DEBUG oslo_concurrency.processutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:59:59 np0005539563 nova_compute[252253]: 2025-11-29 07:59:59.101 252257 DEBUG nova.compute.provider_tree [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:59:59 np0005539563 nova_compute[252253]: 2025-11-29 07:59:59.124 252257 DEBUG nova.scheduler.client.report [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:59:59 np0005539563 nova_compute[252253]: 2025-11-29 07:59:59.176 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:59 np0005539563 nova_compute[252253]: 2025-11-29 07:59:59.202 252257 INFO nova.scheduler.client.report [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Deleted allocations for instance f5073e59-05bc-46a4-8bf4-ebeb74ca389b#033[00m
Nov 29 02:59:59 np0005539563 nova_compute[252253]: 2025-11-29 07:59:59.289 252257 DEBUG oslo_concurrency.lockutils [None req-a3d5a0d2-ff28-4dde-bf0e-4da2d250cd9b f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "f5073e59-05bc-46a4-8bf4-ebeb74ca389b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 07:59:59.746 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:59:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 157 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 76 KiB/s wr, 171 op/s
Nov 29 03:00:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:00:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 03:00:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:00.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 03:00:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:00:00 np0005539563 nova_compute[252253]: 2025-11-29 08:00:00.597 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:00.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:01 np0005539563 nova_compute[252253]: 2025-11-29 08:00:01.162 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 134 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 183 op/s
Nov 29 03:00:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:02.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:02 np0005539563 podman[291010]: 2025-11-29 08:00:02.498725311 +0000 UTC m=+0.051483501 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:00:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:02.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:03 np0005539563 podman[291080]: 2025-11-29 08:00:03.49960362 +0000 UTC m=+0.057518676 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:00:03 np0005539563 podman[291081]: 2025-11-29 08:00:03.549925977 +0000 UTC m=+0.099889694 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 134 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 29 03:00:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:04.907 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:05 np0005539563 nova_compute[252253]: 2025-11-29 08:00:05.598 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 146 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 3.5 MiB/s wr, 133 op/s
Nov 29 03:00:06 np0005539563 nova_compute[252253]: 2025-11-29 08:00:06.164 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:06.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 146 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 3.5 MiB/s wr, 133 op/s
Nov 29 03:00:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:08.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:08.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539563 nova_compute[252253]: 2025-11-29 08:00:08.997 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:08 np0005539563 nova_compute[252253]: 2025-11-29 08:00:08.997 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.024 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.124 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.125 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.133 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.134 252257 INFO nova.compute.claims [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.354 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837080993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.796 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.803 252257 DEBUG nova.compute.provider_tree [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.821 252257 DEBUG nova.scheduler.client.report [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 165 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 4.0 MiB/s wr, 95 op/s
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.855 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.856 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.928 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.928 252257 DEBUG nova.network.neutron [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.957 252257 INFO nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:00:09 np0005539563 nova_compute[252253]: 2025-11-29 08:00:09.991 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.030 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.031 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.050 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.108 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.109 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.110 252257 INFO nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Creating image(s)#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.149 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.191 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.217 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.221 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:10.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.278 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.279 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.285 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.286 252257 INFO nova.compute.claims [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.297 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.297 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.298 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.298 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.323 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.327 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 487e2d0b-cbfe-462e-b702-86a5164357d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.399 252257 DEBUG nova.policy [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d4e5ab1ae494327abcb3693ba332586', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fce027870d041328a9b9968bfe90665', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.472 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.600 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.695 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 487e2d0b-cbfe-462e-b702-86a5164357d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.778 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] resizing rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:00:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815686413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.909 252257 DEBUG nova.objects.instance [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'migration_context' on Instance uuid 487e2d0b-cbfe-462e-b702-86a5164357d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.915 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.921 252257 DEBUG nova.compute.provider_tree [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.924 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.925 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Ensure instance console log exists: /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.925 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.925 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.925 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.934 252257 DEBUG nova.scheduler.client.report [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.955 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:10 np0005539563 nova_compute[252253]: 2025-11-29 08:00:10.955 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.000 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.000 252257 DEBUG nova.network.neutron [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.042 252257 INFO nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.064 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.207 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.208 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.209 252257 INFO nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Creating image(s)
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.230 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.259 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.283 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.286 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.321 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.380 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.381 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.381 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.382 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.408 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.412 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c1396d33-3741-4e6a-acdf-79ac9f076e53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.438 252257 DEBUG nova.policy [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.481 252257 DEBUG nova.network.neutron [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Successfully created port: 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.732 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c1396d33-3741-4e6a-acdf-79ac9f076e53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:00:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.830 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] resizing rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.973 252257 DEBUG nova.objects.instance [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'migration_context' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.998 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.999 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Ensure instance console log exists: /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:00:11 np0005539563 nova_compute[252253]: 2025-11-29 08:00:11.999 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.000 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.000 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:00:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.601 252257 DEBUG nova.network.neutron [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully created port: 39a40677-39fb-46af-8988-f2b8b26d7512 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.696 252257 DEBUG nova.network.neutron [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Successfully updated port: 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.733 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.733 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquired lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.733 252257 DEBUG nova.network.neutron [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:00:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:00:12
Nov 29 03:00:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:00:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:00:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'vms']
Nov 29 03:00:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:00:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:12.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.849 252257 DEBUG nova.compute.manager [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-changed-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.849 252257 DEBUG nova.compute.manager [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Refreshing instance network info cache due to event network-changed-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.849 252257 DEBUG oslo_concurrency.lockutils [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:00:12 np0005539563 nova_compute[252253]: 2025-11-29 08:00:12.934 252257 DEBUG nova.network.neutron [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:00:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 29 03:00:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 03:00:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.616 252257 DEBUG nova.network.neutron [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully updated port: 39a40677-39fb-46af-8988-f2b8b26d7512 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.637 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.638 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.638 252257 DEBUG nova.network.neutron [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.759 252257 DEBUG nova.compute.manager [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-changed-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.760 252257 DEBUG nova.compute.manager [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing instance network info cache due to event network-changed-39a40677-39fb-46af-8988-f2b8b26d7512. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.760 252257 DEBUG oslo_concurrency.lockutils [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 197 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 173 op/s
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:00:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.866 252257 DEBUG nova.network.neutron [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.920 252257 DEBUG nova.network.neutron [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updating instance_info_cache with network_info: [{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.938 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Releasing lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.938 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Instance network_info: |[{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.938 252257 DEBUG oslo_concurrency.lockutils [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.939 252257 DEBUG nova.network.neutron [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Refreshing network info cache for port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.941 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Start _get_guest_xml network_info=[{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.945 252257 WARNING nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.950 252257 DEBUG nova.virt.libvirt.host [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.951 252257 DEBUG nova.virt.libvirt.host [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.955 252257 DEBUG nova.virt.libvirt.host [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.955 252257 DEBUG nova.virt.libvirt.host [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.956 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.957 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.957 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.957 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.957 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.958 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.958 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.958 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.958 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.959 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.959 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.959 252257 DEBUG nova.virt.hardware [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:00:13 np0005539563 nova_compute[252253]: 2025-11-29 08:00:13.962 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:14.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3148484917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:14 np0005539563 nova_compute[252253]: 2025-11-29 08:00:14.560 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:14 np0005539563 nova_compute[252253]: 2025-11-29 08:00:14.597 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:14 np0005539563 nova_compute[252253]: 2025-11-29 08:00:14.603 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:00:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:14.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:00:14 np0005539563 nova_compute[252253]: 2025-11-29 08:00:14.989 252257 DEBUG nova.network.neutron [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.016 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.016 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Instance network_info: |[{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.016 252257 DEBUG oslo_concurrency.lockutils [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.017 252257 DEBUG nova.network.neutron [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing network info cache for port 39a40677-39fb-46af-8988-f2b8b26d7512 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.020 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Start _get_guest_xml network_info=[{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.024 252257 WARNING nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.029 252257 DEBUG nova.virt.libvirt.host [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.029 252257 DEBUG nova.virt.libvirt.host [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.036 252257 DEBUG nova.virt.libvirt.host [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.036 252257 DEBUG nova.virt.libvirt.host [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.037 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.037 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.037 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.038 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.038 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.038 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.038 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.038 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.039 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.039 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.039 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.039 252257 DEBUG nova.virt.hardware [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.042 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823277088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.098 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.100 252257 DEBUG nova.virt.libvirt.vif [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1685884678',display_name='tempest-SecurityGroupsTestJSON-server-1685884678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1685884678',id=59,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-3vijaar2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsT
estJSON-1868555561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:10Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=487e2d0b-cbfe-462e-b702-86a5164357d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.101 252257 DEBUG nova.network.os_vif_util [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.102 252257 DEBUG nova.network.os_vif_util [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.103 252257 DEBUG nova.objects.instance [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'pci_devices' on Instance uuid 487e2d0b-cbfe-462e-b702-86a5164357d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.124 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <uuid>487e2d0b-cbfe-462e-b702-86a5164357d8</uuid>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <name>instance-0000003b</name>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1685884678</nova:name>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:00:13</nova:creationTime>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:user uuid="8d4e5ab1ae494327abcb3693ba332586">tempest-SecurityGroupsTestJSON-1868555561-project-member</nova:user>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:project uuid="6fce027870d041328a9b9968bfe90665">tempest-SecurityGroupsTestJSON-1868555561</nova:project>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:port uuid="3c26a727-aa6f-4daa-8e0c-9658c02a1f5c">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="serial">487e2d0b-cbfe-462e-b702-86a5164357d8</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="uuid">487e2d0b-cbfe-462e-b702-86a5164357d8</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/487e2d0b-cbfe-462e-b702-86a5164357d8_disk">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/487e2d0b-cbfe-462e-b702-86a5164357d8_disk.config">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:85:0a:92"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <target dev="tap3c26a727-aa"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/console.log" append="off"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:00:15 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:00:15 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.125 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Preparing to wait for external event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.125 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.126 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.126 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.127 252257 DEBUG nova.virt.libvirt.vif [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1685884678',display_name='tempest-SecurityGroupsTestJSON-server-1685884678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1685884678',id=59,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-3vijaar2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:10Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=487e2d0b-cbfe-462e-b702-86a5164357d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.127 252257 DEBUG nova.network.os_vif_util [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.127 252257 DEBUG nova.network.os_vif_util [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.128 252257 DEBUG os_vif [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.129 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.129 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.132 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.132 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c26a727-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.132 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c26a727-aa, col_values=(('external_ids', {'iface-id': '3c26a727-aa6f-4daa-8e0c-9658c02a1f5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:0a:92', 'vm-uuid': '487e2d0b-cbfe-462e-b702-86a5164357d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 NetworkManager[48981]: <info>  [1764403215.1358] manager: (tap3c26a727-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.137 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.141 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.142 252257 INFO os_vif [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa')#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.162 252257 DEBUG nova.network.neutron [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updated VIF entry in instance network info cache for port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.162 252257 DEBUG nova.network.neutron [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updating instance_info_cache with network_info: [{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.179 252257 DEBUG oslo_concurrency.lockutils [req-c8762eef-0ffd-43c7-876d-227d5ff1f27b req-0b5a935b-754f-4082-92ae-d0a5274e2233 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.410 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.411 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.411 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] No VIF found with MAC fa:16:3e:85:0a:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.412 252257 INFO nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Using config drive#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.442 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329891320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.482 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.509 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.513 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.731 252257 INFO nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Creating config drive at /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/disk.config#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.736 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_o2r3696 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 298 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 9.0 MiB/s wr, 281 op/s
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.869 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_o2r3696" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.905 252257 DEBUG nova.storage.rbd_utils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] rbd image 487e2d0b-cbfe-462e-b702-86a5164357d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.910 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/disk.config 487e2d0b-cbfe-462e-b702-86a5164357d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551938721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.955 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.957 252257 DEBUG nova.virt.libvirt.vif [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.957 252257 DEBUG nova.network.os_vif_util [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.958 252257 DEBUG nova.network.os_vif_util [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.959 252257 DEBUG nova.objects.instance [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_devices' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.980 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <uuid>c1396d33-3741-4e6a-acdf-79ac9f076e53</uuid>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <name>instance-0000003c</name>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:00:15</nova:creationTime>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="serial">c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="uuid">c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:0f:38:06"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <target dev="tap39a40677-39"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log" append="off"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:00:15 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:00:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:00:15 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:00:15 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.981 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Preparing to wait for external event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.982 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.982 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.982 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.983 252257 DEBUG nova.virt.libvirt.vif [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.983 252257 DEBUG nova.network.os_vif_util [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.983 252257 DEBUG nova.network.os_vif_util [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.984 252257 DEBUG os_vif [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.985 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.985 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.985 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.990 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a40677-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.991 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39a40677-39, col_values=(('external_ids', {'iface-id': '39a40677-39fb-46af-8988-f2b8b26d7512', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:38:06', 'vm-uuid': 'c1396d33-3741-4e6a-acdf-79ac9f076e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:15 np0005539563 NetworkManager[48981]: <info>  [1764403215.9936] manager: (tap39a40677-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.994 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.998 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:15 np0005539563 nova_compute[252253]: 2025-11-29 08:00:15.999 252257 INFO os_vif [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39')#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.054 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.054 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.054 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:0f:38:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.055 252257 INFO nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Using config drive#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.085 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.090 252257 DEBUG oslo_concurrency.processutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/disk.config 487e2d0b-cbfe-462e-b702-86a5164357d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.090 252257 INFO nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Deleting local config drive /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8/disk.config because it was imported into RBD.#033[00m
Nov 29 03:00:16 np0005539563 kernel: tap3c26a727-aa: entered promiscuous mode
Nov 29 03:00:16 np0005539563 NetworkManager[48981]: <info>  [1764403216.1448] manager: (tap3c26a727-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:16Z|00174|binding|INFO|Claiming lport 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c for this chassis.
Nov 29 03:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:16Z|00175|binding|INFO|3c26a727-aa6f-4daa-8e0c-9658c02a1f5c: Claiming fa:16:3e:85:0a:92 10.100.0.14
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.166 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:0a:92 10.100.0.14'], port_security=['fa:16:3e:85:0a:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '487e2d0b-cbfe-462e-b702-86a5164357d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.169 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 bound to our chassis#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.171 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2063759-3e65-4e4b-b3aa-6d737d865479#033[00m
Nov 29 03:00:16 np0005539563 systemd-udevd[291723]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:16 np0005539563 systemd-machined[213024]: New machine qemu-25-instance-0000003b.
Nov 29 03:00:16 np0005539563 NetworkManager[48981]: <info>  [1764403216.1855] device (tap3c26a727-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.183 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8f09e67a-ce97-497f-ad29-f6a6424f4d11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.184 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2063759-31 in ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:00:16 np0005539563 NetworkManager[48981]: <info>  [1764403216.1868] device (tap3c26a727-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.187 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2063759-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.187 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c10145e2-b103-4d2b-a8c6-2015252748c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.188 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5a773fa0-d9d3-413e-ac45-fcdbb8562905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.200 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[45516714-181f-4709-82b3-c3c837b58a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 systemd[1]: Started Virtual Machine qemu-25-instance-0000003b.
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.224 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b8032954-ee22-4dc8-9d26-8eeac9d0c59f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.228 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.235 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.248 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e1692612-c22e-4040-b0f3-820703156305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:16.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539563 NetworkManager[48981]: <info>  [1764403216.2579] manager: (tapb2063759-30): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.257 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b120b9cf-d4af-4b67-a99f-2741d75a99d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.283 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4bd0dd-24a1-4548-be6e-cc2d4f6874e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.287 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0346e7-e902-43a6-8d59-29106a03b142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:16Z|00176|binding|INFO|Setting lport 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c ovn-installed in OVS
Nov 29 03:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:16Z|00177|binding|INFO|Setting lport 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c up in Southbound
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.300 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 NetworkManager[48981]: <info>  [1764403216.3093] device (tapb2063759-30): carrier: link connected
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.315 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[08d104cd-9de4-477a-b09b-3cdb83f721f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.330 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a521ef-914c-4c84-80a4-4e5131912f0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2063759-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:76:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618408, 'reachable_time': 28484, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291761, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.343 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ec71b0-24ab-4448-a7d2-1ffd6f1b78f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:7680'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618408, 'tstamp': 618408}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291762, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.357 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[92bbfc8a-5ccb-455a-8b06-5f550fd1783e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2063759-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:76:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618408, 'reachable_time': 28484, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291763, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.382 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b92d5903-d7ca-436a-b326-46a25e9c8f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.456 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[045989ed-bd8a-467d-82bc-ccab2566667c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.457 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2063759-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.458 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.458 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2063759-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.460 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 NetworkManager[48981]: <info>  [1764403216.4610] manager: (tapb2063759-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 03:00:16 np0005539563 kernel: tapb2063759-30: entered promiscuous mode
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.467 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2063759-30, col_values=(('external_ids', {'iface-id': '56d6fe86-a22b-4b4c-87cc-d5e908ba5810'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:16Z|00178|binding|INFO|Releasing lport 56d6fe86-a22b-4b4c-87cc-d5e908ba5810 from this chassis (sb_readonly=0)
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.468 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.483 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.486 252257 DEBUG nova.network.neutron [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updated VIF entry in instance network info cache for port 39a40677-39fb-46af-8988-f2b8b26d7512. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.487 252257 DEBUG nova.network.neutron [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.488 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.488 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.489 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4124a5-cac0-41e5-83d5-75e51beddd68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.490 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-b2063759-3e65-4e4b-b3aa-6d737d865479
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/b2063759-3e65-4e4b-b3aa-6d737d865479.pid.haproxy
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID b2063759-3e65-4e4b-b3aa-6d737d865479
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:16.491 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'env', 'PROCESS_TAG=haproxy-b2063759-3e65-4e4b-b3aa-6d737d865479', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2063759-3e65-4e4b-b3aa-6d737d865479.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:00:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.557 252257 DEBUG oslo_concurrency.lockutils [req-b3b10315-7f28-4f02-86d5-528c9c404973 req-d25a3e3d-34c1-4eab-bad6-ba2b498d1447 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.567 252257 DEBUG nova.compute.manager [req-88c31be9-f75e-427c-ae8b-750fecced824 req-e2df18b3-8618-43a8-b92f-d69333d5f854 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.567 252257 DEBUG oslo_concurrency.lockutils [req-88c31be9-f75e-427c-ae8b-750fecced824 req-e2df18b3-8618-43a8-b92f-d69333d5f854 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.567 252257 DEBUG oslo_concurrency.lockutils [req-88c31be9-f75e-427c-ae8b-750fecced824 req-e2df18b3-8618-43a8-b92f-d69333d5f854 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.568 252257 DEBUG oslo_concurrency.lockutils [req-88c31be9-f75e-427c-ae8b-750fecced824 req-e2df18b3-8618-43a8-b92f-d69333d5f854 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.568 252257 DEBUG nova.compute.manager [req-88c31be9-f75e-427c-ae8b-750fecced824 req-e2df18b3-8618-43a8-b92f-d69333d5f854 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Processing event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.605 252257 INFO nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Creating config drive at /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/disk.config#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.613 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz35hv0cg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.702 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403216.7021377, 487e2d0b-cbfe-462e-b702-86a5164357d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.703 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.706 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.709 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.712 252257 INFO nova.virt.libvirt.driver [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Instance spawned successfully.#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.713 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.741 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.745 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.747 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz35hv0cg" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.773 252257 DEBUG nova.storage.rbd_utils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.777 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/disk.config c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:16.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.814 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.815 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403216.7023888, 487e2d0b-cbfe-462e-b702-86a5164357d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.815 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.820 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.820 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.820 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.821 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.821 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.821 252257 DEBUG nova.virt.libvirt.driver [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.855 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.862 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403216.7081442, 487e2d0b-cbfe-462e-b702-86a5164357d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.862 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:16 np0005539563 podman[291861]: 2025-11-29 08:00:16.886842863 +0000 UTC m=+0.054992467 container create 4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.898 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.902 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.911 252257 INFO nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Took 6.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.912 252257 DEBUG nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:16 np0005539563 systemd[1]: Starting dnf makecache...
Nov 29 03:00:16 np0005539563 systemd[1]: Started libpod-conmon-4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2.scope.
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.925 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf37a979e81545a34dc81bee162d769b814e77f8279ddad40f2203f2fd86f3a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:16 np0005539563 podman[291861]: 2025-11-29 08:00:16.8544601 +0000 UTC m=+0.022609734 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:00:16 np0005539563 podman[291861]: 2025-11-29 08:00:16.958073635 +0000 UTC m=+0.126223269 container init 4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:00:16 np0005539563 podman[291861]: 2025-11-29 08:00:16.965063718 +0000 UTC m=+0.133213332 container start 4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.976 252257 DEBUG oslo_concurrency.processutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/disk.config c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.982 252257 INFO nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Deleting local config drive /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/disk.config because it was imported into RBD.#033[00m
Nov 29 03:00:16 np0005539563 nova_compute[252253]: 2025-11-29 08:00:16.986 252257 INFO nova.compute.manager [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Took 7.91 seconds to build instance.#033[00m
Nov 29 03:00:16 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [NOTICE]   (291900) : New worker (291903) forked
Nov 29 03:00:16 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [NOTICE]   (291900) : Loading success.
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.007 252257 DEBUG oslo_concurrency.lockutils [None req-362f7035-bd85-46a0-ae33-a75eb28fba9a 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:17 np0005539563 NetworkManager[48981]: <info>  [1764403217.0407] manager: (tap39a40677-39): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 03:00:17 np0005539563 kernel: tap39a40677-39: entered promiscuous mode
Nov 29 03:00:17 np0005539563 systemd-udevd[291744]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:17Z|00179|binding|INFO|Claiming lport 39a40677-39fb-46af-8988-f2b8b26d7512 for this chassis.
Nov 29 03:00:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:17Z|00180|binding|INFO|39a40677-39fb-46af-8988-f2b8b26d7512: Claiming fa:16:3e:0f:38:06 10.100.0.4
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.044 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.049 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 NetworkManager[48981]: <info>  [1764403217.0594] device (tap39a40677-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:17 np0005539563 NetworkManager[48981]: <info>  [1764403217.0605] device (tap39a40677-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.058 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:38:06 10.100.0.4'], port_security=['fa:16:3e:0f:38:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07861448-e4b3-4a77-b3c7-b646bd0a5688', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39a40677-39fb-46af-8988-f2b8b26d7512) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.061 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39a40677-39fb-46af-8988-f2b8b26d7512 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.063 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.079 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e38353-70f2-47d0-aed3-6498e4f2ed6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.080 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap738e99b4-b1 in ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.087 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap738e99b4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.088 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[84321ae3-1d99-4f3a-b2fd-5cc81fcc6354]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.089 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[927b71b1-d6f8-4262-8242-db813f404550]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 systemd-machined[213024]: New machine qemu-26-instance-0000003c.
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.102 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0c52c236-f786-4ae8-b863-92ece658e32b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 systemd[1]: Started Virtual Machine qemu-26-instance-0000003c.
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:17Z|00181|binding|INFO|Setting lport 39a40677-39fb-46af-8988-f2b8b26d7512 ovn-installed in OVS
Nov 29 03:00:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:17Z|00182|binding|INFO|Setting lport 39a40677-39fb-46af-8988-f2b8b26d7512 up in Southbound
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.129 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.131 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7e19eac5-4f0d-4b31-a380-8b9c6c28c953]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 dnf[291893]: Metadata cache refreshed recently.
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.169 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f475bd58-b368-4ce2-ba8d-ba91968102d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 NetworkManager[48981]: <info>  [1764403217.1819] manager: (tap738e99b4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.182 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[734bfe01-bcb3-42e9-9d26-60aa260497a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 03:00:17 np0005539563 systemd[1]: Finished dnf makecache.
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.213 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4d4de5-2c1b-448d-b860-0c4c4c9569f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.217 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6e572fed-12c6-4d9d-b4f9-20b28923da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 NetworkManager[48981]: <info>  [1764403217.2440] device (tap738e99b4-b0): carrier: link connected
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.249 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[24416973-84a8-4b3a-90dc-d41b2234eef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.266 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[55460e78-a917-4ba6-a9a9-87f6da7eefc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 21416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291941, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.282 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cc11232a-5b30-4ca9-a9a1-e8f403c59f13]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:bee3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618501, 'tstamp': 618501}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291942, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.302 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2b48eb6e-dd53-44a9-a589-c8b7725220c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 21416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291943, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.332 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e43837b4-f453-4d6c-8bab-f9ee8c775973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.396 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ade0a023-4682-45ed-a645-9e15f29a00bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.398 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.398 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.398 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:17 np0005539563 NetworkManager[48981]: <info>  [1764403217.4008] manager: (tap738e99b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 03:00:17 np0005539563 kernel: tap738e99b4-b0: entered promiscuous mode
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.402 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.407 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:17Z|00183|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.427 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.429 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.430 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ef69ee13-ef37-4709-96d6-39193f95fcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.431 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:17.431 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'env', 'PROCESS_TAG=haproxy-738e99b4-b58e-4eff-b209-c4aa3748c994', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/738e99b4-b58e-4eff-b209-c4aa3748c994.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.431 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:00:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1966415464' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:00:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:00:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1966415464' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:00:17 np0005539563 podman[291976]: 2025-11-29 08:00:17.817204039 +0000 UTC m=+0.057069684 container create b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:00:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 298 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 10 MiB/s wr, 194 op/s
Nov 29 03:00:17 np0005539563 systemd[1]: Started libpod-conmon-b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5.scope.
Nov 29 03:00:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:17 np0005539563 podman[291976]: 2025-11-29 08:00:17.78893106 +0000 UTC m=+0.028796725 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:00:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6130a907efcac48fa9d880e84d1dc7aa899e4a49fc5b08c8e6c608304b0a6299/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:17 np0005539563 podman[291976]: 2025-11-29 08:00:17.896312619 +0000 UTC m=+0.136178294 container init b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:00:17 np0005539563 podman[291976]: 2025-11-29 08:00:17.902122059 +0000 UTC m=+0.141987704 container start b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:17 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [NOTICE]   (292031) : New worker (292034) forked
Nov 29 03:00:17 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [NOTICE]   (292031) : Loading success.
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.982 252257 DEBUG nova.compute.manager [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.983 252257 DEBUG oslo_concurrency.lockutils [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.983 252257 DEBUG oslo_concurrency.lockutils [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.984 252257 DEBUG oslo_concurrency.lockutils [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.984 252257 DEBUG nova.compute.manager [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Processing event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.984 252257 DEBUG nova.compute.manager [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.984 252257 DEBUG oslo_concurrency.lockutils [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.984 252257 DEBUG oslo_concurrency.lockutils [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.985 252257 DEBUG oslo_concurrency.lockutils [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.985 252257 DEBUG nova.compute.manager [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:17 np0005539563 nova_compute[252253]: 2025-11-29 08:00:17.985 252257 WARNING nova.compute.manager [req-1ad61a9d-1f6a-4bb5-b0b2-de2bcf694728 req-cab9b0d2-6a75-4417-9036-1a0da9428fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.026 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403218.0256796, c1396d33-3741-4e6a-acdf-79ac9f076e53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.026 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.028 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.032 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.034 252257 INFO nova.virt.libvirt.driver [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Instance spawned successfully.#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.035 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.053 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.056 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.075 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.076 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.076 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.076 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.076 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.077 252257 DEBUG nova.virt.libvirt.driver [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.134 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.135 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403218.0263052, c1396d33-3741-4e6a-acdf-79ac9f076e53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.135 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.171 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.175 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403218.030981, c1396d33-3741-4e6a-acdf-79ac9f076e53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.175 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.181 252257 INFO nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Took 6.97 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.181 252257 DEBUG nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.209 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.212 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.242 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.243 252257 INFO nova.compute.manager [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Took 7.99 seconds to build instance.#033[00m
Nov 29 03:00:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:18.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.257 252257 DEBUG oslo_concurrency.lockutils [None req-c7a5cb60-1418-4a91-8cd1-d444b0d6345e a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.674 252257 DEBUG nova.compute.manager [req-f7e1bfbe-5eb9-421c-b7c7-715038e31688 req-e74e410a-2505-4cdd-8229-4c419af47f33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.674 252257 DEBUG oslo_concurrency.lockutils [req-f7e1bfbe-5eb9-421c-b7c7-715038e31688 req-e74e410a-2505-4cdd-8229-4c419af47f33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.675 252257 DEBUG oslo_concurrency.lockutils [req-f7e1bfbe-5eb9-421c-b7c7-715038e31688 req-e74e410a-2505-4cdd-8229-4c419af47f33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.675 252257 DEBUG oslo_concurrency.lockutils [req-f7e1bfbe-5eb9-421c-b7c7-715038e31688 req-e74e410a-2505-4cdd-8229-4c419af47f33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.675 252257 DEBUG nova.compute.manager [req-f7e1bfbe-5eb9-421c-b7c7-715038e31688 req-e74e410a-2505-4cdd-8229-4c419af47f33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] No waiting events found dispatching network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:18 np0005539563 nova_compute[252253]: 2025-11-29 08:00:18.675 252257 WARNING nova.compute.manager [req-f7e1bfbe-5eb9-421c-b7c7-715038e31688 req-e74e410a-2505-4cdd-8229-4c419af47f33 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received unexpected event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c for instance with vm_state active and task_state None.#033[00m
Nov 29 03:00:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:18.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 260 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.4 MiB/s wr, 253 op/s
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.036 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539563 NetworkManager[48981]: <info>  [1764403220.0370] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 03:00:20 np0005539563 NetworkManager[48981]: <info>  [1764403220.0379] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.122 252257 DEBUG nova.compute.manager [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-changed-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.122 252257 DEBUG nova.compute.manager [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Refreshing instance network info cache due to event network-changed-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.123 252257 DEBUG oslo_concurrency.lockutils [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.123 252257 DEBUG oslo_concurrency.lockutils [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.124 252257 DEBUG nova.network.neutron [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Refreshing network info cache for port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.171 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:20Z|00184|binding|INFO|Releasing lport 56d6fe86-a22b-4b4c-87cc-d5e908ba5810 from this chassis (sb_readonly=0)
Nov 29 03:00:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:20Z|00185|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:20.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.638 252257 DEBUG nova.compute.manager [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-changed-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.638 252257 DEBUG nova.compute.manager [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing instance network info cache due to event network-changed-39a40677-39fb-46af-8988-f2b8b26d7512. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.639 252257 DEBUG oslo_concurrency.lockutils [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.639 252257 DEBUG oslo_concurrency.lockutils [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.639 252257 DEBUG nova.network.neutron [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing network info cache for port 39a40677-39fb-46af-8988-f2b8b26d7512 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:20 np0005539563 nova_compute[252253]: 2025-11-29 08:00:20.994 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 29 03:00:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 03:00:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 29 03:00:21 np0005539563 nova_compute[252253]: 2025-11-29 08:00:21.232 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:21 np0005539563 nova_compute[252253]: 2025-11-29 08:00:21.460 252257 DEBUG nova.network.neutron [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updated VIF entry in instance network info cache for port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:21 np0005539563 nova_compute[252253]: 2025-11-29 08:00:21.460 252257 DEBUG nova.network.neutron [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updating instance_info_cache with network_info: [{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:21 np0005539563 nova_compute[252253]: 2025-11-29 08:00:21.484 252257 DEBUG oslo_concurrency.lockutils [req-063109be-7efd-4230-abff-e060e0f47b17 req-5b477661-e1e0-49af-92a6-604c01b0dd06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 227 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 583 KiB/s wr, 333 op/s
Nov 29 03:00:21 np0005539563 nova_compute[252253]: 2025-11-29 08:00:21.984 252257 DEBUG nova.network.neutron [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updated VIF entry in instance network info cache for port 39a40677-39fb-46af-8988-f2b8b26d7512. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:21 np0005539563 nova_compute[252253]: 2025-11-29 08:00:21.984 252257 DEBUG nova.network.neutron [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.023 252257 DEBUG oslo_concurrency.lockutils [req-f72327c5-2e2c-49f3-b342-d5839be20e07 req-427706a2-2e52-4cea-ae49-41815114834f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:22.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:22Z|00186|binding|INFO|Releasing lport 56d6fe86-a22b-4b4c-87cc-d5e908ba5810 from this chassis (sb_readonly=0)
Nov 29 03:00:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:22Z|00187|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.494 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.770 252257 DEBUG nova.compute.manager [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-changed-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.771 252257 DEBUG nova.compute.manager [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Refreshing instance network info cache due to event network-changed-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.771 252257 DEBUG oslo_concurrency.lockutils [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.771 252257 DEBUG oslo_concurrency.lockutils [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:22 np0005539563 nova_compute[252253]: 2025-11-29 08:00:22.772 252257 DEBUG nova.network.neutron [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Refreshing network info cache for port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029882121836032013 of space, bias 1.0, pg target 0.8964636550809604 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:00:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 29 03:00:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 03:00:23 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 29 03:00:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 219 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 511 KiB/s wr, 346 op/s
Nov 29 03:00:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:24.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:25 np0005539563 nova_compute[252253]: 2025-11-29 08:00:25.794 252257 DEBUG nova.network.neutron [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updated VIF entry in instance network info cache for port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:25 np0005539563 nova_compute[252253]: 2025-11-29 08:00:25.795 252257 DEBUG nova.network.neutron [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updating instance_info_cache with network_info: [{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 138 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 467 KiB/s wr, 364 op/s
Nov 29 03:00:25 np0005539563 nova_compute[252253]: 2025-11-29 08:00:25.866 252257 DEBUG oslo_concurrency.lockutils [req-8b6bc36b-4298-487b-a656-51f8e6dacc60 req-6acf4860-765a-4f1a-8f43-eab86aa86ae9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:25 np0005539563 nova_compute[252253]: 2025-11-29 08:00:25.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:26 np0005539563 nova_compute[252253]: 2025-11-29 08:00:26.233 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:26.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 138 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 24 KiB/s wr, 286 op/s
Nov 29 03:00:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:28.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 29 03:00:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 03:00:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 29 03:00:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:28.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 157 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 995 KiB/s wr, 113 op/s
Nov 29 03:00:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:30.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:30.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:30 np0005539563 nova_compute[252253]: 2025-11-29 08:00:30.998 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:31 np0005539563 nova_compute[252253]: 2025-11-29 08:00:31.235 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:31Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:0a:92 10.100.0.14
Nov 29 03:00:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:31Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:0a:92 10.100.0.14
Nov 29 03:00:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 197 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Nov 29 03:00:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:32.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:32.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:33 np0005539563 podman[292108]: 2025-11-29 08:00:33.576972735 +0000 UTC m=+0.125938141 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:00:33 np0005539563 podman[292128]: 2025-11-29 08:00:33.656926638 +0000 UTC m=+0.059171421 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 03:00:33 np0005539563 nova_compute[252253]: 2025-11-29 08:00:33.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:33 np0005539563 podman[292129]: 2025-11-29 08:00:33.683342686 +0000 UTC m=+0.082451623 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller)
Nov 29 03:00:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 212 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 448 KiB/s rd, 5.3 MiB/s wr, 146 op/s
Nov 29 03:00:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:34Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:38:06 10.100.0.4
Nov 29 03:00:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:00:34Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:38:06 10.100.0.4
Nov 29 03:00:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:34 np0005539563 nova_compute[252253]: 2025-11-29 08:00:34.501 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:00:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:34.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:00:35 np0005539563 nova_compute[252253]: 2025-11-29 08:00:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 234 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 7.0 MiB/s wr, 161 op/s
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.001 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.236 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.706 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:00:36 np0005539563 nova_compute[252253]: 2025-11-29 08:00:36.706 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:36.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:37.021 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:37.022 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998648191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.139 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.225 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.225 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.228 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.228 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.408 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.409 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4167MB free_disk=20.877899169921875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.409 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.410 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.489 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 487e2d0b-cbfe-462e-b702-86a5164357d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.489 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance c1396d33-3741-4e6a-acdf-79ac9f076e53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.489 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.489 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.554 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 234 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 7.0 MiB/s wr, 161 op/s
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.927 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155591358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.975 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5c37180d-99a4-4221-ac44-b5e1207437bd does not exist
Nov 29 03:00:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f410b823-5e71-4b39-91d4-cbef3f78f8f5 does not exist
Nov 29 03:00:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 071d2cbd-54df-4edd-bff8-18e19f0d7620 does not exist
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:00:37 np0005539563 nova_compute[252253]: 2025-11-29 08:00:37.982 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:00:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:00:38 np0005539563 nova_compute[252253]: 2025-11-29 08:00:38.000 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:38 np0005539563 nova_compute[252253]: 2025-11-29 08:00:38.024 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:00:38 np0005539563 nova_compute[252253]: 2025-11-29 08:00:38.025 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:00:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.646003314 +0000 UTC m=+0.051063848 container create 05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:00:38 np0005539563 systemd[1]: Started libpod-conmon-05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d.scope.
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.624230113 +0000 UTC m=+0.029290697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.759058538 +0000 UTC m=+0.164119062 container init 05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.767891172 +0000 UTC m=+0.172951666 container start 05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.772776667 +0000 UTC m=+0.177837221 container attach 05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:00:38 np0005539563 sleepy_khorana[292510]: 167 167
Nov 29 03:00:38 np0005539563 systemd[1]: libpod-05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d.scope: Deactivated successfully.
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.774972617 +0000 UTC m=+0.180033111 container died 05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:00:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-382602947f3bd0e8d541c708f6fe11cb3469a6106958b877bd544ef43751710a-merged.mount: Deactivated successfully.
Nov 29 03:00:38 np0005539563 podman[292493]: 2025-11-29 08:00:38.836003159 +0000 UTC m=+0.241063653 container remove 05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:00:38 np0005539563 systemd[1]: libpod-conmon-05ed9a2f2fdb243cd17d42170d85985d67c28d9bc42ad91c3ad25f654a529f2d.scope: Deactivated successfully.
Nov 29 03:00:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:38.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.026 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.028 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.028 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:00:39 np0005539563 podman[292535]: 2025-11-29 08:00:39.060379212 +0000 UTC m=+0.065732613 container create c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:00:39 np0005539563 systemd[1]: Started libpod-conmon-c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f.scope.
Nov 29 03:00:39 np0005539563 podman[292535]: 2025-11-29 08:00:39.035409754 +0000 UTC m=+0.040763265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b62aaf12384f24be0221c37a9acc92b0221c4c6bb1a8c23a5a066a6eee21a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b62aaf12384f24be0221c37a9acc92b0221c4c6bb1a8c23a5a066a6eee21a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b62aaf12384f24be0221c37a9acc92b0221c4c6bb1a8c23a5a066a6eee21a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b62aaf12384f24be0221c37a9acc92b0221c4c6bb1a8c23a5a066a6eee21a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b62aaf12384f24be0221c37a9acc92b0221c4c6bb1a8c23a5a066a6eee21a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:39 np0005539563 podman[292535]: 2025-11-29 08:00:39.180767429 +0000 UTC m=+0.186120850 container init c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:00:39 np0005539563 podman[292535]: 2025-11-29 08:00:39.192495882 +0000 UTC m=+0.197849293 container start c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:00:39 np0005539563 podman[292535]: 2025-11-29 08:00:39.19676135 +0000 UTC m=+0.202114811 container attach c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.278 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.279 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.279 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:00:39 np0005539563 nova_compute[252253]: 2025-11-29 08:00:39.279 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 487e2d0b-cbfe-462e-b702-86a5164357d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 241 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 173 op/s
Nov 29 03:00:40 np0005539563 stoic_gagarin[292551]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:00:40 np0005539563 stoic_gagarin[292551]: --> relative data size: 1.0
Nov 29 03:00:40 np0005539563 stoic_gagarin[292551]: --> All data devices are unavailable
Nov 29 03:00:40 np0005539563 systemd[1]: libpod-c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f.scope: Deactivated successfully.
Nov 29 03:00:40 np0005539563 podman[292535]: 2025-11-29 08:00:40.033222669 +0000 UTC m=+1.038576090 container died c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:00:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:40.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:40.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:40 np0005539563 nova_compute[252253]: 2025-11-29 08:00:40.966 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updating instance_info_cache with network_info: [{"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:00:41.025 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.038 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-487e2d0b-cbfe-462e-b702-86a5164357d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.038 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.039 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-95b62aaf12384f24be0221c37a9acc92b0221c4c6bb1a8c23a5a066a6eee21a8-merged.mount: Deactivated successfully.
Nov 29 03:00:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 29 03:00:41 np0005539563 podman[292535]: 2025-11-29 08:00:41.218849949 +0000 UTC m=+2.224203350 container remove c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_gagarin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:00:41 np0005539563 systemd[1]: libpod-conmon-c7500e8a2001ab13ab0e8ff4f58f27841167624908920f6d122ff9012143144f.scope: Deactivated successfully.
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:41 np0005539563 nova_compute[252253]: 2025-11-29 08:00:41.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 268 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 213 op/s
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.858486515 +0000 UTC m=+0.039620293 container create aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:00:41 np0005539563 systemd[1]: Started libpod-conmon-aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350.scope.
Nov 29 03:00:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.93453245 +0000 UTC m=+0.115666238 container init aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.840151629 +0000 UTC m=+0.021285417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.944650159 +0000 UTC m=+0.125783927 container start aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.94791803 +0000 UTC m=+0.129051828 container attach aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:00:41 np0005539563 ecstatic_allen[292736]: 167 167
Nov 29 03:00:41 np0005539563 systemd[1]: libpod-aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350.scope: Deactivated successfully.
Nov 29 03:00:41 np0005539563 conmon[292736]: conmon aff0b46618fbfb7301ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350.scope/container/memory.events
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.952600478 +0000 UTC m=+0.133734246 container died aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:00:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d18cc4c3ea6e25ea0aa2377701fc932ee7c29fa8f8d704211be5b699a8de7c41-merged.mount: Deactivated successfully.
Nov 29 03:00:41 np0005539563 podman[292721]: 2025-11-29 08:00:41.993491685 +0000 UTC m=+0.174625453 container remove aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:00:42 np0005539563 systemd[1]: libpod-conmon-aff0b46618fbfb7301ee0e55435e27e6a13ae0efc26672e958f27439fc234350.scope: Deactivated successfully.
Nov 29 03:00:42 np0005539563 podman[292759]: 2025-11-29 08:00:42.184492118 +0000 UTC m=+0.040360583 container create 5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:00:42 np0005539563 systemd[1]: Started libpod-conmon-5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89.scope.
Nov 29 03:00:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 29 03:00:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:42 np0005539563 podman[292759]: 2025-11-29 08:00:42.165272208 +0000 UTC m=+0.021140673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb17fa9afd838dde9220fdb6a4935ced2b4b6d9b1a4e00e0b5946ff46c7c005/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb17fa9afd838dde9220fdb6a4935ced2b4b6d9b1a4e00e0b5946ff46c7c005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb17fa9afd838dde9220fdb6a4935ced2b4b6d9b1a4e00e0b5946ff46c7c005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb17fa9afd838dde9220fdb6a4935ced2b4b6d9b1a4e00e0b5946ff46c7c005/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 03:00:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 29 03:00:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:42 np0005539563 podman[292759]: 2025-11-29 08:00:42.279123076 +0000 UTC m=+0.134991531 container init 5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meitner, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:00:42 np0005539563 podman[292759]: 2025-11-29 08:00:42.28582643 +0000 UTC m=+0.141694865 container start 5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:00:42 np0005539563 podman[292759]: 2025-11-29 08:00:42.290667714 +0000 UTC m=+0.146536179 container attach 5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:00:42 np0005539563 nova_compute[252253]: 2025-11-29 08:00:42.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:00:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:00:43 np0005539563 angry_meitner[292776]: {
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:    "0": [
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:        {
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "devices": [
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "/dev/loop3"
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            ],
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "lv_name": "ceph_lv0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "lv_size": "7511998464",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "name": "ceph_lv0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "tags": {
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.cluster_name": "ceph",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.crush_device_class": "",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.encrypted": "0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.osd_id": "0",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.type": "block",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:                "ceph.vdo": "0"
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            },
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "type": "block",
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:            "vg_name": "ceph_vg0"
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:        }
Nov 29 03:00:43 np0005539563 angry_meitner[292776]:    ]
Nov 29 03:00:43 np0005539563 angry_meitner[292776]: }
Nov 29 03:00:43 np0005539563 systemd[1]: libpod-5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89.scope: Deactivated successfully.
Nov 29 03:00:43 np0005539563 conmon[292776]: conmon 5903413d7eb0e6e145d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89.scope/container/memory.events
Nov 29 03:00:43 np0005539563 podman[292759]: 2025-11-29 08:00:43.107809591 +0000 UTC m=+0.963678076 container died 5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 29 03:00:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 310 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.0 MiB/s wr, 201 op/s
Nov 29 03:00:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 03:00:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 29 03:00:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:00:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 3000.1 total, 600.0 interval
                                              Cumulative writes: 22K writes, 87K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                              Cumulative WAL: 22K writes, 6945 syncs, 3.21 writes per sync, written: 0.07 GB, 0.02 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 7760 writes, 28K keys, 7760 commit groups, 1.0 writes per commit group, ingest: 25.81 MB, 0.04 MB/s
                                              Interval WAL: 7759 writes, 3021 syncs, 2.57 writes per sync, written: 0.03 GB, 0.04 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:00:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cbb17fa9afd838dde9220fdb6a4935ced2b4b6d9b1a4e00e0b5946ff46c7c005-merged.mount: Deactivated successfully.
Nov 29 03:00:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:44.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539563 nova_compute[252253]: 2025-11-29 08:00:44.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:00:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 339 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.3 MiB/s wr, 286 op/s
Nov 29 03:00:46 np0005539563 nova_compute[252253]: 2025-11-29 08:00:46.003 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:46 np0005539563 nova_compute[252253]: 2025-11-29 08:00:46.241 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:46.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:46.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 339 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 126 op/s
Nov 29 03:00:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:48.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:48.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 339 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 104 op/s
Nov 29 03:00:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:50.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:50 np0005539563 podman[292759]: 2025-11-29 08:00:50.694964358 +0000 UTC m=+8.550832803 container remove 5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:00:50 np0005539563 systemd[1]: libpod-conmon-5903413d7eb0e6e145d71f7db9ad9502d0d5bb2538a8d065cef28344605d4c89.scope: Deactivated successfully.
Nov 29 03:00:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:50.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:51 np0005539563 nova_compute[252253]: 2025-11-29 08:00:51.005 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:51 np0005539563 nova_compute[252253]: 2025-11-29 08:00:51.243 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:51 np0005539563 podman[292993]: 2025-11-29 08:00:51.225692302 +0000 UTC m=+0.019897009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 346 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 104 op/s
Nov 29 03:00:52 np0005539563 podman[292993]: 2025-11-29 08:00:52.02468202 +0000 UTC m=+0.818886747 container create 8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:00:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:52.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:52 np0005539563 systemd[1]: Started libpod-conmon-8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786.scope.
Nov 29 03:00:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:52.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 351 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 86 op/s
Nov 29 03:00:53 np0005539563 nova_compute[252253]: 2025-11-29 08:00:53.845 252257 DEBUG oslo_concurrency.lockutils [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:00:53 np0005539563 nova_compute[252253]: 2025-11-29 08:00:53.846 252257 DEBUG oslo_concurrency.lockutils [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:00:53 np0005539563 nova_compute[252253]: 2025-11-29 08:00:53.846 252257 DEBUG nova.objects.instance [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:00:53 np0005539563 podman[292993]: 2025-11-29 08:00:53.850873402 +0000 UTC m=+2.645078109 container init 8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elbakyan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:00:53 np0005539563 podman[292993]: 2025-11-29 08:00:53.857199186 +0000 UTC m=+2.651403873 container start 8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:00:53 np0005539563 admiring_elbakyan[293009]: 167 167
Nov 29 03:00:53 np0005539563 systemd[1]: libpod-8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786.scope: Deactivated successfully.
Nov 29 03:00:53 np0005539563 nova_compute[252253]: 2025-11-29 08:00:53.865 252257 DEBUG nova.objects.instance [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_requests' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:00:53 np0005539563 nova_compute[252253]: 2025-11-29 08:00:53.876 252257 DEBUG nova.network.neutron [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:00:54 np0005539563 podman[292993]: 2025-11-29 08:00:54.139456613 +0000 UTC m=+2.933661300 container attach 8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elbakyan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:00:54 np0005539563 podman[292993]: 2025-11-29 08:00:54.139890315 +0000 UTC m=+2.934095002 container died 8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:00:54 np0005539563 nova_compute[252253]: 2025-11-29 08:00:54.203 252257 DEBUG nova.policy [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:00:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-78d905a9993338f1f02a705a2dc6546554c05dc7dd74f5bead4f3419bf28f804-merged.mount: Deactivated successfully.
Nov 29 03:00:54 np0005539563 podman[292993]: 2025-11-29 08:00:54.61221533 +0000 UTC m=+3.406420017 container remove 8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:00:54 np0005539563 systemd[1]: libpod-conmon-8433e4efe8de0be469a0e47d7f15f3d4a2d130f99d3ee8ac315ef96aad654786.scope: Deactivated successfully.
Nov 29 03:00:54 np0005539563 podman[293035]: 2025-11-29 08:00:54.780528398 +0000 UTC m=+0.041976958 container create 5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:00:54 np0005539563 systemd[1]: Started libpod-conmon-5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7.scope.
Nov 29 03:00:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:00:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c798443acc8f234bef6957fd6d6801252c4ab6ff9695c55d319d121767cbbbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c798443acc8f234bef6957fd6d6801252c4ab6ff9695c55d319d121767cbbbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c798443acc8f234bef6957fd6d6801252c4ab6ff9695c55d319d121767cbbbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c798443acc8f234bef6957fd6d6801252c4ab6ff9695c55d319d121767cbbbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:54 np0005539563 podman[293035]: 2025-11-29 08:00:54.76136033 +0000 UTC m=+0.022808910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:54.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:55 np0005539563 podman[293035]: 2025-11-29 08:00:55.033067207 +0000 UTC m=+0.294515787 container init 5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:00:55 np0005539563 podman[293035]: 2025-11-29 08:00:55.040941704 +0000 UTC m=+0.302390264 container start 5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_agnesi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:00:55 np0005539563 podman[293035]: 2025-11-29 08:00:55.07202493 +0000 UTC m=+0.333473510 container attach 5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_agnesi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:00:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 03:00:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 29 03:00:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 393 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.1 MiB/s wr, 103 op/s
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]: {
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:        "osd_id": 0,
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:        "type": "bluestore"
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]:    }
Nov 29 03:00:55 np0005539563 laughing_agnesi[293052]: }
Nov 29 03:00:55 np0005539563 systemd[1]: libpod-5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7.scope: Deactivated successfully.
Nov 29 03:00:55 np0005539563 podman[293035]: 2025-11-29 08:00:55.978513379 +0000 UTC m=+1.239961959 container died 5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_agnesi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:00:56 np0005539563 nova_compute[252253]: 2025-11-29 08:00:56.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:56 np0005539563 nova_compute[252253]: 2025-11-29 08:00:56.246 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:56.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:56.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 03:00:57 np0005539563 nova_compute[252253]: 2025-11-29 08:00:57.158 252257 DEBUG nova.network.neutron [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully created port: bb9bfd78-483e-4c5b-a548-8e4b77160086 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:00:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 393 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 2.9 MiB/s wr, 53 op/s
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.099 252257 DEBUG nova.network.neutron [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully updated port: bb9bfd78-483e-4c5b-a548-8e4b77160086 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.120 252257 DEBUG oslo_concurrency.lockutils [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.121 252257 DEBUG oslo_concurrency.lockutils [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.121 252257 DEBUG nova.network.neutron [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.218 252257 DEBUG nova.compute.manager [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-changed-bb9bfd78-483e-4c5b-a548-8e4b77160086 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.219 252257 DEBUG nova.compute.manager [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing instance network info cache due to event network-changed-bb9bfd78-483e-4c5b-a548-8e4b77160086. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.219 252257 DEBUG oslo_concurrency.lockutils [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:58 np0005539563 nova_compute[252253]: 2025-11-29 08:00:58.293 252257 WARNING nova.network.neutron [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:00:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 29 03:00:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:00:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:58.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1c798443acc8f234bef6957fd6d6801252c4ab6ff9695c55d319d121767cbbbf-merged.mount: Deactivated successfully.
Nov 29 03:00:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 407 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 4.6 MiB/s wr, 100 op/s
Nov 29 03:00:59 np0005539563 podman[293035]: 2025-11-29 08:00:59.983663022 +0000 UTC m=+5.245111592 container remove 5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_agnesi, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:01:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:01:00 np0005539563 systemd[1]: libpod-conmon-5add3483fbeaac2c0d4b10e27fe97d8f3ecea519d3f4f93dc19fa5e9b897e4d7.scope: Deactivated successfully.
Nov 29 03:01:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:00.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:00.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:01 np0005539563 nova_compute[252253]: 2025-11-29 08:01:01.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:01 np0005539563 nova_compute[252253]: 2025-11-29 08:01:01.248 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:01:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:01:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:01:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 232a6d40-994c-450d-bbbb-537f3fc581ef does not exist
Nov 29 03:01:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6714fe5b-46f2-4a37-8c68-6534228c2c86 does not exist
Nov 29 03:01:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2cb25719-fbd9-490a-b4cb-ccf8b5524685 does not exist
Nov 29 03:01:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 411 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 29 03:01:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:01:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:01:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.452 252257 DEBUG nova.network.neutron [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.471 252257 DEBUG oslo_concurrency.lockutils [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.472 252257 DEBUG oslo_concurrency.lockutils [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.472 252257 DEBUG nova.network.neutron [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing network info cache for port bb9bfd78-483e-4c5b-a548-8e4b77160086 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.475 252257 DEBUG nova.virt.libvirt.vif [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.475 252257 DEBUG nova.network.os_vif_util [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.476 252257 DEBUG nova.network.os_vif_util [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.476 252257 DEBUG os_vif [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.477 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.477 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.477 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.480 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.480 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb9bfd78-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.481 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb9bfd78-48, col_values=(('external_ids', {'iface-id': 'bb9bfd78-483e-4c5b-a548-8e4b77160086', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:dd:f1', 'vm-uuid': 'c1396d33-3741-4e6a-acdf-79ac9f076e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.482 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 NetworkManager[48981]: <info>  [1764403263.4834] manager: (tapbb9bfd78-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.489 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.490 252257 INFO os_vif [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48')#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.490 252257 DEBUG nova.virt.libvirt.vif [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.491 252257 DEBUG nova.network.os_vif_util [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.491 252257 DEBUG nova.network.os_vif_util [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.494 252257 DEBUG nova.virt.libvirt.guest [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:bb:dd:f1"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <target dev="tapbb9bfd78-48"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:01:03 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:01:03 np0005539563 kernel: tapbb9bfd78-48: entered promiscuous mode
Nov 29 03:01:03 np0005539563 NetworkManager[48981]: <info>  [1764403263.5073] manager: (tapbb9bfd78-48): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 03:01:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:03Z|00188|binding|INFO|Claiming lport bb9bfd78-483e-4c5b-a548-8e4b77160086 for this chassis.
Nov 29 03:01:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:03Z|00189|binding|INFO|bb9bfd78-483e-4c5b-a548-8e4b77160086: Claiming fa:16:3e:bb:dd:f1 10.100.0.12
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.507 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.515 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:dd:f1 10.100.0.12'], port_security=['fa:16:3e:bb:dd:f1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=bb9bfd78-483e-4c5b-a548-8e4b77160086) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.516 158990 INFO neutron.agent.ovn.metadata.agent [-] Port bb9bfd78-483e-4c5b-a548-8e4b77160086 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.519 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:03Z|00190|binding|INFO|Setting lport bb9bfd78-483e-4c5b-a548-8e4b77160086 ovn-installed in OVS
Nov 29 03:01:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:03Z|00191|binding|INFO|Setting lport bb9bfd78-483e-4c5b-a548-8e4b77160086 up in Southbound
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.526 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.529 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.538 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4d59ff-707a-4cd2-9270-9ad2f292cead]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:03 np0005539563 systemd-udevd[293210]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:03 np0005539563 NetworkManager[48981]: <info>  [1764403263.5574] device (tapbb9bfd78-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:03 np0005539563 NetworkManager[48981]: <info>  [1764403263.5581] device (tapbb9bfd78-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.567 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[add8bc90-b602-4630-a093-ef9e949c4a5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.571 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b127d497-656e-4266-bfcc-de3f9f6c3885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.594 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b65575d1-a555-4cf4-8b4f-6563bfe8284c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.609 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[776817ab-b502-40c8-b523-b8cfe5e58fa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 21416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293217, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.633 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[83739b8b-ba6d-4b38-9480-625aec34f102]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293218, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293218, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.635 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.637 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.639 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.639 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.640 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:03.640 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 411 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 174 op/s
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.858 252257 DEBUG nova.virt.libvirt.driver [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.858 252257 DEBUG nova.virt.libvirt.driver [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.859 252257 DEBUG nova.virt.libvirt.driver [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:0f:38:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.859 252257 DEBUG nova.virt.libvirt.driver [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:bb:dd:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.887 252257 DEBUG nova.virt.libvirt.guest [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:03</nova:creationTime>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:03 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    <nova:port uuid="bb9bfd78-483e-4c5b-a548-8e4b77160086">
Nov 29 03:01:03 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:03 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:03 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:03 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:01:03 np0005539563 nova_compute[252253]: 2025-11-29 08:01:03.909 252257 DEBUG oslo_concurrency.lockutils [None req-4b9348d7-18b3-4272-9821-4334b3515aaa a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:04.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:04 np0005539563 podman[293219]: 2025-11-29 08:01:04.496823855 +0000 UTC m=+0.054691009 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:01:04 np0005539563 podman[293220]: 2025-11-29 08:01:04.502698837 +0000 UTC m=+0.059659695 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:01:04 np0005539563 podman[293221]: 2025-11-29 08:01:04.53402915 +0000 UTC m=+0.085410474 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller)
Nov 29 03:01:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:04.908 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:04.909 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:04.909 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:04.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.987 252257 DEBUG nova.compute.manager [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.987 252257 DEBUG oslo_concurrency.lockutils [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.987 252257 DEBUG oslo_concurrency.lockutils [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.987 252257 DEBUG oslo_concurrency.lockutils [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 DEBUG nova.compute.manager [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 WARNING nova.compute.manager [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 DEBUG nova.compute.manager [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 DEBUG oslo_concurrency.lockutils [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 DEBUG oslo_concurrency.lockutils [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 DEBUG oslo_concurrency.lockutils [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.988 252257 DEBUG nova.compute.manager [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:04 np0005539563 nova_compute[252253]: 2025-11-29 08:01:04.989 252257 WARNING nova.compute.manager [req-8e62097e-11bd-476e-9d94-1112e1aadf19 req-61432311-d09c-4f90-8180-28d5293d230f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.279 252257 DEBUG oslo_concurrency.lockutils [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.279 252257 DEBUG oslo_concurrency.lockutils [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.280 252257 DEBUG nova.objects.instance [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.461 252257 DEBUG nova.network.neutron [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updated VIF entry in instance network info cache for port bb9bfd78-483e-4c5b-a548-8e4b77160086. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.462 252257 DEBUG nova.network.neutron [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.481 252257 DEBUG oslo_concurrency.lockutils [req-ce384f7a-388c-4bb8-b362-df354e758a4f req-acd221db-2d23-4835-8886-b6a770863a65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.691 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.691 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.715 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:01:05 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:05Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:dd:f1 10.100.0.12
Nov 29 03:01:05 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:05Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:dd:f1 10.100.0.12
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.817 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.818 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.828 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.828 252257 INFO nova.compute.claims [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:01:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 417 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 29 03:01:05 np0005539563 nova_compute[252253]: 2025-11-29 08:01:05.969 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.252 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:06.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708369811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.446 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.454 252257 DEBUG nova.compute.provider_tree [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.476 252257 DEBUG nova.scheduler.client.report [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.485 252257 DEBUG nova.objects.instance [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_requests' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.498 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.499 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.539 252257 DEBUG nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.583 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.583 252257 DEBUG nova.network.neutron [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.608 252257 INFO nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.671 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.808 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.809 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.810 252257 INFO nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Creating image(s)#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.835 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.858 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.881 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.884 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "d36d899ea07c385b3eb97feb7e2ac1a2b66ea509" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:06 np0005539563 nova_compute[252253]: 2025-11-29 08:01:06.885 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "d36d899ea07c385b3eb97feb7e2ac1a2b66ea509" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:06.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.162 252257 DEBUG nova.virt.libvirt.imagebackend [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/bb915321-974b-4310-80ce-22fe787becec/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/bb915321-974b-4310-80ce-22fe787becec/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.217 252257 DEBUG nova.policy [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f7d59bea260d4752aa29379967636c0b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.220 252257 DEBUG nova.policy [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.225 252257 DEBUG nova.virt.libvirt.imagebackend [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/bb915321-974b-4310-80ce-22fe787becec/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.225 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] cloning images/bb915321-974b-4310-80ce-22fe787becec@snap to None/c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.398 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "d36d899ea07c385b3eb97feb7e2ac1a2b66ea509" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.533 252257 DEBUG nova.objects.instance [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'migration_context' on Instance uuid c19d4af3-8322-42c3-b55b-5b13d720e3fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.550 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.551 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Ensure instance console log exists: /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.551 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.552 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.552 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 417 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Nov 29 03:01:07 np0005539563 nova_compute[252253]: 2025-11-29 08:01:07.875 252257 DEBUG nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully created port: 8125c0d6-4bae-4aba-aad0-d28ea5b81901 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:01:08 np0005539563 nova_compute[252253]: 2025-11-29 08:01:08.013 252257 DEBUG nova.network.neutron [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Successfully created port: 36e63c9f-9420-41d5-a969-c60bff8931d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:01:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:08 np0005539563 nova_compute[252253]: 2025-11-29 08:01:08.483 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:08.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.451 252257 DEBUG nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully updated port: 8125c0d6-4bae-4aba-aad0-d28ea5b81901 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.474 252257 DEBUG oslo_concurrency.lockutils [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.474 252257 DEBUG oslo_concurrency.lockutils [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.474 252257 DEBUG nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.630 252257 DEBUG nova.network.neutron [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Successfully updated port: 36e63c9f-9420-41d5-a969-c60bff8931d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.645 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "refresh_cache-c19d4af3-8322-42c3-b55b-5b13d720e3fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.646 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquired lock "refresh_cache-c19d4af3-8322-42c3-b55b-5b13d720e3fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.646 252257 DEBUG nova.network.neutron [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.695 252257 WARNING nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.696 252257 WARNING nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.704 252257 DEBUG nova.compute.manager [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received event network-changed-36e63c9f-9420-41d5-a969-c60bff8931d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.705 252257 DEBUG nova.compute.manager [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Refreshing instance network info cache due to event network-changed-36e63c9f-9420-41d5-a969-c60bff8931d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:09 np0005539563 nova_compute[252253]: 2025-11-29 08:01:09.705 252257 DEBUG oslo_concurrency.lockutils [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c19d4af3-8322-42c3-b55b-5b13d720e3fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 120 KiB/s wr, 171 op/s
Nov 29 03:01:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:10 np0005539563 nova_compute[252253]: 2025-11-29 08:01:10.415 252257 DEBUG nova.network.neutron [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:10 np0005539563 nova_compute[252253]: 2025-11-29 08:01:10.636 252257 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-changed-8125c0d6-4bae-4aba-aad0-d28ea5b81901 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:10 np0005539563 nova_compute[252253]: 2025-11-29 08:01:10.636 252257 DEBUG nova.compute.manager [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing instance network info cache due to event network-changed-8125c0d6-4bae-4aba-aad0-d28ea5b81901. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:10 np0005539563 nova_compute[252253]: 2025-11-29 08:01:10.637 252257 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:10.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.254 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.582 252257 DEBUG nova.network.neutron [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Updating instance_info_cache with network_info: [{"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.608 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Releasing lock "refresh_cache-c19d4af3-8322-42c3-b55b-5b13d720e3fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.609 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Instance network_info: |[{"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.609 252257 DEBUG oslo_concurrency.lockutils [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c19d4af3-8322-42c3-b55b-5b13d720e3fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.609 252257 DEBUG nova.network.neutron [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Refreshing network info cache for port 36e63c9f-9420-41d5-a969-c60bff8931d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.612 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Start _get_guest_xml network_info=[{"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:00:39Z,direct_url=<?>,disk_format='raw',id=bb915321-974b-4310-80ce-22fe787becec,min_disk=1,min_ram=0,name='tempest-test-snap-142131752',owner='4d8c5b7e3ca74bc1880eb616b04711f7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:01:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'bb915321-974b-4310-80ce-22fe787becec'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.616 252257 WARNING nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.620 252257 DEBUG nova.virt.libvirt.host [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.621 252257 DEBUG nova.virt.libvirt.host [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.626 252257 DEBUG nova.virt.libvirt.host [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.627 252257 DEBUG nova.virt.libvirt.host [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.628 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.628 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:00:39Z,direct_url=<?>,disk_format='raw',id=bb915321-974b-4310-80ce-22fe787becec,min_disk=1,min_ram=0,name='tempest-test-snap-142131752',owner='4d8c5b7e3ca74bc1880eb616b04711f7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:01:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.628 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.628 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.629 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.629 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.629 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.629 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.629 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.629 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.630 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.630 252257 DEBUG nova.virt.hardware [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:01:11 np0005539563 nova_compute[252253]: 2025-11-29 08:01:11.632 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 118 KiB/s wr, 199 op/s
Nov 29 03:01:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826107425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.073 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.101 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.105 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377233110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.543 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.545 252257 DEBUG nova.virt.libvirt.vif [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1963696524',display_name='tempest-ImagesTestJSON-server-1963696524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1963696524',id=64,image_ref='bb915321-974b-4310-80ce-22fe787becec',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-nadv53s8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5f638465-c65b-4824-bedc-60f4b695402a',image_min_disk='1',image_min_ram='0',image_owner_id='4d8c5b7e3ca74bc1880eb616b04711f7',image_owner_project_name='tempest-ImagesTestJSON-911260095',image_owner_user_name='tempest-ImagesTestJSON-911260095-project-member',image_user_id='f7d59bea260d4752aa29379967636c0b',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:06Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=c19d4af3-8322-42c3-b55b-5b13d720e3fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.545 252257 DEBUG nova.network.os_vif_util [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.546 252257 DEBUG nova.network.os_vif_util [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.547 252257 DEBUG nova.objects.instance [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c19d4af3-8322-42c3-b55b-5b13d720e3fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.564 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <uuid>c19d4af3-8322-42c3-b55b-5b13d720e3fd</uuid>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <name>instance-00000040</name>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:name>tempest-ImagesTestJSON-server-1963696524</nova:name>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:01:11</nova:creationTime>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:user uuid="f7d59bea260d4752aa29379967636c0b">tempest-ImagesTestJSON-911260095-project-member</nova:user>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:project uuid="4d8c5b7e3ca74bc1880eb616b04711f7">tempest-ImagesTestJSON-911260095</nova:project>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="bb915321-974b-4310-80ce-22fe787becec"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <nova:port uuid="36e63c9f-9420-41d5-a969-c60bff8931d1">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <entry name="serial">c19d4af3-8322-42c3-b55b-5b13d720e3fd</entry>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <entry name="uuid">c19d4af3-8322-42c3-b55b-5b13d720e3fd</entry>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk.config">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:59:68:de"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <target dev="tap36e63c9f-94"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/console.log" append="off"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:01:12 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:01:12 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:01:12 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:01:12 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.565 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Preparing to wait for external event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.565 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.565 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.566 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.566 252257 DEBUG nova.virt.libvirt.vif [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1963696524',display_name='tempest-ImagesTestJSON-server-1963696524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1963696524',id=64,image_ref='bb915321-974b-4310-80ce-22fe787becec',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-nadv53s8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image
_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5f638465-c65b-4824-bedc-60f4b695402a',image_min_disk='1',image_min_ram='0',image_owner_id='4d8c5b7e3ca74bc1880eb616b04711f7',image_owner_project_name='tempest-ImagesTestJSON-911260095',image_owner_user_name='tempest-ImagesTestJSON-911260095-project-member',image_user_id='f7d59bea260d4752aa29379967636c0b',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:06Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=c19d4af3-8322-42c3-b55b-5b13d720e3fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.567 252257 DEBUG nova.network.os_vif_util [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.567 252257 DEBUG nova.network.os_vif_util [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.568 252257 DEBUG os_vif [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.568 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.568 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.569 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.571 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.571 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36e63c9f-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.572 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36e63c9f-94, col_values=(('external_ids', {'iface-id': '36e63c9f-9420-41d5-a969-c60bff8931d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:68:de', 'vm-uuid': 'c19d4af3-8322-42c3-b55b-5b13d720e3fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:12 np0005539563 NetworkManager[48981]: <info>  [1764403272.5741] manager: (tap36e63c9f-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.575 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.580 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.581 252257 INFO os_vif [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94')#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.634 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.634 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.634 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] No VIF found with MAC fa:16:3e:59:68:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.635 252257 INFO nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Using config drive#033[00m
Nov 29 03:01:12 np0005539563 nova_compute[252253]: 2025-11-29 08:01:12.663 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:01:12
Nov 29 03:01:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:01:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:01:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Nov 29 03:01:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:01:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:12.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:01:13 np0005539563 nova_compute[252253]: 2025-11-29 08:01:13.593 252257 INFO nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Creating config drive at /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/disk.config#033[00m
Nov 29 03:01:13 np0005539563 nova_compute[252253]: 2025-11-29 08:01:13.601 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdf9bzwu2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:13 np0005539563 nova_compute[252253]: 2025-11-29 08:01:13.755 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdf9bzwu2" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:13 np0005539563 nova_compute[252253]: 2025-11-29 08:01:13.795 252257 DEBUG nova.storage.rbd_utils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] rbd image c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:13 np0005539563 nova_compute[252253]: 2025-11-29 08:01:13.800 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/disk.config c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 123 KiB/s wr, 155 op/s
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:01:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:01:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:14 np0005539563 nova_compute[252253]: 2025-11-29 08:01:14.598 252257 DEBUG nova.network.neutron [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Updated VIF entry in instance network info cache for port 36e63c9f-9420-41d5-a969-c60bff8931d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:14 np0005539563 nova_compute[252253]: 2025-11-29 08:01:14.599 252257 DEBUG nova.network.neutron [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Updating instance_info_cache with network_info: [{"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:14 np0005539563 nova_compute[252253]: 2025-11-29 08:01:14.645 252257 DEBUG oslo_concurrency.lockutils [req-dc35d9ce-4e73-4771-b424-6bbda70bb783 req-96663db9-e293-4ea2-b53d-977cec3ab5ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c19d4af3-8322-42c3-b55b-5b13d720e3fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:14.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.814 252257 DEBUG nova.network.neutron [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.839 252257 DEBUG oslo_concurrency.lockutils [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.840 252257 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.840 252257 DEBUG nova.network.neutron [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing network info cache for port 8125c0d6-4bae-4aba-aad0-d28ea5b81901 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.845 252257 DEBUG nova.virt.libvirt.vif [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.845 252257 DEBUG nova.network.os_vif_util [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.846 252257 DEBUG nova.network.os_vif_util [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.847 252257 DEBUG os_vif [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.848 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.848 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.849 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.853 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 65 KiB/s wr, 185 op/s
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.854 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8125c0d6-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.855 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8125c0d6-4b, col_values=(('external_ids', {'iface-id': '8125c0d6-4bae-4aba-aad0-d28ea5b81901', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:3f:77', 'vm-uuid': 'c1396d33-3741-4e6a-acdf-79ac9f076e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 NetworkManager[48981]: <info>  [1764403275.8581] manager: (tap8125c0d6-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.863 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.865 252257 INFO os_vif [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b')#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.867 252257 DEBUG nova.virt.libvirt.vif [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.867 252257 DEBUG nova.network.os_vif_util [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.868 252257 DEBUG nova.network.os_vif_util [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.872 252257 DEBUG nova.virt.libvirt.guest [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:01:15 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:d9:3f:77"/>
Nov 29 03:01:15 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:01:15 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:15 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:01:15 np0005539563 nova_compute[252253]:  <target dev="tap8125c0d6-4b"/>
Nov 29 03:01:15 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:01:15 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:01:15 np0005539563 NetworkManager[48981]: <info>  [1764403275.8852] manager: (tap8125c0d6-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 03:01:15 np0005539563 kernel: tap8125c0d6-4b: entered promiscuous mode
Nov 29 03:01:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:15Z|00192|binding|INFO|Claiming lport 8125c0d6-4bae-4aba-aad0-d28ea5b81901 for this chassis.
Nov 29 03:01:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:15Z|00193|binding|INFO|8125c0d6-4bae-4aba-aad0-d28ea5b81901: Claiming fa:16:3e:d9:3f:77 10.100.0.7
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.888 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.900 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:3f:77 10.100.0.7'], port_security=['fa:16:3e:d9:3f:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8125c0d6-4bae-4aba-aad0-d28ea5b81901) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.901 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8125c0d6-4bae-4aba-aad0-d28ea5b81901 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.903 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:15Z|00194|binding|INFO|Setting lport 8125c0d6-4bae-4aba-aad0-d28ea5b81901 ovn-installed in OVS
Nov 29 03:01:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:15Z|00195|binding|INFO|Setting lport 8125c0d6-4bae-4aba-aad0-d28ea5b81901 up in Southbound
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.914 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 systemd-udevd[293618]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.921 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5f8566-9aef-483b-89b8-45ab70318150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.922 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:15 np0005539563 NetworkManager[48981]: <info>  [1764403275.9314] device (tap8125c0d6-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:15 np0005539563 NetworkManager[48981]: <info>  [1764403275.9328] device (tap8125c0d6-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.954 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[560fb6a6-9f2b-4f7e-a10e-7f490753248a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.957 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f2f9cc-7cad-4f45-b7c6-8283a6b0928f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.984 252257 DEBUG nova.virt.libvirt.driver [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.985 252257 DEBUG nova.virt.libvirt.driver [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.985 252257 DEBUG nova.virt.libvirt.driver [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:0f:38:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.985 252257 DEBUG nova.virt.libvirt.driver [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:bb:dd:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:15 np0005539563 nova_compute[252253]: 2025-11-29 08:01:15.986 252257 DEBUG nova.virt.libvirt.driver [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:d9:3f:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:15.990 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3efdc7ed-5abd-4ba8-b6b2-33d264d5d2d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.006 252257 DEBUG nova.virt.libvirt.guest [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:16</nova:creationTime>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:16 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:port uuid="bb9bfd78-483e-4c5b-a548-8e4b77160086">
Nov 29 03:01:16 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:16 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:16 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:16 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:16 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.009 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a22b04c4-8417-4b4c-b923-6086b12c996e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 41713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293628, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.027 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f163b96f-fb13-44ab-aef9-4ea3767346ec]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293629, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293629, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.028 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.030 252257 DEBUG oslo_concurrency.lockutils [None req-47016665-904c-408e-bf09-6d2087057f96 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.031 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.033 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.034 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.034 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.034 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.034 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.256 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.288 252257 DEBUG oslo_concurrency.processutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/disk.config c19d4af3-8322-42c3-b55b-5b13d720e3fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.289 252257 INFO nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Deleting local config drive /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:01:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:16.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:16 np0005539563 kernel: tap36e63c9f-94: entered promiscuous mode
Nov 29 03:01:16 np0005539563 NetworkManager[48981]: <info>  [1764403276.3365] manager: (tap36e63c9f-94): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 03:01:16 np0005539563 systemd-udevd[293622]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:16Z|00196|binding|INFO|Claiming lport 36e63c9f-9420-41d5-a969-c60bff8931d1 for this chassis.
Nov 29 03:01:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:16Z|00197|binding|INFO|36e63c9f-9420-41d5-a969-c60bff8931d1: Claiming fa:16:3e:59:68:de 10.100.0.12
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.338 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.348 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:68:de 10.100.0.12'], port_security=['fa:16:3e:59:68:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c19d4af3-8322-42c3-b55b-5b13d720e3fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=36e63c9f-9420-41d5-a969-c60bff8931d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.349 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 36e63c9f-9420-41d5-a969-c60bff8931d1 in datapath 7471f45a-da60-4567-a888-2a87ff526609 bound to our chassis#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.351 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7471f45a-da60-4567-a888-2a87ff526609#033[00m
Nov 29 03:01:16 np0005539563 NetworkManager[48981]: <info>  [1764403276.3537] device (tap36e63c9f-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:16 np0005539563 NetworkManager[48981]: <info>  [1764403276.3548] device (tap36e63c9f-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:16Z|00198|binding|INFO|Setting lport 36e63c9f-9420-41d5-a969-c60bff8931d1 ovn-installed in OVS
Nov 29 03:01:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:16Z|00199|binding|INFO|Setting lport 36e63c9f-9420-41d5-a969-c60bff8931d1 up in Southbound
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.357 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.361 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.363 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[decea52d-2b6c-42ba-9ee7-f3cc4db002ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.365 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7471f45a-d1 in ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.366 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7471f45a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.366 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2467a69b-1458-4f02-a4b3-2c0b38b19dea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.368 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c1572a-73f8-46c7-895e-046ba9c6f309]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 systemd-machined[213024]: New machine qemu-27-instance-00000040.
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.381 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[4176ecbf-aefa-45f1-94bc-a756e4465525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 systemd[1]: Started Virtual Machine qemu-27-instance-00000040.
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.394 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[580f28d9-b5b2-4448-9b02-e259fc3e081d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.417 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1cdbee7b-a055-47ae-adec-302584a8953a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.422 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52300397-3715-4732-846b-acb20ebbd59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 NetworkManager[48981]: <info>  [1764403276.4229] manager: (tap7471f45a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.451 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0300130b-22bf-4442-8ee0-79fc064a7b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.454 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7db1212b-1cac-4944-ab96-9ad69ce353d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 NetworkManager[48981]: <info>  [1764403276.4756] device (tap7471f45a-d0): carrier: link connected
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.480 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1492e9-7466-4848-aed0-00f801981a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.495 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1c2ed8-bc17-4bbd-9ee4-e60700451b8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624424, 'reachable_time': 17285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293674, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.510 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d830ce5b-a220-48f1-882b-109af38c61d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:d764'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624424, 'tstamp': 624424}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293675, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.523 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[adb27124-e3b2-4ddb-a1b4-4246fc41e696]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7471f45a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:d7:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624424, 'reachable_time': 17285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293676, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.557 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4906ae-608a-4bef-9628-04ce0d7dfd62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.624 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5eda8056-8372-44e5-b2ef-838b147dd36a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.626 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.627 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.627 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7471f45a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539563 kernel: tap7471f45a-d0: entered promiscuous mode
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 NetworkManager[48981]: <info>  [1764403276.6711] manager: (tap7471f45a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.673 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7471f45a-d0, col_values=(('external_ids', {'iface-id': '06264566-5ffe-42a3-ad44-b3f54b7d79bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:16Z|00200|binding|INFO|Releasing lport 06264566-5ffe-42a3-ad44-b3f54b7d79bb from this chassis (sb_readonly=0)
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.690 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.691 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.692 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1871bc29-17e5-4920-a078-9109db5e1b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.692 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7471f45a-da60-4567-a888-2a87ff526609
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7471f45a-da60-4567-a888-2a87ff526609.pid.haproxy
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7471f45a-da60-4567-a888-2a87ff526609
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:16.693 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'env', 'PROCESS_TAG=haproxy-7471f45a-da60-4567-a888-2a87ff526609', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7471f45a-da60-4567-a888-2a87ff526609.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.696 252257 DEBUG nova.compute.manager [req-41834a6b-9158-4903-97ad-d6f8ff68e726 req-f6bbcd4f-c20e-4188-bc60-5f2665e7796a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.697 252257 DEBUG oslo_concurrency.lockutils [req-41834a6b-9158-4903-97ad-d6f8ff68e726 req-f6bbcd4f-c20e-4188-bc60-5f2665e7796a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.697 252257 DEBUG oslo_concurrency.lockutils [req-41834a6b-9158-4903-97ad-d6f8ff68e726 req-f6bbcd4f-c20e-4188-bc60-5f2665e7796a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.697 252257 DEBUG oslo_concurrency.lockutils [req-41834a6b-9158-4903-97ad-d6f8ff68e726 req-f6bbcd4f-c20e-4188-bc60-5f2665e7796a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.697 252257 DEBUG nova.compute.manager [req-41834a6b-9158-4903-97ad-d6f8ff68e726 req-f6bbcd4f-c20e-4188-bc60-5f2665e7796a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Processing event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.816 252257 DEBUG nova.compute.manager [req-b4905251-4802-40d6-b5a1-765127e8a93e req-0432e8c9-c92c-4589-87ec-69a289bbf1d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.818 252257 DEBUG oslo_concurrency.lockutils [req-b4905251-4802-40d6-b5a1-765127e8a93e req-0432e8c9-c92c-4589-87ec-69a289bbf1d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.823 252257 DEBUG oslo_concurrency.lockutils [req-b4905251-4802-40d6-b5a1-765127e8a93e req-0432e8c9-c92c-4589-87ec-69a289bbf1d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.824 252257 DEBUG oslo_concurrency.lockutils [req-b4905251-4802-40d6-b5a1-765127e8a93e req-0432e8c9-c92c-4589-87ec-69a289bbf1d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.824 252257 DEBUG nova.compute.manager [req-b4905251-4802-40d6-b5a1-765127e8a93e req-0432e8c9-c92c-4589-87ec-69a289bbf1d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.824 252257 WARNING nova.compute.manager [req-b4905251-4802-40d6-b5a1-765127e8a93e req-0432e8c9-c92c-4589-87ec-69a289bbf1d8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.938 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.939 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403276.938283, c19d4af3-8322-42c3-b55b-5b13d720e3fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.939 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.942 252257 DEBUG nova.virt.libvirt.driver [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.944 252257 INFO nova.virt.libvirt.driver [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Instance spawned successfully.#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.945 252257 INFO nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Took 10.14 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.945 252257 DEBUG nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.973 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:16 np0005539563 nova_compute[252253]: 2025-11-29 08:01:16.976 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.006 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.007 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403276.9384615, c19d4af3-8322-42c3-b55b-5b13d720e3fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.007 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.017 252257 INFO nova.compute.manager [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Took 11.23 seconds to build instance.#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.030 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.034 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403276.941328, c19d4af3-8322-42c3-b55b-5b13d720e3fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.034 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.037 252257 DEBUG oslo_concurrency.lockutils [None req-7b9d6fc6-615e-44b1-bf8f-87ee3df373a0 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.050 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.053 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:17 np0005539563 podman[293751]: 2025-11-29 08:01:17.07707273 +0000 UTC m=+0.055441069 container create 6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:01:17 np0005539563 systemd[1]: Started libpod-conmon-6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694.scope.
Nov 29 03:01:17 np0005539563 podman[293751]: 2025-11-29 08:01:17.049178871 +0000 UTC m=+0.027547230 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:01:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d561936f6d89e084fac9c1a0f729cc4867d14daff1dc9c100956d8dea10439/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:17 np0005539563 podman[293751]: 2025-11-29 08:01:17.182726001 +0000 UTC m=+0.161094340 container init 6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:01:17 np0005539563 podman[293751]: 2025-11-29 08:01:17.18814151 +0000 UTC m=+0.166509849 container start 6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:01:17 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [NOTICE]   (293771) : New worker (293773) forked
Nov 29 03:01:17 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [NOTICE]   (293771) : Loading success.
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.788 252257 DEBUG nova.network.neutron [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updated VIF entry in instance network info cache for port 8125c0d6-4bae-4aba-aad0-d28ea5b81901. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.789 252257 DEBUG nova.network.neutron [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:17 np0005539563 nova_compute[252253]: 2025-11-29 08:01:17.809 252257 DEBUG oslo_concurrency.lockutils [req-0eb0abdc-fb36-47de-9afb-31f6653a59e1 req-38a6ce57-7e4e-4ee5-acdd-c395e5aecd34 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 418 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 16 KiB/s wr, 143 op/s
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.084 252257 DEBUG oslo_concurrency.lockutils [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-dcb12fb0-34e9-47f5-8054-b56b5145a57a" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.085 252257 DEBUG oslo_concurrency.lockutils [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-dcb12fb0-34e9-47f5-8054-b56b5145a57a" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.085 252257 DEBUG nova.objects.instance [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:18.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:3f:77 10.100.0.7
Nov 29 03:01:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:18Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:3f:77 10.100.0.7
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.837 252257 DEBUG nova.compute.manager [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.838 252257 DEBUG oslo_concurrency.lockutils [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.838 252257 DEBUG oslo_concurrency.lockutils [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.838 252257 DEBUG oslo_concurrency.lockutils [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.839 252257 DEBUG nova.compute.manager [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] No waiting events found dispatching network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.839 252257 WARNING nova.compute.manager [req-1ad30eed-3ee9-4d38-b7c8-01d708308e8e req-fcb63804-2d20-48f2-aa57-2dcad2fe4b38 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received unexpected event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.847 252257 DEBUG nova.objects.instance [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_requests' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.865 252257 DEBUG nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.907 252257 DEBUG nova.compute.manager [req-e5918129-ecda-4a72-ad38-ac871e3f70f2 req-1fda1f02-8a4e-4f7f-adc7-66ade534ed85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.908 252257 DEBUG oslo_concurrency.lockutils [req-e5918129-ecda-4a72-ad38-ac871e3f70f2 req-1fda1f02-8a4e-4f7f-adc7-66ade534ed85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.908 252257 DEBUG oslo_concurrency.lockutils [req-e5918129-ecda-4a72-ad38-ac871e3f70f2 req-1fda1f02-8a4e-4f7f-adc7-66ade534ed85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.908 252257 DEBUG oslo_concurrency.lockutils [req-e5918129-ecda-4a72-ad38-ac871e3f70f2 req-1fda1f02-8a4e-4f7f-adc7-66ade534ed85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.908 252257 DEBUG nova.compute.manager [req-e5918129-ecda-4a72-ad38-ac871e3f70f2 req-1fda1f02-8a4e-4f7f-adc7-66ade534ed85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:18 np0005539563 nova_compute[252253]: 2025-11-29 08:01:18.908 252257 WARNING nova.compute.manager [req-e5918129-ecda-4a72-ad38-ac871e3f70f2 req-1fda1f02-8a4e-4f7f-adc7-66ade534ed85 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.110 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.111 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.111 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.112 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.112 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.113 252257 INFO nova.compute.manager [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Terminating instance#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.114 252257 DEBUG nova.compute.manager [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:19 np0005539563 kernel: tap36e63c9f-94 (unregistering): left promiscuous mode
Nov 29 03:01:19 np0005539563 NetworkManager[48981]: <info>  [1764403279.1594] device (tap36e63c9f-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:19Z|00201|binding|INFO|Releasing lport 36e63c9f-9420-41d5-a969-c60bff8931d1 from this chassis (sb_readonly=0)
Nov 29 03:01:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:19Z|00202|binding|INFO|Setting lport 36e63c9f-9420-41d5-a969-c60bff8931d1 down in Southbound
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.194 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:19Z|00203|binding|INFO|Removing iface tap36e63c9f-94 ovn-installed in OVS
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.196 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.204 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:68:de 10.100.0.12'], port_security=['fa:16:3e:59:68:de 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c19d4af3-8322-42c3-b55b-5b13d720e3fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7471f45a-da60-4567-a888-2a87ff526609', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d8c5b7e3ca74bc1880eb616b04711f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf6db0c-e075-4519-aa02-9bbd4c984eba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bee78a1-1254-4dfe-ba24-259feeb5ade5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=36e63c9f-9420-41d5-a969-c60bff8931d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.205 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 36e63c9f-9420-41d5-a969-c60bff8931d1 in datapath 7471f45a-da60-4567-a888-2a87ff526609 unbound from our chassis#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.207 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7471f45a-da60-4567-a888-2a87ff526609, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.208 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f0238c35-00f9-423c-bb40-1d75001cb207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.209 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 namespace which is not needed anymore#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 29 03:01:19 np0005539563 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000040.scope: Consumed 2.799s CPU time.
Nov 29 03:01:19 np0005539563 systemd-machined[213024]: Machine qemu-27-instance-00000040 terminated.
Nov 29 03:01:19 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [NOTICE]   (293771) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:19 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [NOTICE]   (293771) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:19 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [WARNING]  (293771) : Exiting Master process...
Nov 29 03:01:19 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [ALERT]    (293771) : Current worker (293773) exited with code 143 (Terminated)
Nov 29 03:01:19 np0005539563 neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609[293767]: [WARNING]  (293771) : All workers exited. Exiting... (0)
Nov 29 03:01:19 np0005539563 systemd[1]: libpod-6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694.scope: Deactivated successfully.
Nov 29 03:01:19 np0005539563 podman[293808]: 2025-11-29 08:01:19.332405447 +0000 UTC m=+0.043643764 container died 6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.347 252257 INFO nova.virt.libvirt.driver [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Instance destroyed successfully.#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.348 252257 DEBUG nova.objects.instance [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lazy-loading 'resources' on Instance uuid c19d4af3-8322-42c3-b55b-5b13d720e3fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e4d561936f6d89e084fac9c1a0f729cc4867d14daff1dc9c100956d8dea10439-merged.mount: Deactivated successfully.
Nov 29 03:01:19 np0005539563 podman[293808]: 2025-11-29 08:01:19.372288425 +0000 UTC m=+0.083526732 container cleanup 6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.379 252257 DEBUG nova.virt.libvirt.vif [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1963696524',display_name='tempest-ImagesTestJSON-server-1963696524',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1963696524',id=64,image_ref='bb915321-974b-4310-80ce-22fe787becec',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d8c5b7e3ca74bc1880eb616b04711f7',ramdisk_id='',reservation_id='r-nadv53s8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='5f638465-c65b-4824-bedc-60f4b695402a',image_min_disk='1',image_min_ram='0',image_owner_id='4d8c5b7e3ca74bc1880eb616b04711f7',image_owner_project_name='tempest-ImagesTestJSON-911260095',image_owner_user_name='tempest-ImagesTestJSON-911260095-project-member',image_user_id='f7d59bea260d4752aa29379967636c0b',owner_project_name='tempest-ImagesTestJSON-911260095',owner_user_name='tempest-ImagesTestJSON-911260095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:16Z,user_data=None,user_id='f7d59bea260d4752aa29379967636c0b',uuid=c19d4af3-8322-42c3-b55b-5b13d720e3fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.380 252257 DEBUG nova.network.os_vif_util [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converting VIF {"id": "36e63c9f-9420-41d5-a969-c60bff8931d1", "address": "fa:16:3e:59:68:de", "network": {"id": "7471f45a-da60-4567-a888-2a87ff526609", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1685364862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d8c5b7e3ca74bc1880eb616b04711f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36e63c9f-94", "ovs_interfaceid": "36e63c9f-9420-41d5-a969-c60bff8931d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.381 252257 DEBUG nova.network.os_vif_util [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.381 252257 DEBUG os_vif [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.382 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.383 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36e63c9f-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.387 252257 INFO os_vif [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:68:de,bridge_name='br-int',has_traffic_filtering=True,id=36e63c9f-9420-41d5-a969-c60bff8931d1,network=Network(7471f45a-da60-4567-a888-2a87ff526609),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36e63c9f-94')#033[00m
Nov 29 03:01:19 np0005539563 systemd[1]: libpod-conmon-6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694.scope: Deactivated successfully.
Nov 29 03:01:19 np0005539563 podman[293849]: 2025-11-29 08:01:19.439843067 +0000 UTC m=+0.045789323 container remove 6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.445 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a44a9d83-a828-4493-8e91-c05dbe38a8b3]: (4, ('Sat Nov 29 08:01:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694)\n6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694\nSat Nov 29 08:01:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 (6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694)\n6b3a1cec8cd849e8bca5c146f493742ca16116550013f3ee00ce7515dbd43694\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.447 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[daae9ab0-277c-4d85-ae8e-a2f4db91995f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.448 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7471f45a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.449 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 kernel: tap7471f45a-d0: left promiscuous mode
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.455 252257 DEBUG nova.policy [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.465 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.469 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6f550553-e287-45d9-b383-2ce5cddbf247]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.481 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[04c9bfe6-6d5e-4c8d-8acb-10f4fb4015b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.482 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[75a9cbd6-6dbe-4dbe-8f56-4cdf6c613727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.500 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2db4bd64-3b0c-43ba-b156-ec0ae53663c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624418, 'reachable_time': 27393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293880, 'error': None, 'target': 'ovnmeta-7471f45a-da60-4567-a888-2a87ff526609', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.503 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7471f45a-da60-4567-a888-2a87ff526609 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:19.504 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3785bee0-19f1-46fb-bd1b-07660cd4185a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:19 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7471f45a\x2dda60\x2d4567\x2da888\x2d2a87ff526609.mount: Deactivated successfully.
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.824 252257 INFO nova.virt.libvirt.driver [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Deleting instance files /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd_del#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.825 252257 INFO nova.virt.libvirt.driver [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Deletion of /var/lib/nova/instances/c19d4af3-8322-42c3-b55b-5b13d720e3fd_del complete#033[00m
Nov 29 03:01:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 393 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 17 KiB/s wr, 168 op/s
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.882 252257 INFO nova.compute.manager [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.882 252257 DEBUG oslo.service.loopingcall [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.882 252257 DEBUG nova.compute.manager [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:19 np0005539563 nova_compute[252253]: 2025-11-29 08:01:19.882 252257 DEBUG nova.network.neutron [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:20.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.387 252257 DEBUG nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Successfully updated port: dcb12fb0-34e9-47f5-8054-b56b5145a57a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.408 252257 DEBUG oslo_concurrency.lockutils [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.409 252257 DEBUG oslo_concurrency.lockutils [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.409 252257 DEBUG nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.574 252257 WARNING nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.574 252257 WARNING nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.574 252257 WARNING nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.705 252257 DEBUG nova.network.neutron [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.736 252257 INFO nova.compute.manager [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Took 0.85 seconds to deallocate network for instance.#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.795 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.796 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.884 252257 DEBUG oslo_concurrency.processutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.936 252257 DEBUG nova.compute.manager [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-changed-dcb12fb0-34e9-47f5-8054-b56b5145a57a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.936 252257 DEBUG nova.compute.manager [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing instance network info cache due to event network-changed-dcb12fb0-34e9-47f5-8054-b56b5145a57a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.937 252257 DEBUG oslo_concurrency.lockutils [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:01:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.966 252257 DEBUG nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received event network-vif-unplugged-36e63c9f-9420-41d5-a969-c60bff8931d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.966 252257 DEBUG oslo_concurrency.lockutils [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.967 252257 DEBUG oslo_concurrency.lockutils [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.967 252257 DEBUG oslo_concurrency.lockutils [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.967 252257 DEBUG nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] No waiting events found dispatching network-vif-unplugged-36e63c9f-9420-41d5-a969-c60bff8931d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.968 252257 WARNING nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received unexpected event network-vif-unplugged-36e63c9f-9420-41d5-a969-c60bff8931d1 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.968 252257 DEBUG nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.968 252257 DEBUG oslo_concurrency.lockutils [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.968 252257 DEBUG oslo_concurrency.lockutils [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.969 252257 DEBUG oslo_concurrency.lockutils [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.969 252257 DEBUG nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] No waiting events found dispatching network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.969 252257 WARNING nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received unexpected event network-vif-plugged-36e63c9f-9420-41d5-a969-c60bff8931d1 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:01:20 np0005539563 nova_compute[252253]: 2025-11-29 08:01:20.969 252257 DEBUG nova.compute.manager [req-416aa7a4-a248-4dcc-9cfb-42b83a794bcb req-7ecb087a-c7fe-4d10-bacd-61aa36923ecb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Received event network-vif-deleted-36e63c9f-9420-41d5-a969-c60bff8931d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550873857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.330 252257 DEBUG oslo_concurrency.processutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.339 252257 DEBUG nova.compute.provider_tree [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.355 252257 DEBUG nova.scheduler.client.report [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.381 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.415 252257 INFO nova.scheduler.client.report [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Deleted allocations for instance c19d4af3-8322-42c3-b55b-5b13d720e3fd#033[00m
Nov 29 03:01:21 np0005539563 nova_compute[252253]: 2025-11-29 08:01:21.481 252257 DEBUG oslo_concurrency.lockutils [None req-583f86e8-b969-4ca2-a328-909724138159 f7d59bea260d4752aa29379967636c0b 4d8c5b7e3ca74bc1880eb616b04711f7 - - default default] Lock "c19d4af3-8322-42c3-b55b-5b13d720e3fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 396 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 254 op/s
Nov 29 03:01:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:22.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 29 03:01:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 03:01:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 29 03:01:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:22.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008566232493485948 of space, bias 1.0, pg target 2.5698697480457846 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8618352177554439 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:01:23 np0005539563 nova_compute[252253]: 2025-11-29 08:01:23.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:23 np0005539563 nova_compute[252253]: 2025-11-29 08:01:23.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:01:23 np0005539563 nova_compute[252253]: 2025-11-29 08:01:23.710 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:01:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 384 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.6 MiB/s wr, 283 op/s
Nov 29 03:01:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:24.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.386 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.612 252257 DEBUG nova.network.neutron [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.632 252257 DEBUG oslo_concurrency.lockutils [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.635 252257 DEBUG oslo_concurrency.lockutils [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.636 252257 DEBUG nova.network.neutron [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Refreshing network info cache for port dcb12fb0-34e9-47f5-8054-b56b5145a57a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.641 252257 DEBUG nova.virt.libvirt.vif [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.642 252257 DEBUG nova.network.os_vif_util [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.643 252257 DEBUG nova.network.os_vif_util [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.645 252257 DEBUG os_vif [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.646 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.646 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.647 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.652 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcb12fb0-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.653 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdcb12fb0-34, col_values=(('external_ids', {'iface-id': 'dcb12fb0-34e9-47f5-8054-b56b5145a57a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:aa:0c', 'vm-uuid': 'c1396d33-3741-4e6a-acdf-79ac9f076e53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:24 np0005539563 NetworkManager[48981]: <info>  [1764403284.6568] manager: (tapdcb12fb0-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.663 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.667 252257 INFO os_vif [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34')#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.668 252257 DEBUG nova.virt.libvirt.vif [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.669 252257 DEBUG nova.network.os_vif_util [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.670 252257 DEBUG nova.network.os_vif_util [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.674 252257 DEBUG nova.virt.libvirt.guest [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:01:24 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:1c:aa:0c"/>
Nov 29 03:01:24 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:01:24 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:24 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:01:24 np0005539563 nova_compute[252253]:  <target dev="tapdcb12fb0-34"/>
Nov 29 03:01:24 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:01:24 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:01:24 np0005539563 kernel: tapdcb12fb0-34: entered promiscuous mode
Nov 29 03:01:24 np0005539563 NetworkManager[48981]: <info>  [1764403284.6926] manager: (tapdcb12fb0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 03:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:24Z|00204|binding|INFO|Claiming lport dcb12fb0-34e9-47f5-8054-b56b5145a57a for this chassis.
Nov 29 03:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:24Z|00205|binding|INFO|dcb12fb0-34e9-47f5-8054-b56b5145a57a: Claiming fa:16:3e:1c:aa:0c 10.100.0.11
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.694 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.705 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:aa:0c 10.100.0.11'], port_security=['fa:16:3e:1c:aa:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1935865918', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1935865918', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=dcb12fb0-34e9-47f5-8054-b56b5145a57a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.708 158990 INFO neutron.agent.ovn.metadata.agent [-] Port dcb12fb0-34e9-47f5-8054-b56b5145a57a in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.711 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:24Z|00206|binding|INFO|Setting lport dcb12fb0-34e9-47f5-8054-b56b5145a57a ovn-installed in OVS
Nov 29 03:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:24Z|00207|binding|INFO|Setting lport dcb12fb0-34e9-47f5-8054-b56b5145a57a up in Southbound
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.724 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.728 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 systemd-udevd[293963]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.743 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9076d613-3101-409e-b5e9-e4b9900d0092]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:24 np0005539563 NetworkManager[48981]: <info>  [1764403284.7469] device (tapdcb12fb0-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:24 np0005539563 NetworkManager[48981]: <info>  [1764403284.7488] device (tapdcb12fb0-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.780 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[110f0582-54bb-4641-84e9-58238144f9ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.784 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2b181e72-635a-4d5f-a52a-d91377504752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.809 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[770b9d17-b736-40b9-a71d-181066e57530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.826 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[309b3bed-e477-46f4-b610-1cdb342bf798]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 9, 'rx_bytes': 742, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 41713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293970, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.842 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0244504a-47bf-457f-80cd-4232a1855237]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293971, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293971, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.844 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.847 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 nova_compute[252253]: 2025-11-29 08:01:24.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.849 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.849 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.849 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:24.849 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:24.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.266 252257 DEBUG nova.virt.libvirt.driver [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.266 252257 DEBUG nova.virt.libvirt.driver [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.267 252257 DEBUG nova.virt.libvirt.driver [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:0f:38:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.267 252257 DEBUG nova.virt.libvirt.driver [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:bb:dd:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.267 252257 DEBUG nova.virt.libvirt.driver [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:d9:3f:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.267 252257 DEBUG nova.virt.libvirt.driver [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:1c:aa:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.296 252257 DEBUG nova.virt.libvirt.guest [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:25</nova:creationTime>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:25 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:port uuid="bb9bfd78-483e-4c5b-a548-8e4b77160086">
Nov 29 03:01:25 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:25 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:25 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:25 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:25 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:25 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.328 252257 DEBUG oslo_concurrency.lockutils [None req-5a47f727-4d5f-4ced-96bd-0ec3ca13167a a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-dcb12fb0-34e9-47f5-8054-b56b5145a57a" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.558 252257 DEBUG nova.compute.manager [req-0161211a-06d0-4cbd-891e-d0ef468e0e25 req-41597113-7152-4a33-bf12-870d0fcd4356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-dcb12fb0-34e9-47f5-8054-b56b5145a57a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.559 252257 DEBUG oslo_concurrency.lockutils [req-0161211a-06d0-4cbd-891e-d0ef468e0e25 req-41597113-7152-4a33-bf12-870d0fcd4356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.560 252257 DEBUG oslo_concurrency.lockutils [req-0161211a-06d0-4cbd-891e-d0ef468e0e25 req-41597113-7152-4a33-bf12-870d0fcd4356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.564 252257 DEBUG oslo_concurrency.lockutils [req-0161211a-06d0-4cbd-891e-d0ef468e0e25 req-41597113-7152-4a33-bf12-870d0fcd4356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.565 252257 DEBUG nova.compute.manager [req-0161211a-06d0-4cbd-891e-d0ef468e0e25 req-41597113-7152-4a33-bf12-870d0fcd4356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-dcb12fb0-34e9-47f5-8054-b56b5145a57a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.566 252257 WARNING nova.compute.manager [req-0161211a-06d0-4cbd-891e-d0ef468e0e25 req-41597113-7152-4a33-bf12-870d0fcd4356 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-dcb12fb0-34e9-47f5-8054-b56b5145a57a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.784 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.784 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.784 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.785 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.785 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.786 252257 INFO nova.compute.manager [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Terminating instance#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.787 252257 DEBUG nova.compute.manager [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 310 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 280 op/s
Nov 29 03:01:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:25Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:aa:0c 10.100.0.11
Nov 29 03:01:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:25Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:aa:0c 10.100.0.11
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.911 252257 DEBUG nova.network.neutron [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updated VIF entry in instance network info cache for port dcb12fb0-34e9-47f5-8054-b56b5145a57a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.912 252257 DEBUG nova.network.neutron [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:25 np0005539563 nova_compute[252253]: 2025-11-29 08:01:25.992 252257 DEBUG oslo_concurrency.lockutils [req-441bc16c-12c6-402a-b6d9-fd4fd4557f25 req-88614ea3-6269-40a4-90f0-a77c8845eb10 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.263 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.279592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286279784, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1431, "num_deletes": 511, "total_data_size": 1788804, "memory_usage": 1821112, "flush_reason": "Manual Compaction"}
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 29 03:01:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:26.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286569307, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1183926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32545, "largest_seqno": 33975, "table_properties": {"data_size": 1178372, "index_size": 2309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 1164511, "raw_average_value_size": 1370, "num_data_blocks": 101, "num_entries": 850, "num_filter_entries": 850, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403191, "oldest_key_time": 1764403191, "file_creation_time": 1764403286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 289720 microseconds, and 6682 cpu microseconds.
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:01:26 np0005539563 kernel: tap3c26a727-aa (unregistering): left promiscuous mode
Nov 29 03:01:26 np0005539563 NetworkManager[48981]: <info>  [1764403286.7147] device (tap3c26a727-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:26Z|00208|binding|INFO|Releasing lport 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c from this chassis (sb_readonly=0)
Nov 29 03:01:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:26Z|00209|binding|INFO|Setting lport 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c down in Southbound
Nov 29 03:01:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:26Z|00210|binding|INFO|Removing iface tap3c26a727-aa ovn-installed in OVS
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.730 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.732 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:26.739 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:0a:92 10.100.0.14'], port_security=['fa:16:3e:85:0a:92 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '487e2d0b-cbfe-462e-b702-86a5164357d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2063759-3e65-4e4b-b3aa-6d737d865479', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fce027870d041328a9b9968bfe90665', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90056085-c762-483e-89c7-ac78dc504f10 932da7ac-fb04-4b34-9104-3c99becea564 bc76b505-32d6-466e-aebe-a288cb7b3442', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb6126a-1e4d-4a00-9500-8124c46f02a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:26.741 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3c26a727-aa6f-4daa-8e0c-9658c02a1f5c in datapath b2063759-3e65-4e4b-b3aa-6d737d865479 unbound from our chassis#033[00m
Nov 29 03:01:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:26.743 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2063759-3e65-4e4b-b3aa-6d737d865479, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:26.744 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[284f4ab2-e96a-4b34-a6e3-d2a1039c4cf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:26.744 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 namespace which is not needed anymore#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.795 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:26 np0005539563 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 29 03:01:26 np0005539563 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000003b.scope: Consumed 16.019s CPU time.
Nov 29 03:01:26 np0005539563 systemd-machined[213024]: Machine qemu-25-instance-0000003b terminated.
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.892 252257 DEBUG oslo_concurrency.lockutils [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-bb9bfd78-483e-4c5b-a548-8e4b77160086" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.894 252257 DEBUG oslo_concurrency.lockutils [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-bb9bfd78-483e-4c5b-a548-8e4b77160086" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.569390) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1183926 bytes OK
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.569418) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.912976) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.913039) EVENT_LOG_v1 {"time_micros": 1764403286913028, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.913064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1781324, prev total WAL file size 1813289, number of live WAL files 2.
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.914390) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303130' seq:72057594037927935, type:22 .. '6D6772737461740031323631' seq:0, type:0; will stop at (end)
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1156KB)], [68(11MB)]
Nov 29 03:01:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403286914489, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 12887579, "oldest_snapshot_seqno": -1}
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.925 252257 DEBUG nova.objects.instance [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.987 252257 DEBUG nova.virt.libvirt.vif [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.987 252257 DEBUG nova.network.os_vif_util [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.988 252257 DEBUG nova.network.os_vif_util [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.991 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.993 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.995 252257 DEBUG nova.virt.libvirt.driver [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Attempting to detach device tapbb9bfd78-48 from instance c1396d33-3741-4e6a-acdf-79ac9f076e53 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:01:26 np0005539563 nova_compute[252253]: 2025-11-29 08:01:26.996 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:01:26 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:bb:dd:f1"/>
Nov 29 03:01:26 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:01:26 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:26 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:01:26 np0005539563 nova_compute[252253]:  <target dev="tapbb9bfd78-48"/>
Nov 29 03:01:26 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:01:26 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.024 252257 INFO nova.virt.libvirt.driver [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Instance destroyed successfully.#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.025 252257 DEBUG nova.objects.instance [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lazy-loading 'resources' on Instance uuid 487e2d0b-cbfe-462e-b702-86a5164357d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.038 252257 DEBUG nova.compute.manager [req-d4459846-d4a0-41b2-b1c8-b0adb79d7f98 req-4fddb067-7f78-4f79-9955-5393d088b4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-vif-unplugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.039 252257 DEBUG oslo_concurrency.lockutils [req-d4459846-d4a0-41b2-b1c8-b0adb79d7f98 req-4fddb067-7f78-4f79-9955-5393d088b4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.039 252257 DEBUG oslo_concurrency.lockutils [req-d4459846-d4a0-41b2-b1c8-b0adb79d7f98 req-4fddb067-7f78-4f79-9955-5393d088b4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.039 252257 DEBUG oslo_concurrency.lockutils [req-d4459846-d4a0-41b2-b1c8-b0adb79d7f98 req-4fddb067-7f78-4f79-9955-5393d088b4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.039 252257 DEBUG nova.compute.manager [req-d4459846-d4a0-41b2-b1c8-b0adb79d7f98 req-4fddb067-7f78-4f79-9955-5393d088b4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] No waiting events found dispatching network-vif-unplugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.039 252257 DEBUG nova.compute.manager [req-d4459846-d4a0-41b2-b1c8-b0adb79d7f98 req-4fddb067-7f78-4f79-9955-5393d088b4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-vif-unplugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.052 252257 DEBUG nova.virt.libvirt.vif [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1685884678',display_name='tempest-SecurityGroupsTestJSON-server-1685884678',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1685884678',id=59,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fce027870d041328a9b9968bfe90665',ramdisk_id='',reservation_id='r-3vijaar2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1868555561',owner_user_name='tempest-SecurityGroupsTestJSON-1868555561-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:16Z,user_data=None,user_id='8d4e5ab1ae494327abcb3693ba332586',uuid=487e2d0b-cbfe-462e-b702-86a5164357d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.053 252257 DEBUG nova.network.os_vif_util [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converting VIF {"id": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "address": "fa:16:3e:85:0a:92", "network": {"id": "b2063759-3e65-4e4b-b3aa-6d737d865479", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-313649087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fce027870d041328a9b9968bfe90665", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c26a727-aa", "ovs_interfaceid": "3c26a727-aa6f-4daa-8e0c-9658c02a1f5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.053 252257 DEBUG nova.network.os_vif_util [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.054 252257 DEBUG os_vif [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.055 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.055 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c26a727-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.057 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.061 252257 INFO os_vif [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=3c26a727-aa6f-4daa-8e0c-9658c02a1f5c,network=Network(b2063759-3e65-4e4b-b3aa-6d737d865479),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c26a727-aa')#033[00m
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6411 keys, 9453687 bytes, temperature: kUnknown
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.090 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403287090247, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9453687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9411366, "index_size": 25187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 165888, "raw_average_key_size": 25, "raw_value_size": 9296669, "raw_average_value_size": 1450, "num_data_blocks": 1003, "num_entries": 6411, "num_filter_entries": 6411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.090588) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9453687 bytes
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.098112) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.3 rd, 53.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(18.9) write-amplify(8.0) OK, records in: 7414, records dropped: 1003 output_compression: NoCompression
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.098140) EVENT_LOG_v1 {"time_micros": 1764403287098128, "job": 38, "event": "compaction_finished", "compaction_time_micros": 175838, "compaction_time_cpu_micros": 25334, "output_level": 6, "num_output_files": 1, "total_output_size": 9453687, "num_input_records": 7414, "num_output_records": 6411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403287098569, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 29 03:01:27 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [NOTICE]   (291900) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:27 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [NOTICE]   (291900) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:27 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [WARNING]  (291900) : Exiting Master process...
Nov 29 03:01:27 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [WARNING]  (291900) : Exiting Master process...
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403287101648, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:26.914220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.101724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.101760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.101764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.101767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:01:27.101770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.100 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <name>instance-0000003c</name>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <uuid>c1396d33-3741-4e6a-acdf-79ac9f076e53</uuid>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:25</nova:creationTime>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="bb9bfd78-483e-4c5b-a548-8e4b77160086">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <resource>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <partition>/machine</partition>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </resource>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='serial'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='uuid'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <feature policy='require' name='x2apic'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <feature policy='require' name='vme'/>
Nov 29 03:01:27 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [ALERT]    (291900) : Current worker (291903) exited with code 143 (Terminated)
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:01:27 np0005539563 neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479[291896]: [WARNING]  (291900) : All workers exited. Exiting... (0)
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk' index='2'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='virtio-disk0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config' index='1'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='sata0-0-0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pcie.0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:01:27 np0005539563 conmon[291896]: conmon 4334f6db16e74625bc1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2.scope/container/memory.events
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.8'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.9'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.10'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.11'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.12'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.13'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.14'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.15'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.16'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.17'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.18'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.19'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.20'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.21'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.22'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.23'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.24'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.25'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.26'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='usb'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='ide'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:0f:38:06'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tap39a40677-39'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:bb:dd:f1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tapbb9bfd78-48'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:d9:3f:77'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tap8125c0d6-4b'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:1c:aa:0c'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tapdcb12fb0-34'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='input0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='input1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='input2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='video0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='watchdog0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </watchdog>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='balloon0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='rng0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <label>system_u:system_r:svirt_t:s0:c432,c658</label>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c432,c658</imagelabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <label>+107:+107</label>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.101 252257 INFO nova.virt.libvirt.driver [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapbb9bfd78-48 from instance c1396d33-3741-4e6a-acdf-79ac9f076e53 from the persistent domain config.
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.101 252257 DEBUG nova.virt.libvirt.driver [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] (1/8): Attempting to detach device tapbb9bfd78-48 with device alias net1 from instance c1396d33-3741-4e6a-acdf-79ac9f076e53 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.102 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:bb:dd:f1"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <target dev="tapbb9bfd78-48"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:01:27 np0005539563 systemd[1]: libpod-4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2.scope: Deactivated successfully.
Nov 29 03:01:27 np0005539563 podman[293995]: 2025-11-29 08:01:27.111875177 +0000 UTC m=+0.234499334 container died 4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bf37a979e81545a34dc81bee162d769b814e77f8279ddad40f2203f2fd86f3a3-merged.mount: Deactivated successfully.
Nov 29 03:01:27 np0005539563 podman[293995]: 2025-11-29 08:01:27.155477858 +0000 UTC m=+0.278102005 container cleanup 4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:01:27 np0005539563 systemd[1]: libpod-conmon-4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2.scope: Deactivated successfully.
Nov 29 03:01:27 np0005539563 kernel: tapbb9bfd78-48 (unregistering): left promiscuous mode
Nov 29 03:01:27 np0005539563 NetworkManager[48981]: <info>  [1764403287.2166] device (tapbb9bfd78-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:27Z|00211|binding|INFO|Releasing lport bb9bfd78-483e-4c5b-a548-8e4b77160086 from this chassis (sb_readonly=0)
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:27Z|00212|binding|INFO|Setting lport bb9bfd78-483e-4c5b-a548-8e4b77160086 down in Southbound
Nov 29 03:01:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:27Z|00213|binding|INFO|Removing iface tapbb9bfd78-48 ovn-installed in OVS
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.226 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:dd:f1 10.100.0.12'], port_security=['fa:16:3e:bb:dd:f1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=bb9bfd78-483e-4c5b-a548-8e4b77160086) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.233 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764403287.2336714, c1396d33-3741-4e6a-acdf-79ac9f076e53 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.235 252257 DEBUG nova.virt.libvirt.driver [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Start waiting for the detach event from libvirt for device tapbb9bfd78-48 with device alias net1 for instance c1396d33-3741-4e6a-acdf-79ac9f076e53 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.235 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:27 np0005539563 podman[294051]: 2025-11-29 08:01:27.236505143 +0000 UTC m=+0.060008576 container remove 4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.239 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <name>instance-0000003c</name>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <uuid>c1396d33-3741-4e6a-acdf-79ac9f076e53</uuid>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:25</nova:creationTime>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="bb9bfd78-483e-4c5b-a548-8e4b77160086">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <resource>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <partition>/machine</partition>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </resource>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='serial'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='uuid'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <feature policy='require' name='x2apic'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <feature policy='require' name='vme'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk' index='2'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='virtio-disk0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config' index='1'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='sata0-0-0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pcie.0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 kernel: tapb2063759-30: left promiscuous mode
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.8'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.9'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.10'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.11'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.12'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.13'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.14'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.15'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.16'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.17'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.18'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.19'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.20'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.21'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.22'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.23'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.24'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.25'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='pci.26'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='usb'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='ide'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:0f:38:06'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tap39a40677-39'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:d9:3f:77'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tap8125c0d6-4b'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:1c:aa:0c'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target dev='tapdcb12fb0-34'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='net3'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='input0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='input1'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='input2'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='video0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='watchdog0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </watchdog>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='balloon0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <alias name='rng0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <label>system_u:system_r:svirt_t:s0:c432,c658</label>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c432,c658</imagelabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <label>+107:+107</label>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.240 252257 INFO nova.virt.libvirt.driver [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapbb9bfd78-48 from instance c1396d33-3741-4e6a-acdf-79ac9f076e53 from the live domain config.#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.242 252257 DEBUG nova.virt.libvirt.vif [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.242 252257 DEBUG nova.network.os_vif_util [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.243 252257 DEBUG nova.network.os_vif_util [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.242 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc3172c-1662-4de1-8d3d-0bdcdd3b3ad5]: (4, ('Sat Nov 29 08:01:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 (4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2)\n4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2\nSat Nov 29 08:01:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 (4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2)\n4334f6db16e74625bc1b3385eeb852b7b3da4c351c54dfffbdfe29457c7949a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.243 252257 DEBUG os_vif [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.243 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa56099d-882e-4811-ad3c-946d222e3751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.244 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2063759-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.244 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.245 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb9bfd78-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.264 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.264 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.266 252257 INFO os_vif [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48')#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.267 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea51c36-1eda-41e6-8160-a54707f78f10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.267 252257 DEBUG nova.virt.libvirt.guest [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:27</nova:creationTime>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:27 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:27 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:27 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.281 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7c808f-480a-43d8-9dd5-19aae9c739ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.282 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[af71767a-4726-429e-9ea5-7892c079f292]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.297 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b346472b-b746-441b-bb1d-29ed314b8d97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618401, 'reachable_time': 25684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294076, 'error': None, 'target': 'ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.299 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2063759-3e65-4e4b-b3aa-6d737d865479 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.299 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[97c46d31-ac37-4608-bfb5-267ef1cff682]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.299 158990 INFO neutron.agent.ovn.metadata.agent [-] Port bb9bfd78-483e-4c5b-a548-8e4b77160086 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:01:27 np0005539563 systemd[1]: run-netns-ovnmeta\x2db2063759\x2d3e65\x2d4e4b\x2db3aa\x2d6d737d865479.mount: Deactivated successfully.
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.301 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.314 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e5d2da-6fdf-4065-8e62-b7ebf4ac4f5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.342 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[73e97cb2-a26b-47c6-b187-38c1669eadff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.344 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b23aae-4495-4eb1-a1e5-ef101c3cc88e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.369 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[240c245a-9b4b-4dd0-85ae-129ff6bbb909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.388 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa6c397-878e-4cdb-b9cb-263c0f0f3cab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 41713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294084, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.408 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e5573dd5-313a-464a-bc73-bab7c31824f5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294085, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294085, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.410 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.413 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.413 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.414 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:27.414 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.549 252257 INFO nova.virt.libvirt.driver [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Deleting instance files /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8_del#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.550 252257 INFO nova.virt.libvirt.driver [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Deletion of /var/lib/nova/instances/487e2d0b-cbfe-462e-b702-86a5164357d8_del complete#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.614 252257 INFO nova.compute.manager [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Took 1.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.614 252257 DEBUG oslo.service.loopingcall [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.615 252257 DEBUG nova.compute.manager [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.615 252257 DEBUG nova.network.neutron [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.696 252257 DEBUG nova.compute.manager [req-ee3b705c-f02a-4b57-8a7b-7f53f2c2b10a req-2c807643-7879-438f-90dc-51b6f7001336 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-dcb12fb0-34e9-47f5-8054-b56b5145a57a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.697 252257 DEBUG oslo_concurrency.lockutils [req-ee3b705c-f02a-4b57-8a7b-7f53f2c2b10a req-2c807643-7879-438f-90dc-51b6f7001336 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.697 252257 DEBUG oslo_concurrency.lockutils [req-ee3b705c-f02a-4b57-8a7b-7f53f2c2b10a req-2c807643-7879-438f-90dc-51b6f7001336 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.697 252257 DEBUG oslo_concurrency.lockutils [req-ee3b705c-f02a-4b57-8a7b-7f53f2c2b10a req-2c807643-7879-438f-90dc-51b6f7001336 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.697 252257 DEBUG nova.compute.manager [req-ee3b705c-f02a-4b57-8a7b-7f53f2c2b10a req-2c807643-7879-438f-90dc-51b6f7001336 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-dcb12fb0-34e9-47f5-8054-b56b5145a57a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:27 np0005539563 nova_compute[252253]: 2025-11-29 08:01:27.698 252257 WARNING nova.compute.manager [req-ee3b705c-f02a-4b57-8a7b-7f53f2c2b10a req-2c807643-7879-438f-90dc-51b6f7001336 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-dcb12fb0-34e9-47f5-8054-b56b5145a57a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 310 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 280 op/s
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.118 252257 DEBUG nova.compute.manager [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.119 252257 DEBUG oslo_concurrency.lockutils [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.119 252257 DEBUG oslo_concurrency.lockutils [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.120 252257 DEBUG oslo_concurrency.lockutils [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.120 252257 DEBUG nova.compute.manager [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-unplugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.121 252257 WARNING nova.compute.manager [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-unplugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.121 252257 DEBUG nova.compute.manager [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.121 252257 DEBUG oslo_concurrency.lockutils [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.122 252257 DEBUG oslo_concurrency.lockutils [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.122 252257 DEBUG oslo_concurrency.lockutils [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.122 252257 DEBUG nova.compute.manager [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.123 252257 WARNING nova.compute.manager [req-c790498a-5b20-492b-94c2-9f5ffe6885a7 req-a31f923b-41fa-4a19-9e8d-d190108ecd17 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-bb9bfd78-483e-4c5b-a548-8e4b77160086 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.213 252257 DEBUG oslo_concurrency.lockutils [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.213 252257 DEBUG oslo_concurrency.lockutils [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.213 252257 DEBUG nova.network.neutron [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.550 252257 DEBUG nova.network.neutron [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.566 252257 INFO nova.compute.manager [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Took 0.95 seconds to deallocate network for instance.#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.638 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.638 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.693 252257 DEBUG nova.compute.manager [req-14dd5a2a-546e-4312-8782-8198783e71df req-a10cf086-8706-4d0e-8249-c00968d84761 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-vif-deleted-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:28 np0005539563 nova_compute[252253]: 2025-11-29 08:01:28.779 252257 DEBUG oslo_concurrency.processutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.199 252257 DEBUG nova.compute.manager [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.200 252257 DEBUG oslo_concurrency.lockutils [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.200 252257 DEBUG oslo_concurrency.lockutils [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.200 252257 DEBUG oslo_concurrency.lockutils [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.202 252257 DEBUG nova.compute.manager [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] No waiting events found dispatching network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.202 252257 WARNING nova.compute.manager [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Received unexpected event network-vif-plugged-3c26a727-aa6f-4daa-8e0c-9658c02a1f5c for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.203 252257 DEBUG nova.compute.manager [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-deleted-bb9bfd78-483e-4c5b-a548-8e4b77160086 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.203 252257 INFO nova.compute.manager [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Neutron deleted interface bb9bfd78-483e-4c5b-a548-8e4b77160086; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.203 252257 DEBUG nova.network.neutron [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711949263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.225 252257 DEBUG oslo_concurrency.processutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.231 252257 DEBUG nova.compute.provider_tree [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.244 252257 DEBUG nova.scheduler.client.report [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.249 252257 DEBUG nova.objects.instance [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'system_metadata' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.265 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.285 252257 DEBUG nova.objects.instance [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'flavor' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.318 158990 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 5292c60e-2508-43af-bae7-ef2b989b7dc2 with type ""#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.319 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:aa:0c 10.100.0.11'], port_security=['fa:16:3e:1c:aa:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1935865918', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1935865918', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=dcb12fb0-34e9-47f5-8054-b56b5145a57a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.319 252257 DEBUG nova.virt.libvirt.vif [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.320 252257 DEBUG nova.network.os_vif_util [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.320 158990 INFO neutron.agent.ovn.metadata.agent [-] Port dcb12fb0-34e9-47f5-8054-b56b5145a57a in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.321 252257 DEBUG nova.network.os_vif_util [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.322 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.324 252257 DEBUG nova.virt.libvirt.guest [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.327 252257 DEBUG nova.virt.libvirt.guest [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <name>instance-0000003c</name>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <uuid>c1396d33-3741-4e6a-acdf-79ac9f076e53</uuid>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:27</nova:creationTime>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <resource>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <partition>/machine</partition>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </resource>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='serial'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='uuid'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <feature policy='require' name='x2apic'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <feature policy='require' name='vme'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk' index='2'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='virtio-disk0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config' index='1'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='sata0-0-0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pcie.0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.8'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.9'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.10'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.11'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.12'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.13'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.14'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.15'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.16'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.17'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.18'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.19'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.20'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.21'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.22'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.23'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.24'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.25'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.26'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='usb'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='ide'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:0f:38:06'/>
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00214|binding|INFO|Removing iface tapdcb12fb0-34 ovn-installed in OVS
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='tap39a40677-39'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='net0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:d9:3f:77'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='tap8125c0d6-4b'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='net2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:1c:aa:0c'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='tapdcb12fb0-34'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='net3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='input0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='input1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='input2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='video0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='watchdog0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </watchdog>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='balloon0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='rng0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <label>system_u:system_r:svirt_t:s0:c432,c658</label>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c432,c658</imagelabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <label>+107:+107</label>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.329 252257 DEBUG nova.virt.libvirt.guest [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00215|binding|INFO|Removing lport dcb12fb0-34e9-47f5-8054-b56b5145a57a ovn-installed in OVS
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.332 252257 DEBUG nova.virt.libvirt.guest [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bb:dd:f1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbb9bfd78-48"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <name>instance-0000003c</name>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <uuid>c1396d33-3741-4e6a-acdf-79ac9f076e53</uuid>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:27</nova:creationTime>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <resource>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <partition>/machine</partition>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </resource>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='serial'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='uuid'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <feature policy='require' name='x2apic'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <feature policy='require' name='vme'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk' index='2'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='virtio-disk0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config' index='1'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='sata0-0-0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pcie.0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.8'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.9'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.10'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.11'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.12'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.13'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.14'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.15'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.16'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.17'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.18'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.19'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.20'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.21'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.22'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.23'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.24'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.25'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='pci.26'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='usb'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='ide'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:0f:38:06'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='tap39a40677-39'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='net0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:d9:3f:77'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='tap8125c0d6-4b'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='net2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:1c:aa:0c'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target dev='tapdcb12fb0-34'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='net3'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <console type='pty' tty='/dev/pts/1'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <source path='/dev/pts/1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='input0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='input1'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='input2'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='video0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='watchdog0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </watchdog>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='balloon0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <alias name='rng0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <label>system_u:system_r:svirt_t:s0:c432,c658</label>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c432,c658</imagelabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <label>+107:+107</label>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.332 252257 WARNING nova.virt.libvirt.driver [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Detaching interface fa:16:3e:bb:dd:f1 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapbb9bfd78-48' not found.#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.333 252257 DEBUG nova.virt.libvirt.vif [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.333 252257 DEBUG nova.network.os_vif_util [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "address": "fa:16:3e:bb:dd:f1", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9bfd78-48", "ovs_interfaceid": "bb9bfd78-483e-4c5b-a548-8e4b77160086", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.334 252257 DEBUG nova.network.os_vif_util [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.334 252257 DEBUG os_vif [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.337 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb9bfd78-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.337 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.337 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.338 252257 INFO nova.scheduler.client.report [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Deleted allocations for instance 487e2d0b-cbfe-462e-b702-86a5164357d8#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.339 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbb0135-a8a6-4a31-8e82-9622ca15223f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.341 252257 INFO os_vif [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:dd:f1,bridge_name='br-int',has_traffic_filtering=True,id=bb9bfd78-483e-4c5b-a548-8e4b77160086,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9bfd78-48')#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.343 252257 DEBUG nova.virt.libvirt.guest [req-3a8a827a-0096-46be-975c-f9791aafe95a req-a5b0a0b0-d6e0-445e-b511-42172c4a3d25 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:29</nova:creationTime>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    <nova:port uuid="dcb12fb0-34e9-47f5-8054-b56b5145a57a">
Nov 29 03:01:29 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:29 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:29 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.346 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.368 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3266e9a3-e916-4268-b41a-bd1cd3b6a1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.372 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5206ac6b-5073-4934-9f54-98db181d85c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.400 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe8c15a-b350-48c7-9e54-e690af297d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.415 252257 DEBUG oslo_concurrency.lockutils [None req-b478c115-c372-4170-8532-b9f2f05a6a4e 8d4e5ab1ae494327abcb3693ba332586 6fce027870d041328a9b9968bfe90665 - - default default] Lock "487e2d0b-cbfe-462e-b702-86a5164357d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.418 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9038b536-8259-4b4b-b802-8bc807b29c39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 41713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294114, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.438 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c39bd230-b6c8-4568-8717-707db2039d28]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294115, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294115, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.439 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.441 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.442 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.443 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.443 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.443 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.444 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.571 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.572 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.572 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.572 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.572 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.573 252257 INFO nova.compute.manager [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Terminating instance#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.574 252257 DEBUG nova.compute.manager [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:29 np0005539563 kernel: tap39a40677-39 (unregistering): left promiscuous mode
Nov 29 03:01:29 np0005539563 NetworkManager[48981]: <info>  [1764403289.6311] device (tap39a40677-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00216|binding|INFO|Releasing lport 39a40677-39fb-46af-8988-f2b8b26d7512 from this chassis (sb_readonly=0)
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00217|binding|INFO|Setting lport 39a40677-39fb-46af-8988-f2b8b26d7512 down in Southbound
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.636 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00218|binding|INFO|Removing iface tap39a40677-39 ovn-installed in OVS
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.640 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 kernel: tap8125c0d6-4b (unregistering): left promiscuous mode
Nov 29 03:01:29 np0005539563 NetworkManager[48981]: <info>  [1764403289.6653] device (tap8125c0d6-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00219|binding|INFO|Releasing lport 8125c0d6-4bae-4aba-aad0-d28ea5b81901 from this chassis (sb_readonly=1)
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00220|binding|INFO|Removing iface tap8125c0d6-4b ovn-installed in OVS
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00221|if_status|INFO|Dropped 2 log messages in last 431 seconds (most recently, 431 seconds ago) due to excessive rate
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00222|if_status|INFO|Not setting lport 8125c0d6-4bae-4aba-aad0-d28ea5b81901 down as sb is readonly
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.678 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.715 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 kernel: tapdcb12fb0-34 (unregistering): left promiscuous mode
Nov 29 03:01:29 np0005539563 NetworkManager[48981]: <info>  [1764403289.7272] device (tapdcb12fb0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.731 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 29 03:01:29 np0005539563 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003c.scope: Consumed 17.314s CPU time.
Nov 29 03:01:29 np0005539563 systemd-machined[213024]: Machine qemu-26-instance-0000003c terminated.
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.823 252257 INFO nova.network.neutron [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Port bb9bfd78-483e-4c5b-a548-8e4b77160086 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:01:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 252 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 258 op/s
Nov 29 03:01:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:29Z|00223|binding|INFO|Setting lport 8125c0d6-4bae-4aba-aad0-d28ea5b81901 down in Southbound
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.939 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:38:06 10.100.0.4'], port_security=['fa:16:3e:0f:38:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07861448-e4b3-4a77-b3c7-b646bd0a5688', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39a40677-39fb-46af-8988-f2b8b26d7512) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.940 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39a40677-39fb-46af-8988-f2b8b26d7512 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.943 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.957 252257 DEBUG nova.compute.manager [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-deleted-dcb12fb0-34e9-47f5-8054-b56b5145a57a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.958 252257 INFO nova.compute.manager [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Neutron deleted interface dcb12fb0-34e9-47f5-8054-b56b5145a57a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.958 252257 DEBUG nova.network.neutron [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.959 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:3f:77 10.100.0.7'], port_security=['fa:16:3e:d9:3f:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8125c0d6-4bae-4aba-aad0-d28ea5b81901) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.959 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf4aa34-af31-48ad-a446-28119237c083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.971 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:29 np0005539563 nova_compute[252253]: 2025-11-29 08:01:29.972 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:29 np0005539563 kernel: tap39a40677-39: entered promiscuous mode
Nov 29 03:01:29 np0005539563 kernel: tap39a40677-39 (unregistering): left promiscuous mode
Nov 29 03:01:29 np0005539563 NetworkManager[48981]: <info>  [1764403289.9953] manager: (tap39a40677-39): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.994 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[344b06a0-6242-40af-b91c-7ecbeeb4c63e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:29.998 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[319c6a70-8f84-46c0-a7b3-0b007881f4a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 NetworkManager[48981]: <info>  [1764403290.0055] manager: (tap8125c0d6-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00224|binding|INFO|Claiming lport 39a40677-39fb-46af-8988-f2b8b26d7512 for this chassis.
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.005 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00225|binding|INFO|39a40677-39fb-46af-8988-f2b8b26d7512: Claiming fa:16:3e:0f:38:06 10.100.0.4
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.013 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:38:06 10.100.0.4'], port_security=['fa:16:3e:0f:38:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07861448-e4b3-4a77-b3c7-b646bd0a5688', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39a40677-39fb-46af-8988-f2b8b26d7512) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:30 np0005539563 NetworkManager[48981]: <info>  [1764403290.0176] manager: (tapdcb12fb0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.028 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[46d8ba16-1e49-44d6-acb4-2d7d9914f774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00226|binding|INFO|Setting lport 39a40677-39fb-46af-8988-f2b8b26d7512 ovn-installed in OVS
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00227|binding|INFO|Setting lport 39a40677-39fb-46af-8988-f2b8b26d7512 up in Southbound
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00228|binding|INFO|Releasing lport 39a40677-39fb-46af-8988-f2b8b26d7512 from this chassis (sb_readonly=1)
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00229|binding|INFO|Removing iface tap39a40677-39 ovn-installed in OVS
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.036 252257 DEBUG nova.objects.instance [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'system_metadata' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.040 252257 INFO nova.virt.libvirt.driver [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Instance destroyed successfully.#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.040 252257 DEBUG nova.objects.instance [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'resources' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.046 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[02d45bd6-6093-4d3c-9065-01f049c69ba5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 41713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294168, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.057 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.062 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[810904b7-7e8c-4ce0-a0b1-5ee03ad4729c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294172, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294172, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.063 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.064 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.073 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.074 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.074 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.074 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.075 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.076 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8125c0d6-4bae-4aba-aad0-d28ea5b81901 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.077 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.090 252257 DEBUG nova.objects.instance [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lazy-loading 'flavor' on Instance uuid c1396d33-3741-4e6a-acdf-79ac9f076e53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.090 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[23fccba6-0fdd-45ad-91dd-b9229db9d475]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.110 252257 DEBUG nova.virt.libvirt.vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.111 252257 DEBUG nova.network.os_vif_util [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.111 252257 DEBUG nova.network.os_vif_util [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.112 252257 DEBUG os_vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.114 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.115 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a40677-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.117 252257 DEBUG nova.virt.libvirt.vif [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.117 252257 DEBUG nova.network.os_vif_util [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.118 252257 DEBUG nova.network.os_vif_util [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.120 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00230|binding|INFO|Releasing lport 39a40677-39fb-46af-8988-f2b8b26d7512 from this chassis (sb_readonly=0)
Nov 29 03:01:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:01:30Z|00231|binding|INFO|Setting lport 39a40677-39fb-46af-8988-f2b8b26d7512 down in Southbound
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.123 252257 DEBUG nova.virt.libvirt.guest [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1c:aa:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdcb12fb0-34"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.124 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.126 252257 DEBUG nova.virt.libvirt.driver [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Attempting to detach device tapdcb12fb0-34 from instance c1396d33-3741-4e6a-acdf-79ac9f076e53 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.126 252257 DEBUG nova.virt.libvirt.guest [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:1c:aa:0c"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <target dev="tapdcb12fb0-34"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:01:30 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.127 252257 INFO os_vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:38:06,bridge_name='br-int',has_traffic_filtering=True,id=39a40677-39fb-46af-8988-f2b8b26d7512,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39a40677-39')#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.128 252257 DEBUG nova.virt.libvirt.vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.128 252257 DEBUG nova.network.os_vif_util [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.127 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[538c95f0-05cd-4eac-9a1a-69afb2cdb436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.129 252257 DEBUG nova.network.os_vif_util [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.129 252257 DEBUG os_vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.129 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:38:06 10.100.0.4'], port_security=['fa:16:3e:0f:38:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c1396d33-3741-4e6a-acdf-79ac9f076e53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07861448-e4b3-4a77-b3c7-b646bd0a5688', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39a40677-39fb-46af-8988-f2b8b26d7512) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.130 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8125c0d6-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.132 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d98c8419-27ef-467d-8cc1-2671c50fc5a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.133 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.183 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.186 252257 DEBUG nova.virt.libvirt.guest [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1c:aa:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdcb12fb0-34"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.187 252257 INFO os_vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:3f:77,bridge_name='br-int',has_traffic_filtering=True,id=8125c0d6-4bae-4aba-aad0-d28ea5b81901,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8125c0d6-4b')#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.188 252257 DEBUG nova.virt.libvirt.vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.188 252257 DEBUG nova.network.os_vif_util [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.189 252257 DEBUG nova.network.os_vif_util [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.189 252257 DEBUG os_vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.191 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcb12fb0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.194 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.195 252257 DEBUG nova.virt.libvirt.guest [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1c:aa:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdcb12fb0-34"/></interface>not found in domain: <domain type='kvm'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <name>instance-0000003c</name>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <uuid>c1396d33-3741-4e6a-acdf-79ac9f076e53</uuid>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:00:15</nova:creationTime>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <entry name='serial'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <entry name='uuid'>c1396d33-3741-4e6a-acdf-79ac9f076e53</entry>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='partial'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <model fallback='allow'>Nehalem</model>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/c1396d33-3741-4e6a-acdf-79ac9f076e53_disk.config'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:0f:38:06'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target dev='tap39a40677-39'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:d9:3f:77'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target dev='tap8125c0d6-4b'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <console type='pty'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53/console.log' append='off'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:01:30 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:01:30 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.195 252257 INFO nova.virt.libvirt.driver [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Successfully detached device tapdcb12fb0-34 from instance c1396d33-3741-4e6a-acdf-79ac9f076e53 from the persistent domain config.#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.196 252257 DEBUG nova.virt.libvirt.vif [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-369367062',display_name='tempest-AttachInterfacesTestJSON-server-369367062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-369367062',id=60,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN4XJ5dp3QWFPLQWQylhhe+/KFke4AkHx/49lzYLtLAvoyd43SYV4g6G3CvTeKngJl9mIKoY6Wa7loMkfYFM9VG1u2APde5GcwnpsulT1K67oJ78FbuoOR7n1STbCCVMFQ==',key_name='tempest-keypair-1442923068',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-vk1d0e79',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=c1396d33-3741-4e6a-acdf-79ac9f076e53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.196 252257 DEBUG nova.network.os_vif_util [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converting VIF {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.197 252257 DEBUG nova.network.os_vif_util [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.197 252257 DEBUG os_vif [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.199 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.199 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcb12fb0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.199 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.200 252257 INFO os_vif [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34')#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.200 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[15c2d5a5-cb04-4f01-ad4b-ca2f4e5c616e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.218 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9318a776-22ac-418d-b8f5-6d973b191dd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618501, 'reachable_time': 41713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294186, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.226 252257 INFO os_vif [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:aa:0c,bridge_name='br-int',has_traffic_filtering=True,id=dcb12fb0-34e9-47f5-8054-b56b5145a57a,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdcb12fb0-34')#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.227 252257 DEBUG nova.virt.libvirt.guest [req-d545507e-a93f-47c6-a918-65d81dbd18dd req-74a2e802-c0a9-43f0-9295-16294e526e5a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:name>tempest-AttachInterfacesTestJSON-server-369367062</nova:name>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:01:30</nova:creationTime>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:port uuid="39a40677-39fb-46af-8988-f2b8b26d7512">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    <nova:port uuid="8125c0d6-4bae-4aba-aad0-d28ea5b81901">
Nov 29 03:01:30 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:01:30 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:01:30 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:01:30 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.236 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52260fe9-5f3f-4d89-9926-355d5227556a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618513, 'tstamp': 618513}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294202, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618516, 'tstamp': 618516}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294202, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.237 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.239 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.240 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.240 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.240 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.240 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.241 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.242 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39a40677-39fb-46af-8988-f2b8b26d7512 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.243 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 738e99b4-b58e-4eff-b209-c4aa3748c994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.244 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[379a6400-8bb4-47c2-ad53-4df1a7923a20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.244 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace which is not needed anymore#033[00m
Nov 29 03:01:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.382 252257 DEBUG nova.compute.manager [req-aa95f016-bb26-4074-b785-cc70e78eea79 req-a079899b-5843-42cd-ad96-878a13ffb0a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.383 252257 DEBUG oslo_concurrency.lockutils [req-aa95f016-bb26-4074-b785-cc70e78eea79 req-a079899b-5843-42cd-ad96-878a13ffb0a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.383 252257 DEBUG oslo_concurrency.lockutils [req-aa95f016-bb26-4074-b785-cc70e78eea79 req-a079899b-5843-42cd-ad96-878a13ffb0a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.383 252257 DEBUG oslo_concurrency.lockutils [req-aa95f016-bb26-4074-b785-cc70e78eea79 req-a079899b-5843-42cd-ad96-878a13ffb0a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.383 252257 DEBUG nova.compute.manager [req-aa95f016-bb26-4074-b785-cc70e78eea79 req-a079899b-5843-42cd-ad96-878a13ffb0a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-unplugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.383 252257 DEBUG nova.compute.manager [req-aa95f016-bb26-4074-b785-cc70e78eea79 req-a079899b-5843-42cd-ad96-878a13ffb0a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:30 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [NOTICE]   (292031) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:30 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [NOTICE]   (292031) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:30 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [WARNING]  (292031) : Exiting Master process...
Nov 29 03:01:30 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [WARNING]  (292031) : Exiting Master process...
Nov 29 03:01:30 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [ALERT]    (292031) : Current worker (292034) exited with code 143 (Terminated)
Nov 29 03:01:30 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[292007]: [WARNING]  (292031) : All workers exited. Exiting... (0)
Nov 29 03:01:30 np0005539563 systemd[1]: libpod-b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5.scope: Deactivated successfully.
Nov 29 03:01:30 np0005539563 podman[294223]: 2025-11-29 08:01:30.397877337 +0000 UTC m=+0.059596167 container died b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:01:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6130a907efcac48fa9d880e84d1dc7aa899e4a49fc5b08c8e6c608304b0a6299-merged.mount: Deactivated successfully.
Nov 29 03:01:30 np0005539563 podman[294223]: 2025-11-29 08:01:30.436819041 +0000 UTC m=+0.098537871 container cleanup b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:01:30 np0005539563 systemd[1]: libpod-conmon-b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5.scope: Deactivated successfully.
Nov 29 03:01:30 np0005539563 podman[294255]: 2025-11-29 08:01:30.497216708 +0000 UTC m=+0.040802937 container remove b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.502 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c0feeb76-7885-42e5-8fb5-ceff3e40029c]: (4, ('Sat Nov 29 08:01:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5)\nb8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5\nSat Nov 29 08:01:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (b8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5)\nb8c26745ac2ea026b1acb2f9c0c0e1bf073b8f9dee3f87c5a3393c25c47212d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.504 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7910663d-b154-4fd1-9689-f682dabb1404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.504 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.506 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 kernel: tap738e99b4-b0: left promiscuous mode
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.523 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d490c9bf-51df-42f4-aadc-ee110d4c4ced]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.540 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[44871b2e-2964-4149-af03-681f3e6e7b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.541 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[393ad713-1fb7-4f6e-83eb-0029f84f7e57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.562 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bdee9628-dd25-40e9-963c-db54a1654b85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618493, 'reachable_time': 44850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294271, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.564 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.565 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b614e3-596d-4d86-832f-649aec7e5c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.565 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39a40677-39fb-46af-8988-f2b8b26d7512 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.566 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 738e99b4-b58e-4eff-b209-c4aa3748c994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:30 np0005539563 systemd[1]: run-netns-ovnmeta\x2d738e99b4\x2db58e\x2d4eff\x2db209\x2dc4aa3748c994.mount: Deactivated successfully.
Nov 29 03:01:30 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:30.567 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ed95bb04-9a5c-4cbc-a362-8afff8c2b2e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.674 252257 INFO nova.virt.libvirt.driver [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Deleting instance files /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53_del#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.675 252257 INFO nova.virt.libvirt.driver [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Deletion of /var/lib/nova/instances/c1396d33-3741-4e6a-acdf-79ac9f076e53_del complete#033[00m
Nov 29 03:01:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 03:01:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.759 252257 INFO nova.compute.manager [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.760 252257 DEBUG oslo.service.loopingcall [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.760 252257 DEBUG nova.compute.manager [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:30 np0005539563 nova_compute[252253]: 2025-11-29 08:01:30.760 252257 DEBUG nova.network.neutron [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:31 np0005539563 nova_compute[252253]: 2025-11-29 08:01:31.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 242 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 2.5 MiB/s wr, 179 op/s
Nov 29 03:01:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.373 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.373 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.374 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.374 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.375 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-unplugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.375 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.375 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.376 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.376 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.377 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.377 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.378 252257 WARNING nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.378 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.378 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.379 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.379 252257 DEBUG oslo_concurrency.lockutils [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.380 252257 DEBUG nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.380 252257 WARNING nova.compute.manager [req-71a240ce-b416-4904-9f38-abd756ec1077 req-1235e5e1-aae0-42fc-b528-055c1da82ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.541 252257 DEBUG nova.compute.manager [req-786f5095-77ae-4e9f-9023-b7164de68a44 req-2369fb7b-d56e-4d0f-83d9-e8e2c66890f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.541 252257 DEBUG oslo_concurrency.lockutils [req-786f5095-77ae-4e9f-9023-b7164de68a44 req-2369fb7b-d56e-4d0f-83d9-e8e2c66890f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.542 252257 DEBUG oslo_concurrency.lockutils [req-786f5095-77ae-4e9f-9023-b7164de68a44 req-2369fb7b-d56e-4d0f-83d9-e8e2c66890f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.542 252257 DEBUG oslo_concurrency.lockutils [req-786f5095-77ae-4e9f-9023-b7164de68a44 req-2369fb7b-d56e-4d0f-83d9-e8e2c66890f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.542 252257 DEBUG nova.compute.manager [req-786f5095-77ae-4e9f-9023-b7164de68a44 req-2369fb7b-d56e-4d0f-83d9-e8e2c66890f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:32 np0005539563 nova_compute[252253]: 2025-11-29 08:01:32.542 252257 WARNING nova.compute.manager [req-786f5095-77ae-4e9f-9023-b7164de68a44 req-2369fb7b-d56e-4d0f-83d9-e8e2c66890f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-8125c0d6-4bae-4aba-aad0-d28ea5b81901 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:01:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:32.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 223 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.2 MiB/s wr, 145 op/s
Nov 29 03:01:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:01:34.243 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:34.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539563 nova_compute[252253]: 2025-11-29 08:01:34.347 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403279.3458314, c19d4af3-8322-42c3-b55b-5b13d720e3fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:34 np0005539563 nova_compute[252253]: 2025-11-29 08:01:34.347 252257 INFO nova.compute.manager [-] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:01:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/621022494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:34.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.195 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539563 podman[294275]: 2025-11-29 08:01:35.503241179 +0000 UTC m=+0.056040999 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.526 252257 DEBUG nova.compute.manager [req-7d41a182-eb61-4daf-8b88-de1ae6b908d8 req-d813ab67-b866-45dd-a1b9-299a9a95ce9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.526 252257 DEBUG oslo_concurrency.lockutils [req-7d41a182-eb61-4daf-8b88-de1ae6b908d8 req-d813ab67-b866-45dd-a1b9-299a9a95ce9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.527 252257 DEBUG oslo_concurrency.lockutils [req-7d41a182-eb61-4daf-8b88-de1ae6b908d8 req-d813ab67-b866-45dd-a1b9-299a9a95ce9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.527 252257 DEBUG oslo_concurrency.lockutils [req-7d41a182-eb61-4daf-8b88-de1ae6b908d8 req-d813ab67-b866-45dd-a1b9-299a9a95ce9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.527 252257 DEBUG nova.compute.manager [req-7d41a182-eb61-4daf-8b88-de1ae6b908d8 req-d813ab67-b866-45dd-a1b9-299a9a95ce9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.527 252257 WARNING nova.compute.manager [req-7d41a182-eb61-4daf-8b88-de1ae6b908d8 req-d813ab67-b866-45dd-a1b9-299a9a95ce9a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:01:35 np0005539563 podman[294276]: 2025-11-29 08:01:35.539785399 +0000 UTC m=+0.088601122 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 03:01:35 np0005539563 podman[294277]: 2025-11-29 08:01:35.583520664 +0000 UTC m=+0.125351947 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.600 252257 DEBUG nova.compute.manager [None req-947a08da-70b2-4360-9dca-4a0d22c0d04b - - - - - -] [instance: c19d4af3-8322-42c3-b55b-5b13d720e3fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.650 252257 DEBUG nova.network.neutron [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "address": "fa:16:3e:d9:3f:77", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8125c0d6-4b", "ovs_interfaceid": "8125c0d6-4bae-4aba-aad0-d28ea5b81901", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.670 252257 DEBUG oslo_concurrency.lockutils [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-c1396d33-3741-4e6a-acdf-79ac9f076e53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.698 252257 DEBUG oslo_concurrency.lockutils [None req-6a1c0b5c-87b6-4a7c-8bbf-0d6905253308 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-c1396d33-3741-4e6a-acdf-79ac9f076e53-bb9bfd78-483e-4c5b-a548-8e4b77160086" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 8.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.704 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:35 np0005539563 nova_compute[252253]: 2025-11-29 08:01:35.704 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:36.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.884 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.885 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.885 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.886 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:01:36 np0005539563 nova_compute[252253]: 2025-11-29 08:01:36.886 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23151161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:37 np0005539563 nova_compute[252253]: 2025-11-29 08:01:37.600 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:37 np0005539563 nova_compute[252253]: 2025-11-29 08:01:37.814 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:37 np0005539563 nova_compute[252253]: 2025-11-29 08:01:37.816 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4560MB free_disk=20.921966552734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:01:37 np0005539563 nova_compute[252253]: 2025-11-29 08:01:37.817 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:37 np0005539563 nova_compute[252253]: 2025-11-29 08:01:37.817 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Nov 29 03:01:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:38.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.378 252257 DEBUG nova.compute.manager [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.378 252257 DEBUG oslo_concurrency.lockutils [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.379 252257 DEBUG oslo_concurrency.lockutils [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.379 252257 DEBUG oslo_concurrency.lockutils [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.379 252257 DEBUG nova.compute.manager [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-unplugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.379 252257 DEBUG nova.compute.manager [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-unplugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.380 252257 DEBUG nova.compute.manager [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-deleted-8125c0d6-4bae-4aba-aad0-d28ea5b81901 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.380 252257 INFO nova.compute.manager [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Neutron deleted interface 8125c0d6-4bae-4aba-aad0-d28ea5b81901; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.380 252257 DEBUG nova.network.neutron [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [{"id": "39a40677-39fb-46af-8988-f2b8b26d7512", "address": "fa:16:3e:0f:38:06", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39a40677-39", "ovs_interfaceid": "39a40677-39fb-46af-8988-f2b8b26d7512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "address": "fa:16:3e:1c:aa:0c", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb12fb0-34", "ovs_interfaceid": "dcb12fb0-34e9-47f5-8054-b56b5145a57a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.444 252257 DEBUG nova.compute.manager [req-66015057-5a4b-4e34-adc7-ef7e2c667c00 req-a348831a-e3ef-48fc-80ef-33b83b8c3ddd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Detach interface failed, port_id=8125c0d6-4bae-4aba-aad0-d28ea5b81901, reason: Instance c1396d33-3741-4e6a-acdf-79ac9f076e53 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.701 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance c1396d33-3741-4e6a-acdf-79ac9f076e53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.702 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.702 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.846 252257 DEBUG nova.network.neutron [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.859 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.903 252257 INFO nova.compute.manager [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Took 9.14 seconds to deallocate network for instance.#033[00m
Nov 29 03:01:39 np0005539563 nova_compute[252253]: 2025-11-29 08:01:39.972 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:40 np0005539563 nova_compute[252253]: 2025-11-29 08:01:40.232 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:40.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1829546118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:40 np0005539563 nova_compute[252253]: 2025-11-29 08:01:40.513 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:40 np0005539563 nova_compute[252253]: 2025-11-29 08:01:40.522 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:40.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:41 np0005539563 nova_compute[252253]: 2025-11-29 08:01:41.304 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 50 KiB/s wr, 36 op/s
Nov 29 03:01:42 np0005539563 nova_compute[252253]: 2025-11-29 08:01:42.022 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403287.0214028, 487e2d0b-cbfe-462e-b702-86a5164357d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:42 np0005539563 nova_compute[252253]: 2025-11-29 08:01:42.022 252257 INFO nova.compute.manager [-] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:01:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:42.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.777 252257 DEBUG nova.compute.manager [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.777 252257 DEBUG oslo_concurrency.lockutils [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.778 252257 DEBUG oslo_concurrency.lockutils [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.778 252257 DEBUG oslo_concurrency.lockutils [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.779 252257 DEBUG nova.compute.manager [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] No waiting events found dispatching network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.779 252257 WARNING nova.compute.manager [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received unexpected event network-vif-plugged-39a40677-39fb-46af-8988-f2b8b26d7512 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.780 252257 DEBUG nova.compute.manager [req-3d5801c3-6609-45a5-9cba-94b1b544c595 req-578f2a41-a7b7-45ea-9776-620c09b7d414 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Received event network-vif-deleted-39a40677-39fb-46af-8988-f2b8b26d7512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.860 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.864 252257 DEBUG nova.compute.manager [None req-7538ed63-d5b7-4b9b-8d83-a1de45bfe525 - - - - - -] [instance: 487e2d0b-cbfe-462e-b702-86a5164357d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 46 KiB/s wr, 35 op/s
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.919 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.920 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.920 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 3.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.921 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.921 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.943 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:43 np0005539563 nova_compute[252253]: 2025-11-29 08:01:43.972 252257 DEBUG oslo_concurrency.processutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:44.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947612651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.437 252257 DEBUG oslo_concurrency.processutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.443 252257 DEBUG nova.compute.provider_tree [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.459 252257 DEBUG nova.scheduler.client.report [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.512 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.549 252257 INFO nova.scheduler.client.report [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Deleted allocations for instance c1396d33-3741-4e6a-acdf-79ac9f076e53
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.688 252257 DEBUG oslo_concurrency.lockutils [None req-a7d97673-cabe-4dd2-84c3-b79eb510654f a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "c1396d33-3741-4e6a-acdf-79ac9f076e53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.957 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:44 np0005539563 nova_compute[252253]: 2025-11-29 08:01:44.958 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:01:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.038 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403290.0361648, c1396d33-3741-4e6a-acdf-79ac9f076e53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.039 252257 INFO nova.compute.manager [-] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] VM Stopped (Lifecycle Event)
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.203 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.204 252257 DEBUG nova.compute.manager [None req-41b5605b-9e4a-45ac-a581-e24064c7b35d - - - - - -] [instance: c1396d33-3741-4e6a-acdf-79ac9f076e53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.204 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.205 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.205 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.205 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.205 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.206 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:01:45 np0005539563 nova_compute[252253]: 2025-11-29 08:01:45.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 106 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 15 KiB/s wr, 51 op/s
Nov 29 03:01:46 np0005539563 nova_compute[252253]: 2025-11-29 08:01:46.306 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:47 np0005539563 nova_compute[252253]: 2025-11-29 08:01:47.398 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:47 np0005539563 nova_compute[252253]: 2025-11-29 08:01:47.633 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 106 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 14 KiB/s wr, 35 op/s
Nov 29 03:01:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:48.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 88 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 15 KiB/s wr, 48 op/s
Nov 29 03:01:50 np0005539563 nova_compute[252253]: 2025-11-29 08:01:50.241 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:50.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:51 np0005539563 nova_compute[252253]: 2025-11-29 08:01:51.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Nov 29 03:01:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:52.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 29 03:01:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 103 op/s
Nov 29 03:01:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:54.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 03:01:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 29 03:01:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:55 np0005539563 nova_compute[252253]: 2025-11-29 08:01:55.244 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 29 03:01:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 03:01:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 29 03:01:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 118 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.0 MiB/s wr, 137 op/s
Nov 29 03:01:56 np0005539563 nova_compute[252253]: 2025-11-29 08:01:56.310 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:01:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:01:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 29 03:01:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 03:01:56 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 29 03:01:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:57.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 29 03:01:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 03:01:57 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 29 03:01:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 118 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 70 op/s
Nov 29 03:01:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:01:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:59.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 137 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 117 op/s
Nov 29 03:02:00 np0005539563 nova_compute[252253]: 2025-11-29 08:02:00.299 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:01.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:01 np0005539563 nova_compute[252253]: 2025-11-29 08:02:01.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 128 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 5.0 MiB/s wr, 223 op/s
Nov 29 03:02:01 np0005539563 nova_compute[252253]: 2025-11-29 08:02:01.938 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:01 np0005539563 nova_compute[252253]: 2025-11-29 08:02:01.938 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:01 np0005539563 nova_compute[252253]: 2025-11-29 08:02:01.958 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:02:02 np0005539563 nova_compute[252253]: 2025-11-29 08:02:02.090 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:02 np0005539563 nova_compute[252253]: 2025-11-29 08:02:02.091 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:02 np0005539563 nova_compute[252253]: 2025-11-29 08:02:02.098 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:02:02 np0005539563 nova_compute[252253]: 2025-11-29 08:02:02.098 252257 INFO nova.compute.claims [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:02:02 np0005539563 nova_compute[252253]: 2025-11-29 08:02:02.271 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:02.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:03.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595788834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.090 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.097 252257 DEBUG nova.compute.provider_tree [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.126 252257 DEBUG nova.scheduler.client.report [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.165 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.166 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.248 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.248 252257 DEBUG nova.network.neutron [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.286 252257 INFO nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.305 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.422 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.423 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.423 252257 INFO nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Creating image(s)
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.450 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.477 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.539 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.543 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.572 252257 DEBUG nova.policy [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 521b2999-c68c-443c-bafd-cd2537bbc47e does not exist
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.616 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.617 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c2cc3192-4015-4edd-a895-8fb2b56b9dda does not exist
Nov 29 03:02:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9b827983-16ac-43a0-920c-b8c338f53213 does not exist
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.619 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.620 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:02:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.660 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:03 np0005539563 nova_compute[252253]: 2025-11-29 08:02:03.664 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0791bdff-16d7-4626-acea-1361fdb70652_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 121 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 3.8 MiB/s wr, 182 op/s
Nov 29 03:02:04 np0005539563 podman[294908]: 2025-11-29 08:02:04.15515898 +0000 UTC m=+0.023915629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:04 np0005539563 podman[294908]: 2025-11-29 08:02:04.384638268 +0000 UTC m=+0.253394887 container create a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:02:04 np0005539563 systemd[1]: Started libpod-conmon-a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3.scope.
Nov 29 03:02:04 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:02:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:04 np0005539563 podman[294908]: 2025-11-29 08:02:04.554033227 +0000 UTC m=+0.422789866 container init a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:02:04 np0005539563 podman[294908]: 2025-11-29 08:02:04.56190193 +0000 UTC m=+0.430658549 container start a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:04 np0005539563 infallible_murdock[294925]: 167 167
Nov 29 03:02:04 np0005539563 systemd[1]: libpod-a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3.scope: Deactivated successfully.
Nov 29 03:02:04 np0005539563 podman[294908]: 2025-11-29 08:02:04.695628303 +0000 UTC m=+0.564384942 container attach a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:04 np0005539563 podman[294908]: 2025-11-29 08:02:04.696683922 +0000 UTC m=+0.565440531 container died a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:02:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-525cf309c5ee9f34a9d9202325df037398bfa4c3dcffcc8887c33b7114273578-merged.mount: Deactivated successfully.
Nov 29 03:02:04 np0005539563 nova_compute[252253]: 2025-11-29 08:02:04.838 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0791bdff-16d7-4626-acea-1361fdb70652_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:04.909 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:04.909 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:04.910 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:04 np0005539563 nova_compute[252253]: 2025-11-29 08:02:04.911 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] resizing rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:02:05 np0005539563 podman[294908]: 2025-11-29 08:02:05.013238858 +0000 UTC m=+0.881995477 container remove a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:02:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:05.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:05 np0005539563 systemd[1]: libpod-conmon-a7c5cde11b0a844cd8e640d0436d535bfe60c4c3bb2d020d5d9a6467e73257c3.scope: Deactivated successfully.
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.119 252257 DEBUG nova.objects.instance [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'migration_context' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.155 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.155 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Ensure instance console log exists: /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.156 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.156 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.156 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:05 np0005539563 podman[295025]: 2025-11-29 08:02:05.173179601 +0000 UTC m=+0.039224994 container create fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jackson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:02:05 np0005539563 systemd[1]: Started libpod-conmon-fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0.scope.
Nov 29 03:02:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc35b2ce3a16c3fd962251a4f8ff8162dd49660620f4dab480c5188f7361dd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc35b2ce3a16c3fd962251a4f8ff8162dd49660620f4dab480c5188f7361dd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc35b2ce3a16c3fd962251a4f8ff8162dd49660620f4dab480c5188f7361dd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc35b2ce3a16c3fd962251a4f8ff8162dd49660620f4dab480c5188f7361dd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc35b2ce3a16c3fd962251a4f8ff8162dd49660620f4dab480c5188f7361dd1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:05 np0005539563 podman[295025]: 2025-11-29 08:02:05.157844346 +0000 UTC m=+0.023889759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:05 np0005539563 podman[295025]: 2025-11-29 08:02:05.253087196 +0000 UTC m=+0.119132609 container init fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jackson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:05 np0005539563 podman[295025]: 2025-11-29 08:02:05.259625933 +0000 UTC m=+0.125671316 container start fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:02:05 np0005539563 podman[295025]: 2025-11-29 08:02:05.262713677 +0000 UTC m=+0.128759090 container attach fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:02:05 np0005539563 nova_compute[252253]: 2025-11-29 08:02:05.303 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 29 03:02:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 03:02:05 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 29 03:02:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 87 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 5.7 MiB/s wr, 230 op/s
Nov 29 03:02:06 np0005539563 heuristic_jackson[295042]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:02:06 np0005539563 heuristic_jackson[295042]: --> relative data size: 1.0
Nov 29 03:02:06 np0005539563 heuristic_jackson[295042]: --> All data devices are unavailable
Nov 29 03:02:06 np0005539563 systemd[1]: libpod-fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0.scope: Deactivated successfully.
Nov 29 03:02:06 np0005539563 podman[295025]: 2025-11-29 08:02:06.09437034 +0000 UTC m=+0.960415733 container died fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:02:06 np0005539563 nova_compute[252253]: 2025-11-29 08:02:06.098 252257 DEBUG nova.network.neutron [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Successfully created port: 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:02:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3dc35b2ce3a16c3fd962251a4f8ff8162dd49660620f4dab480c5188f7361dd1-merged.mount: Deactivated successfully.
Nov 29 03:02:06 np0005539563 podman[295025]: 2025-11-29 08:02:06.164193401 +0000 UTC m=+1.030238794 container remove fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:02:06 np0005539563 systemd[1]: libpod-conmon-fe383fa3b9f50dc2e1c8b7db1ac351b47fc5f29c364cfeb09e8ae65ad818c2b0.scope: Deactivated successfully.
Nov 29 03:02:06 np0005539563 podman[295057]: 2025-11-29 08:02:06.22797552 +0000 UTC m=+0.094463321 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:02:06 np0005539563 podman[295067]: 2025-11-29 08:02:06.233643243 +0000 UTC m=+0.099080976 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:02:06 np0005539563 podman[295068]: 2025-11-29 08:02:06.234336252 +0000 UTC m=+0.096146796 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:02:06 np0005539563 nova_compute[252253]: 2025-11-29 08:02:06.313 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.725238362 +0000 UTC m=+0.034940747 container create aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:02:06 np0005539563 systemd[1]: Started libpod-conmon-aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d.scope.
Nov 29 03:02:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.792695419 +0000 UTC m=+0.102397824 container init aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.799816803 +0000 UTC m=+0.109519188 container start aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.8037732 +0000 UTC m=+0.113475615 container attach aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:02:06 np0005539563 relaxed_murdock[295286]: 167 167
Nov 29 03:02:06 np0005539563 systemd[1]: libpod-aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d.scope: Deactivated successfully.
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.708591411 +0000 UTC m=+0.018293816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.805959129 +0000 UTC m=+0.115661514 container died aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:02:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1f6979e8a23909f4f6e681a03fa85a478e7a55b11231608f939a170df364c43f-merged.mount: Deactivated successfully.
Nov 29 03:02:06 np0005539563 podman[295270]: 2025-11-29 08:02:06.838492251 +0000 UTC m=+0.148194636 container remove aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:02:06 np0005539563 systemd[1]: libpod-conmon-aba9a1bf03bc494179381ca6d7b60e712269309260ba299e33598f69a5ee3e8d.scope: Deactivated successfully.
Nov 29 03:02:06 np0005539563 podman[295312]: 2025-11-29 08:02:06.999375709 +0000 UTC m=+0.041232378 container create 508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_roentgen, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:07.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:07 np0005539563 systemd[1]: Started libpod-conmon-508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c.scope.
Nov 29 03:02:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c42a90d8429a42b1b6a412539b226e80c716d794cb0c0bfd74cf3ad5ada8a50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c42a90d8429a42b1b6a412539b226e80c716d794cb0c0bfd74cf3ad5ada8a50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c42a90d8429a42b1b6a412539b226e80c716d794cb0c0bfd74cf3ad5ada8a50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c42a90d8429a42b1b6a412539b226e80c716d794cb0c0bfd74cf3ad5ada8a50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:07 np0005539563 podman[295312]: 2025-11-29 08:02:07.069235312 +0000 UTC m=+0.111092021 container init 508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_roentgen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:02:07 np0005539563 podman[295312]: 2025-11-29 08:02:07.07543287 +0000 UTC m=+0.117289539 container start 508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_roentgen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:02:07 np0005539563 podman[295312]: 2025-11-29 08:02:06.983275593 +0000 UTC m=+0.025132302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:07 np0005539563 podman[295312]: 2025-11-29 08:02:07.078614036 +0000 UTC m=+0.120470705 container attach 508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_roentgen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.510 252257 DEBUG nova.network.neutron [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Successfully updated port: 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.550 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.550 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.550 252257 DEBUG nova.network.neutron [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]: {
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:    "0": [
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:        {
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "devices": [
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "/dev/loop3"
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            ],
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "lv_name": "ceph_lv0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "lv_size": "7511998464",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "name": "ceph_lv0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "tags": {
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.cluster_name": "ceph",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.crush_device_class": "",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.encrypted": "0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.osd_id": "0",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.type": "block",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:                "ceph.vdo": "0"
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            },
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "type": "block",
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:            "vg_name": "ceph_vg0"
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:        }
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]:    ]
Nov 29 03:02:07 np0005539563 elated_roentgen[295328]: }
Nov 29 03:02:07 np0005539563 systemd[1]: libpod-508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c.scope: Deactivated successfully.
Nov 29 03:02:07 np0005539563 podman[295312]: 2025-11-29 08:02:07.827073645 +0000 UTC m=+0.868930314 container died 508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.862 252257 DEBUG nova.compute.manager [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.863 252257 DEBUG nova.compute.manager [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing instance network info cache due to event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:07 np0005539563 nova_compute[252253]: 2025-11-29 08:02:07.864 252257 DEBUG oslo_concurrency.lockutils [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 87 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 519 KiB/s rd, 4.6 MiB/s wr, 186 op/s
Nov 29 03:02:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:08.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:08 np0005539563 nova_compute[252253]: 2025-11-29 08:02:08.487 252257 DEBUG nova.network.neutron [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:02:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.792 252257 DEBUG nova.network.neutron [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8c42a90d8429a42b1b6a412539b226e80c716d794cb0c0bfd74cf3ad5ada8a50-merged.mount: Deactivated successfully.
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.823 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.823 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Instance network_info: |[{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.824 252257 DEBUG oslo_concurrency.lockutils [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.824 252257 DEBUG nova.network.neutron [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.827 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Start _get_guest_xml network_info=[{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.831 252257 WARNING nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.845 252257 DEBUG nova.virt.libvirt.host [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.846 252257 DEBUG nova.virt.libvirt.host [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.857 252257 DEBUG nova.virt.libvirt.host [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.858 252257 DEBUG nova.virt.libvirt.host [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.860 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.860 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.861 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.861 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.861 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.862 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.862 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.862 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.863 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.863 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:02:09 np0005539563 podman[295312]: 2025-11-29 08:02:09.863605873 +0000 UTC m=+2.905462542 container remove 508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.863 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.864 252257 DEBUG nova.virt.hardware [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:02:09 np0005539563 nova_compute[252253]: 2025-11-29 08:02:09.869 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:09 np0005539563 systemd[1]: libpod-conmon-508e9576a9e18fd3f550ac1fb1d01cc5623feaae09f27de5f49b334af285808c.scope: Deactivated successfully.
Nov 29 03:02:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 88 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 454 KiB/s rd, 4.5 MiB/s wr, 172 op/s
Nov 29 03:02:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062387284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.306 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.334 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.339 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.364 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:10.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.40056317 +0000 UTC m=+0.035365538 container create 1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:02:10 np0005539563 systemd[1]: Started libpod-conmon-1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305.scope.
Nov 29 03:02:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.386082938 +0000 UTC m=+0.020885326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.585961723 +0000 UTC m=+0.220764121 container init 1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.600452557 +0000 UTC m=+0.235254925 container start 1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:02:10 np0005539563 hungry_payne[295546]: 167 167
Nov 29 03:02:10 np0005539563 systemd[1]: libpod-1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305.scope: Deactivated successfully.
Nov 29 03:02:10 np0005539563 conmon[295546]: conmon 1a01b85e1dfc8e5c60b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305.scope/container/memory.events
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.674311997 +0000 UTC m=+0.309114395 container attach 1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.674655696 +0000 UTC m=+0.309458064 container died 1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:02:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:02:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525321337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.753 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.756 252257 DEBUG nova.virt.libvirt.vif [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.757 252257 DEBUG nova.network.os_vif_util [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.759 252257 DEBUG nova.network.os_vif_util [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.761 252257 DEBUG nova.objects.instance [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:10.765 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.765 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:10.767 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.790 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <uuid>0791bdff-16d7-4626-acea-1361fdb70652</uuid>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <name>instance-00000042</name>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:name>tempest-tempest.common.compute-instance-1586373608</nova:name>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:02:09</nova:creationTime>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <nova:port uuid="65f4bda5-b4d3-4fea-b8e2-3856b51660b5">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <entry name="serial">0791bdff-16d7-4626-acea-1361fdb70652</entry>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <entry name="uuid">0791bdff-16d7-4626-acea-1361fdb70652</entry>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/0791bdff-16d7-4626-acea-1361fdb70652_disk">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/0791bdff-16d7-4626-acea-1361fdb70652_disk.config">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:26:eb:91"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <target dev="tap65f4bda5-b4"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/console.log" append="off"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:02:10 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:02:10 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:02:10 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:02:10 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.791 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Preparing to wait for external event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.792 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.792 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.792 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.793 252257 DEBUG nova.virt.libvirt.vif [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.793 252257 DEBUG nova.network.os_vif_util [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.794 252257 DEBUG nova.network.os_vif_util [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.795 252257 DEBUG os_vif [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.796 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.796 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.796 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.799 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.799 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f4bda5-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.799 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65f4bda5-b4, col_values=(('external_ids', {'iface-id': '65f4bda5-b4d3-4fea-b8e2-3856b51660b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:eb:91', 'vm-uuid': '0791bdff-16d7-4626-acea-1361fdb70652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.801 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539563 NetworkManager[48981]: <info>  [1764403330.8025] manager: (tap65f4bda5-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.804 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.810 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c1b60ee0e8aa53422003e27a3e629157dbce1fc2b6fc608fe1c5baaff87fbae3-merged.mount: Deactivated successfully.
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.811 252257 INFO os_vif [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4')#033[00m
Nov 29 03:02:10 np0005539563 podman[295528]: 2025-11-29 08:02:10.827533939 +0000 UTC m=+0.462336307 container remove 1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:02:10 np0005539563 systemd[1]: libpod-conmon-1a01b85e1dfc8e5c60b80cceef828d5e493ee3bf4ec45e0aa1247d02a23e0305.scope: Deactivated successfully.
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.876 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.877 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.878 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:26:eb:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.878 252257 INFO nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Using config drive#033[00m
Nov 29 03:02:10 np0005539563 nova_compute[252253]: 2025-11-29 08:02:10.906 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:11.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:11 np0005539563 podman[295613]: 2025-11-29 08:02:11.043796428 +0000 UTC m=+0.090782741 container create 943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:02:11 np0005539563 podman[295613]: 2025-11-29 08:02:10.972722282 +0000 UTC m=+0.019708615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:11 np0005539563 systemd[1]: Started libpod-conmon-943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e.scope.
Nov 29 03:02:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5d0009a940732461558252955cbc9315c367a0dcf60fc2aad87d58d07d6f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5d0009a940732461558252955cbc9315c367a0dcf60fc2aad87d58d07d6f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5d0009a940732461558252955cbc9315c367a0dcf60fc2aad87d58d07d6f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5d0009a940732461558252955cbc9315c367a0dcf60fc2aad87d58d07d6f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:11 np0005539563 nova_compute[252253]: 2025-11-29 08:02:11.314 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:11 np0005539563 podman[295613]: 2025-11-29 08:02:11.316008664 +0000 UTC m=+0.362995007 container init 943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:02:11 np0005539563 podman[295613]: 2025-11-29 08:02:11.327056063 +0000 UTC m=+0.374042376 container start 943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:02:11 np0005539563 podman[295613]: 2025-11-29 08:02:11.371338802 +0000 UTC m=+0.418325135 container attach 943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:02:11 np0005539563 nova_compute[252253]: 2025-11-29 08:02:11.414 252257 INFO nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Creating config drive at /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/disk.config#033[00m
Nov 29 03:02:11 np0005539563 nova_compute[252253]: 2025-11-29 08:02:11.421 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp717t0nu9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:11 np0005539563 nova_compute[252253]: 2025-11-29 08:02:11.560 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp717t0nu9" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:11 np0005539563 nova_compute[252253]: 2025-11-29 08:02:11.589 252257 DEBUG nova.storage.rbd_utils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] rbd image 0791bdff-16d7-4626-acea-1361fdb70652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:02:11 np0005539563 nova_compute[252253]: 2025-11-29 08:02:11.593 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/disk.config 0791bdff-16d7-4626-acea-1361fdb70652_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.164 252257 DEBUG oslo_concurrency.processutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/disk.config 0791bdff-16d7-4626-acea-1361fdb70652_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.166 252257 INFO nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Deleting local config drive /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/disk.config because it was imported into RBD.#033[00m
Nov 29 03:02:12 np0005539563 friendly_williams[295629]: {
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:        "osd_id": 0,
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:        "type": "bluestore"
Nov 29 03:02:12 np0005539563 friendly_williams[295629]:    }
Nov 29 03:02:12 np0005539563 friendly_williams[295629]: }
Nov 29 03:02:12 np0005539563 systemd[1]: libpod-943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e.scope: Deactivated successfully.
Nov 29 03:02:12 np0005539563 podman[295613]: 2025-11-29 08:02:12.222505074 +0000 UTC m=+1.269491477 container died 943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:02:12 np0005539563 kernel: tap65f4bda5-b4: entered promiscuous mode
Nov 29 03:02:12 np0005539563 NetworkManager[48981]: <info>  [1764403332.2383] manager: (tap65f4bda5-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 03:02:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:12Z|00232|binding|INFO|Claiming lport 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 for this chassis.
Nov 29 03:02:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:12Z|00233|binding|INFO|65f4bda5-b4d3-4fea-b8e2-3856b51660b5: Claiming fa:16:3e:26:eb:91 10.100.0.14
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.237 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.244 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.246 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.252 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8cc5d0009a940732461558252955cbc9315c367a0dcf60fc2aad87d58d07d6f7-merged.mount: Deactivated successfully.
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.260 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:eb:91 10.100.0.14'], port_security=['fa:16:3e:26:eb:91 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0791bdff-16d7-4626-acea-1361fdb70652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bf7ccb70-ed00-453b-b589-5d95da7defbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=65f4bda5-b4d3-4fea-b8e2-3856b51660b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.261 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.262 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:02:12 np0005539563 systemd-udevd[295712]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.274 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e06635-b789-4499-8829-d26db99400b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.275 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap738e99b4-b1 in ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.278 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap738e99b4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.278 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2ead9ece-85a6-4814-b1e4-3f3cb098bae0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.279 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b8190536-ba59-491f-896a-1b6ea1d43880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 systemd-machined[213024]: New machine qemu-28-instance-00000042.
Nov 29 03:02:12 np0005539563 NetworkManager[48981]: <info>  [1764403332.2845] device (tap65f4bda5-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:02:12 np0005539563 NetworkManager[48981]: <info>  [1764403332.2853] device (tap65f4bda5-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.291 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3b55e0-3fe0-49c7-90e2-b7495edc990c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 systemd[1]: Started Virtual Machine qemu-28-instance-00000042.
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.317 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7be897d7-0fbf-4b8f-a509-69c0d7ca0c3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.332 252257 DEBUG nova.network.neutron [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated VIF entry in instance network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.332 252257 DEBUG nova.network.neutron [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:12Z|00234|binding|INFO|Setting lport 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 ovn-installed in OVS
Nov 29 03:02:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:12Z|00235|binding|INFO|Setting lport 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 up in Southbound
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.348 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff3197d-262f-4e19-b934-b626a154b199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 NetworkManager[48981]: <info>  [1764403332.3548] manager: (tap738e99b4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.355 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6dbeec-9476-47a2-9ca4-8112c902c2cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.358 252257 DEBUG oslo_concurrency.lockutils [req-4ab5f1cb-27fb-41b8-8bab-e8aa535b761c req-6c663742-693c-4616-aed2-02be341b6b66 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:12.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.384 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b74ca29e-a169-4259-a9c6-d246f4cabf90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.387 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b9430753-c29c-4a31-9b33-02ce0b006e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 NetworkManager[48981]: <info>  [1764403332.4079] device (tap738e99b4-b0): carrier: link connected
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.413 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1c4ef1-b773-47bb-a8d3-d9c30516817e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.428 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[181faf32-2a39-4c01-b35d-137d49c16f69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630018, 'reachable_time': 25083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295747, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.444 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d253c7-7eea-49f9-87a0-aabd8aaaae2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:bee3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630018, 'tstamp': 630018}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295748, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.460 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[775f9901-c076-48d4-99ad-2c49cd56db1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630018, 'reachable_time': 25083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295749, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.496 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eba726b4-2830-4184-97ef-bfc33626fd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.552 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b637649-ecc4-4664-9206-4be7bcfb4eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.553 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.554 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.554 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:12 np0005539563 NetworkManager[48981]: <info>  [1764403332.5566] manager: (tap738e99b4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 03:02:12 np0005539563 kernel: tap738e99b4-b0: entered promiscuous mode
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.556 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.558 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:12Z|00236|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.559 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.573 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.574 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.575 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0b591fa3-af2f-4c76-8275-2524c49c1ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.576 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/738e99b4-b58e-4eff-b209-c4aa3748c994.pid.haproxy
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 738e99b4-b58e-4eff-b209-c4aa3748c994
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:02:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:12.576 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'env', 'PROCESS_TAG=haproxy-738e99b4-b58e-4eff-b209-c4aa3748c994', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/738e99b4-b58e-4eff-b209-c4aa3748c994.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.757 252257 DEBUG nova.compute.manager [req-8ddd2ddb-dbe0-44a7-906d-1d4b572480fc req-85b9e00d-d849-476d-b277-cb4dd48e631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.759 252257 DEBUG oslo_concurrency.lockutils [req-8ddd2ddb-dbe0-44a7-906d-1d4b572480fc req-85b9e00d-d849-476d-b277-cb4dd48e631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.759 252257 DEBUG oslo_concurrency.lockutils [req-8ddd2ddb-dbe0-44a7-906d-1d4b572480fc req-85b9e00d-d849-476d-b277-cb4dd48e631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.760 252257 DEBUG oslo_concurrency.lockutils [req-8ddd2ddb-dbe0-44a7-906d-1d4b572480fc req-85b9e00d-d849-476d-b277-cb4dd48e631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:12 np0005539563 nova_compute[252253]: 2025-11-29 08:02:12.760 252257 DEBUG nova.compute.manager [req-8ddd2ddb-dbe0-44a7-906d-1d4b572480fc req-85b9e00d-d849-476d-b277-cb4dd48e631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Processing event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:02:12 np0005539563 podman[295613]: 2025-11-29 08:02:12.792726862 +0000 UTC m=+1.839713215 container remove 943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:02:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:02:12
Nov 29 03:02:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:02:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:02:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log']
Nov 29 03:02:12 np0005539563 systemd[1]: libpod-conmon-943db2a7d6861db8a62a02e04bba7dae3a05e8d115926a0a9f2b942c5743dd5e.scope: Deactivated successfully.
Nov 29 03:02:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:02:13 np0005539563 podman[295790]: 2025-11-29 08:02:12.905882789 +0000 UTC m=+0.022764059 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:02:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:13 np0005539563 podman[295790]: 2025-11-29 08:02:13.009122515 +0000 UTC m=+0.126003765 container create 9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f86a8191-a4fa-4ad0-beff-285387d9c022 does not exist
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a612d44d-3dac-47ae-88dd-7657b9015eaa does not exist
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 519d2cfd-8fb0-41d3-8f5b-b581997d81dc does not exist
Nov 29 03:02:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:13.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:13 np0005539563 systemd[1]: Started libpod-conmon-9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d.scope.
Nov 29 03:02:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:02:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b093b3dbe64b60824974c5be9dbd38c894fae7803fb60c6f0bea8c2f41f90c31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:13 np0005539563 podman[295790]: 2025-11-29 08:02:13.198312481 +0000 UTC m=+0.315193761 container init 9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:02:13 np0005539563 podman[295790]: 2025-11-29 08:02:13.205580148 +0000 UTC m=+0.322461398 container start 9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:02:13 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [NOTICE]   (295877) : New worker (295888) forked
Nov 29 03:02:13 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [NOTICE]   (295877) : Loading success.
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.375 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403333.3746293, 0791bdff-16d7-4626-acea-1361fdb70652 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.375 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] VM Started (Lifecycle Event)#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.377 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.382 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.386 252257 INFO nova.virt.libvirt.driver [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Instance spawned successfully.#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.387 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.402 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.405 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.431 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.431 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.432 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.432 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.433 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.433 252257 DEBUG nova.virt.libvirt.driver [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.468 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.468 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403333.3756351, 0791bdff-16d7-4626-acea-1361fdb70652 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.468 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.501 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.504 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403333.3818667, 0791bdff-16d7-4626-acea-1361fdb70652 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.505 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.518 252257 INFO nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Took 10.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.518 252257 DEBUG nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.529 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.531 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.784 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.843 252257 INFO nova.compute.manager [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Took 11.80 seconds to build instance.#033[00m
Nov 29 03:02:13 np0005539563 nova_compute[252253]: 2025-11-29 08:02:13.858 252257 DEBUG oslo_concurrency.lockutils [None req-b9913604-b0c4-406d-aa46-7a7d932d7d33 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:02:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 03:02:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:02:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:14.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:15.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.097 252257 DEBUG nova.compute.manager [req-e783fe86-300e-4522-bb56-cb8a210e62d3 req-685b1fef-40ab-4176-a394-d2d2c5ea4c04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.098 252257 DEBUG oslo_concurrency.lockutils [req-e783fe86-300e-4522-bb56-cb8a210e62d3 req-685b1fef-40ab-4176-a394-d2d2c5ea4c04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.098 252257 DEBUG oslo_concurrency.lockutils [req-e783fe86-300e-4522-bb56-cb8a210e62d3 req-685b1fef-40ab-4176-a394-d2d2c5ea4c04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.098 252257 DEBUG oslo_concurrency.lockutils [req-e783fe86-300e-4522-bb56-cb8a210e62d3 req-685b1fef-40ab-4176-a394-d2d2c5ea4c04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.099 252257 DEBUG nova.compute.manager [req-e783fe86-300e-4522-bb56-cb8a210e62d3 req-685b1fef-40ab-4176-a394-d2d2c5ea4c04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.099 252257 WARNING nova.compute.manager [req-e783fe86-300e-4522-bb56-cb8a210e62d3 req-685b1fef-40ab-4176-a394-d2d2c5ea4c04 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received unexpected event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:02:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:15.768 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:15 np0005539563 nova_compute[252253]: 2025-11-29 08:02:15.801 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 599 KiB/s wr, 80 op/s
Nov 29 03:02:16 np0005539563 nova_compute[252253]: 2025-11-29 08:02:16.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:16 np0005539563 nova_compute[252253]: 2025-11-29 08:02:16.756 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:16 np0005539563 NetworkManager[48981]: <info>  [1764403336.7569] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 03:02:16 np0005539563 NetworkManager[48981]: <info>  [1764403336.7578] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 03:02:16 np0005539563 nova_compute[252253]: 2025-11-29 08:02:16.914 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:16Z|00237|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:02:16 np0005539563 nova_compute[252253]: 2025-11-29 08:02:16.925 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:17 np0005539563 nova_compute[252253]: 2025-11-29 08:02:17.226 252257 DEBUG nova.compute.manager [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:17 np0005539563 nova_compute[252253]: 2025-11-29 08:02:17.226 252257 DEBUG nova.compute.manager [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing instance network info cache due to event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:17 np0005539563 nova_compute[252253]: 2025-11-29 08:02:17.227 252257 DEBUG oslo_concurrency.lockutils [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:17 np0005539563 nova_compute[252253]: 2025-11-29 08:02:17.227 252257 DEBUG oslo_concurrency.lockutils [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:17 np0005539563 nova_compute[252253]: 2025-11-29 08:02:17.228 252257 DEBUG nova.network.neutron [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 507 KiB/s wr, 67 op/s
Nov 29 03:02:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:18 np0005539563 nova_compute[252253]: 2025-11-29 08:02:18.979 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:19.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 507 KiB/s wr, 89 op/s
Nov 29 03:02:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:20 np0005539563 nova_compute[252253]: 2025-11-29 08:02:20.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:21 np0005539563 nova_compute[252253]: 2025-11-29 08:02:21.258 252257 DEBUG nova.network.neutron [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated VIF entry in instance network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:02:21 np0005539563 nova_compute[252253]: 2025-11-29 08:02:21.259 252257 DEBUG nova.network.neutron [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:21 np0005539563 nova_compute[252253]: 2025-11-29 08:02:21.291 252257 DEBUG oslo_concurrency.lockutils [req-5f091b60-e088-4446-86ea-852ff34e2ce3 req-87ad3459-c0a7-4e45-87be-da4d891c4b7e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:21 np0005539563 nova_compute[252253]: 2025-11-29 08:02:21.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:21 np0005539563 nova_compute[252253]: 2025-11-29 08:02:21.316 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 03:02:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:22.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:23.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00099691891168807 of space, bias 1.0, pg target 0.299075673506421 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:02:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/49200391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 03:02:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:25 np0005539563 nova_compute[252253]: 2025-11-29 08:02:25.804 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 112 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Nov 29 03:02:26 np0005539563 nova_compute[252253]: 2025-11-29 08:02:26.318 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:27.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 112 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 584 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Nov 29 03:02:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:02:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3103524126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:02:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:02:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3103524126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:02:28 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:28Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:eb:91 10.100.0.14
Nov 29 03:02:28 np0005539563 ovn_controller[148841]: 2025-11-29T08:02:28Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:eb:91 10.100.0.14
Nov 29 03:02:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:28.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 153 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 74 op/s
Nov 29 03:02:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:30.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:30 np0005539563 nova_compute[252253]: 2025-11-29 08:02:30.806 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:31.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:31 np0005539563 nova_compute[252253]: 2025-11-29 08:02:31.320 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 125 op/s
Nov 29 03:02:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:32.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:33.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 126 op/s
Nov 29 03:02:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:34.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:35 np0005539563 nova_compute[252253]: 2025-11-29 08:02:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:35 np0005539563 nova_compute[252253]: 2025-11-29 08:02:35.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:35 np0005539563 nova_compute[252253]: 2025-11-29 08:02:35.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 249 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 6.8 MiB/s wr, 149 op/s
Nov 29 03:02:36 np0005539563 nova_compute[252253]: 2025-11-29 08:02:36.322 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:36.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:36 np0005539563 podman[295967]: 2025-11-29 08:02:36.501925078 +0000 UTC m=+0.058489006 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:02:36 np0005539563 podman[295969]: 2025-11-29 08:02:36.5282384 +0000 UTC m=+0.083284028 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:36 np0005539563 podman[295968]: 2025-11-29 08:02:36.533046061 +0000 UTC m=+0.090122124 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:37 np0005539563 nova_compute[252253]: 2025-11-29 08:02:37.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:37 np0005539563 nova_compute[252253]: 2025-11-29 08:02:37.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:02:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 249 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 133 op/s
Nov 29 03:02:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:38.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:38 np0005539563 nova_compute[252253]: 2025-11-29 08:02:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:38 np0005539563 nova_compute[252253]: 2025-11-29 08:02:38.776 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:38 np0005539563 nova_compute[252253]: 2025-11-29 08:02:38.777 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:38 np0005539563 nova_compute[252253]: 2025-11-29 08:02:38.777 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:38 np0005539563 nova_compute[252253]: 2025-11-29 08:02:38.777 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:02:38 np0005539563 nova_compute[252253]: 2025-11-29 08:02:38.777 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648014255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.213 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.299 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.299 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.317 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.452 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.453 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4385MB free_disk=20.901180267333984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.453 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.453 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.543 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 0791bdff-16d7-4626-acea-1361fdb70652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.544 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.544 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.692 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.730 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.730 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.748 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.768 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:02:39 np0005539563 nova_compute[252253]: 2025-11-29 08:02:39.831 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 260 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.3 MiB/s wr, 175 op/s
Nov 29 03:02:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139501658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:40 np0005539563 nova_compute[252253]: 2025-11-29 08:02:40.243 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:02:40 np0005539563 nova_compute[252253]: 2025-11-29 08:02:40.248 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:40 np0005539563 nova_compute[252253]: 2025-11-29 08:02:40.267 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:02:40 np0005539563 nova_compute[252253]: 2025-11-29 08:02:40.297 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:02:40 np0005539563 nova_compute[252253]: 2025-11-29 08:02:40.298 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:40 np0005539563 nova_compute[252253]: 2025-11-29 08:02:40.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:41 np0005539563 nova_compute[252253]: 2025-11-29 08:02:41.324 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:41 np0005539563 nova_compute[252253]: 2025-11-29 08:02:41.336 252257 DEBUG nova.compute.manager [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:41 np0005539563 nova_compute[252253]: 2025-11-29 08:02:41.337 252257 DEBUG nova.compute.manager [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing instance network info cache due to event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:41 np0005539563 nova_compute[252253]: 2025-11-29 08:02:41.337 252257 DEBUG oslo_concurrency.lockutils [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:41 np0005539563 nova_compute[252253]: 2025-11-29 08:02:41.338 252257 DEBUG oslo_concurrency.lockutils [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:41 np0005539563 nova_compute[252253]: 2025-11-29 08:02:41.338 252257 DEBUG nova.network.neutron [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 223 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 264 op/s
Nov 29 03:02:42 np0005539563 nova_compute[252253]: 2025-11-29 08:02:42.298 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:42 np0005539563 nova_compute[252253]: 2025-11-29 08:02:42.299 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:02:42 np0005539563 nova_compute[252253]: 2025-11-29 08:02:42.299 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:02:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:42.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:42 np0005539563 nova_compute[252253]: 2025-11-29 08:02:42.541 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:43.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:43 np0005539563 nova_compute[252253]: 2025-11-29 08:02:43.122 252257 DEBUG nova.network.neutron [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated VIF entry in instance network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:02:43 np0005539563 nova_compute[252253]: 2025-11-29 08:02:43.123 252257 DEBUG nova.network.neutron [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:43 np0005539563 nova_compute[252253]: 2025-11-29 08:02:43.147 252257 DEBUG oslo_concurrency.lockutils [req-cb615176-96fc-4eef-996a-bdd40e9537ca req-9ffb1875-ab69-4dc2-84b5-6a616b722da7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:43 np0005539563 nova_compute[252253]: 2025-11-29 08:02:43.148 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:43 np0005539563 nova_compute[252253]: 2025-11-29 08:02:43.148 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:02:43 np0005539563 nova_compute[252253]: 2025-11-29 08:02:43.149 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1047337460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 213 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Nov 29 03:02:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:44 np0005539563 nova_compute[252253]: 2025-11-29 08:02:44.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.405 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.436 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.437 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.437 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.438 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.756 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.757 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:02:45 np0005539563 nova_compute[252253]: 2025-11-29 08:02:45.811 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Nov 29 03:02:46 np0005539563 nova_compute[252253]: 2025-11-29 08:02:46.326 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 717 KiB/s wr, 187 op/s
Nov 29 03:02:48 np0005539563 nova_compute[252253]: 2025-11-29 08:02:48.111 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:48.110 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:48.112 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:02:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:49.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 213 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 718 KiB/s wr, 187 op/s
Nov 29 03:02:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:50 np0005539563 nova_compute[252253]: 2025-11-29 08:02:50.569 252257 DEBUG nova.compute.manager [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:50 np0005539563 nova_compute[252253]: 2025-11-29 08:02:50.569 252257 DEBUG nova.compute.manager [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing instance network info cache due to event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:02:50 np0005539563 nova_compute[252253]: 2025-11-29 08:02:50.570 252257 DEBUG oslo_concurrency.lockutils [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:02:50 np0005539563 nova_compute[252253]: 2025-11-29 08:02:50.570 252257 DEBUG oslo_concurrency.lockutils [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:02:50 np0005539563 nova_compute[252253]: 2025-11-29 08:02:50.570 252257 DEBUG nova.network.neutron [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:02:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:50 np0005539563 nova_compute[252253]: 2025-11-29 08:02:50.813 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:51.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:51 np0005539563 nova_compute[252253]: 2025-11-29 08:02:51.331 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539563 nova_compute[252253]: 2025-11-29 08:02:51.598 252257 DEBUG oslo_concurrency.lockutils [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-0791bdff-16d7-4626-acea-1361fdb70652-c8838967-6481-4acd-b59f-0be782c9a361" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:51 np0005539563 nova_compute[252253]: 2025-11-29 08:02:51.599 252257 DEBUG oslo_concurrency.lockutils [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-0791bdff-16d7-4626-acea-1361fdb70652-c8838967-6481-4acd-b59f-0be782c9a361" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:51 np0005539563 nova_compute[252253]: 2025-11-29 08:02:51.599 252257 DEBUG nova.objects.instance [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 266 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 182 op/s
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 03:02:51 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:02:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 281 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 649 KiB/s rd, 3.8 MiB/s wr, 99 op/s
Nov 29 03:02:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:02:55.113 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:02:55 np0005539563 nova_compute[252253]: 2025-11-29 08:02:55.287 252257 DEBUG nova.objects.instance [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'pci_requests' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:02:55 np0005539563 nova_compute[252253]: 2025-11-29 08:02:55.307 252257 DEBUG nova.network.neutron [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:02:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:55 np0005539563 nova_compute[252253]: 2025-11-29 08:02:55.815 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 290 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 3.9 MiB/s wr, 156 op/s
Nov 29 03:02:56 np0005539563 nova_compute[252253]: 2025-11-29 08:02:56.215 252257 DEBUG nova.policy [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a814d0c4600e45d9a1fac7bac5b7e69e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f69605de164b4c27ae715521263676fe', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:02:56 np0005539563 nova_compute[252253]: 2025-11-29 08:02:56.281 252257 DEBUG nova.network.neutron [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated VIF entry in instance network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:02:56 np0005539563 nova_compute[252253]: 2025-11-29 08:02:56.282 252257 DEBUG nova.network.neutron [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:02:56 np0005539563 nova_compute[252253]: 2025-11-29 08:02:56.333 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:57 np0005539563 nova_compute[252253]: 2025-11-29 08:02:57.187 252257 DEBUG oslo_concurrency.lockutils [req-daa00376-af76-4451-86af-fca0abb1322f req-e91dbaf2-a5c2-447a-93ec-39abc48f0d00 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:02:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 290 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 3.9 MiB/s wr, 153 op/s
Nov 29 03:02:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:02:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:59.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 293 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 474 KiB/s rd, 3.9 MiB/s wr, 209 op/s
Nov 29 03:03:00 np0005539563 nova_compute[252253]: 2025-11-29 08:03:00.186 252257 DEBUG nova.network.neutron [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Successfully updated port: c8838967-6481-4acd-b59f-0be782c9a361 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:03:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:00.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:00 np0005539563 nova_compute[252253]: 2025-11-29 08:03:00.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:01.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:01 np0005539563 nova_compute[252253]: 2025-11-29 08:03:01.334 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:01 np0005539563 nova_compute[252253]: 2025-11-29 08:03:01.678 252257 DEBUG oslo_concurrency.lockutils [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:03:01 np0005539563 nova_compute[252253]: 2025-11-29 08:03:01.678 252257 DEBUG oslo_concurrency.lockutils [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:03:01 np0005539563 nova_compute[252253]: 2025-11-29 08:03:01.679 252257 DEBUG nova.network.neutron [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:03:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 506 KiB/s rd, 3.9 MiB/s wr, 262 op/s
Nov 29 03:03:02 np0005539563 nova_compute[252253]: 2025-11-29 08:03:02.008 252257 WARNING nova.network.neutron [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] 738e99b4-b58e-4eff-b209-c4aa3748c994 already exists in list: networks containing: ['738e99b4-b58e-4eff-b209-c4aa3748c994']. ignoring it
Nov 29 03:03:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:02.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 1.2 MiB/s wr, 225 op/s
Nov 29 03:03:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:04.909 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:04.910 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:04.910 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:05 np0005539563 nova_compute[252253]: 2025-11-29 08:03:05.818 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 132 KiB/s wr, 182 op/s
Nov 29 03:03:06 np0005539563 nova_compute[252253]: 2025-11-29 08:03:06.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:06.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:07.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:07 np0005539563 podman[296192]: 2025-11-29 08:03:07.522322175 +0000 UTC m=+0.070435350 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:03:07 np0005539563 podman[296193]: 2025-11-29 08:03:07.522503309 +0000 UTC m=+0.070952194 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:03:07 np0005539563 podman[296194]: 2025-11-29 08:03:07.596978917 +0000 UTC m=+0.131033951 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:03:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 67 KiB/s wr, 110 op/s
Nov 29 03:03:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 67 KiB/s wr, 110 op/s
Nov 29 03:03:10 np0005539563 nova_compute[252253]: 2025-11-29 08:03:10.355 252257 DEBUG nova.compute.manager [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-changed-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:03:10 np0005539563 nova_compute[252253]: 2025-11-29 08:03:10.356 252257 DEBUG nova.compute.manager [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing instance network info cache due to event network-changed-c8838967-6481-4acd-b59f-0be782c9a361. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:03:10 np0005539563 nova_compute[252253]: 2025-11-29 08:03:10.356 252257 DEBUG oslo_concurrency.lockutils [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:03:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:10 np0005539563 nova_compute[252253]: 2025-11-29 08:03:10.822 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.340 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.456 252257 DEBUG nova.network.neutron [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.498 252257 DEBUG oslo_concurrency.lockutils [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.500 252257 DEBUG oslo_concurrency.lockutils [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.501 252257 DEBUG nova.network.neutron [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing network info cache for port c8838967-6481-4acd-b59f-0be782c9a361 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.506 252257 DEBUG nova.virt.libvirt.vif [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.506 252257 DEBUG nova.network.os_vif_util [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.508 252257 DEBUG nova.network.os_vif_util [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.509 252257 DEBUG os_vif [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.510 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.511 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.512 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.517 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8838967-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.518 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8838967-64, col_values=(('external_ids', {'iface-id': 'c8838967-6481-4acd-b59f-0be782c9a361', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:8e:de', 'vm-uuid': '0791bdff-16d7-4626-acea-1361fdb70652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.553 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 NetworkManager[48981]: <info>  [1764403391.5546] manager: (tapc8838967-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.558 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.560 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.561 252257 INFO os_vif [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64')#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.562 252257 DEBUG nova.virt.libvirt.vif [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.563 252257 DEBUG nova.network.os_vif_util [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.563 252257 DEBUG nova.network.os_vif_util [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.567 252257 DEBUG nova.virt.libvirt.guest [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:b1:8e:de"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <target dev="tapc8838967-64"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:03:11 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:03:11 np0005539563 kernel: tapc8838967-64: entered promiscuous mode
Nov 29 03:03:11 np0005539563 NetworkManager[48981]: <info>  [1764403391.5809] manager: (tapc8838967-64): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 03:03:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:11Z|00238|binding|INFO|Claiming lport c8838967-6481-4acd-b59f-0be782c9a361 for this chassis.
Nov 29 03:03:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:11Z|00239|binding|INFO|c8838967-6481-4acd-b59f-0be782c9a361: Claiming fa:16:3e:b1:8e:de 10.100.0.9
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.582 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.592 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:8e:de 10.100.0.9'], port_security=['fa:16:3e:b1:8e:de 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0791bdff-16d7-4626-acea-1361fdb70652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c8838967-6481-4acd-b59f-0be782c9a361) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.594 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c8838967-6481-4acd-b59f-0be782c9a361 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 bound to our chassis#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.595 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:03:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:11Z|00240|binding|INFO|Setting lport c8838967-6481-4acd-b59f-0be782c9a361 ovn-installed in OVS
Nov 29 03:03:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:11Z|00241|binding|INFO|Setting lport c8838967-6481-4acd-b59f-0be782c9a361 up in Southbound
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 systemd-udevd[296265]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.617 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.618 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e9ead3-0b68-4e1a-acf7-47f461b874f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:11 np0005539563 NetworkManager[48981]: <info>  [1764403391.6322] device (tapc8838967-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:03:11 np0005539563 NetworkManager[48981]: <info>  [1764403391.6335] device (tapc8838967-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.660 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5068db87-fe8f-4ce6-ae31-4bacf23e3d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.668 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4212b5-96f3-42c1-bc95-0d10d2ff95b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.679 252257 DEBUG nova.virt.libvirt.driver [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.679 252257 DEBUG nova.virt.libvirt.driver [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.679 252257 DEBUG nova.virt.libvirt.driver [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:26:eb:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.679 252257 DEBUG nova.virt.libvirt.driver [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] No VIF found with MAC fa:16:3e:b1:8e:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.703 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5172ed54-980a-4921-8a0c-e06979f1450d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.720 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[755cf657-033d-4630-a5f4-3e9f5cd3fd02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630018, 'reachable_time': 25083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296272, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.729 252257 DEBUG nova.virt.libvirt.guest [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:name>tempest-tempest.common.compute-instance-1586373608</nova:name>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:03:11</nova:creationTime>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:port uuid="65f4bda5-b4d3-4fea-b8e2-3856b51660b5">
Nov 29 03:03:11 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    <nova:port uuid="c8838967-6481-4acd-b59f-0be782c9a361">
Nov 29 03:03:11 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:11 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:03:11 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:03:11 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.735 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d27a7867-3d41-438a-9569-f9a8d75f48ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630029, 'tstamp': 630029}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296273, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630032, 'tstamp': 630032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296273, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.736 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.737 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.738 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.739 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.739 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.739 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:11.740 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.765 252257 DEBUG oslo_concurrency.lockutils [None req-f17660b3-3955-411e-abe1-a7043a3a55bf a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-0791bdff-16d7-4626-acea-1361fdb70652-c8838967-6481-4acd-b59f-0be782c9a361" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 20.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 53 op/s
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.980 252257 DEBUG nova.compute.manager [req-cb0204bf-324a-4906-a4e0-0053d42285f7 req-13852419-9caf-449d-b8fb-c3a754bc4ede 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.981 252257 DEBUG oslo_concurrency.lockutils [req-cb0204bf-324a-4906-a4e0-0053d42285f7 req-13852419-9caf-449d-b8fb-c3a754bc4ede 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.981 252257 DEBUG oslo_concurrency.lockutils [req-cb0204bf-324a-4906-a4e0-0053d42285f7 req-13852419-9caf-449d-b8fb-c3a754bc4ede 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.981 252257 DEBUG oslo_concurrency.lockutils [req-cb0204bf-324a-4906-a4e0-0053d42285f7 req-13852419-9caf-449d-b8fb-c3a754bc4ede 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.982 252257 DEBUG nova.compute.manager [req-cb0204bf-324a-4906-a4e0-0053d42285f7 req-13852419-9caf-449d-b8fb-c3a754bc4ede 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:11 np0005539563 nova_compute[252253]: 2025-11-29 08:03:11.982 252257 WARNING nova.compute.manager [req-cb0204bf-324a-4906-a4e0-0053d42285f7 req-13852419-9caf-449d-b8fb-c3a754bc4ede 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received unexpected event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:03:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:03:12
Nov 29 03:03:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:03:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:03:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'vms']
Nov 29 03:03:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:03:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:13.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:03:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 14 KiB/s wr, 4 op/s
Nov 29 03:03:13 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:13Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:8e:de 10.100.0.9
Nov 29 03:03:13 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:13Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:8e:de 10.100.0.9
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:03:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:14 np0005539563 nova_compute[252253]: 2025-11-29 08:03:14.985 252257 DEBUG nova.compute.manager [req-17e3b946-fc79-4eb2-a209-5e7d681e8f1c req-94757156-369b-49ed-bd51-75e45403e1dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:14 np0005539563 nova_compute[252253]: 2025-11-29 08:03:14.985 252257 DEBUG oslo_concurrency.lockutils [req-17e3b946-fc79-4eb2-a209-5e7d681e8f1c req-94757156-369b-49ed-bd51-75e45403e1dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:14 np0005539563 nova_compute[252253]: 2025-11-29 08:03:14.986 252257 DEBUG oslo_concurrency.lockutils [req-17e3b946-fc79-4eb2-a209-5e7d681e8f1c req-94757156-369b-49ed-bd51-75e45403e1dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:14 np0005539563 nova_compute[252253]: 2025-11-29 08:03:14.986 252257 DEBUG oslo_concurrency.lockutils [req-17e3b946-fc79-4eb2-a209-5e7d681e8f1c req-94757156-369b-49ed-bd51-75e45403e1dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:14 np0005539563 nova_compute[252253]: 2025-11-29 08:03:14.986 252257 DEBUG nova.compute.manager [req-17e3b946-fc79-4eb2-a209-5e7d681e8f1c req-94757156-369b-49ed-bd51-75e45403e1dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:14 np0005539563 nova_compute[252253]: 2025-11-29 08:03:14.986 252257 WARNING nova.compute.manager [req-17e3b946-fc79-4eb2-a209-5e7d681e8f1c req-94757156-369b-49ed-bd51-75e45403e1dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received unexpected event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:03:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.121 252257 DEBUG oslo_concurrency.lockutils [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "interface-0791bdff-16d7-4626-acea-1361fdb70652-c8838967-6481-4acd-b59f-0be782c9a361" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.121 252257 DEBUG oslo_concurrency.lockutils [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-0791bdff-16d7-4626-acea-1361fdb70652-c8838967-6481-4acd-b59f-0be782c9a361" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.145 252257 DEBUG nova.objects.instance [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'flavor' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.178 252257 DEBUG nova.virt.libvirt.vif [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.178 252257 DEBUG nova.network.os_vif_util [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.179 252257 DEBUG nova.network.os_vif_util [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.183 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.185 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.187 252257 DEBUG nova.virt.libvirt.driver [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Attempting to detach device tapc8838967-64 from instance 0791bdff-16d7-4626-acea-1361fdb70652 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.188 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:b1:8e:de"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <target dev="tapc8838967-64"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.193 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.195 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface>not found in domain: <domain type='kvm' id='28'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <name>instance-00000042</name>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <uuid>0791bdff-16d7-4626-acea-1361fdb70652</uuid>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:name>tempest-tempest.common.compute-instance-1586373608</nova:name>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:03:11</nova:creationTime>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:port uuid="65f4bda5-b4d3-4fea-b8e2-3856b51660b5">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:port uuid="c8838967-6481-4acd-b59f-0be782c9a361">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <resource>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <partition>/machine</partition>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </resource>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='serial'>0791bdff-16d7-4626-acea-1361fdb70652</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='uuid'>0791bdff-16d7-4626-acea-1361fdb70652</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <feature policy='require' name='x2apic'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <feature policy='require' name='vme'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/0791bdff-16d7-4626-acea-1361fdb70652_disk' index='2'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='virtio-disk0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/0791bdff-16d7-4626-acea-1361fdb70652_disk.config' index='1'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='sata0-0-0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pcie.0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.8'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.9'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.10'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.11'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.12'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.13'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.14'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.15'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.16'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.17'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.18'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.19'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.20'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.21'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.22'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.23'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.24'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.25'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.26'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='usb'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='ide'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:26:eb:91'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='tap65f4bda5-b4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='net0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:b1:8e:de'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='tapc8838967-64'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='net1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source path='/dev/pts/0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/console.log' append='off'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source path='/dev/pts/0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/console.log' append='off'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='input0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='input1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='input2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='video0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='watchdog0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </watchdog>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='balloon0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='rng0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <label>system_u:system_r:svirt_t:s0:c408,c655</label>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c408,c655</imagelabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <label>+107:+107</label>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.196 252257 INFO nova.virt.libvirt.driver [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapc8838967-64 from instance 0791bdff-16d7-4626-acea-1361fdb70652 from the persistent domain config.
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.196 252257 DEBUG nova.virt.libvirt.driver [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] (1/8): Attempting to detach device tapc8838967-64 with device alias net1 from instance 0791bdff-16d7-4626-acea-1361fdb70652 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.196 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <mac address="fa:16:3e:b1:8e:de"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <model type="virtio"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <mtu size="1442"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <target dev="tapc8838967-64"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </interface>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.199 252257 DEBUG nova.network.neutron [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated VIF entry in instance network info cache for port c8838967-6481-4acd-b59f-0be782c9a361. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.200 252257 DEBUG nova.network.neutron [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.240 252257 DEBUG oslo_concurrency.lockutils [req-f2448d99-d636-406e-bec9-b8833eed2d0b req-83ce58dc-1aa2-4f6b-bbcf-d6397749402a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:03:15 np0005539563 kernel: tapc8838967-64 (unregistering): left promiscuous mode
Nov 29 03:03:15 np0005539563 NetworkManager[48981]: <info>  [1764403395.3025] device (tapc8838967-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:03:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:15Z|00242|binding|INFO|Releasing lport c8838967-6481-4acd-b59f-0be782c9a361 from this chassis (sb_readonly=0)
Nov 29 03:03:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:15Z|00243|binding|INFO|Setting lport c8838967-6481-4acd-b59f-0be782c9a361 down in Southbound
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:15Z|00244|binding|INFO|Removing iface tapc8838967-64 ovn-installed in OVS
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.317 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.319 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764403395.3186936, 0791bdff-16d7-4626-acea-1361fdb70652 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.320 252257 DEBUG nova.virt.libvirt.driver [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Start waiting for the detach event from libvirt for device tapc8838967-64 with device alias net1 for instance 0791bdff-16d7-4626-acea-1361fdb70652 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.320 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.325 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b1:8e:de"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapc8838967-64"/></interface>not found in domain: <domain type='kvm' id='28'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <name>instance-00000042</name>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <uuid>0791bdff-16d7-4626-acea-1361fdb70652</uuid>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:name>tempest-tempest.common.compute-instance-1586373608</nova:name>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:03:11</nova:creationTime>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:port uuid="65f4bda5-b4d3-4fea-b8e2-3856b51660b5">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:port uuid="c8838967-6481-4acd-b59f-0be782c9a361">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <memory unit='KiB'>131072</memory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <resource>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <partition>/machine</partition>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </resource>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <sysinfo type='smbios'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='serial'>0791bdff-16d7-4626-acea-1361fdb70652</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='uuid'>0791bdff-16d7-4626-acea-1361fdb70652</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <boot dev='hd'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <smbios mode='sysinfo'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <vmcoreinfo state='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <feature policy='require' name='x2apic'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <feature policy='require' name='vme'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <clock offset='utc'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <timer name='hpet' present='no'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <on_reboot>restart</on_reboot>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <on_crash>destroy</on_crash>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <disk type='network' device='disk'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/0791bdff-16d7-4626-acea-1361fdb70652_disk' index='2'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='vda' bus='virtio'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='virtio-disk0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <disk type='network' device='cdrom'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <auth username='openstack'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <secret type='ceph' uuid='38a37ed2-442a-5e0d-a69a-881fdd186450'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source protocol='rbd' name='vms/0791bdff-16d7-4626-acea-1361fdb70652_disk.config' index='1'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='sda' bus='sata'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <readonly/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='sata0-0-0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pcie.0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='1' port='0x10'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='2' port='0x11'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='3' port='0x12'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='4' port='0x13'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='5' port='0x14'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='6' port='0x15'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='7' port='0x16'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='8' port='0x17'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.8'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='9' port='0x18'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.9'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='10' port='0x19'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.10'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='11' port='0x1a'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.11'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='12' port='0x1b'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.12'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='13' port='0x1c'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.13'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='14' port='0x1d'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.14'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='15' port='0x1e'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.15'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='16' port='0x1f'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.16'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='17' port='0x20'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.17'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='18' port='0x21'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.18'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='19' port='0x22'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.19'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='20' port='0x23'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.20'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='21' port='0x24'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.21'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='22' port='0x25'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.22'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='23' port='0x26'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.23'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='24' port='0x27'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.24'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-root-port'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target chassis='25' port='0x28'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.25'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model name='pcie-pci-bridge'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='pci.26'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='usb'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <controller type='sata' index='0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='ide'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </controller>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <interface type='ethernet'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <mac address='fa:16:3e:26:eb:91'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target dev='tap65f4bda5-b4'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model type='virtio'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <mtu size='1442'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='net0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <serial type='pty'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source path='/dev/pts/0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/console.log' append='off'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target type='isa-serial' port='0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:        <model name='isa-serial'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      </target>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <source path='/dev/pts/0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <log file='/var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652/console.log' append='off'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <target type='serial' port='0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='serial0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </console>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <input type='tablet' bus='usb'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='input0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <input type='mouse' bus='ps2'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='input1'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <input type='keyboard' bus='ps2'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='input2'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </input>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <listen type='address' address='::0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </graphics>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <audio id='1' type='none'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='video0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <watchdog model='itco' action='reset'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='watchdog0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </watchdog>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <memballoon model='virtio'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <stats period='10'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='balloon0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <rng model='virtio'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <alias name='rng0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <label>system_u:system_r:svirt_t:s0:c408,c655</label>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c408,c655</imagelabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <label>+107:+107</label>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </seclabel>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.326 252257 INFO nova.virt.libvirt.driver [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully detached device tapc8838967-64 from instance 0791bdff-16d7-4626-acea-1361fdb70652 from the live domain config.#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.326 252257 DEBUG nova.virt.libvirt.vif [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.326 252257 DEBUG nova.network.os_vif_util [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "c8838967-6481-4acd-b59f-0be782c9a361", "address": "fa:16:3e:b1:8e:de", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8838967-64", "ovs_interfaceid": "c8838967-6481-4acd-b59f-0be782c9a361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.327 252257 DEBUG nova.network.os_vif_util [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.327 252257 DEBUG os_vif [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.328 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.329 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8838967-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.331 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.333 252257 INFO os_vif [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:8e:de,bridge_name='br-int',has_traffic_filtering=True,id=c8838967-6481-4acd-b59f-0be782c9a361,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc8838967-64')#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.333 252257 DEBUG nova.virt.libvirt.guest [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:name>tempest-tempest.common.compute-instance-1586373608</nova:name>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:creationTime>2025-11-29 08:03:15</nova:creationTime>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:flavor name="m1.nano">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:memory>128</nova:memory>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:disk>1</nova:disk>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:swap>0</nova:swap>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:flavor>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:owner>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:user uuid="a814d0c4600e45d9a1fac7bac5b7e69e">tempest-AttachInterfacesTestJSON-991196152-project-member</nova:user>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:project uuid="f69605de164b4c27ae715521263676fe">tempest-AttachInterfacesTestJSON-991196152</nova:project>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:owner>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  <nova:ports>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    <nova:port uuid="65f4bda5-b4d3-4fea-b8e2-3856b51660b5">
Nov 29 03:03:15 np0005539563 nova_compute[252253]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:    </nova:port>
Nov 29 03:03:15 np0005539563 nova_compute[252253]:  </nova:ports>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: </nova:instance>
Nov 29 03:03:15 np0005539563 nova_compute[252253]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.534 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:8e:de 10.100.0.9'], port_security=['fa:16:3e:b1:8e:de 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0791bdff-16d7-4626-acea-1361fdb70652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1620065171', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3edda898-8529-43cc-9949-7b5bcfbbe45d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c8838967-6481-4acd-b59f-0be782c9a361) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.535 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c8838967-6481-4acd-b59f-0be782c9a361 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.536 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 738e99b4-b58e-4eff-b209-c4aa3748c994#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.550 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[773e86e5-f8c3-4f18-8c1e-e979bd784d0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.581 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[aaae698f-c3d8-42b4-9c04-120b7f730a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.584 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1c733b47-cc9c-4aeb-9676-7c2c055117e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.620 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5e38d9a3-8872-4e2a-8ea1-c8d0767ec383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.647 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5c132633-9cb3-47c4-b06e-0853f165dac3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap738e99b4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:be:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630018, 'reachable_time': 25083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296539, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.668 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c1f414-48cf-43b0-b58a-9f25c826f614]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630029, 'tstamp': 630029}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296540, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap738e99b4-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630032, 'tstamp': 630032}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296540, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.669 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.671 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:15 np0005539563 nova_compute[252253]: 2025-11-29 08:03:15.673 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.673 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap738e99b4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.673 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.673 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap738e99b4-b0, col_values=(('external_ids', {'iface-id': '2a1fcde6-d99a-4732-a125-d24eb08c8766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:15.673 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:03:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 74 op/s
Nov 29 03:03:16 np0005539563 nova_compute[252253]: 2025-11-29 08:03:16.342 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:03:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:03:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.153 252257 DEBUG oslo_concurrency.lockutils [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.154 252257 DEBUG oslo_concurrency.lockutils [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.154 252257 DEBUG nova.network.neutron [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.168 252257 DEBUG nova.compute.manager [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-unplugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.168 252257 DEBUG oslo_concurrency.lockutils [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.168 252257 DEBUG oslo_concurrency.lockutils [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.168 252257 DEBUG oslo_concurrency.lockutils [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 DEBUG nova.compute.manager [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-unplugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 WARNING nova.compute.manager [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received unexpected event network-vif-unplugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 DEBUG nova.compute.manager [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 DEBUG oslo_concurrency.lockutils [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 DEBUG oslo_concurrency.lockutils [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 DEBUG oslo_concurrency.lockutils [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.169 252257 DEBUG nova.compute.manager [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:17 np0005539563 nova_compute[252253]: 2025-11-29 08:03:17.170 252257 WARNING nova.compute.manager [req-a70433bb-a8fe-4553-8ebb-aebba6b960e8 req-f1a4a4a2-278b-49f7-b51e-6ead220dc351 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received unexpected event network-vif-plugged-c8838967-6481-4acd-b59f-0be782c9a361 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e3e955e0-2c5c-4356-8c39-b326c76710e1 does not exist
Nov 29 03:03:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b1e7b3e2-fcaa-4a3a-bd09-3bc0a71685d3 does not exist
Nov 29 03:03:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 056df6e7-bd37-4d0d-bd30-3ed19e347aed does not exist
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.557002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397557117, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1329, "num_deletes": 254, "total_data_size": 2153819, "memory_usage": 2194224, "flush_reason": "Manual Compaction"}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397578567, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2117411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33976, "largest_seqno": 35304, "table_properties": {"data_size": 2111085, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13942, "raw_average_key_size": 20, "raw_value_size": 2098245, "raw_average_value_size": 3090, "num_data_blocks": 154, "num_entries": 679, "num_filter_entries": 679, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403286, "oldest_key_time": 1764403286, "file_creation_time": 1764403397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 21575 microseconds, and 6016 cpu microseconds.
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.578630) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2117411 bytes OK
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.578657) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.581150) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.581170) EVENT_LOG_v1 {"time_micros": 1764403397581166, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.581188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2147934, prev total WAL file size 2147934, number of live WAL files 2.
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.581982) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2067KB)], [71(9232KB)]
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397582060, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11571098, "oldest_snapshot_seqno": -1}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6561 keys, 9585705 bytes, temperature: kUnknown
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397713957, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9585705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9542335, "index_size": 25850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 169762, "raw_average_key_size": 25, "raw_value_size": 9425008, "raw_average_value_size": 1436, "num_data_blocks": 1027, "num_entries": 6561, "num_filter_entries": 6561, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.714182) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9585705 bytes
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.723229) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.7 rd, 72.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 7090, records dropped: 529 output_compression: NoCompression
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.723262) EVENT_LOG_v1 {"time_micros": 1764403397723249, "job": 40, "event": "compaction_finished", "compaction_time_micros": 131960, "compaction_time_cpu_micros": 22982, "output_level": 6, "num_output_files": 1, "total_output_size": 9585705, "num_input_records": 7090, "num_output_records": 6561, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397723794, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403397725367, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.581839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.725406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.725411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.725413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.725415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:03:17.725416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 74 op/s
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.133244092 +0000 UTC m=+0.039570773 container create 70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:03:18 np0005539563 systemd[1]: Started libpod-conmon-70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29.scope.
Nov 29 03:03:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.206065135 +0000 UTC m=+0.112391866 container init 70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.117394783 +0000 UTC m=+0.023721494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.216060506 +0000 UTC m=+0.122387197 container start 70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.220475886 +0000 UTC m=+0.126802617 container attach 70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:18 np0005539563 flamboyant_engelbart[296698]: 167 167
Nov 29 03:03:18 np0005539563 systemd[1]: libpod-70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29.scope: Deactivated successfully.
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.225323487 +0000 UTC m=+0.131650188 container died 70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:03:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b233f14391f9cc9eff40b4c13119e2a3a0ff3f06b232685c3e092b2f1742c8ae-merged.mount: Deactivated successfully.
Nov 29 03:03:18 np0005539563 podman[296682]: 2025-11-29 08:03:18.265795964 +0000 UTC m=+0.172122665 container remove 70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_engelbart, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:18 np0005539563 systemd[1]: libpod-conmon-70f0d6e3a2544b211527f3a7166a656075173863ab979d50ac137424beaddf29.scope: Deactivated successfully.
Nov 29 03:03:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:18 np0005539563 podman[296720]: 2025-11-29 08:03:18.455043681 +0000 UTC m=+0.028672158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:18 np0005539563 podman[296720]: 2025-11-29 08:03:18.734705478 +0000 UTC m=+0.308333865 container create b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:03:18 np0005539563 systemd[1]: Started libpod-conmon-b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60.scope.
Nov 29 03:03:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:03:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb69c08f649bfa7ae18fdab080f664a25610a64794366840073b59446dfb140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb69c08f649bfa7ae18fdab080f664a25610a64794366840073b59446dfb140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb69c08f649bfa7ae18fdab080f664a25610a64794366840073b59446dfb140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb69c08f649bfa7ae18fdab080f664a25610a64794366840073b59446dfb140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb69c08f649bfa7ae18fdab080f664a25610a64794366840073b59446dfb140/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:19 np0005539563 podman[296720]: 2025-11-29 08:03:19.017935242 +0000 UTC m=+0.591563729 container init b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:03:19 np0005539563 podman[296720]: 2025-11-29 08:03:19.029509085 +0000 UTC m=+0.603137472 container start b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:03:19 np0005539563 podman[296720]: 2025-11-29 08:03:19.032497066 +0000 UTC m=+0.606125453 container attach b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:03:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:19 np0005539563 nova_compute[252253]: 2025-11-29 08:03:19.297 252257 INFO nova.network.neutron [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Port c8838967-6481-4acd-b59f-0be782c9a361 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 29 03:03:19 np0005539563 nova_compute[252253]: 2025-11-29 08:03:19.298 252257 DEBUG nova.network.neutron [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:03:19 np0005539563 nova_compute[252253]: 2025-11-29 08:03:19.361 252257 DEBUG oslo_concurrency.lockutils [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:03:19 np0005539563 nova_compute[252253]: 2025-11-29 08:03:19.431 252257 DEBUG oslo_concurrency.lockutils [None req-e610a7e3-f3eb-41d4-96ad-bf6a11dd186d a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "interface-0791bdff-16d7-4626-acea-1361fdb70652-c8838967-6481-4acd-b59f-0be782c9a361" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:19 np0005539563 adoring_chebyshev[296738]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:03:19 np0005539563 adoring_chebyshev[296738]: --> relative data size: 1.0
Nov 29 03:03:19 np0005539563 adoring_chebyshev[296738]: --> All data devices are unavailable
Nov 29 03:03:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 75 op/s
Nov 29 03:03:19 np0005539563 systemd[1]: libpod-b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60.scope: Deactivated successfully.
Nov 29 03:03:19 np0005539563 podman[296720]: 2025-11-29 08:03:19.93543631 +0000 UTC m=+1.509064728 container died b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:03:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-abb69c08f649bfa7ae18fdab080f664a25610a64794366840073b59446dfb140-merged.mount: Deactivated successfully.
Nov 29 03:03:20 np0005539563 podman[296720]: 2025-11-29 08:03:19.999668881 +0000 UTC m=+1.573297278 container remove b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:03:20 np0005539563 systemd[1]: libpod-conmon-b2158442cc5758c08ea899b3d5535b2f68bb58a304a4b4689d821d4e2935ea60.scope: Deactivated successfully.
Nov 29 03:03:20 np0005539563 nova_compute[252253]: 2025-11-29 08:03:20.332 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:20.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.650763261 +0000 UTC m=+0.047516429 container create 35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.626272207 +0000 UTC m=+0.023025415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:20 np0005539563 systemd[1]: Started libpod-conmon-35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6.scope.
Nov 29 03:03:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.795126783 +0000 UTC m=+0.191879991 container init 35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.801630048 +0000 UTC m=+0.198383216 container start 35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mestorf, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:03:20 np0005539563 mystifying_mestorf[296925]: 167 167
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.805599756 +0000 UTC m=+0.202352934 container attach 35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:03:20 np0005539563 systemd[1]: libpod-35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6.scope: Deactivated successfully.
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.807130398 +0000 UTC m=+0.203883556 container died 35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:03:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8ce1a819fdfd2020a61a0db09038e2143336f8c82c304c123986a32cb1e5208d-merged.mount: Deactivated successfully.
Nov 29 03:03:20 np0005539563 podman[296909]: 2025-11-29 08:03:20.852057765 +0000 UTC m=+0.248810933 container remove 35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mestorf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:03:20 np0005539563 systemd[1]: libpod-conmon-35f66cf568ea06174c9241a46134ef1feb798b37bac15d7d4e7206af4a010ab6.scope: Deactivated successfully.
Nov 29 03:03:21 np0005539563 podman[296949]: 2025-11-29 08:03:21.066958437 +0000 UTC m=+0.049455951 container create 0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:03:21 np0005539563 systemd[1]: Started libpod-conmon-0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865.scope.
Nov 29 03:03:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:03:21 np0005539563 podman[296949]: 2025-11-29 08:03:21.046959036 +0000 UTC m=+0.029456520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb93b74bb10db18ac131468825de7c0768a181d0f769356d6753acd412e6433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb93b74bb10db18ac131468825de7c0768a181d0f769356d6753acd412e6433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb93b74bb10db18ac131468825de7c0768a181d0f769356d6753acd412e6433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb93b74bb10db18ac131468825de7c0768a181d0f769356d6753acd412e6433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:21 np0005539563 podman[296949]: 2025-11-29 08:03:21.16600072 +0000 UTC m=+0.148498234 container init 0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:03:21 np0005539563 podman[296949]: 2025-11-29 08:03:21.174293645 +0000 UTC m=+0.156791129 container start 0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:03:21 np0005539563 podman[296949]: 2025-11-29 08:03:21.178437468 +0000 UTC m=+0.160934952 container attach 0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:03:21 np0005539563 nova_compute[252253]: 2025-11-29 08:03:21.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 293 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 78 op/s
Nov 29 03:03:21 np0005539563 nova_compute[252253]: 2025-11-29 08:03:21.954 252257 DEBUG nova.compute.manager [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:21 np0005539563 nova_compute[252253]: 2025-11-29 08:03:21.955 252257 DEBUG nova.compute.manager [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing instance network info cache due to event network-changed-65f4bda5-b4d3-4fea-b8e2-3856b51660b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:03:21 np0005539563 nova_compute[252253]: 2025-11-29 08:03:21.955 252257 DEBUG oslo_concurrency.lockutils [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:21 np0005539563 nova_compute[252253]: 2025-11-29 08:03:21.956 252257 DEBUG oslo_concurrency.lockutils [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:21 np0005539563 nova_compute[252253]: 2025-11-29 08:03:21.956 252257 DEBUG nova.network.neutron [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Refreshing network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]: {
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:    "0": [
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:        {
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "devices": [
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "/dev/loop3"
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            ],
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "lv_name": "ceph_lv0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "lv_size": "7511998464",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "name": "ceph_lv0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "tags": {
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.cluster_name": "ceph",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.crush_device_class": "",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.encrypted": "0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.osd_id": "0",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.type": "block",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:                "ceph.vdo": "0"
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            },
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "type": "block",
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:            "vg_name": "ceph_vg0"
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:        }
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]:    ]
Nov 29 03:03:21 np0005539563 nervous_lovelace[296965]: }
Nov 29 03:03:22 np0005539563 systemd[1]: libpod-0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865.scope: Deactivated successfully.
Nov 29 03:03:22 np0005539563 podman[296949]: 2025-11-29 08:03:22.009998057 +0000 UTC m=+0.992495581 container died 0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:03:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ebb93b74bb10db18ac131468825de7c0768a181d0f769356d6753acd412e6433-merged.mount: Deactivated successfully.
Nov 29 03:03:22 np0005539563 podman[296949]: 2025-11-29 08:03:22.11898336 +0000 UTC m=+1.101480854 container remove 0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:03:22 np0005539563 systemd[1]: libpod-conmon-0f7dcc9028aa04342d9a35d3103dd3a3da8483fe25cc223aacf2c063cb62e865.scope: Deactivated successfully.
Nov 29 03:03:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:22 np0005539563 podman[297128]: 2025-11-29 08:03:22.93467992 +0000 UTC m=+0.057643732 container create d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:03:22 np0005539563 systemd[1]: Started libpod-conmon-d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446.scope.
Nov 29 03:03:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:03:23 np0005539563 podman[297128]: 2025-11-29 08:03:22.912455418 +0000 UTC m=+0.035419320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:23 np0005539563 podman[297128]: 2025-11-29 08:03:23.02175692 +0000 UTC m=+0.144720742 container init d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:03:23 np0005539563 podman[297128]: 2025-11-29 08:03:23.03358084 +0000 UTC m=+0.156544632 container start d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:03:23 np0005539563 podman[297128]: 2025-11-29 08:03:23.036992232 +0000 UTC m=+0.159956024 container attach d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:03:23 np0005539563 heuristic_carver[297144]: 167 167
Nov 29 03:03:23 np0005539563 systemd[1]: libpod-d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446.scope: Deactivated successfully.
Nov 29 03:03:23 np0005539563 podman[297128]: 2025-11-29 08:03:23.041409963 +0000 UTC m=+0.164373775 container died d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b8215e1aee31b03561703a227809c0e3eaf3c4c14e19680580db471a4a3280db-merged.mount: Deactivated successfully.
Nov 29 03:03:23 np0005539563 podman[297128]: 2025-11-29 08:03:23.090131722 +0000 UTC m=+0.213095534 container remove d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:03:23 np0005539563 systemd[1]: libpod-conmon-d3e6198defefe280c9da3d89ad32dd0426429fefc3d2cc044c48c0a2e328a446.scope: Deactivated successfully.
Nov 29 03:03:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00534155877768472 of space, bias 1.0, pg target 1.602467633305416 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29601366933044854 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:03:23 np0005539563 podman[297169]: 2025-11-29 08:03:23.320423141 +0000 UTC m=+0.060887611 container create 96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:03:23 np0005539563 systemd[1]: Started libpod-conmon-96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973.scope.
Nov 29 03:03:23 np0005539563 podman[297169]: 2025-11-29 08:03:23.294678764 +0000 UTC m=+0.035143214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e901c93a3e92341a2e72b27a3368fb195b7e235d73abb978c3d6df8e7ae7651/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e901c93a3e92341a2e72b27a3368fb195b7e235d73abb978c3d6df8e7ae7651/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e901c93a3e92341a2e72b27a3368fb195b7e235d73abb978c3d6df8e7ae7651/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e901c93a3e92341a2e72b27a3368fb195b7e235d73abb978c3d6df8e7ae7651/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:23 np0005539563 podman[297169]: 2025-11-29 08:03:23.460418774 +0000 UTC m=+0.200883274 container init 96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:03:23 np0005539563 podman[297169]: 2025-11-29 08:03:23.47207239 +0000 UTC m=+0.212536850 container start 96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:03:23 np0005539563 podman[297169]: 2025-11-29 08:03:23.477930089 +0000 UTC m=+0.218394549 container attach 96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:03:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 288 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 79 op/s
Nov 29 03:03:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]: {
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:        "osd_id": 0,
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:        "type": "bluestore"
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]:    }
Nov 29 03:03:24 np0005539563 compassionate_cray[297185]: }
Nov 29 03:03:24 np0005539563 systemd[1]: libpod-96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973.scope: Deactivated successfully.
Nov 29 03:03:24 np0005539563 systemd[1]: libpod-96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973.scope: Consumed 1.037s CPU time.
Nov 29 03:03:24 np0005539563 podman[297169]: 2025-11-29 08:03:24.504713238 +0000 UTC m=+1.245177688 container died 96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:03:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8e901c93a3e92341a2e72b27a3368fb195b7e235d73abb978c3d6df8e7ae7651-merged.mount: Deactivated successfully.
Nov 29 03:03:24 np0005539563 podman[297169]: 2025-11-29 08:03:24.885386222 +0000 UTC m=+1.625850652 container remove 96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cray, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:03:24 np0005539563 systemd[1]: libpod-conmon-96f25fd1e56380ad57a33f676d8fa80cb0efa5eb0610e89cda005f94be269973.scope: Deactivated successfully.
Nov 29 03:03:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:03:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:03:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3b97014f-64a1-4062-8d01-d1f66eef7afa does not exist
Nov 29 03:03:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ee8a12b5-827a-4fdb-98c9-a01cc75bc214 does not exist
Nov 29 03:03:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1b02a39a-198a-4025-8590-ed5dcbe22ce0 does not exist
Nov 29 03:03:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:25.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:25 np0005539563 nova_compute[252253]: 2025-11-29 08:03:25.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:03:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 24 KiB/s wr, 101 op/s
Nov 29 03:03:26 np0005539563 nova_compute[252253]: 2025-11-29 08:03:26.346 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:26.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:26 np0005539563 nova_compute[252253]: 2025-11-29 08:03:26.462 252257 DEBUG nova.network.neutron [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated VIF entry in instance network info cache for port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:03:26 np0005539563 nova_compute[252253]: 2025-11-29 08:03:26.463 252257 DEBUG nova.network.neutron [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:26 np0005539563 nova_compute[252253]: 2025-11-29 08:03:26.501 252257 DEBUG oslo_concurrency.lockutils [req-1ca7e917-06ad-4a90-b421-70343fae9a5f req-0f794b74-d460-4303-935c-c5eca1f705f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:27.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 2.6 KiB/s wr, 31 op/s
Nov 29 03:03:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:28.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:29.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 6.2 KiB/s wr, 31 op/s
Nov 29 03:03:30 np0005539563 nova_compute[252253]: 2025-11-29 08:03:30.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:30.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:31 np0005539563 nova_compute[252253]: 2025-11-29 08:03:31.368 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 5.4 KiB/s wr, 30 op/s
Nov 29 03:03:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:32.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Nov 29 03:03:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120497893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:34.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:35 np0005539563 nova_compute[252253]: 2025-11-29 08:03:35.347 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Nov 29 03:03:36 np0005539563 nova_compute[252253]: 2025-11-29 08:03:36.371 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:36.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:36 np0005539563 nova_compute[252253]: 2025-11-29 08:03:36.751 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4271457649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:37.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3058818878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:37 np0005539563 nova_compute[252253]: 2025-11-29 08:03:37.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Nov 29 03:03:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:38.347 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:38.348 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:03:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:38.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:38 np0005539563 podman[297329]: 2025-11-29 08:03:38.541214638 +0000 UTC m=+0.081369136 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:03:38 np0005539563 podman[297328]: 2025-11-29 08:03:38.563676866 +0000 UTC m=+0.106217378 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:03:38 np0005539563 podman[297330]: 2025-11-29 08:03:38.571322903 +0000 UTC m=+0.108387337 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.711 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:03:38 np0005539563 nova_compute[252253]: 2025-11-29 08:03:38.711 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661788479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.197 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.296 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.297 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.508 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.511 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4383MB free_disk=20.897045135498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.511 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.512 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.606 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 0791bdff-16d7-4626-acea-1361fdb70652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.606 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.607 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:03:39 np0005539563 nova_compute[252253]: 2025-11-29 08:03:39.681 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 272 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 650 KiB/s wr, 2 op/s
Nov 29 03:03:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289387352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.110 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.120 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.144 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.147 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.148 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.350 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:40Z|00245|binding|INFO|Releasing lport 2a1fcde6-d99a-4732-a125-d24eb08c8766 from this chassis (sb_readonly=0)
Nov 29 03:03:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:40 np0005539563 nova_compute[252253]: 2025-11-29 08:03:40.483 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:41 np0005539563 nova_compute[252253]: 2025-11-29 08:03:41.374 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 293 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 03:03:42 np0005539563 nova_compute[252253]: 2025-11-29 08:03:42.148 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:42.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:42 np0005539563 nova_compute[252253]: 2025-11-29 08:03:42.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:42 np0005539563 nova_compute[252253]: 2025-11-29 08:03:42.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:03:42 np0005539563 nova_compute[252253]: 2025-11-29 08:03:42.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:03:43 np0005539563 nova_compute[252253]: 2025-11-29 08:03:43.040 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:43 np0005539563 nova_compute[252253]: 2025-11-29 08:03:43.041 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:43 np0005539563 nova_compute[252253]: 2025-11-29 08:03:43.041 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:03:43 np0005539563 nova_compute[252253]: 2025-11-29 08:03:43.042 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:43.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:43.350 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 258 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 03:03:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:44.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.116 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [{"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.130 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-0791bdff-16d7-4626-acea-1361fdb70652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.131 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:03:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:45.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.354 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.484 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.485 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.485 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.485 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.485 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.487 252257 INFO nova.compute.manager [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Terminating instance#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.488 252257 DEBUG nova.compute.manager [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:03:45 np0005539563 kernel: tap65f4bda5-b4 (unregistering): left promiscuous mode
Nov 29 03:03:45 np0005539563 NetworkManager[48981]: <info>  [1764403425.5425] device (tap65f4bda5-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:03:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:45Z|00246|binding|INFO|Releasing lport 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 from this chassis (sb_readonly=0)
Nov 29 03:03:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:45Z|00247|binding|INFO|Setting lport 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 down in Southbound
Nov 29 03:03:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:03:45Z|00248|binding|INFO|Removing iface tap65f4bda5-b4 ovn-installed in OVS
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.551 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.553 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.559 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:eb:91 10.100.0.14'], port_security=['fa:16:3e:26:eb:91 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0791bdff-16d7-4626-acea-1361fdb70652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-738e99b4-b58e-4eff-b209-c4aa3748c994', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f69605de164b4c27ae715521263676fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bf7ccb70-ed00-453b-b589-5d95da7defbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e918c3-f77d-4277-9e74-f8ddcf4ab8e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=65f4bda5-b4d3-4fea-b8e2-3856b51660b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.560 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 65f4bda5-b4d3-4fea-b8e2-3856b51660b5 in datapath 738e99b4-b58e-4eff-b209-c4aa3748c994 unbound from our chassis#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.561 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 738e99b4-b58e-4eff-b209-c4aa3748c994, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.562 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[69d4cea8-7107-48ac-ad00-430468f3fe00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.563 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 namespace which is not needed anymore#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.570 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 29 03:03:45 np0005539563 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000042.scope: Consumed 17.588s CPU time.
Nov 29 03:03:45 np0005539563 systemd-machined[213024]: Machine qemu-28-instance-00000042 terminated.
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.713 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.719 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.725 252257 INFO nova.virt.libvirt.driver [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Instance destroyed successfully.#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.725 252257 DEBUG nova.objects.instance [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lazy-loading 'resources' on Instance uuid 0791bdff-16d7-4626-acea-1361fdb70652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:45 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [NOTICE]   (295877) : haproxy version is 2.8.14-c23fe91
Nov 29 03:03:45 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [NOTICE]   (295877) : path to executable is /usr/sbin/haproxy
Nov 29 03:03:45 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [WARNING]  (295877) : Exiting Master process...
Nov 29 03:03:45 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [WARNING]  (295877) : Exiting Master process...
Nov 29 03:03:45 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [ALERT]    (295877) : Current worker (295888) exited with code 143 (Terminated)
Nov 29 03:03:45 np0005539563 neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994[295839]: [WARNING]  (295877) : All workers exited. Exiting... (0)
Nov 29 03:03:45 np0005539563 systemd[1]: libpod-9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d.scope: Deactivated successfully.
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.738 252257 DEBUG nova.virt.libvirt.vif [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1586373608',display_name='tempest-tempest.common.compute-instance-1586373608',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1586373608',id=66,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0n9ZXn8H6JY1sjbCx/j99/wL1zxZy5QsBH0AsdRjLOqctx/oeY65gmDs4R5NwjnXMvJp27i+F5qDtP4SKtjrI8QpPaqSfAsVXkzWb4UIDMJE826KgCbMST4VlNYE+GQA==',key_name='tempest-keypair-1734268386',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f69605de164b4c27ae715521263676fe',ramdisk_id='',reservation_id='r-m8s06nf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-991196152',owner_user_name='tempest-AttachInterfacesTestJSON-991196152-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a814d0c4600e45d9a1fac7bac5b7e69e',uuid=0791bdff-16d7-4626-acea-1361fdb70652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.739 252257 DEBUG nova.network.os_vif_util [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converting VIF {"id": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "address": "fa:16:3e:26:eb:91", "network": {"id": "738e99b4-b58e-4eff-b209-c4aa3748c994", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1711865186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f69605de164b4c27ae715521263676fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65f4bda5-b4", "ovs_interfaceid": "65f4bda5-b4d3-4fea-b8e2-3856b51660b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.739 252257 DEBUG nova.network.os_vif_util [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.740 252257 DEBUG os_vif [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:03:45 np0005539563 podman[297517]: 2025-11-29 08:03:45.741068627 +0000 UTC m=+0.049630375 container died 9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.741 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.742 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f4bda5-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.746 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.746 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.748 252257 INFO os_vif [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=65f4bda5-b4d3-4fea-b8e2-3856b51660b5,network=Network(738e99b4-b58e-4eff-b209-c4aa3748c994),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65f4bda5-b4')#033[00m
Nov 29 03:03:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:03:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b093b3dbe64b60824974c5be9dbd38c894fae7803fb60c6f0bea8c2f41f90c31-merged.mount: Deactivated successfully.
Nov 29 03:03:45 np0005539563 podman[297517]: 2025-11-29 08:03:45.779720313 +0000 UTC m=+0.088282061 container cleanup 9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:03:45 np0005539563 systemd[1]: libpod-conmon-9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d.scope: Deactivated successfully.
Nov 29 03:03:45 np0005539563 podman[297573]: 2025-11-29 08:03:45.845820194 +0000 UTC m=+0.042404699 container remove 9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.852 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b4dd1b-4bb5-4acb-ac5b-a17d112dc1bd]: (4, ('Sat Nov 29 08:03:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d)\n9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d\nSat Nov 29 08:03:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 (9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d)\n9c63360cef3e84f510eeee3a8fed5256201cd7ba2c91f64f199c00dd8a45dc4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.854 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8f3614-00fb-4a65-94e7-a7186b4bf7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.855 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap738e99b4-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:03:45 np0005539563 kernel: tap738e99b4-b0: left promiscuous mode
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.859 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.865 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[037b0e29-fbe1-4ef1-9b91-789ad39f5360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.873 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.884 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d271e9de-9011-4371-9200-47c91d087f03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.886 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13717461-7594-47aa-b675-797ac9015182]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.908 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f813458f-dd1f-48a6-afa3-c891958e1fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630011, 'reachable_time': 28039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297588, 'error': None, 'target': 'ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.913 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-738e99b4-b58e-4eff-b209-c4aa3748c994 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:03:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:03:45.913 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[36f5cdfd-db62-4170-bad1-6c4e75aea019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:03:45 np0005539563 systemd[1]: run-netns-ovnmeta\x2d738e99b4\x2db58e\x2d4eff\x2db209\x2dc4aa3748c994.mount: Deactivated successfully.
Nov 29 03:03:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 213 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.979 252257 DEBUG nova.compute.manager [req-7e24e4b9-96aa-4d4e-8c6e-52501cd9c7a7 req-1d33d58b-7a73-4024-ab8a-7a24a8020549 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-unplugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.980 252257 DEBUG oslo_concurrency.lockutils [req-7e24e4b9-96aa-4d4e-8c6e-52501cd9c7a7 req-1d33d58b-7a73-4024-ab8a-7a24a8020549 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.981 252257 DEBUG oslo_concurrency.lockutils [req-7e24e4b9-96aa-4d4e-8c6e-52501cd9c7a7 req-1d33d58b-7a73-4024-ab8a-7a24a8020549 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.981 252257 DEBUG oslo_concurrency.lockutils [req-7e24e4b9-96aa-4d4e-8c6e-52501cd9c7a7 req-1d33d58b-7a73-4024-ab8a-7a24a8020549 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.982 252257 DEBUG nova.compute.manager [req-7e24e4b9-96aa-4d4e-8c6e-52501cd9c7a7 req-1d33d58b-7a73-4024-ab8a-7a24a8020549 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-unplugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:45 np0005539563 nova_compute[252253]: 2025-11-29 08:03:45.982 252257 DEBUG nova.compute.manager [req-7e24e4b9-96aa-4d4e-8c6e-52501cd9c7a7 req-1d33d58b-7a73-4024-ab8a-7a24a8020549 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-unplugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.351 252257 INFO nova.virt.libvirt.driver [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Deleting instance files /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652_del#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.352 252257 INFO nova.virt.libvirt.driver [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Deletion of /var/lib/nova/instances/0791bdff-16d7-4626-acea-1361fdb70652_del complete#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.375 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.421 252257 INFO nova.compute.manager [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.422 252257 DEBUG oslo.service.loopingcall [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.422 252257 DEBUG nova.compute.manager [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.422 252257 DEBUG nova.network.neutron [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:03:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:46.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:46 np0005539563 nova_compute[252253]: 2025-11-29 08:03:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.164 252257 DEBUG nova.network.neutron [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.195 252257 INFO nova.compute.manager [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Took 0.77 seconds to deallocate network for instance.#033[00m
Nov 29 03:03:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.274 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.274 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.317 252257 DEBUG nova.compute.manager [req-8535972f-6029-4f94-8ff1-c3086c228056 req-7709989a-8789-4e79-86e0-2faa24e37b55 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-deleted-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.373 252257 DEBUG oslo_concurrency.processutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:03:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335680283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.794 252257 DEBUG oslo_concurrency.processutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.800 252257 DEBUG nova.compute.provider_tree [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.825 252257 DEBUG nova.scheduler.client.report [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.848 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.884 252257 INFO nova.scheduler.client.report [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Deleted allocations for instance 0791bdff-16d7-4626-acea-1361fdb70652#033[00m
Nov 29 03:03:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 213 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 03:03:47 np0005539563 nova_compute[252253]: 2025-11-29 08:03:47.996 252257 DEBUG oslo_concurrency.lockutils [None req-8a2c5a97-4089-44d6-9b57-cd389d651c51 a814d0c4600e45d9a1fac7bac5b7e69e f69605de164b4c27ae715521263676fe - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:48 np0005539563 nova_compute[252253]: 2025-11-29 08:03:48.110 252257 DEBUG nova.compute.manager [req-a9e5e992-9c61-4a32-ba85-ade5f2c7a813 req-f1245828-d566-4873-a972-faf14a91aca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:03:48 np0005539563 nova_compute[252253]: 2025-11-29 08:03:48.110 252257 DEBUG oslo_concurrency.lockutils [req-a9e5e992-9c61-4a32-ba85-ade5f2c7a813 req-f1245828-d566-4873-a972-faf14a91aca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0791bdff-16d7-4626-acea-1361fdb70652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:48 np0005539563 nova_compute[252253]: 2025-11-29 08:03:48.110 252257 DEBUG oslo_concurrency.lockutils [req-a9e5e992-9c61-4a32-ba85-ade5f2c7a813 req-f1245828-d566-4873-a972-faf14a91aca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:48 np0005539563 nova_compute[252253]: 2025-11-29 08:03:48.111 252257 DEBUG oslo_concurrency.lockutils [req-a9e5e992-9c61-4a32-ba85-ade5f2c7a813 req-f1245828-d566-4873-a972-faf14a91aca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0791bdff-16d7-4626-acea-1361fdb70652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:48 np0005539563 nova_compute[252253]: 2025-11-29 08:03:48.111 252257 DEBUG nova.compute.manager [req-a9e5e992-9c61-4a32-ba85-ade5f2c7a813 req-f1245828-d566-4873-a972-faf14a91aca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] No waiting events found dispatching network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:03:48 np0005539563 nova_compute[252253]: 2025-11-29 08:03:48.111 252257 WARNING nova.compute.manager [req-a9e5e992-9c61-4a32-ba85-ade5f2c7a813 req-f1245828-d566-4873-a972-faf14a91aca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Received unexpected event network-vif-plugged-65f4bda5-b4d3-4fea-b8e2-3856b51660b5 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:03:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:48.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 187 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 03:03:50 np0005539563 nova_compute[252253]: 2025-11-29 08:03:50.318 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:50.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:50 np0005539563 nova_compute[252253]: 2025-11-29 08:03:50.745 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:51 np0005539563 nova_compute[252253]: 2025-11-29 08:03:51.377 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 134 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Nov 29 03:03:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:52.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 109 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 16 KiB/s wr, 120 op/s
Nov 29 03:03:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:55 np0005539563 nova_compute[252253]: 2025-11-29 08:03:55.086 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:55.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:55 np0005539563 nova_compute[252253]: 2025-11-29 08:03:55.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:55 np0005539563 nova_compute[252253]: 2025-11-29 08:03:55.772 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 16 KiB/s wr, 134 op/s
Nov 29 03:03:56 np0005539563 nova_compute[252253]: 2025-11-29 08:03:56.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:56 np0005539563 nova_compute[252253]: 2025-11-29 08:03:56.414 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 KiB/s wr, 106 op/s
Nov 29 03:03:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:03:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:58.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:03:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:03:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:59.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 KiB/s wr, 106 op/s
Nov 29 03:04:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:00.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:00 np0005539563 nova_compute[252253]: 2025-11-29 08:04:00.724 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403425.7224197, 0791bdff-16d7-4626-acea-1361fdb70652 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:00 np0005539563 nova_compute[252253]: 2025-11-29 08:04:00.725 252257 INFO nova.compute.manager [-] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:04:00 np0005539563 nova_compute[252253]: 2025-11-29 08:04:00.753 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:00 np0005539563 nova_compute[252253]: 2025-11-29 08:04:00.755 252257 DEBUG nova.compute.manager [None req-d0c690b5-fbe7-4375-bc36-423c182a127a - - - - - -] [instance: 0791bdff-16d7-4626-acea-1361fdb70652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:01 np0005539563 nova_compute[252253]: 2025-11-29 08:04:01.416 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 121 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 771 KiB/s rd, 1.2 MiB/s wr, 84 op/s
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.145 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.146 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.171 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.295 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.295 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.303 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.304 252257 INFO nova.compute.claims [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.482 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:02.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882508826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.915 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.923 252257 DEBUG nova.compute.provider_tree [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.944 252257 DEBUG nova.scheduler.client.report [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.984 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:02 np0005539563 nova_compute[252253]: 2025-11-29 08:04:02.986 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.032 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.033 252257 DEBUG nova.network.neutron [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.057 252257 INFO nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.078 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.165 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.167 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.167 252257 INFO nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Creating image(s)#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.190 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.215 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.244 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.247 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.310 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.311 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.312 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.312 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.345 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.349 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.412 252257 DEBUG nova.policy [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7d64f2611204481b2cb7f9b3178a0cf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e5224c4ebb92449a962d40c6cf1dd719', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.702 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.779 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] resizing rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.880 252257 DEBUG nova.objects.instance [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.896 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.897 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Ensure instance console log exists: /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.897 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.898 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:03 np0005539563 nova_compute[252253]: 2025-11-29 08:04:03.898 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 03:04:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:04.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:04.910 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:04.910 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:04.910 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:05.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:05 np0005539563 nova_compute[252253]: 2025-11-29 08:04:05.489 252257 DEBUG nova.network.neutron [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Successfully created port: 39c103da-6cc8-478f-8d8e-ea4f6da2c413 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:04:05 np0005539563 nova_compute[252253]: 2025-11-29 08:04:05.757 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 176 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.5 MiB/s wr, 63 op/s
Nov 29 03:04:06 np0005539563 nova_compute[252253]: 2025-11-29 08:04:06.418 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:06.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.000 252257 DEBUG nova.network.neutron [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Successfully updated port: 39c103da-6cc8-478f-8d8e-ea4f6da2c413 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.022 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.022 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquired lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.023 252257 DEBUG nova.network.neutron [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.223 252257 DEBUG nova.network.neutron [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:07.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.657 252257 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-changed-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.657 252257 DEBUG nova.compute.manager [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Refreshing instance network info cache due to event network-changed-39c103da-6cc8-478f-8d8e-ea4f6da2c413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:04:07 np0005539563 nova_compute[252253]: 2025-11-29 08:04:07.658 252257 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 176 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.5 MiB/s wr, 45 op/s
Nov 29 03:04:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:08.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.624 252257 DEBUG nova.network.neutron [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Updating instance_info_cache with network_info: [{"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.649 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Releasing lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.650 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Instance network_info: |[{"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.652 252257 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.652 252257 DEBUG nova.network.neutron [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Refreshing network info cache for port 39c103da-6cc8-478f-8d8e-ea4f6da2c413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.658 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Start _get_guest_xml network_info=[{"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.666 252257 WARNING nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.672 252257 DEBUG nova.virt.libvirt.host [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.673 252257 DEBUG nova.virt.libvirt.host [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.683 252257 DEBUG nova.virt.libvirt.host [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.684 252257 DEBUG nova.virt.libvirt.host [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.685 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.686 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.686 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.687 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.687 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.687 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.687 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.688 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.688 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.689 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.689 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.689 252257 DEBUG nova.virt.hardware [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:08 np0005539563 nova_compute[252253]: 2025-11-29 08:04:08.697 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852531384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.115 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.144 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.147 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317147062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.558 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.560 252257 DEBUG nova.virt.libvirt.vif [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-972573030',display_name='tempest-ServersTestJSON-server-972573030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-972573030',id=73,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIeQr2RMBCZCG/FSkINur3G6UnUjT6sGyaQjyLzLQzfwXgSxxIoOvE3NvP0MBL/XwYA0rDaRkphKWfa0gpWL9IMW9miYfKwVovn1Ph0V6xboSY9kCp6H09VSljw0wPwNcA==',key_name='tempest-keypair-404651123',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5224c4ebb92449a962d40c6cf1dd719',ramdisk_id='',reservation_id='r-tility7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-86729619',owner_user_name='tempest-ServersTestJSON-86729619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7d64f2611204481b2cb7f9b3178a0cf',uuid=1e8a4477-aa5b-4c5c-96ac-181e81b78f1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.560 252257 DEBUG nova.network.os_vif_util [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Converting VIF {"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:09 np0005539563 podman[297926]: 2025-11-29 08:04:09.56264088 +0000 UTC m=+0.099912379 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.562 252257 DEBUG nova.network.os_vif_util [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.563 252257 DEBUG nova.objects.instance [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:09 np0005539563 podman[297928]: 2025-11-29 08:04:09.570564305 +0000 UTC m=+0.109714754 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:04:09 np0005539563 podman[297927]: 2025-11-29 08:04:09.570915894 +0000 UTC m=+0.110799753 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.589 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <uuid>1e8a4477-aa5b-4c5c-96ac-181e81b78f1f</uuid>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <name>instance-00000049</name>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersTestJSON-server-972573030</nova:name>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:04:08</nova:creationTime>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:user uuid="b7d64f2611204481b2cb7f9b3178a0cf">tempest-ServersTestJSON-86729619-project-member</nova:user>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:project uuid="e5224c4ebb92449a962d40c6cf1dd719">tempest-ServersTestJSON-86729619</nova:project>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <nova:port uuid="39c103da-6cc8-478f-8d8e-ea4f6da2c413">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <entry name="serial">1e8a4477-aa5b-4c5c-96ac-181e81b78f1f</entry>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <entry name="uuid">1e8a4477-aa5b-4c5c-96ac-181e81b78f1f</entry>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk.config">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:51:46:33"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <target dev="tap39c103da-6c"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/console.log" append="off"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:04:09 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:04:09 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:04:09 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:04:09 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.589 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Preparing to wait for external event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.589 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.590 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.590 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.590 252257 DEBUG nova.virt.libvirt.vif [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-972573030',display_name='tempest-ServersTestJSON-server-972573030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-972573030',id=73,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIeQr2RMBCZCG/FSkINur3G6UnUjT6sGyaQjyLzLQzfwXgSxxIoOvE3NvP0MBL/XwYA0rDaRkphKWfa0gpWL9IMW9miYfKwVovn1Ph0V6xboSY9kCp6H09VSljw0wPwNcA==',key_name='tempest-keypair-404651123',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e5224c4ebb92449a962d40c6cf1dd719',ramdisk_id='',reservation_id='r-tility7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-86729619',owner_user_name='tempest-ServersTestJSON-86729619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7d64f2611204481b2cb7f9b3178a0cf',uuid=1e8a4477-aa5b-4c5c-96ac-181e81b78f1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.591 252257 DEBUG nova.network.os_vif_util [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Converting VIF {"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.591 252257 DEBUG nova.network.os_vif_util [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.592 252257 DEBUG os_vif [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.592 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.592 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.593 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.597 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39c103da-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.597 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39c103da-6c, col_values=(('external_ids', {'iface-id': '39c103da-6cc8-478f-8d8e-ea4f6da2c413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:46:33', 'vm-uuid': '1e8a4477-aa5b-4c5c-96ac-181e81b78f1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.599 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:09 np0005539563 NetworkManager[48981]: <info>  [1764403449.6004] manager: (tap39c103da-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.601 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.606 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.607 252257 INFO os_vif [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c')#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.673 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.673 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.673 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] No VIF found with MAC fa:16:3e:51:46:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.674 252257 INFO nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Using config drive#033[00m
Nov 29 03:04:09 np0005539563 nova_compute[252253]: 2025-11-29 08:04:09.792 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 180 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 03:04:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:10.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.632 252257 INFO nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Creating config drive at /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/disk.config#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.639 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5y0cgkib execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.668 252257 DEBUG nova.network.neutron [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Updated VIF entry in instance network info cache for port 39c103da-6cc8-478f-8d8e-ea4f6da2c413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.670 252257 DEBUG nova.network.neutron [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Updating instance_info_cache with network_info: [{"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.691 252257 DEBUG oslo_concurrency.lockutils [req-e5f9e0eb-0719-48cb-a65f-5ff6a87bb8f3 req-05bb24b3-49ca-4046-b5a8-088a279320f9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.780 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5y0cgkib" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.822 252257 DEBUG nova.storage.rbd_utils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] rbd image 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:10 np0005539563 nova_compute[252253]: 2025-11-29 08:04:10.826 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/disk.config 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.420 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.735 252257 DEBUG oslo_concurrency.processutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/disk.config 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.735 252257 INFO nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Deleting local config drive /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:11 np0005539563 kernel: tap39c103da-6c: entered promiscuous mode
Nov 29 03:04:11 np0005539563 NetworkManager[48981]: <info>  [1764403451.7946] manager: (tap39c103da-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 03:04:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:11Z|00249|binding|INFO|Claiming lport 39c103da-6cc8-478f-8d8e-ea4f6da2c413 for this chassis.
Nov 29 03:04:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:11Z|00250|binding|INFO|39c103da-6cc8-478f-8d8e-ea4f6da2c413: Claiming fa:16:3e:51:46:33 10.100.0.8
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.796 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.809 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:46:33 10.100.0.8'], port_security=['fa:16:3e:51:46:33 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1e8a4477-aa5b-4c5c-96ac-181e81b78f1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e70117d-5c88-4604-a344-02d8359c38e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5224c4ebb92449a962d40c6cf1dd719', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ae899cc-7f0c-46f4-b333-b6f85e22811b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49ced9cd-b6fd-46a6-91db-462fe99f300b, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39c103da-6cc8-478f-8d8e-ea4f6da2c413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.810 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39c103da-6cc8-478f-8d8e-ea4f6da2c413 in datapath 7e70117d-5c88-4604-a344-02d8359c38e8 bound to our chassis#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.812 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e70117d-5c88-4604-a344-02d8359c38e8#033[00m
Nov 29 03:04:11 np0005539563 systemd-udevd[298064]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.824 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6b298f-4c48-4fb9-a115-a13a9f3d8fc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.824 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e70117d-51 in ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.826 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e70117d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.826 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[da8ce475-71bc-4863-a097-b9a986877298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.827 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[be3b8fcc-f02e-457a-9fdb-ede2928a980f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 systemd-machined[213024]: New machine qemu-29-instance-00000049.
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.838 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[1f815d63-be23-4cd1-9e16-cf5de247e317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 NetworkManager[48981]: <info>  [1764403451.8411] device (tap39c103da-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:11 np0005539563 NetworkManager[48981]: <info>  [1764403451.8418] device (tap39c103da-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:11 np0005539563 systemd[1]: Started Virtual Machine qemu-29-instance-00000049.
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.862 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6c921f-1ed4-4a79-9464-02fe04fc3c64]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:11Z|00251|binding|INFO|Setting lport 39c103da-6cc8-478f-8d8e-ea4f6da2c413 ovn-installed in OVS
Nov 29 03:04:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:11Z|00252|binding|INFO|Setting lport 39c103da-6cc8-478f-8d8e-ea4f6da2c413 up in Southbound
Nov 29 03:04:11 np0005539563 nova_compute[252253]: 2025-11-29 08:04:11.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.902 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[58927614-3be7-4faf-b8b4-55cd92cfefab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 NetworkManager[48981]: <info>  [1764403451.9089] manager: (tap7e70117d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 03:04:11 np0005539563 systemd-udevd[298068]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.908 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdc3d4a-71ef-423c-b856-d104f0c63f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.944 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf2be21-dd5e-473b-a1b6-66fb84d1bd0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.948 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[64b810d4-1637-462a-8054-3108479aef9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:11 np0005539563 NetworkManager[48981]: <info>  [1764403451.9736] device (tap7e70117d-50): carrier: link connected
Nov 29 03:04:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:11.982 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3f96ba20-8806-4230-a5f9-c853dd8d9bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.003 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[491938b9-2884-4d3f-b40a-ab34bebc7545]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e70117d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:31:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641974, 'reachable_time': 25459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298097, 'error': None, 'target': 'ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.024 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7f056af6-d90a-4aa0-8e3c-b684e2c71a09]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:3100'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641974, 'tstamp': 641974}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298098, 'error': None, 'target': 'ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.043 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d38443ee-3343-4468-901b-b255a3b52141]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e70117d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:31:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641974, 'reachable_time': 25459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298099, 'error': None, 'target': 'ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.080 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[65478352-9e9b-4534-a36c-e0bb65d67730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.138 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8222d1-e3d6-410a-984b-54e6526a1ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.139 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e70117d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.139 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.140 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e70117d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.142 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:12 np0005539563 NetworkManager[48981]: <info>  [1764403452.1429] manager: (tap7e70117d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 03:04:12 np0005539563 kernel: tap7e70117d-50: entered promiscuous mode
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.144 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.145 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e70117d-50, col_values=(('external_ids', {'iface-id': '3b44eee9-713d-4fe5-ac02-ca26cd3140d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:12Z|00253|binding|INFO|Releasing lport 3b44eee9-713d-4fe5-ac02-ca26cd3140d0 from this chassis (sb_readonly=0)
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.188 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e70117d-5c88-4604-a344-02d8359c38e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e70117d-5c88-4604-a344-02d8359c38e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.189 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f3046b12-cb63-455a-ac49-98b388485fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.190 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7e70117d-5c88-4604-a344-02d8359c38e8
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7e70117d-5c88-4604-a344-02d8359c38e8.pid.haproxy
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7e70117d-5c88-4604-a344-02d8359c38e8
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:04:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:12.191 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8', 'env', 'PROCESS_TAG=haproxy-7e70117d-5c88-4604-a344-02d8359c38e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e70117d-5c88-4604-a344-02d8359c38e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:04:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:12.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.642 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403452.6415915, 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.642 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] VM Started (Lifecycle Event)
Nov 29 03:04:12 np0005539563 podman[298166]: 2025-11-29 08:04:12.56814747 +0000 UTC m=+0.027766623 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.671 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.677 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403452.641831, 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.677 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] VM Paused (Lifecycle Event)
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.697 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.700 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.719 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:04:12
Nov 29 03:04:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:04:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:04:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes']
Nov 29 03:04:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.925 252257 DEBUG nova.compute.manager [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.926 252257 DEBUG oslo_concurrency.lockutils [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.926 252257 DEBUG oslo_concurrency.lockutils [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.926 252257 DEBUG oslo_concurrency.lockutils [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.927 252257 DEBUG nova.compute.manager [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Processing event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.927 252257 DEBUG nova.compute.manager [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.927 252257 DEBUG oslo_concurrency.lockutils [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.928 252257 DEBUG oslo_concurrency.lockutils [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.928 252257 DEBUG oslo_concurrency.lockutils [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.928 252257 DEBUG nova.compute.manager [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] No waiting events found dispatching network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.928 252257 WARNING nova.compute.manager [req-ccb5451e-a52d-4456-928e-fbce7e13a24d req-c15cf03a-9e8c-4be7-acf3-0b6be117b521 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received unexpected event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 for instance with vm_state building and task_state spawning.
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.929 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.932 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403452.932454, 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.933 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] VM Resumed (Lifecycle Event)
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.934 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.939 252257 INFO nova.virt.libvirt.driver [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Instance spawned successfully.
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.939 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.959 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.963 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.974 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.974 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.975 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.976 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.976 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:12 np0005539563 nova_compute[252253]: 2025-11-29 08:04:12.976 252257 DEBUG nova.virt.libvirt.driver [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:12 np0005539563 podman[298166]: 2025-11-29 08:04:12.9818865 +0000 UTC m=+0.441505633 container create 8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:13 np0005539563 nova_compute[252253]: 2025-11-29 08:04:13.007 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:13 np0005539563 nova_compute[252253]: 2025-11-29 08:04:13.045 252257 INFO nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Took 9.88 seconds to spawn the instance on the hypervisor.
Nov 29 03:04:13 np0005539563 nova_compute[252253]: 2025-11-29 08:04:13.045 252257 DEBUG nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:13 np0005539563 nova_compute[252253]: 2025-11-29 08:04:13.112 252257 INFO nova.compute.manager [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Took 10.85 seconds to build instance.
Nov 29 03:04:13 np0005539563 nova_compute[252253]: 2025-11-29 08:04:13.126 252257 DEBUG oslo_concurrency.lockutils [None req-bc6dd076-278a-4baa-88d5-3d2cb8d14b63 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:13.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:13 np0005539563 systemd[1]: Started libpod-conmon-8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61.scope.
Nov 29 03:04:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67155996b5f377bcbcaf3b46ec272804f7cdd459d6dca4f3ab02a088093544e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:13 np0005539563 podman[298166]: 2025-11-29 08:04:13.391385394 +0000 UTC m=+0.851004557 container init 8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:13 np0005539563 podman[298166]: 2025-11-29 08:04:13.396809361 +0000 UTC m=+0.856428494 container start 8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:04:13 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [NOTICE]   (298192) : New worker (298194) forked
Nov 29 03:04:13 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [NOTICE]   (298192) : Loading success.
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:04:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 145 op/s
Nov 29 03:04:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:14.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:14 np0005539563 nova_compute[252253]: 2025-11-29 08:04:14.600 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:15.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:15 np0005539563 NetworkManager[48981]: <info>  [1764403455.3501] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 03:04:15 np0005539563 NetworkManager[48981]: <info>  [1764403455.3506] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 03:04:15 np0005539563 nova_compute[252253]: 2025-11-29 08:04:15.349 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:15 np0005539563 nova_compute[252253]: 2025-11-29 08:04:15.574 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:15Z|00254|binding|INFO|Releasing lport 3b44eee9-713d-4fe5-ac02-ca26cd3140d0 from this chassis (sb_readonly=0)
Nov 29 03:04:15 np0005539563 nova_compute[252253]: 2025-11-29 08:04:15.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 241 op/s
Nov 29 03:04:16 np0005539563 nova_compute[252253]: 2025-11-29 08:04:16.179 252257 DEBUG nova.compute.manager [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-changed-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:16 np0005539563 nova_compute[252253]: 2025-11-29 08:04:16.179 252257 DEBUG nova.compute.manager [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Refreshing instance network info cache due to event network-changed-39c103da-6cc8-478f-8d8e-ea4f6da2c413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:04:16 np0005539563 nova_compute[252253]: 2025-11-29 08:04:16.180 252257 DEBUG oslo_concurrency.lockutils [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:16 np0005539563 nova_compute[252253]: 2025-11-29 08:04:16.180 252257 DEBUG oslo_concurrency.lockutils [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:16 np0005539563 nova_compute[252253]: 2025-11-29 08:04:16.180 252257 DEBUG nova.network.neutron [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Refreshing network info cache for port 39c103da-6cc8-478f-8d8e-ea4f6da2c413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:04:16 np0005539563 nova_compute[252253]: 2025-11-29 08:04:16.423 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:16.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:17.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 76 KiB/s wr, 223 op/s
Nov 29 03:04:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:18.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:19 np0005539563 nova_compute[252253]: 2025-11-29 08:04:19.415 252257 DEBUG nova.network.neutron [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Updated VIF entry in instance network info cache for port 39c103da-6cc8-478f-8d8e-ea4f6da2c413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:04:19 np0005539563 nova_compute[252253]: 2025-11-29 08:04:19.415 252257 DEBUG nova.network.neutron [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Updating instance_info_cache with network_info: [{"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:19 np0005539563 nova_compute[252253]: 2025-11-29 08:04:19.480 252257 DEBUG oslo_concurrency.lockutils [req-f387ef94-2db6-4194-a0bf-54a76b80a126 req-d046b744-e4f4-40eb-bf9c-aef0549d2715 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:19 np0005539563 nova_compute[252253]: 2025-11-29 08:04:19.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 181 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 76 KiB/s wr, 235 op/s
Nov 29 03:04:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:21 np0005539563 nova_compute[252253]: 2025-11-29 08:04:21.428 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 211 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Nov 29 03:04:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:22.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0028971535920342454 of space, bias 1.0, pg target 0.8691460776102736 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29700368160245677 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:04:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 237 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 209 op/s
Nov 29 03:04:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:24.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:24 np0005539563 nova_compute[252253]: 2025-11-29 08:04:24.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:25.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 274 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 231 op/s
Nov 29 03:04:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:26Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:46:33 10.100.0.8
Nov 29 03:04:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:26Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:46:33 10.100.0.8
Nov 29 03:04:26 np0005539563 nova_compute[252253]: 2025-11-29 08:04:26.428 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:26.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 715661c5-406c-4876-997f-e77f38b4a1a2 does not exist
Nov 29 03:04:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1cf0eb2e-2155-4af7-b2d3-2f7ab98fda07 does not exist
Nov 29 03:04:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 86543331-4b9e-49e1-88ac-0868da53fa33 does not exist
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 03:04:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.140714231 +0000 UTC m=+0.051238079 container create 75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:04:27 np0005539563 systemd[1]: Started libpod-conmon-75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372.scope.
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.110271196 +0000 UTC m=+0.020795044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:27.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.343092834 +0000 UTC m=+0.253616652 container init 75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.351234405 +0000 UTC m=+0.261758223 container start 75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sutherland, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:04:27 np0005539563 systemd[1]: libpod-75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372.scope: Deactivated successfully.
Nov 29 03:04:27 np0005539563 sweet_sutherland[298549]: 167 167
Nov 29 03:04:27 np0005539563 conmon[298549]: conmon 75f1ad05fbb5534f5afb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372.scope/container/memory.events
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.568538743 +0000 UTC m=+0.479062591 container attach 75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.569604971 +0000 UTC m=+0.480128789 container died 75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sutherland, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:04:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9cf5241578f52915bdf1f642c83e079f75ebde8e80b15319860b45fd31d09a55-merged.mount: Deactivated successfully.
Nov 29 03:04:27 np0005539563 podman[298531]: 2025-11-29 08:04:27.71792087 +0000 UTC m=+0.628444678 container remove 75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:04:27 np0005539563 systemd[1]: libpod-conmon-75f1ad05fbb5534f5afb0e6eff0ff1a2e3ffbf9221a2997c50dc0790fbc1e372.scope: Deactivated successfully.
Nov 29 03:04:27 np0005539563 podman[298574]: 2025-11-29 08:04:27.921221337 +0000 UTC m=+0.045966776 container create 42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 29 03:04:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 274 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 6.2 MiB/s wr, 149 op/s
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3085072095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:04:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3085072095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:04:27 np0005539563 systemd[1]: Started libpod-conmon-42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01.scope.
Nov 29 03:04:27 np0005539563 podman[298574]: 2025-11-29 08:04:27.90395528 +0000 UTC m=+0.028700749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf4c6d97b7f9082239442eefa73aac5109ecf0b306b42e28a923a40402d3833/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf4c6d97b7f9082239442eefa73aac5109ecf0b306b42e28a923a40402d3833/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf4c6d97b7f9082239442eefa73aac5109ecf0b306b42e28a923a40402d3833/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf4c6d97b7f9082239442eefa73aac5109ecf0b306b42e28a923a40402d3833/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf4c6d97b7f9082239442eefa73aac5109ecf0b306b42e28a923a40402d3833/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:28 np0005539563 podman[298574]: 2025-11-29 08:04:28.040690224 +0000 UTC m=+0.165435693 container init 42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:04:28 np0005539563 podman[298574]: 2025-11-29 08:04:28.047639493 +0000 UTC m=+0.172384932 container start 42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:04:28 np0005539563 podman[298574]: 2025-11-29 08:04:28.051152958 +0000 UTC m=+0.175898387 container attach 42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:04:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:28.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:28 np0005539563 reverent_babbage[298590]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:04:28 np0005539563 reverent_babbage[298590]: --> relative data size: 1.0
Nov 29 03:04:28 np0005539563 reverent_babbage[298590]: --> All data devices are unavailable
Nov 29 03:04:28 np0005539563 systemd[1]: libpod-42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01.scope: Deactivated successfully.
Nov 29 03:04:28 np0005539563 podman[298574]: 2025-11-29 08:04:28.856887748 +0000 UTC m=+0.981633217 container died 42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:04:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 29 03:04:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:29.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:29 np0005539563 nova_compute[252253]: 2025-11-29 08:04:29.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 03:04:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2bf4c6d97b7f9082239442eefa73aac5109ecf0b306b42e28a923a40402d3833-merged.mount: Deactivated successfully.
Nov 29 03:04:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 29 03:04:29 np0005539563 podman[298574]: 2025-11-29 08:04:29.754500178 +0000 UTC m=+1.879245627 container remove 42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:04:29 np0005539563 systemd[1]: libpod-conmon-42373eb8e9fa77acb3cb65a6aac71c683263e8dd540dfadde4409ff01ab67c01.scope: Deactivated successfully.
Nov 29 03:04:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 335 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 11 MiB/s wr, 207 op/s
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.466246612 +0000 UTC m=+0.043027327 container create 45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_golick, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:04:30 np0005539563 systemd[1]: Started libpod-conmon-45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f.scope.
Nov 29 03:04:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:30.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.444315947 +0000 UTC m=+0.021096652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.552779526 +0000 UTC m=+0.129560241 container init 45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_golick, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.564873593 +0000 UTC m=+0.141654278 container start 45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_golick, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.569259933 +0000 UTC m=+0.146040618 container attach 45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:30 np0005539563 cool_golick[298775]: 167 167
Nov 29 03:04:30 np0005539563 systemd[1]: libpod-45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f.scope: Deactivated successfully.
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.576383735 +0000 UTC m=+0.153164460 container died 45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:04:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-de87a6bef15647fe09e453d2373682200c167c75a4768cccb23e29866f4cfb65-merged.mount: Deactivated successfully.
Nov 29 03:04:30 np0005539563 podman[298759]: 2025-11-29 08:04:30.642049135 +0000 UTC m=+0.218829840 container remove 45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:04:30 np0005539563 systemd[1]: libpod-conmon-45775e5534888a00a9b6bda568ff276958988f5e16a09c4d9f66c2f60aba615f.scope: Deactivated successfully.
Nov 29 03:04:30 np0005539563 podman[298799]: 2025-11-29 08:04:30.864623945 +0000 UTC m=+0.042222335 container create 41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:04:30 np0005539563 systemd[1]: Started libpod-conmon-41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77.scope.
Nov 29 03:04:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a11d549a1c5f3fa40468c2c89971993ac869f53fd82e6932f656362b53ba38a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:30 np0005539563 podman[298799]: 2025-11-29 08:04:30.844104729 +0000 UTC m=+0.021703149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a11d549a1c5f3fa40468c2c89971993ac869f53fd82e6932f656362b53ba38a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a11d549a1c5f3fa40468c2c89971993ac869f53fd82e6932f656362b53ba38a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a11d549a1c5f3fa40468c2c89971993ac869f53fd82e6932f656362b53ba38a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:31 np0005539563 podman[298799]: 2025-11-29 08:04:31.085074038 +0000 UTC m=+0.262672438 container init 41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:04:31 np0005539563 podman[298799]: 2025-11-29 08:04:31.092996023 +0000 UTC m=+0.270594413 container start 41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:31 np0005539563 podman[298799]: 2025-11-29 08:04:31.098296496 +0000 UTC m=+0.275894896 container attach 41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:04:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:31.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:31 np0005539563 nova_compute[252253]: 2025-11-29 08:04:31.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]: {
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:    "0": [
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:        {
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "devices": [
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "/dev/loop3"
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            ],
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "lv_name": "ceph_lv0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "lv_size": "7511998464",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "name": "ceph_lv0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "tags": {
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.cluster_name": "ceph",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.crush_device_class": "",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.encrypted": "0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.osd_id": "0",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.type": "block",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:                "ceph.vdo": "0"
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            },
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "type": "block",
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:            "vg_name": "ceph_vg0"
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:        }
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]:    ]
Nov 29 03:04:31 np0005539563 affectionate_dubinsky[298816]: }
Nov 29 03:04:31 np0005539563 systemd[1]: libpod-41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77.scope: Deactivated successfully.
Nov 29 03:04:31 np0005539563 podman[298799]: 2025-11-29 08:04:31.87648986 +0000 UTC m=+1.054088280 container died 41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:04:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 405 MiB data, 833 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 14 MiB/s wr, 522 op/s
Nov 29 03:04:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a11d549a1c5f3fa40468c2c89971993ac869f53fd82e6932f656362b53ba38a2-merged.mount: Deactivated successfully.
Nov 29 03:04:32 np0005539563 podman[298799]: 2025-11-29 08:04:32.025898318 +0000 UTC m=+1.203496698 container remove 41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:04:32 np0005539563 systemd[1]: libpod-conmon-41769c706cf808f3f8bd63cbea4f77174381c4e0b333e0e0fc8ab345b3c00d77.scope: Deactivated successfully.
Nov 29 03:04:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:32.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.611793542 +0000 UTC m=+0.035536744 container create ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lederberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:04:32 np0005539563 systemd[1]: Started libpod-conmon-ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d.scope.
Nov 29 03:04:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.672114786 +0000 UTC m=+0.095858008 container init ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.679460965 +0000 UTC m=+0.103204167 container start ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:04:32 np0005539563 ecstatic_lederberg[298998]: 167 167
Nov 29 03:04:32 np0005539563 systemd[1]: libpod-ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d.scope: Deactivated successfully.
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.687901744 +0000 UTC m=+0.111644966 container attach ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lederberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.688260384 +0000 UTC m=+0.112003616 container died ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.596027314 +0000 UTC m=+0.019770536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7c1ecf6af3c2e4b6fd2d8800fd9e966e9a7701e4b67bfcddbc581fe399100bdc-merged.mount: Deactivated successfully.
Nov 29 03:04:32 np0005539563 podman[298981]: 2025-11-29 08:04:32.732722339 +0000 UTC m=+0.156465551 container remove ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:04:32 np0005539563 systemd[1]: libpod-conmon-ae0e8e93b64e9420bfb5e190976e2c505118001a9d2fd13947d164d8aae9bc9d.scope: Deactivated successfully.
Nov 29 03:04:32 np0005539563 podman[299023]: 2025-11-29 08:04:32.908965504 +0000 UTC m=+0.047382665 container create 6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:04:32 np0005539563 systemd[1]: Started libpod-conmon-6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9.scope.
Nov 29 03:04:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88b3d8adf79317e938ef208fadf61375df351e78e62e7b49df99e0925709428/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88b3d8adf79317e938ef208fadf61375df351e78e62e7b49df99e0925709428/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88b3d8adf79317e938ef208fadf61375df351e78e62e7b49df99e0925709428/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e88b3d8adf79317e938ef208fadf61375df351e78e62e7b49df99e0925709428/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:32 np0005539563 podman[299023]: 2025-11-29 08:04:32.889484716 +0000 UTC m=+0.027901807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:32 np0005539563 podman[299023]: 2025-11-29 08:04:32.989981169 +0000 UTC m=+0.128398270 container init 6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:04:33 np0005539563 podman[299023]: 2025-11-29 08:04:33.005063997 +0000 UTC m=+0.143481068 container start 6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:04:33 np0005539563 podman[299023]: 2025-11-29 08:04:33.008189082 +0000 UTC m=+0.146606153 container attach 6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:04:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:33.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:33 np0005539563 cranky_curran[299040]: {
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:        "osd_id": 0,
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:        "type": "bluestore"
Nov 29 03:04:33 np0005539563 cranky_curran[299040]:    }
Nov 29 03:04:33 np0005539563 cranky_curran[299040]: }
Nov 29 03:04:33 np0005539563 systemd[1]: libpod-6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9.scope: Deactivated successfully.
Nov 29 03:04:33 np0005539563 podman[299023]: 2025-11-29 08:04:33.831498819 +0000 UTC m=+0.969915900 container died 6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:04:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e88b3d8adf79317e938ef208fadf61375df351e78e62e7b49df99e0925709428-merged.mount: Deactivated successfully.
Nov 29 03:04:33 np0005539563 podman[299023]: 2025-11-29 08:04:33.902862583 +0000 UTC m=+1.041279654 container remove 6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:33 np0005539563 systemd[1]: libpod-conmon-6818d615340ad1782fbb6c0127ec7238777966801ed63afd97ff9861cf1811a9.scope: Deactivated successfully.
Nov 29 03:04:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:04:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 405 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 448 op/s
Nov 29 03:04:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:04:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f2dfebdd-add8-423f-b657-3f8dad60b47f does not exist
Nov 29 03:04:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 933db177-ff63-42cc-b6ba-f9143732135d does not exist
Nov 29 03:04:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev aec5ad54-2b14-485f-abc6-4db22fba6f29 does not exist
Nov 29 03:04:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.612 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.838 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.840 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.840 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.841 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.841 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.842 252257 INFO nova.compute.manager [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Terminating instance#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.843 252257 DEBUG nova.compute.manager [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:04:34 np0005539563 kernel: tap39c103da-6c (unregistering): left promiscuous mode
Nov 29 03:04:34 np0005539563 NetworkManager[48981]: <info>  [1764403474.9065] device (tap39c103da-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.918 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:34Z|00255|binding|INFO|Releasing lport 39c103da-6cc8-478f-8d8e-ea4f6da2c413 from this chassis (sb_readonly=0)
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.920 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:34Z|00256|binding|INFO|Setting lport 39c103da-6cc8-478f-8d8e-ea4f6da2c413 down in Southbound
Nov 29 03:04:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:34Z|00257|binding|INFO|Removing iface tap39c103da-6c ovn-installed in OVS
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.923 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539563 nova_compute[252253]: 2025-11-29 08:04:34.940 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:34.941 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:46:33 10.100.0.8'], port_security=['fa:16:3e:51:46:33 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1e8a4477-aa5b-4c5c-96ac-181e81b78f1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e70117d-5c88-4604-a344-02d8359c38e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5224c4ebb92449a962d40c6cf1dd719', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ae899cc-7f0c-46f4-b333-b6f85e22811b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49ced9cd-b6fd-46a6-91db-462fe99f300b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39c103da-6cc8-478f-8d8e-ea4f6da2c413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:34.942 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39c103da-6cc8-478f-8d8e-ea4f6da2c413 in datapath 7e70117d-5c88-4604-a344-02d8359c38e8 unbound from our chassis#033[00m
Nov 29 03:04:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:34.944 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e70117d-5c88-4604-a344-02d8359c38e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:04:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:34.947 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9cebbd-8b1a-4517-8428-814558828a03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:34.947 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8 namespace which is not needed anymore#033[00m
Nov 29 03:04:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:04:34 np0005539563 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 29 03:04:34 np0005539563 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000049.scope: Consumed 14.112s CPU time.
Nov 29 03:04:34 np0005539563 systemd-machined[213024]: Machine qemu-29-instance-00000049 terminated.
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.071 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.083 252257 INFO nova.virt.libvirt.driver [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Instance destroyed successfully.#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.084 252257 DEBUG nova.objects.instance [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lazy-loading 'resources' on Instance uuid 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.096 252257 DEBUG nova.virt.libvirt.vif [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-972573030',display_name='tempest-ServersTestJSON-server-972573030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-972573030',id=73,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIeQr2RMBCZCG/FSkINur3G6UnUjT6sGyaQjyLzLQzfwXgSxxIoOvE3NvP0MBL/XwYA0rDaRkphKWfa0gpWL9IMW9miYfKwVovn1Ph0V6xboSY9kCp6H09VSljw0wPwNcA==',key_name='tempest-keypair-404651123',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e5224c4ebb92449a962d40c6cf1dd719',ramdisk_id='',reservation_id='r-tility7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-86729619',owner_user_name='tempest-ServersTestJSON-86729619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7d64f2611204481b2cb7f9b3178a0cf',uuid=1e8a4477-aa5b-4c5c-96ac-181e81b78f1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.096 252257 DEBUG nova.network.os_vif_util [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Converting VIF {"id": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "address": "fa:16:3e:51:46:33", "network": {"id": "7e70117d-5c88-4604-a344-02d8359c38e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1406427306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e5224c4ebb92449a962d40c6cf1dd719", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39c103da-6c", "ovs_interfaceid": "39c103da-6cc8-478f-8d8e-ea4f6da2c413", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.097 252257 DEBUG nova.network.os_vif_util [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.098 252257 DEBUG os_vif [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.099 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.099 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39c103da-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.101 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.105 252257 INFO os_vif [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:46:33,bridge_name='br-int',has_traffic_filtering=True,id=39c103da-6cc8-478f-8d8e-ea4f6da2c413,network=Network(7e70117d-5c88-4604-a344-02d8359c38e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39c103da-6c')#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.254 252257 DEBUG nova.compute.manager [req-e97750ba-fbf1-4f66-9486-2dde2f619729 req-8f2a8174-b86e-4bbd-a1b0-2792b94c276a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-vif-unplugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.255 252257 DEBUG oslo_concurrency.lockutils [req-e97750ba-fbf1-4f66-9486-2dde2f619729 req-8f2a8174-b86e-4bbd-a1b0-2792b94c276a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.255 252257 DEBUG oslo_concurrency.lockutils [req-e97750ba-fbf1-4f66-9486-2dde2f619729 req-8f2a8174-b86e-4bbd-a1b0-2792b94c276a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.255 252257 DEBUG oslo_concurrency.lockutils [req-e97750ba-fbf1-4f66-9486-2dde2f619729 req-8f2a8174-b86e-4bbd-a1b0-2792b94c276a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.255 252257 DEBUG nova.compute.manager [req-e97750ba-fbf1-4f66-9486-2dde2f619729 req-8f2a8174-b86e-4bbd-a1b0-2792b94c276a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] No waiting events found dispatching network-vif-unplugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.256 252257 DEBUG nova.compute.manager [req-e97750ba-fbf1-4f66-9486-2dde2f619729 req-8f2a8174-b86e-4bbd-a1b0-2792b94c276a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-vif-unplugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:04:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:35.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:35 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [NOTICE]   (298192) : haproxy version is 2.8.14-c23fe91
Nov 29 03:04:35 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [NOTICE]   (298192) : path to executable is /usr/sbin/haproxy
Nov 29 03:04:35 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [WARNING]  (298192) : Exiting Master process...
Nov 29 03:04:35 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [ALERT]    (298192) : Current worker (298194) exited with code 143 (Terminated)
Nov 29 03:04:35 np0005539563 neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8[298188]: [WARNING]  (298192) : All workers exited. Exiting... (0)
Nov 29 03:04:35 np0005539563 systemd[1]: libpod-8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61.scope: Deactivated successfully.
Nov 29 03:04:35 np0005539563 podman[299149]: 2025-11-29 08:04:35.485576614 +0000 UTC m=+0.412901209 container died 8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:04:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61-userdata-shm.mount: Deactivated successfully.
Nov 29 03:04:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e67155996b5f377bcbcaf3b46ec272804f7cdd459d6dca4f3ab02a088093544e-merged.mount: Deactivated successfully.
Nov 29 03:04:35 np0005539563 podman[299149]: 2025-11-29 08:04:35.715647278 +0000 UTC m=+0.642971833 container cleanup 8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:04:35 np0005539563 systemd[1]: libpod-conmon-8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61.scope: Deactivated successfully.
Nov 29 03:04:35 np0005539563 podman[299204]: 2025-11-29 08:04:35.790274299 +0000 UTC m=+0.051452685 container remove 8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.796 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcb036e-b8db-4db7-ab71-870973f65db8]: (4, ('Sat Nov 29 08:04:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8 (8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61)\n8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61\nSat Nov 29 08:04:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8 (8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61)\n8ac597127176845ee22fea2e9c4f8401046424547e31ecb14dba7ac66f04dc61\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.797 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc67b73d-70f8-43ff-b02d-a079c0928865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.799 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e70117d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 kernel: tap7e70117d-50: left promiscuous mode
Nov 29 03:04:35 np0005539563 nova_compute[252253]: 2025-11-29 08:04:35.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.819 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[04a3e5cb-c7ba-4446-a65a-18e1863a7824]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.839 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdb86ac-97bf-45e4-9b65-b34756e0a1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.841 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca28655-4c47-4575-85cf-b1cf3acbb759]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.855 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c77eaf13-42db-4af1-b860-bba50c577467]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641966, 'reachable_time': 31268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299223, 'error': None, 'target': 'ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.857 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e70117d-5c88-4604-a344-02d8359c38e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:04:35 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7e70117d\x2d5c88\x2d4604\x2da344\x2d02d8359c38e8.mount: Deactivated successfully.
Nov 29 03:04:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:35.858 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[76445af1-2101-4ed0-a629-61a0408f26c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:04:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 405 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 10 MiB/s wr, 413 op/s
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.446 252257 INFO nova.virt.libvirt.driver [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Deleting instance files /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_del#033[00m
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.446 252257 INFO nova.virt.libvirt.driver [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Deletion of /var/lib/nova/instances/1e8a4477-aa5b-4c5c-96ac-181e81b78f1f_del complete#033[00m
Nov 29 03:04:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 29 03:04:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 03:04:36 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.526 252257 INFO nova.compute.manager [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Took 1.68 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:04:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.526 252257 DEBUG oslo.service.loopingcall [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.527 252257 DEBUG nova.compute.manager [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:04:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.527 252257 DEBUG nova.network.neutron [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:04:36 np0005539563 nova_compute[252253]: 2025-11-29 08:04:36.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:37.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.457 252257 DEBUG nova.compute.manager [req-f1fdcb81-f0d1-49c2-b1e0-95397b775e71 req-ee520254-3d21-46d4-8bd8-222696b2236f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.458 252257 DEBUG oslo_concurrency.lockutils [req-f1fdcb81-f0d1-49c2-b1e0-95397b775e71 req-ee520254-3d21-46d4-8bd8-222696b2236f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.458 252257 DEBUG oslo_concurrency.lockutils [req-f1fdcb81-f0d1-49c2-b1e0-95397b775e71 req-ee520254-3d21-46d4-8bd8-222696b2236f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.459 252257 DEBUG oslo_concurrency.lockutils [req-f1fdcb81-f0d1-49c2-b1e0-95397b775e71 req-ee520254-3d21-46d4-8bd8-222696b2236f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.459 252257 DEBUG nova.compute.manager [req-f1fdcb81-f0d1-49c2-b1e0-95397b775e71 req-ee520254-3d21-46d4-8bd8-222696b2236f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] No waiting events found dispatching network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.460 252257 WARNING nova.compute.manager [req-f1fdcb81-f0d1-49c2-b1e0-95397b775e71 req-ee520254-3d21-46d4-8bd8-222696b2236f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received unexpected event network-vif-plugged-39c103da-6cc8-478f-8d8e-ea4f6da2c413 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:04:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:37.650 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:37.651 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:04:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:37.652 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.725 252257 DEBUG nova.network.neutron [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.767 252257 INFO nova.compute.manager [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Took 1.24 seconds to deallocate network for instance.#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.776 252257 DEBUG nova.compute.manager [req-e1ff7b6a-6fd5-4699-a422-96a7975185a2 req-83f34f86-c8c6-40ef-8bf5-5c5d6cedf4ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Received event network-vif-deleted-39c103da-6cc8-478f-8d8e-ea4f6da2c413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.813 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.813 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:37 np0005539563 nova_compute[252253]: 2025-11-29 08:04:37.892 252257 DEBUG oslo_concurrency.processutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 405 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.8 MiB/s wr, 340 op/s
Nov 29 03:04:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163040213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.333 252257 DEBUG oslo_concurrency.processutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.340 252257 DEBUG nova.compute.provider_tree [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.384 252257 DEBUG nova.scheduler.client.report [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.410 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.454 252257 INFO nova.scheduler.client.report [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Deleted allocations for instance 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f#033[00m
Nov 29 03:04:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.558 252257 DEBUG oslo_concurrency.lockutils [None req-1d9fd104-d984-4687-87d2-48e8b17c45a8 b7d64f2611204481b2cb7f9b3178a0cf e5224c4ebb92449a962d40c6cf1dd719 - - default default] Lock "1e8a4477-aa5b-4c5c-96ac-181e81b78f1f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.698 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.699 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:04:38 np0005539563 nova_compute[252253]: 2025-11-29 08:04:38.700 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283760105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.160 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:39.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.342 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.343 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4538MB free_disk=20.876083374023438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.343 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.344 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.405 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.405 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:04:39 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.422 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874465640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.861 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.870 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.901 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.940 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:04:39 np0005539563 nova_compute[252253]: 2025-11-29 08:04:39.941 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 370 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 42 KiB/s wr, 51 op/s
Nov 29 03:04:40 np0005539563 nova_compute[252253]: 2025-11-29 08:04:40.116 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:40 np0005539563 podman[299295]: 2025-11-29 08:04:40.51835465 +0000 UTC m=+0.067306484 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:04:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:40 np0005539563 podman[299296]: 2025-11-29 08:04:40.553137743 +0000 UTC m=+0.092929740 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:04:40 np0005539563 podman[299297]: 2025-11-29 08:04:40.57704877 +0000 UTC m=+0.113201017 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:04:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:41.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:41 np0005539563 nova_compute[252253]: 2025-11-29 08:04:41.437 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:41 np0005539563 nova_compute[252253]: 2025-11-29 08:04:41.943 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:41 np0005539563 nova_compute[252253]: 2025-11-29 08:04:41.943 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:04:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 227 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 2.4 MiB/s wr, 168 op/s
Nov 29 03:04:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:42.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:42 np0005539563 nova_compute[252253]: 2025-11-29 08:04:42.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:43.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:43 np0005539563 nova_compute[252253]: 2025-11-29 08:04:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:43 np0005539563 nova_compute[252253]: 2025-11-29 08:04:43.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:04:43 np0005539563 nova_compute[252253]: 2025-11-29 08:04:43.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:04:43 np0005539563 nova_compute[252253]: 2025-11-29 08:04:43.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:04:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 172 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 492 KiB/s rd, 3.1 MiB/s wr, 201 op/s
Nov 29 03:04:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:44 np0005539563 nova_compute[252253]: 2025-11-29 08:04:44.660 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:44 np0005539563 nova_compute[252253]: 2025-11-29 08:04:44.934 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:44 np0005539563 nova_compute[252253]: 2025-11-29 08:04:44.955 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:44 np0005539563 nova_compute[252253]: 2025-11-29 08:04:44.955 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:44 np0005539563 nova_compute[252253]: 2025-11-29 08:04:44.975 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.077 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.077 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.083 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.084 252257 INFO nova.compute.claims [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.200 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:45.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625207377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.678 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.684 252257 DEBUG nova.compute.provider_tree [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.698 252257 DEBUG nova.scheduler.client.report [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.720 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.721 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.775 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.776 252257 DEBUG nova.network.neutron [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.797 252257 INFO nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.815 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.945 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.946 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.946 252257 INFO nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Creating image(s)#033[00m
Nov 29 03:04:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 2.7 MiB/s wr, 195 op/s
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.970 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:45 np0005539563 nova_compute[252253]: 2025-11-29 08:04:45.994 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.019 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.023 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.051 252257 DEBUG nova.policy [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.090 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.090 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.091 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.091 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.115 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.119 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dc239229-164f-4005-9e62-421e4e10cc6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.439 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 29 03:04:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 03:04:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 29 03:04:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:46.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.558 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dc239229-164f-4005-9e62-421e4e10cc6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.670 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.717 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.814 252257 DEBUG nova.objects.instance [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid dc239229-164f-4005-9e62-421e4e10cc6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.869 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.870 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Ensure instance console log exists: /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.870 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.871 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:46 np0005539563 nova_compute[252253]: 2025-11-29 08:04:46.871 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:47 np0005539563 nova_compute[252253]: 2025-11-29 08:04:47.122 252257 DEBUG nova.network.neutron [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Successfully created port: 3c7b6709-87de-4e03-b36b-6ab395679d80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:04:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:47.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:47 np0005539563 nova_compute[252253]: 2025-11-29 08:04:47.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 121 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 493 KiB/s rd, 3.0 MiB/s wr, 190 op/s
Nov 29 03:04:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:48.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.159 252257 DEBUG nova.network.neutron [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Successfully updated port: 3c7b6709-87de-4e03-b36b-6ab395679d80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.321 252257 DEBUG nova.compute.manager [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-changed-3c7b6709-87de-4e03-b36b-6ab395679d80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.321 252257 DEBUG nova.compute.manager [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Refreshing instance network info cache due to event network-changed-3c7b6709-87de-4e03-b36b-6ab395679d80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.322 252257 DEBUG oslo_concurrency.lockutils [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-dc239229-164f-4005-9e62-421e4e10cc6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.322 252257 DEBUG oslo_concurrency.lockutils [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-dc239229-164f-4005-9e62-421e4e10cc6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.322 252257 DEBUG nova.network.neutron [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Refreshing network info cache for port 3c7b6709-87de-4e03-b36b-6ab395679d80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.324 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-dc239229-164f-4005-9e62-421e4e10cc6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:04:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:49.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.510 252257 DEBUG nova.network.neutron [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 128 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.9 MiB/s wr, 160 op/s
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.971 252257 DEBUG nova.network.neutron [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.995 252257 DEBUG oslo_concurrency.lockutils [req-fbd84390-5e37-4745-b3cc-e14332121519 req-d58ae13f-d55c-4976-9f56-92cea8ef6665 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-dc239229-164f-4005-9e62-421e4e10cc6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.995 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-dc239229-164f-4005-9e62-421e4e10cc6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:04:49 np0005539563 nova_compute[252253]: 2025-11-29 08:04:49.995 252257 DEBUG nova.network.neutron [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:04:50 np0005539563 nova_compute[252253]: 2025-11-29 08:04:50.082 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403475.081218, 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:50 np0005539563 nova_compute[252253]: 2025-11-29 08:04:50.082 252257 INFO nova.compute.manager [-] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:04:50 np0005539563 nova_compute[252253]: 2025-11-29 08:04:50.105 252257 DEBUG nova.compute.manager [None req-ef8a1a8f-3b8b-4b38-b2a2-8a5c9aa72142 - - - - - -] [instance: 1e8a4477-aa5b-4c5c-96ac-181e81b78f1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:50 np0005539563 nova_compute[252253]: 2025-11-29 08:04:50.480 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:50 np0005539563 nova_compute[252253]: 2025-11-29 08:04:50.482 252257 DEBUG nova.network.neutron [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:04:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.481 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.665 252257 DEBUG nova.network.neutron [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Updating instance_info_cache with network_info: [{"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.697 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-dc239229-164f-4005-9e62-421e4e10cc6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.697 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance network_info: |[{"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.703 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Start _get_guest_xml network_info=[{"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.709 252257 WARNING nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.716 252257 DEBUG nova.virt.libvirt.host [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.717 252257 DEBUG nova.virt.libvirt.host [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.721 252257 DEBUG nova.virt.libvirt.host [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.722 252257 DEBUG nova.virt.libvirt.host [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.723 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.723 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.724 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.724 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.724 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.724 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.724 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.725 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.725 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.725 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.725 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.726 252257 DEBUG nova.virt.hardware [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:04:51 np0005539563 nova_compute[252253]: 2025-11-29 08:04:51.729 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.8 MiB/s wr, 98 op/s
Nov 29 03:04:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773053081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.180 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.224 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.230 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3525853994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.724 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.726 252257 DEBUG nova.virt.libvirt.vif [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-238723625',display_name='tempest-DeleteServersTestJSON-server-238723625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-238723625',id=75,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-l2nx22ev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69
711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:45Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=dc239229-164f-4005-9e62-421e4e10cc6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.726 252257 DEBUG nova.network.os_vif_util [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.727 252257 DEBUG nova.network.os_vif_util [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.728 252257 DEBUG nova.objects.instance [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid dc239229-164f-4005-9e62-421e4e10cc6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.849 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <uuid>dc239229-164f-4005-9e62-421e4e10cc6e</uuid>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <name>instance-0000004b</name>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:name>tempest-DeleteServersTestJSON-server-238723625</nova:name>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:04:51</nova:creationTime>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <nova:port uuid="3c7b6709-87de-4e03-b36b-6ab395679d80">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <entry name="serial">dc239229-164f-4005-9e62-421e4e10cc6e</entry>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <entry name="uuid">dc239229-164f-4005-9e62-421e4e10cc6e</entry>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/dc239229-164f-4005-9e62-421e4e10cc6e_disk">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/dc239229-164f-4005-9e62-421e4e10cc6e_disk.config">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e0:03:8d"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <target dev="tap3c7b6709-87"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/console.log" append="off"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:04:52 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:04:52 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:04:52 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:04:52 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.850 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Preparing to wait for external event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.851 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.852 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.852 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.853 252257 DEBUG nova.virt.libvirt.vif [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-238723625',display_name='tempest-DeleteServersTestJSON-server-238723625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-238723625',id=75,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-l2nx22ev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:04:45Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=dc239229-164f-4005-9e62-421e4e10cc6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.853 252257 DEBUG nova.network.os_vif_util [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.854 252257 DEBUG nova.network.os_vif_util [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.854 252257 DEBUG os_vif [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.855 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.855 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.856 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.858 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.858 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c7b6709-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.859 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c7b6709-87, col_values=(('external_ids', {'iface-id': '3c7b6709-87de-4e03-b36b-6ab395679d80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:03:8d', 'vm-uuid': 'dc239229-164f-4005-9e62-421e4e10cc6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.860 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:52 np0005539563 NetworkManager[48981]: <info>  [1764403492.8612] manager: (tap3c7b6709-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.867 252257 INFO os_vif [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87')#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.921 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.922 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.922 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:e0:03:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.923 252257 INFO nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Using config drive#033[00m
Nov 29 03:04:52 np0005539563 nova_compute[252253]: 2025-11-29 08:04:52.952 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:53.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.2 MiB/s wr, 56 op/s
Nov 29 03:04:53 np0005539563 nova_compute[252253]: 2025-11-29 08:04:53.964 252257 INFO nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Creating config drive at /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/disk.config#033[00m
Nov 29 03:04:53 np0005539563 nova_compute[252253]: 2025-11-29 08:04:53.971 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnr9mnq1e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.108 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnr9mnq1e" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.158 252257 DEBUG nova.storage.rbd_utils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dc239229-164f-4005-9e62-421e4e10cc6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.163 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/disk.config dc239229-164f-4005-9e62-421e4e10cc6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.353 252257 DEBUG oslo_concurrency.processutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/disk.config dc239229-164f-4005-9e62-421e4e10cc6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.354 252257 INFO nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Deleting local config drive /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:04:54 np0005539563 kernel: tap3c7b6709-87: entered promiscuous mode
Nov 29 03:04:54 np0005539563 NetworkManager[48981]: <info>  [1764403494.4273] manager: (tap3c7b6709-87): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.428 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:54Z|00258|binding|INFO|Claiming lport 3c7b6709-87de-4e03-b36b-6ab395679d80 for this chassis.
Nov 29 03:04:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:54Z|00259|binding|INFO|3c7b6709-87de-4e03-b36b-6ab395679d80: Claiming fa:16:3e:e0:03:8d 10.100.0.11
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.435 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.441 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.453 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:03:8d 10.100.0.11'], port_security=['fa:16:3e:e0:03:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dc239229-164f-4005-9e62-421e4e10cc6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3c7b6709-87de-4e03-b36b-6ab395679d80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.456 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3c7b6709-87de-4e03-b36b-6ab395679d80 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.459 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:04:54 np0005539563 systemd-udevd[299741]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:04:54 np0005539563 NetworkManager[48981]: <info>  [1764403494.4765] device (tap3c7b6709-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:04:54 np0005539563 NetworkManager[48981]: <info>  [1764403494.4776] device (tap3c7b6709-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:04:54 np0005539563 systemd-machined[213024]: New machine qemu-30-instance-0000004b.
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.478 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7851f466-aec2-405e-bcb1-527596480c05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.479 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.483 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.483 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb4e941-8e8a-4866-879f-747fc8f60bd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.484 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e21bc799-3086-4841-a274-7ed0e173bba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.503 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[edbcade2-2dba-45a8-998d-282ee5ba57cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 systemd[1]: Started Virtual Machine qemu-30-instance-0000004b.
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.535 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6993dac7-c8a6-459e-b708-a66aaafe582b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.542 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:54Z|00260|binding|INFO|Setting lport 3c7b6709-87de-4e03-b36b-6ab395679d80 ovn-installed in OVS
Nov 29 03:04:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:54Z|00261|binding|INFO|Setting lport 3c7b6709-87de-4e03-b36b-6ab395679d80 up in Southbound
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.545 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:04:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.568 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[21f98118-b0e4-431e-ac1e-c4762039d54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 NetworkManager[48981]: <info>  [1764403494.5784] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.576 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a62a89-20b8-4654-baaf-28b990a5291c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.620 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bc714bcd-82c3-40de-8b6a-11f5ce410470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.623 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ecd922-5154-4a08-a702-85d57efc802b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 NetworkManager[48981]: <info>  [1764403494.6526] device (tapd5e42602-d0): carrier: link connected
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.659 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[56b61ace-3d48-4b41-808e-18f8db28da98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.683 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2a5a9d-85fd-4cfa-ba6e-0d5f1fcb4f68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646242, 'reachable_time': 22729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299775, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.699 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc2e810-622d-4e98-b3e8-6f36674dc47f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646242, 'tstamp': 646242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299776, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.716 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9f2e01-5394-440b-9555-eea3e3826298]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646242, 'reachable_time': 22729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299777, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.742 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e17c25b4-7a15-4e22-94f9-71dcb3d2873d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.799 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6088fae2-c9a4-4232-99ae-0753e0a80b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.801 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.801 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.802 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.847 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:54 np0005539563 NetworkManager[48981]: <info>  [1764403494.8476] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 03:04:54 np0005539563 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.860 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.862 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:04:54Z|00262|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.867 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.868 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[90a39b89-8e8f-4dfc-bc4e-667ed20b98b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.871 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:04:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:04:54.871 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:04:54 np0005539563 nova_compute[252253]: 2025-11-29 08:04:54.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.216 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403495.2153964, dc239229-164f-4005-9e62-421e4e10cc6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.217 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] VM Started (Lifecycle Event)
Nov 29 03:04:55 np0005539563 podman[299849]: 2025-11-29 08:04:55.238406469 +0000 UTC m=+0.060610454 container create 3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.249 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.253 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403495.2156186, dc239229-164f-4005-9e62-421e4e10cc6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.254 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] VM Paused (Lifecycle Event)
Nov 29 03:04:55 np0005539563 systemd[1]: Started libpod-conmon-3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea.scope.
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.275 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.281 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:55 np0005539563 podman[299849]: 2025-11-29 08:04:55.208212611 +0000 UTC m=+0.030416626 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:04:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.308 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4161634968c5ed93a28a2e9f4d4d45af7b92ae2131019ec2ee63ce8e600bbd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:55 np0005539563 podman[299849]: 2025-11-29 08:04:55.321951983 +0000 UTC m=+0.144156028 container init 3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:04:55 np0005539563 podman[299849]: 2025-11-29 08:04:55.327262436 +0000 UTC m=+0.149466461 container start 3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:55.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:55 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [NOTICE]   (299869) : New worker (299871) forked
Nov 29 03:04:55 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [NOTICE]   (299869) : Loading success.
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.690 252257 DEBUG nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.690 252257 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.690 252257 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.691 252257 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.691 252257 DEBUG nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Processing event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.691 252257 DEBUG nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.692 252257 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.692 252257 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.692 252257 DEBUG oslo_concurrency.lockutils [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.692 252257 DEBUG nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] No waiting events found dispatching network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.693 252257 WARNING nova.compute.manager [req-167e5db1-6ec6-494d-83ba-b523722e32fc req-01085193-7982-475a-9e35-540142a44f83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received unexpected event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.693 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.697 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403495.6969607, dc239229-164f-4005-9e62-421e4e10cc6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.697 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.698 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.701 252257 INFO nova.virt.libvirt.driver [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance spawned successfully.#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.701 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.726 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.729 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.736 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.736 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.737 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.737 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.738 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.738 252257 DEBUG nova.virt.libvirt.driver [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.763 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.808 252257 INFO nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Took 9.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.809 252257 DEBUG nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.880 252257 INFO nova.compute.manager [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Took 10.84 seconds to build instance.#033[00m
Nov 29 03:04:55 np0005539563 nova_compute[252253]: 2025-11-29 08:04:55.907 252257 DEBUG oslo_concurrency.lockutils [None req-d80e1715-83a8-4523-8038-0287774c3c58 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 29 03:04:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:56 np0005539563 nova_compute[252253]: 2025-11-29 08:04:56.483 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:56.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:57.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.854 252257 DEBUG oslo_concurrency.lockutils [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.856 252257 DEBUG oslo_concurrency.lockutils [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.856 252257 DEBUG nova.compute.manager [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.861 252257 DEBUG nova.compute.manager [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.862 252257 DEBUG nova.objects.instance [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'flavor' on Instance uuid dc239229-164f-4005-9e62-421e4e10cc6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:57 np0005539563 nova_compute[252253]: 2025-11-29 08:04:57.896 252257 DEBUG nova.virt.libvirt.driver [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:04:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Nov 29 03:04:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:04:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:59.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 29 03:05:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:00.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:01 np0005539563 nova_compute[252253]: 2025-11-29 08:05:01.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 115 op/s
Nov 29 03:05:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:02 np0005539563 nova_compute[252253]: 2025-11-29 08:05:02.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:03.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Nov 29 03:05:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:04.911 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:04.912 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:04.913 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Nov 29 03:05:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:06 np0005539563 nova_compute[252253]: 2025-11-29 08:05:06.541 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:07.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:07 np0005539563 nova_compute[252253]: 2025-11-29 08:05:07.869 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:07 np0005539563 nova_compute[252253]: 2025-11-29 08:05:07.941 252257 DEBUG nova.virt.libvirt.driver [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:05:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 167 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Nov 29 03:05:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:08Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:03:8d 10.100.0.11
Nov 29 03:05:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:08Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:03:8d 10.100.0.11
Nov 29 03:05:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:09.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 180 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 851 KiB/s wr, 71 op/s
Nov 29 03:05:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:11 np0005539563 podman[299938]: 2025-11-29 08:05:11.500243028 +0000 UTC m=+0.056105181 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:05:11 np0005539563 podman[299939]: 2025-11-29 08:05:11.500216407 +0000 UTC m=+0.053807329 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:05:11 np0005539563 podman[299940]: 2025-11-29 08:05:11.526886689 +0000 UTC m=+0.077169731 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:05:11 np0005539563 nova_compute[252253]: 2025-11-29 08:05:11.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 237 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 138 op/s
Nov 29 03:05:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:05:12
Nov 29 03:05:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:05:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:05:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta']
Nov 29 03:05:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:05:12 np0005539563 nova_compute[252253]: 2025-11-29 08:05:12.872 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:13.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:05:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 246 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 29 03:05:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:14.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:15.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:05:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3624555279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:05:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:05:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3624555279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:05:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 246 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 03:05:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:16 np0005539563 nova_compute[252253]: 2025-11-29 08:05:16.568 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:16.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:17.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:17 np0005539563 nova_compute[252253]: 2025-11-29 08:05:17.875 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 246 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 03:05:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:18.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:18 np0005539563 nova_compute[252253]: 2025-11-29 08:05:18.988 252257 DEBUG nova.virt.libvirt.driver [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:05:19 np0005539563 nova_compute[252253]: 2025-11-29 08:05:19.221 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:19.222 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:19.223 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:05:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 236 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.4 MiB/s wr, 159 op/s
Nov 29 03:05:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:21.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:21 np0005539563 kernel: tap3c7b6709-87 (unregistering): left promiscuous mode
Nov 29 03:05:21 np0005539563 NetworkManager[48981]: <info>  [1764403521.5511] device (tap3c7b6709-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:05:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:21Z|00263|binding|INFO|Releasing lport 3c7b6709-87de-4e03-b36b-6ab395679d80 from this chassis (sb_readonly=0)
Nov 29 03:05:21 np0005539563 nova_compute[252253]: 2025-11-29 08:05:21.562 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:21Z|00264|binding|INFO|Setting lport 3c7b6709-87de-4e03-b36b-6ab395679d80 down in Southbound
Nov 29 03:05:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:21Z|00265|binding|INFO|Removing iface tap3c7b6709-87 ovn-installed in OVS
Nov 29 03:05:21 np0005539563 nova_compute[252253]: 2025-11-29 08:05:21.566 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:21.575 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:03:8d 10.100.0.11'], port_security=['fa:16:3e:e0:03:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dc239229-164f-4005-9e62-421e4e10cc6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3c7b6709-87de-4e03-b36b-6ab395679d80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:21.576 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3c7b6709-87de-4e03-b36b-6ab395679d80 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis#033[00m
Nov 29 03:05:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:21.578 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:21.582 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcee82a-a531-4aa1-a767-5d64926ce436]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:21.583 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore#033[00m
Nov 29 03:05:21 np0005539563 nova_compute[252253]: 2025-11-29 08:05:21.591 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:21 np0005539563 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 29 03:05:21 np0005539563 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000004b.scope: Consumed 14.446s CPU time.
Nov 29 03:05:21 np0005539563 systemd-machined[213024]: Machine qemu-30-instance-0000004b terminated.
Nov 29 03:05:21 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [NOTICE]   (299869) : haproxy version is 2.8.14-c23fe91
Nov 29 03:05:21 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [NOTICE]   (299869) : path to executable is /usr/sbin/haproxy
Nov 29 03:05:21 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [WARNING]  (299869) : Exiting Master process...
Nov 29 03:05:21 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [WARNING]  (299869) : Exiting Master process...
Nov 29 03:05:21 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [ALERT]    (299869) : Current worker (299871) exited with code 143 (Terminated)
Nov 29 03:05:21 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[299865]: [WARNING]  (299869) : All workers exited. Exiting... (0)
Nov 29 03:05:21 np0005539563 systemd[1]: libpod-3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea.scope: Deactivated successfully.
Nov 29 03:05:21 np0005539563 podman[300029]: 2025-11-29 08:05:21.795664437 +0000 UTC m=+0.116875388 container died 3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:05:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 234 op/s
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.001 252257 INFO nova.virt.libvirt.driver [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance shutdown successfully after 24 seconds.#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.007 252257 INFO nova.virt.libvirt.driver [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance destroyed successfully.#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.007 252257 DEBUG nova.objects.instance [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'numa_topology' on Instance uuid dc239229-164f-4005-9e62-421e4e10cc6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.023 252257 DEBUG nova.compute.manager [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea-userdata-shm.mount: Deactivated successfully.
Nov 29 03:05:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c4161634968c5ed93a28a2e9f4d4d45af7b92ae2131019ec2ee63ce8e600bbd5-merged.mount: Deactivated successfully.
Nov 29 03:05:22 np0005539563 podman[300029]: 2025-11-29 08:05:22.089001364 +0000 UTC m=+0.410212315 container cleanup 3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.090 252257 DEBUG oslo_concurrency.lockutils [None req-26b615ba-2294-4448-bba7-80b0d3bc19dd ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:22 np0005539563 podman[300070]: 2025-11-29 08:05:22.149726449 +0000 UTC m=+0.040508358 container remove 3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.155 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[23e21663-cc1c-4931-b540-1bfb47741bba]: (4, ('Sat Nov 29 08:05:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea)\n3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea\nSat Nov 29 08:05:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea)\n3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.158 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8deae1f5-39e4-4949-8411-39cb61cc56b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.159 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.162 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539563 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 03:05:22 np0005539563 systemd[1]: libpod-conmon-3f89b5e33f9991d87c5af8c7bb19188806df0e8aa3c11be9fa52af99c264fbea.scope: Deactivated successfully.
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.181 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.184 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[24af5151-edf9-4ad3-9162-88f031ee6502]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.202 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3f081f-e56b-4df0-832a-44e9c18a95b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.204 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2fb9e8-03db-419b-a181-aa3ee6ec850d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.221 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[09927910-2302-4469-9d2c-41d9587a1574]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646233, 'reachable_time': 37514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300088, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.224 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:05:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:22.224 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[484d4ed1-05ef-4591-824b-ad62bce52363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:22 np0005539563 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 03:05:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:22.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.920 252257 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-vif-unplugged-3c7b6709-87de-4e03-b36b-6ab395679d80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.921 252257 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.921 252257 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.921 252257 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.921 252257 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] No waiting events found dispatching network-vif-unplugged-3c7b6709-87de-4e03-b36b-6ab395679d80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.921 252257 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-vif-unplugged-3c7b6709-87de-4e03-b36b-6ab395679d80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.922 252257 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.922 252257 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.922 252257 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.922 252257 DEBUG oslo_concurrency.lockutils [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.923 252257 DEBUG nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] No waiting events found dispatching network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.923 252257 WARNING nova.compute.manager [req-252a364f-a240-49b1-89dc-9f98657bb895 req-941c491a-f3b3-4581-99e1-1d3e50cc41f7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received unexpected event network-vif-plugged-3c7b6709-87de-4e03-b36b-6ab395679d80 for instance with vm_state stopped and task_state deleting.#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.941 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.942 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.942 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.942 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.942 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.943 252257 INFO nova.compute.manager [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Terminating instance#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.944 252257 DEBUG nova.compute.manager [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.949 252257 INFO nova.virt.libvirt.driver [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Instance destroyed successfully.#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.950 252257 DEBUG nova.objects.instance [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid dc239229-164f-4005-9e62-421e4e10cc6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.964 252257 DEBUG nova.virt.libvirt.vif [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-238723625',display_name='tempest-DeleteServersTestJSON-server-238723625',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-238723625',id=75,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-l2nx22ev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:22Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=dc239229-164f-4005-9e62-421e4e10cc6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.964 252257 DEBUG nova.network.os_vif_util [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "3c7b6709-87de-4e03-b36b-6ab395679d80", "address": "fa:16:3e:e0:03:8d", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7b6709-87", "ovs_interfaceid": "3c7b6709-87de-4e03-b36b-6ab395679d80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.965 252257 DEBUG nova.network.os_vif_util [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.965 252257 DEBUG os_vif [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.967 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.967 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c7b6709-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.968 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.970 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:22 np0005539563 nova_compute[252253]: 2025-11-29 08:05:22.973 252257 INFO os_vif [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:03:8d,bridge_name='br-int',has_traffic_filtering=True,id=3c7b6709-87de-4e03-b36b-6ab395679d80,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7b6709-87')#033[00m
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004155252535827284 of space, bias 1.0, pg target 1.2465757607481853 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:05:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 156 op/s
Nov 29 03:05:23 np0005539563 nova_compute[252253]: 2025-11-29 08:05:23.973 252257 INFO nova.virt.libvirt.driver [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Deleting instance files /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e_del#033[00m
Nov 29 03:05:23 np0005539563 nova_compute[252253]: 2025-11-29 08:05:23.974 252257 INFO nova.virt.libvirt.driver [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Deletion of /var/lib/nova/instances/dc239229-164f-4005-9e62-421e4e10cc6e_del complete#033[00m
Nov 29 03:05:24 np0005539563 nova_compute[252253]: 2025-11-29 08:05:24.046 252257 INFO nova.compute.manager [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:05:24 np0005539563 nova_compute[252253]: 2025-11-29 08:05:24.047 252257 DEBUG oslo.service.loopingcall [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:05:24 np0005539563 nova_compute[252253]: 2025-11-29 08:05:24.047 252257 DEBUG nova.compute.manager [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:05:24 np0005539563 nova_compute[252253]: 2025-11-29 08:05:24.047 252257 DEBUG nova.network.neutron [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:05:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:24.226 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.019 252257 DEBUG nova.network.neutron [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.040 252257 INFO nova.compute.manager [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Took 0.99 seconds to deallocate network for instance.#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.087 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.088 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.100 252257 DEBUG nova.compute.manager [req-73313315-e7c6-4d7d-ab30-230f4367af57 req-81207cd7-24d4-4495-8d0a-cea3923642c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Received event network-vif-deleted-3c7b6709-87de-4e03-b36b-6ab395679d80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.163 252257 DEBUG oslo_concurrency.processutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:25.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282985769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.612 252257 DEBUG oslo_concurrency.processutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.619 252257 DEBUG nova.compute.provider_tree [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.641 252257 DEBUG nova.scheduler.client.report [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.686 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.716 252257 INFO nova.scheduler.client.report [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance dc239229-164f-4005-9e62-421e4e10cc6e#033[00m
Nov 29 03:05:25 np0005539563 nova_compute[252253]: 2025-11-29 08:05:25.803 252257 DEBUG oslo_concurrency.lockutils [None req-83460a50-ab8b-4446-aab8-2b7621436aae ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dc239229-164f-4005-9e62-421e4e10cc6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 151 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Nov 29 03:05:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:26 np0005539563 nova_compute[252253]: 2025-11-29 08:05:26.593 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:27.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:05:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4123083525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:05:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:05:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4123083525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:05:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 151 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Nov 29 03:05:27 np0005539563 nova_compute[252253]: 2025-11-29 08:05:27.969 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:28.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.845 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.845 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.881 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.955 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.956 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.963 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:28 np0005539563 nova_compute[252253]: 2025-11-29 08:05:28.964 252257 INFO nova.compute.claims [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.070 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:29.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014316860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.622 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.630 252257 DEBUG nova.compute.provider_tree [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.662 252257 DEBUG nova.scheduler.client.report [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.686 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.687 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.732 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.733 252257 DEBUG nova.network.neutron [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.753 252257 INFO nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.779 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.867 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.869 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.870 252257 INFO nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Creating image(s)#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.905 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.940 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 134 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 232 op/s
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.971 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:29 np0005539563 nova_compute[252253]: 2025-11-29 08:05:29.976 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.012 252257 DEBUG nova.policy [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef8e9cc962eb4827954df3c42cc34798', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.058 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.059 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.060 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.060 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.086 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:30 np0005539563 nova_compute[252253]: 2025-11-29 08:05:30.090 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:30.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:31 np0005539563 nova_compute[252253]: 2025-11-29 08:05:31.226 252257 DEBUG nova.network.neutron [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Successfully created port: a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:05:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:31.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:31 np0005539563 nova_compute[252253]: 2025-11-29 08:05:31.541 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:31 np0005539563 nova_compute[252253]: 2025-11-29 08:05:31.721 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:31 np0005539563 nova_compute[252253]: 2025-11-29 08:05:31.727 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] resizing rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:05:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 151 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.094 252257 DEBUG nova.network.neutron [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Successfully updated port: a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.110 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "refresh_cache-dad9cb14-03c2-4419-8c77-78fdd0ff117f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.111 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquired lock "refresh_cache-dad9cb14-03c2-4419-8c77-78fdd0ff117f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.111 252257 DEBUG nova.network.neutron [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.267 252257 DEBUG nova.compute.manager [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received event network-changed-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.268 252257 DEBUG nova.compute.manager [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Refreshing instance network info cache due to event network-changed-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.269 252257 DEBUG oslo_concurrency.lockutils [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-dad9cb14-03c2-4419-8c77-78fdd0ff117f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.339 252257 DEBUG nova.network.neutron [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:32.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.688 252257 DEBUG nova.objects.instance [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'migration_context' on Instance uuid dad9cb14-03c2-4419-8c77-78fdd0ff117f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.705 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.706 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Ensure instance console log exists: /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.706 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.706 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.707 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:32 np0005539563 nova_compute[252253]: 2025-11-29 08:05:32.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.954 252257 DEBUG nova.network.neutron [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Updating instance_info_cache with network_info: [{"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 176 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.984 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Releasing lock "refresh_cache-dad9cb14-03c2-4419-8c77-78fdd0ff117f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.985 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Instance network_info: |[{"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.985 252257 DEBUG oslo_concurrency.lockutils [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-dad9cb14-03c2-4419-8c77-78fdd0ff117f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.985 252257 DEBUG nova.network.neutron [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Refreshing network info cache for port a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.988 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Start _get_guest_xml network_info=[{"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.992 252257 WARNING nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:05:33 np0005539563 nova_compute[252253]: 2025-11-29 08:05:33.999 252257 DEBUG nova.virt.libvirt.host [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.000 252257 DEBUG nova.virt.libvirt.host [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.006 252257 DEBUG nova.virt.libvirt.host [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.007 252257 DEBUG nova.virt.libvirt.host [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.008 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.008 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.008 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.009 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.009 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.009 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.009 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.009 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.010 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.010 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.010 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.010 252257 DEBUG nova.virt.hardware [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.013 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079359461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.444 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.473 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.479 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:34.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798689061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.949 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.951 252257 DEBUG nova.virt.libvirt.vif [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-39169779',display_name='tempest-DeleteServersTestJSON-server-39169779',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-39169779',id=78,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-8g50sgvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711
189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:29Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=dad9cb14-03c2-4419-8c77-78fdd0ff117f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.951 252257 DEBUG nova.network.os_vif_util [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.952 252257 DEBUG nova.network.os_vif_util [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.954 252257 DEBUG nova.objects.instance [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid dad9cb14-03c2-4419-8c77-78fdd0ff117f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.981 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <uuid>dad9cb14-03c2-4419-8c77-78fdd0ff117f</uuid>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <name>instance-0000004e</name>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:name>tempest-DeleteServersTestJSON-server-39169779</nova:name>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:05:33</nova:creationTime>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:user uuid="ef8e9cc962eb4827954df3c42cc34798">tempest-DeleteServersTestJSON-69711189-project-member</nova:user>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:project uuid="f8bc2a2616a34ba1a18b3211e406993f">tempest-DeleteServersTestJSON-69711189</nova:project>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <nova:port uuid="a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <entry name="serial">dad9cb14-03c2-4419-8c77-78fdd0ff117f</entry>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <entry name="uuid">dad9cb14-03c2-4419-8c77-78fdd0ff117f</entry>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk.config">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:ce:66:6e"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <target dev="tapa4ed1d98-ba"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/console.log" append="off"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:05:34 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:05:34 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:05:34 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:05:34 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.982 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Preparing to wait for external event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.982 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.982 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.983 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.983 252257 DEBUG nova.virt.libvirt.vif [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-39169779',display_name='tempest-DeleteServersTestJSON-server-39169779',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-39169779',id=78,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-8g50sgvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTest
JSON-69711189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:29Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=dad9cb14-03c2-4419-8c77-78fdd0ff117f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.984 252257 DEBUG nova.network.os_vif_util [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.984 252257 DEBUG nova.network.os_vif_util [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.984 252257 DEBUG os_vif [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.985 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.986 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.986 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.990 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4ed1d98-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.990 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4ed1d98-ba, col_values=(('external_ids', {'iface-id': 'a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:66:6e', 'vm-uuid': 'dad9cb14-03c2-4419-8c77-78fdd0ff117f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.991 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:34 np0005539563 NetworkManager[48981]: <info>  [1764403534.9924] manager: (tapa4ed1d98-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.993 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.997 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:34 np0005539563 nova_compute[252253]: 2025-11-29 08:05:34.998 252257 INFO os_vif [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba')#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.058 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.059 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.059 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] No VIF found with MAC fa:16:3e:ce:66:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.060 252257 INFO nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Using config drive#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.092 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:05:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.690 252257 INFO nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Creating config drive at /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/disk.config#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.695 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvemeqzvm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.824 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvemeqzvm" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.867 252257 DEBUG nova.storage.rbd_utils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] rbd image dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.871 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/disk.config dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.960 252257 DEBUG nova.network.neutron [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Updated VIF entry in instance network info cache for port a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.961 252257 DEBUG nova.network.neutron [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Updating instance_info_cache with network_info: [{"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 29 03:05:35 np0005539563 nova_compute[252253]: 2025-11-29 08:05:35.979 252257 DEBUG oslo_concurrency.lockutils [req-3bc271e0-82ab-40e0-a34b-ca5004d60a27 req-929d42de-97c5-49f9-a76b-a883d4c3b40e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-dad9cb14-03c2-4419-8c77-78fdd0ff117f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.028 252257 DEBUG oslo_concurrency.processutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/disk.config dad9cb14-03c2-4419-8c77-78fdd0ff117f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.029 252257 INFO nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Deleting local config drive /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:05:36 np0005539563 kernel: tapa4ed1d98-ba: entered promiscuous mode
Nov 29 03:05:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:36Z|00266|binding|INFO|Claiming lport a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 for this chassis.
Nov 29 03:05:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:36Z|00267|binding|INFO|a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40: Claiming fa:16:3e:ce:66:6e 10.100.0.7
Nov 29 03:05:36 np0005539563 NetworkManager[48981]: <info>  [1764403536.0814] manager: (tapa4ed1d98-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.082 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.087 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:66:6e 10.100.0.7'], port_security=['fa:16:3e:ce:66:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dad9cb14-03c2-4419-8c77-78fdd0ff117f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.088 158990 INFO neutron.agent.ovn.metadata.agent [-] Port a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 bound to our chassis#033[00m
Nov 29 03:05:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a2899fe7-03a0-4d86-8ef0-1583393cde52 does not exist
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.090 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5e42602-d72e-4beb-864d-714bd1635da9#033[00m
Nov 29 03:05:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4cb91aaa-ba49-4642-8300-8bbc14c1448c does not exist
Nov 29 03:05:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2f3e04fe-99b0-4930-a14a-14c5ecc9e7e1 does not exist
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:05:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:36Z|00268|binding|INFO|Setting lport a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 ovn-installed in OVS
Nov 29 03:05:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:36Z|00269|binding|INFO|Setting lport a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 up in Southbound
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.098 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.101 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.103 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5b4caf-cab6-40ed-8a3f-667d1f893886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.106 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5e42602-d1 in ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.107 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5e42602-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.107 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3676b21d-0949-4140-8057-dedd5d01ea13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 systemd-udevd[300642]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.108 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[28bf9fd9-fde3-4952-9a72-6b2a4f188847]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 systemd-machined[213024]: New machine qemu-31-instance-0000004e.
Nov 29 03:05:36 np0005539563 NetworkManager[48981]: <info>  [1764403536.1242] device (tapa4ed1d98-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:36 np0005539563 NetworkManager[48981]: <info>  [1764403536.1252] device (tapa4ed1d98-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.122 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[85cf7851-665a-4b64-9fa8-56f4d02936e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 systemd[1]: Started Virtual Machine qemu-31-instance-0000004e.
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.136 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fae403d3-27b5-4f92-aca1-7f265a44ed0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.159 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[da72b490-50a7-47e4-8108-e58f3495a050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.164 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[30def8e8-5fd1-455f-8413-6a7ee303a0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 systemd-udevd[300652]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:36 np0005539563 NetworkManager[48981]: <info>  [1764403536.1692] manager: (tapd5e42602-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.195 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[200d0269-b34e-4d1c-bbf2-2687383f9c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.199 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7e411720-55ee-45a8-8174-dc1ecd277c59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 NetworkManager[48981]: <info>  [1764403536.2222] device (tapd5e42602-d0): carrier: link connected
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.229 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[03d1d6fa-e44e-412a-b21b-b51192786c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.247 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bf32961f-3131-4c93-b78d-39c05c16f054]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650399, 'reachable_time': 33604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300724, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.263 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[60edd143-b31a-40a4-a31e-8762193145a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:370b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650399, 'tstamp': 650399}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300729, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.279 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[10e8a074-8d72-4fd2-9795-d74a2e9a2f12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5e42602-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:37:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650399, 'reachable_time': 33604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300741, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.307 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbfa79a-0ea9-47d4-a711-2f148b9e2292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.376 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b0626eea-cecc-4e2a-b188-1d2f14e6a02d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.378 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.378 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.378 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e42602-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.380 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 NetworkManager[48981]: <info>  [1764403536.3815] manager: (tapd5e42602-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 03:05:36 np0005539563 kernel: tapd5e42602-d0: entered promiscuous mode
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.382 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.384 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5e42602-d0, col_values=(('external_ids', {'iface-id': 'b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:36Z|00270|binding|INFO|Releasing lport b61ef3f5-e0b1-44f8-9b21-acba8a1ead2e from this chassis (sb_readonly=0)
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.387 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.388 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a09936-c448-4dee-b37b-f84a846e6de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.389 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/d5e42602-d72e-4beb-864d-714bd1635da9.pid.haproxy
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID d5e42602-d72e-4beb-864d-714bd1635da9
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:05:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:36.390 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'env', 'PROCESS_TAG=haproxy-d5e42602-d72e-4beb-864d-714bd1635da9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5e42602-d72e-4beb-864d-714bd1635da9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.401 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.548 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403536.547956, dad9cb14-03c2-4419-8c77-78fdd0ff117f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.548 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.575 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.579 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403536.548156, dad9cb14-03c2-4419-8c77-78fdd0ff117f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.580 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:36.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.618 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.625 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.658 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.699087743 +0000 UTC m=+0.056255815 container create 997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mccarthy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:05:36 np0005539563 systemd[1]: Started libpod-conmon-997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3.scope.
Nov 29 03:05:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.663235422 +0000 UTC m=+0.020403514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.774160748 +0000 UTC m=+0.131328820 container init 997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.78203822 +0000 UTC m=+0.139206282 container start 997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.785935496 +0000 UTC m=+0.143103568 container attach 997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:36 np0005539563 hungry_mccarthy[300898]: 167 167
Nov 29 03:05:36 np0005539563 systemd[1]: libpod-997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3.scope: Deactivated successfully.
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.788138486 +0000 UTC m=+0.145306558 container died 997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.804 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403521.8038948, dc239229-164f-4005-9e62-421e4e10cc6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.805 252257 INFO nova.compute.manager [-] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:05:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9bc04d1ea5311a5242c0b48faa77467bbcbbf5133d1824ad7194e165ff21d07d-merged.mount: Deactivated successfully.
Nov 29 03:05:36 np0005539563 podman[300906]: 2025-11-29 08:05:36.825296353 +0000 UTC m=+0.058582228 container create c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:05:36 np0005539563 podman[300865]: 2025-11-29 08:05:36.831590903 +0000 UTC m=+0.188758975 container remove 997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.837 252257 DEBUG nova.compute.manager [None req-7a983060-5f1a-471e-a567-a34d2ebb985b - - - - - -] [instance: dc239229-164f-4005-9e62-421e4e10cc6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.860 252257 DEBUG nova.compute.manager [req-3284c4d0-6f6a-4642-a19b-f5c40bdb6ffd req-62ce98e2-792d-4630-a3de-eaeddd18e670 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.862 252257 DEBUG oslo_concurrency.lockutils [req-3284c4d0-6f6a-4642-a19b-f5c40bdb6ffd req-62ce98e2-792d-4630-a3de-eaeddd18e670 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.862 252257 DEBUG oslo_concurrency.lockutils [req-3284c4d0-6f6a-4642-a19b-f5c40bdb6ffd req-62ce98e2-792d-4630-a3de-eaeddd18e670 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.862 252257 DEBUG oslo_concurrency.lockutils [req-3284c4d0-6f6a-4642-a19b-f5c40bdb6ffd req-62ce98e2-792d-4630-a3de-eaeddd18e670 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.862 252257 DEBUG nova.compute.manager [req-3284c4d0-6f6a-4642-a19b-f5c40bdb6ffd req-62ce98e2-792d-4630-a3de-eaeddd18e670 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Processing event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.863 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:05:36 np0005539563 systemd[1]: Started libpod-conmon-c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149.scope.
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:36 np0005539563 systemd[1]: libpod-conmon-997f56f98520be1800a704e9b1fef79920a06a8989938568a49ba5fb0b89d7c3.scope: Deactivated successfully.
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.867 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403536.8671498, dad9cb14-03c2-4419-8c77-78fdd0ff117f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.867 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.869 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.872 252257 INFO nova.virt.libvirt.driver [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Instance spawned successfully.#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.872 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.887 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:36 np0005539563 podman[300906]: 2025-11-29 08:05:36.795528686 +0000 UTC m=+0.028814601 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.891 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:05:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.895 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.895 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.896 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.896 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.897 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.897 252257 DEBUG nova.virt.libvirt.driver [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420fbd977c84c94afee952546004d2c3b5d9b4210201a481836e1c526ff70e96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:36 np0005539563 podman[300906]: 2025-11-29 08:05:36.912865515 +0000 UTC m=+0.146151390 container init c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:05:36 np0005539563 podman[300906]: 2025-11-29 08:05:36.919419052 +0000 UTC m=+0.152704927 container start c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.923 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:05:36 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [NOTICE]   (300939) : New worker (300941) forked
Nov 29 03:05:36 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [NOTICE]   (300939) : Loading success.
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.958 252257 INFO nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Took 7.09 seconds to spawn the instance on the hypervisor.
Nov 29 03:05:36 np0005539563 nova_compute[252253]: 2025-11-29 08:05:36.958 252257 DEBUG nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:37 np0005539563 nova_compute[252253]: 2025-11-29 08:05:37.016 252257 INFO nova.compute.manager [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Took 8.08 seconds to build instance.
Nov 29 03:05:37 np0005539563 nova_compute[252253]: 2025-11-29 08:05:37.031 252257 DEBUG oslo_concurrency.lockutils [None req-7b9dd61a-0cab-4e12-88f7-e3ae2d92aa49 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:37 np0005539563 podman[300955]: 2025-11-29 08:05:37.038951961 +0000 UTC m=+0.060644934 container create 0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 03:05:37 np0005539563 systemd[1]: Started libpod-conmon-0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf.scope.
Nov 29 03:05:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a311f18b83fa5cad2c168bb1d9da7675574feeb5e06c74bdc82a771dbd42551/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a311f18b83fa5cad2c168bb1d9da7675574feeb5e06c74bdc82a771dbd42551/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:37 np0005539563 podman[300955]: 2025-11-29 08:05:37.012948807 +0000 UTC m=+0.034641810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a311f18b83fa5cad2c168bb1d9da7675574feeb5e06c74bdc82a771dbd42551/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a311f18b83fa5cad2c168bb1d9da7675574feeb5e06c74bdc82a771dbd42551/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a311f18b83fa5cad2c168bb1d9da7675574feeb5e06c74bdc82a771dbd42551/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:37 np0005539563 podman[300955]: 2025-11-29 08:05:37.126840623 +0000 UTC m=+0.148533616 container init 0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:05:37 np0005539563 podman[300955]: 2025-11-29 08:05:37.134325935 +0000 UTC m=+0.156018908 container start 0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:37 np0005539563 podman[300955]: 2025-11-29 08:05:37.138486758 +0000 UTC m=+0.160179721 container attach 0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:37 np0005539563 vigilant_dirac[300972]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:05:37 np0005539563 vigilant_dirac[300972]: --> relative data size: 1.0
Nov 29 03:05:37 np0005539563 vigilant_dirac[300972]: --> All data devices are unavailable
Nov 29 03:05:37 np0005539563 systemd[1]: libpod-0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf.scope: Deactivated successfully.
Nov 29 03:05:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 213 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 29 03:05:38 np0005539563 podman[300987]: 2025-11-29 08:05:38.00593428 +0000 UTC m=+0.023808076 container died 0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7a311f18b83fa5cad2c168bb1d9da7675574feeb5e06c74bdc82a771dbd42551-merged.mount: Deactivated successfully.
Nov 29 03:05:38 np0005539563 podman[300987]: 2025-11-29 08:05:38.069613995 +0000 UTC m=+0.087487791 container remove 0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:05:38 np0005539563 systemd[1]: libpod-conmon-0cfc8003431d1f6c6fd6ab8e651f056583fc75f893ac220967d71832c9f08dbf.scope: Deactivated successfully.
Nov 29 03:05:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:38.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:38 np0005539563 nova_compute[252253]: 2025-11-29 08:05:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.688932855 +0000 UTC m=+0.068531897 container create 39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:05:38 np0005539563 systemd[1]: Started libpod-conmon-39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009.scope.
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.654178724 +0000 UTC m=+0.033777866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.769491628 +0000 UTC m=+0.149090720 container init 39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldwasser, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.777572837 +0000 UTC m=+0.157171889 container start 39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.782804759 +0000 UTC m=+0.162403811 container attach 39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:05:38 np0005539563 magical_goldwasser[301158]: 167 167
Nov 29 03:05:38 np0005539563 systemd[1]: libpod-39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009.scope: Deactivated successfully.
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.78579869 +0000 UTC m=+0.165397742 container died 39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:05:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-70c6318aaec150a5e73e97169a82f3313192683a2c8cfc539a3ed2af0789353e-merged.mount: Deactivated successfully.
Nov 29 03:05:38 np0005539563 podman[301142]: 2025-11-29 08:05:38.833635866 +0000 UTC m=+0.213234898 container remove 39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldwasser, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:05:38 np0005539563 systemd[1]: libpod-conmon-39f00f7467b298a82cc7a99d0865a5e50abcbd9d9663f3f6d78e3771d35aa009.scope: Deactivated successfully.
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.030 252257 DEBUG nova.compute.manager [req-cc97e764-037c-4031-bbf5-9b08eb6d66e3 req-07220d38-221e-469c-ab4f-26c1d04202d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.032 252257 DEBUG oslo_concurrency.lockutils [req-cc97e764-037c-4031-bbf5-9b08eb6d66e3 req-07220d38-221e-469c-ab4f-26c1d04202d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.032 252257 DEBUG oslo_concurrency.lockutils [req-cc97e764-037c-4031-bbf5-9b08eb6d66e3 req-07220d38-221e-469c-ab4f-26c1d04202d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.033 252257 DEBUG oslo_concurrency.lockutils [req-cc97e764-037c-4031-bbf5-9b08eb6d66e3 req-07220d38-221e-469c-ab4f-26c1d04202d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.033 252257 DEBUG nova.compute.manager [req-cc97e764-037c-4031-bbf5-9b08eb6d66e3 req-07220d38-221e-469c-ab4f-26c1d04202d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] No waiting events found dispatching network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.033 252257 WARNING nova.compute.manager [req-cc97e764-037c-4031-bbf5-9b08eb6d66e3 req-07220d38-221e-469c-ab4f-26c1d04202d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received unexpected event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 for instance with vm_state active and task_state suspending.
Nov 29 03:05:39 np0005539563 podman[301182]: 2025-11-29 08:05:39.059096234 +0000 UTC m=+0.059536114 container create 8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:05:39 np0005539563 systemd[1]: Started libpod-conmon-8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b.scope.
Nov 29 03:05:39 np0005539563 podman[301182]: 2025-11-29 08:05:39.033376997 +0000 UTC m=+0.033816897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.130 252257 DEBUG nova.objects.instance [None req-6553a166-76b0-47ef-99dd-6e13605b1d83 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'pci_devices' on Instance uuid dad9cb14-03c2-4419-8c77-78fdd0ff117f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b057cb22c7fccf2f8784e71cbbce6cbab114f45b95fc987f0b4686d0897b61fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b057cb22c7fccf2f8784e71cbbce6cbab114f45b95fc987f0b4686d0897b61fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b057cb22c7fccf2f8784e71cbbce6cbab114f45b95fc987f0b4686d0897b61fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b057cb22c7fccf2f8784e71cbbce6cbab114f45b95fc987f0b4686d0897b61fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:39 np0005539563 podman[301182]: 2025-11-29 08:05:39.159946356 +0000 UTC m=+0.160386236 container init 8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.162 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403539.1625922, dad9cb14-03c2-4419-8c77-78fdd0ff117f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.163 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] VM Paused (Lifecycle Event)
Nov 29 03:05:39 np0005539563 podman[301182]: 2025-11-29 08:05:39.169094225 +0000 UTC m=+0.169534105 container start 8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:05:39 np0005539563 podman[301182]: 2025-11-29 08:05:39.174033358 +0000 UTC m=+0.174473258 container attach 8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.193 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.198 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.228 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:05:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:39.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]: {
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:    "0": [
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:        {
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "devices": [
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "/dev/loop3"
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            ],
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "lv_name": "ceph_lv0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "lv_size": "7511998464",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "name": "ceph_lv0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "tags": {
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.cluster_name": "ceph",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.crush_device_class": "",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.encrypted": "0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.osd_id": "0",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.type": "block",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:                "ceph.vdo": "0"
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            },
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "type": "block",
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:            "vg_name": "ceph_vg0"
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:        }
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]:    ]
Nov 29 03:05:39 np0005539563 gifted_einstein[301199]: }
Nov 29 03:05:39 np0005539563 kernel: tapa4ed1d98-ba (unregistering): left promiscuous mode
Nov 29 03:05:39 np0005539563 NetworkManager[48981]: <info>  [1764403539.9361] device (tapa4ed1d98-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.947 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:39Z|00271|binding|INFO|Releasing lport a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 from this chassis (sb_readonly=0)
Nov 29 03:05:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:39Z|00272|binding|INFO|Setting lport a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 down in Southbound
Nov 29 03:05:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:39Z|00273|binding|INFO|Removing iface tapa4ed1d98-ba ovn-installed in OVS
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:39.958 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:66:6e 10.100.0.7'], port_security=['fa:16:3e:ce:66:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dad9cb14-03c2-4419-8c77-78fdd0ff117f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5e42602-d72e-4beb-864d-714bd1635da9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8bc2a2616a34ba1a18b3211e406993f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82ddc102-a213-473f-abf3-dc5f60e4fa79', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46241d46-2f65-4ed5-b860-f30a985d632f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:39.961 158990 INFO neutron.agent.ovn.metadata.agent [-] Port a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 in datapath d5e42602-d72e-4beb-864d-714bd1635da9 unbound from our chassis#033[00m
Nov 29 03:05:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:39.964 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5e42602-d72e-4beb-864d-714bd1635da9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:05:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:39.966 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0e7ae9-e820-401f-9f37-b836ca194523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:39.967 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 namespace which is not needed anymore#033[00m
Nov 29 03:05:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 226 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.8 MiB/s wr, 145 op/s
Nov 29 03:05:39 np0005539563 systemd[1]: libpod-8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b.scope: Deactivated successfully.
Nov 29 03:05:39 np0005539563 podman[301182]: 2025-11-29 08:05:39.983136309 +0000 UTC m=+0.983576229 container died 8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.985 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:39 np0005539563 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 29 03:05:39 np0005539563 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000004e.scope: Consumed 2.746s CPU time.
Nov 29 03:05:39 np0005539563 nova_compute[252253]: 2025-11-29 08:05:39.993 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:40 np0005539563 systemd-machined[213024]: Machine qemu-31-instance-0000004e terminated.
Nov 29 03:05:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b057cb22c7fccf2f8784e71cbbce6cbab114f45b95fc987f0b4686d0897b61fd-merged.mount: Deactivated successfully.
Nov 29 03:05:40 np0005539563 podman[301182]: 2025-11-29 08:05:40.055300095 +0000 UTC m=+1.055739975 container remove 8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:05:40 np0005539563 systemd[1]: libpod-conmon-8ba6b6a9ac7740278fc6f4a7817e98421beb68b752283d723c3daec6d655248b.scope: Deactivated successfully.
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.114 252257 DEBUG nova.compute.manager [None req-6553a166-76b0-47ef-99dd-6e13605b1d83 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:40 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [NOTICE]   (300939) : haproxy version is 2.8.14-c23fe91
Nov 29 03:05:40 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [NOTICE]   (300939) : path to executable is /usr/sbin/haproxy
Nov 29 03:05:40 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [WARNING]  (300939) : Exiting Master process...
Nov 29 03:05:40 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [ALERT]    (300939) : Current worker (300941) exited with code 143 (Terminated)
Nov 29 03:05:40 np0005539563 neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9[300935]: [WARNING]  (300939) : All workers exited. Exiting... (0)
Nov 29 03:05:40 np0005539563 systemd[1]: libpod-c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149.scope: Deactivated successfully.
Nov 29 03:05:40 np0005539563 podman[301245]: 2025-11-29 08:05:40.146091414 +0000 UTC m=+0.049496571 container died c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149-userdata-shm.mount: Deactivated successfully.
Nov 29 03:05:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-420fbd977c84c94afee952546004d2c3b5d9b4210201a481836e1c526ff70e96-merged.mount: Deactivated successfully.
Nov 29 03:05:40 np0005539563 podman[301245]: 2025-11-29 08:05:40.181838343 +0000 UTC m=+0.085243500 container cleanup c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:40 np0005539563 systemd[1]: libpod-conmon-c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149.scope: Deactivated successfully.
Nov 29 03:05:40 np0005539563 podman[301323]: 2025-11-29 08:05:40.237133991 +0000 UTC m=+0.037263341 container remove c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.243 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6af5f531-f2dd-449e-bbfb-aac6779316ef]: (4, ('Sat Nov 29 08:05:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149)\nc1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149\nSat Nov 29 08:05:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 (c1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149)\nc1c79ff24cb90ab43511330102ca671553a15628971902c113e9f632ee741149\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.244 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bf58a62e-2ed9-4d1f-8e9e-5c2c9a99e8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.245 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e42602-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:40 np0005539563 kernel: tapd5e42602-d0: left promiscuous mode
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.267 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f911e7a6-372a-424d-921b-7911814ed674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.285 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[39944c70-c073-4c5b-9014-b10605474ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.286 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[34f14fc1-85af-426d-a68d-9910a9093bfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.303 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8ea665-d0e9-4b89-bea3-5faea7bfb624]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650393, 'reachable_time': 37345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301382, 'error': None, 'target': 'ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.304 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5e42602-d72e-4beb-864d-714bd1635da9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:05:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:40.305 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[6d081496-6cb5-4b8c-a5c3-15725315d3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:40 np0005539563 systemd[1]: run-netns-ovnmeta\x2dd5e42602\x2dd72e\x2d4beb\x2d864d\x2d714bd1635da9.mount: Deactivated successfully.
Nov 29 03:05:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:40.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.628514665 +0000 UTC m=+0.039314697 container create a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:40 np0005539563 systemd[1]: Started libpod-conmon-a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d.scope.
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.696770894 +0000 UTC m=+0.107570956 container init a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.702 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:05:40 np0005539563 nova_compute[252253]: 2025-11-29 08:05:40.702 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.70470919 +0000 UTC m=+0.115509232 container start a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.708895473 +0000 UTC m=+0.119695515 container attach a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.613676183 +0000 UTC m=+0.024476245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:40 np0005539563 angry_feynman[301458]: 167 167
Nov 29 03:05:40 np0005539563 systemd[1]: libpod-a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d.scope: Deactivated successfully.
Nov 29 03:05:40 np0005539563 conmon[301458]: conmon a0027ca9f073c4efa02b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d.scope/container/memory.events
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.712078729 +0000 UTC m=+0.122878771 container died a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:40 np0005539563 podman[301444]: 2025-11-29 08:05:40.748637229 +0000 UTC m=+0.159437261 container remove a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:40 np0005539563 systemd[1]: libpod-conmon-a0027ca9f073c4efa02bd498420395f93c763ee6105ecb8a83ed91f8ebc8c07d.scope: Deactivated successfully.
Nov 29 03:05:40 np0005539563 podman[301502]: 2025-11-29 08:05:40.915374787 +0000 UTC m=+0.039992134 container create 8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:05:40 np0005539563 systemd[1]: Started libpod-conmon-8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16.scope.
Nov 29 03:05:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b408c60704333f6db733c063b2c9aa155c02c22ef154645f95fa56e5e9624f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b408c60704333f6db733c063b2c9aa155c02c22ef154645f95fa56e5e9624f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b408c60704333f6db733c063b2c9aa155c02c22ef154645f95fa56e5e9624f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b408c60704333f6db733c063b2c9aa155c02c22ef154645f95fa56e5e9624f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:40 np0005539563 podman[301502]: 2025-11-29 08:05:40.987009507 +0000 UTC m=+0.111626874 container init 8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:40 np0005539563 podman[301502]: 2025-11-29 08:05:40.897900484 +0000 UTC m=+0.022517851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:40 np0005539563 podman[301502]: 2025-11-29 08:05:40.995118767 +0000 UTC m=+0.119736114 container start 8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:05:40 np0005539563 podman[301502]: 2025-11-29 08:05:40.998277903 +0000 UTC m=+0.122895270 container attach 8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-69e05f440f54a7c05f560f95bbea26cf85956cb580ffcbbbaf7cee0bee1c8259-merged.mount: Deactivated successfully.
Nov 29 03:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971483710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.175 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.200 252257 DEBUG nova.compute.manager [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received event network-vif-unplugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.201 252257 DEBUG oslo_concurrency.lockutils [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.201 252257 DEBUG oslo_concurrency.lockutils [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.201 252257 DEBUG oslo_concurrency.lockutils [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.201 252257 DEBUG nova.compute.manager [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] No waiting events found dispatching network-vif-unplugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.202 252257 WARNING nova.compute.manager [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received unexpected event network-vif-unplugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.202 252257 DEBUG nova.compute.manager [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.202 252257 DEBUG oslo_concurrency.lockutils [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.202 252257 DEBUG oslo_concurrency.lockutils [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.202 252257 DEBUG oslo_concurrency.lockutils [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.203 252257 DEBUG nova.compute.manager [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] No waiting events found dispatching network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.203 252257 WARNING nova.compute.manager [req-99886837-bfc5-4669-92ae-5953a0b1fef1 req-ec39634b-9081-423f-8d0b-16fb4e1d53af 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received unexpected event network-vif-plugged-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.246 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.246 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.373 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.374 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4465MB free_disk=20.89059829711914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.374 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.375 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:41.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.445 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance dad9cb14-03c2-4419-8c77-78fdd0ff117f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.445 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.446 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.491 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.597 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.828 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.829 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.829 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.829 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.829 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.831 252257 INFO nova.compute.manager [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Terminating instance#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.833 252257 DEBUG nova.compute.manager [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.839 252257 INFO nova.virt.libvirt.driver [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Instance destroyed successfully.#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.839 252257 DEBUG nova.objects.instance [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lazy-loading 'resources' on Instance uuid dad9cb14-03c2-4419-8c77-78fdd0ff117f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]: {
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:        "osd_id": 0,
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:        "type": "bluestore"
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]:    }
Nov 29 03:05:41 np0005539563 sharp_mendeleev[301518]: }
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.860 252257 DEBUG nova.virt.libvirt.vif [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-39169779',display_name='tempest-DeleteServersTestJSON-server-39169779',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-39169779',id=78,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f8bc2a2616a34ba1a18b3211e406993f',ramdisk_id='',reservation_id='r-8g50sgvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-69711189',owner_user_name='tempest-DeleteServersTestJSON-69711189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:40Z,user_data=None,user_id='ef8e9cc962eb4827954df3c42cc34798',uuid=dad9cb14-03c2-4419-8c77-78fdd0ff117f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.861 252257 DEBUG nova.network.os_vif_util [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converting VIF {"id": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "address": "fa:16:3e:ce:66:6e", "network": {"id": "d5e42602-d72e-4beb-864d-714bd1635da9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2144636506-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8bc2a2616a34ba1a18b3211e406993f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ed1d98-ba", "ovs_interfaceid": "a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.862 252257 DEBUG nova.network.os_vif_util [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.862 252257 DEBUG os_vif [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.865 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4ed1d98-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.869 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.871 252257 INFO os_vif [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40,network=Network(d5e42602-d72e-4beb-864d-714bd1635da9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ed1d98-ba')#033[00m
Nov 29 03:05:41 np0005539563 systemd[1]: libpod-8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16.scope: Deactivated successfully.
Nov 29 03:05:41 np0005539563 podman[301502]: 2025-11-29 08:05:41.88843778 +0000 UTC m=+1.013055127 container died 8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052017598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-06b408c60704333f6db733c063b2c9aa155c02c22ef154645f95fa56e5e9624f-merged.mount: Deactivated successfully.
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.938 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.947 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.967 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:41 np0005539563 podman[301502]: 2025-11-29 08:05:41.96848579 +0000 UTC m=+1.093103137 container remove 8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:05:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 187 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 247 op/s
Nov 29 03:05:41 np0005539563 systemd[1]: libpod-conmon-8c6a1bd4f213c28564f11de4854f6f0cc4e9ccf2608bd5d0975005b840e81d16.scope: Deactivated successfully.
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.991 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:05:41 np0005539563 nova_compute[252253]: 2025-11-29 08:05:41.991 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:05:42 np0005539563 podman[301578]: 2025-11-29 08:05:42.006825228 +0000 UTC m=+0.079701300 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:05:42 np0005539563 podman[301588]: 2025-11-29 08:05:42.011872995 +0000 UTC m=+0.083153024 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:05:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:05:42 np0005539563 podman[301590]: 2025-11-29 08:05:42.018202607 +0000 UTC m=+0.084809680 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:05:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev af28c5ff-6dcd-4eff-bfed-035555e168b3 does not exist
Nov 29 03:05:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e3f7eb8e-c299-4f4a-96aa-7153d2af8911 does not exist
Nov 29 03:05:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 724eb333-61a0-4dde-9843-203ac6912d24 does not exist
Nov 29 03:05:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.550 252257 INFO nova.virt.libvirt.driver [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Deleting instance files /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f_del#033[00m
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.552 252257 INFO nova.virt.libvirt.driver [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Deletion of /var/lib/nova/instances/dad9cb14-03c2-4419-8c77-78fdd0ff117f_del complete#033[00m
Nov 29 03:05:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:42.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.627 252257 INFO nova.compute.manager [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.628 252257 DEBUG oslo.service.loopingcall [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.628 252257 DEBUG nova.compute.manager [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.629 252257 DEBUG nova.network.neutron [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.992 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:42 np0005539563 nova_compute[252253]: 2025-11-29 08:05:42.993 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.323 252257 DEBUG nova.network.neutron [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.367 252257 INFO nova.compute.manager [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Took 0.74 seconds to deallocate network for instance.#033[00m
Nov 29 03:05:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.437 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.438 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.480 252257 DEBUG nova.compute.manager [req-62a7582b-0f84-4042-aacf-4c973fd4181e req-8743897b-bf65-46a2-82c2-320c3e275996 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Received event network-vif-deleted-a4ed1d98-ba28-46f3-b3a3-ef4cda11cf40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.495 252257 DEBUG oslo_concurrency.processutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542148193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 148 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 239 op/s
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.977 252257 DEBUG oslo_concurrency.processutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:43 np0005539563 nova_compute[252253]: 2025-11-29 08:05:43.983 252257 DEBUG nova.compute.provider_tree [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.000 252257 DEBUG nova.scheduler.client.report [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.024 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.045 252257 INFO nova.scheduler.client.report [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Deleted allocations for instance dad9cb14-03c2-4419-8c77-78fdd0ff117f#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.127 252257 DEBUG oslo_concurrency.lockutils [None req-3f7897b4-b4ab-4068-ba60-1ba61caee412 ef8e9cc962eb4827954df3c42cc34798 f8bc2a2616a34ba1a18b3211e406993f - - default default] Lock "dad9cb14-03c2-4419-8c77-78fdd0ff117f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.722 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:05:44 np0005539563 nova_compute[252253]: 2025-11-29 08:05:44.724 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 67 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Nov 29 03:05:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.599 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.825 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.826 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.854 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.955 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.955 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.962 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:05:46 np0005539563 nova_compute[252253]: 2025-11-29 08:05:46.963 252257 INFO nova.compute.claims [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.102 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449904932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.540 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.546 252257 DEBUG nova.compute.provider_tree [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.567 252257 DEBUG nova.scheduler.client.report [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.606 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.607 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.663 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.664 252257 DEBUG nova.network.neutron [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.682 252257 INFO nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.701 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.782 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.783 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.784 252257 INFO nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Creating image(s)
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.810 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.839 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.865 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.868 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.931 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.932 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.932 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.933 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.956 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.959 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 45797fd1-8963-4373-b547-4345ab32ac63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 67 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 205 op/s
Nov 29 03:05:47 np0005539563 nova_compute[252253]: 2025-11-29 08:05:47.987 252257 DEBUG nova.policy [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a7b61623f854cf59636f192ab8af005', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.227 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 45797fd1-8963-4373-b547-4345ab32ac63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.306 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.464 252257 DEBUG nova.objects.instance [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.485 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.486 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Ensure instance console log exists: /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.487 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.488 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.488 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:48.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:48 np0005539563 nova_compute[252253]: 2025-11-29 08:05:48.730 252257 DEBUG nova.network.neutron [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Successfully created port: 481cb0ff-1134-4a04-83a5-1209b2f32b86 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.626 252257 DEBUG nova.network.neutron [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Successfully updated port: 481cb0ff-1134-4a04-83a5-1209b2f32b86 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.645 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-45797fd1-8963-4373-b547-4345ab32ac63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.645 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-45797fd1-8963-4373-b547-4345ab32ac63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.646 252257 DEBUG nova.network.neutron [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:05:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:49.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.786 252257 DEBUG nova.compute.manager [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-changed-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.786 252257 DEBUG nova.compute.manager [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Refreshing instance network info cache due to event network-changed-481cb0ff-1134-4a04-83a5-1209b2f32b86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.787 252257 DEBUG oslo_concurrency.lockutils [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-45797fd1-8963-4373-b547-4345ab32ac63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:05:49 np0005539563 nova_compute[252253]: 2025-11-29 08:05:49.876 252257 DEBUG nova.network.neutron [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:05:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 41 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Nov 29 03:05:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.794 252257 DEBUG nova.network.neutron [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Updating instance_info_cache with network_info: [{"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.825 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-45797fd1-8963-4373-b547-4345ab32ac63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.826 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance network_info: |[{"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.828 252257 DEBUG oslo_concurrency.lockutils [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-45797fd1-8963-4373-b547-4345ab32ac63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.828 252257 DEBUG nova.network.neutron [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Refreshing network info cache for port 481cb0ff-1134-4a04-83a5-1209b2f32b86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.835 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Start _get_guest_xml network_info=[{"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.846 252257 WARNING nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.867 252257 DEBUG nova.virt.libvirt.host [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.869 252257 DEBUG nova.virt.libvirt.host [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.873 252257 DEBUG nova.virt.libvirt.host [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.874 252257 DEBUG nova.virt.libvirt.host [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.875 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.875 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.876 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.876 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.876 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.877 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.877 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.877 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.877 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.878 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.878 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.878 252257 DEBUG nova.virt.hardware [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:05:50 np0005539563 nova_compute[252253]: 2025-11-29 08:05:50.881 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136847276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.496 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.526 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.530 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.600 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:05:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:51.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559062296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 115 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 245 op/s
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.992 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.993 252257 DEBUG nova.virt.libvirt.vif [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1593821978',display_name='tempest-ServerDiskConfigTestJSON-server-1593821978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1593821978',id=79,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-p1vyt0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:47Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=45797fd1-8963-4373-b547-4345ab32ac63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.993 252257 DEBUG nova.network.os_vif_util [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.994 252257 DEBUG nova.network.os_vif_util [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:51 np0005539563 nova_compute[252253]: 2025-11-29 08:05:51.995 252257 DEBUG nova.objects.instance [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.108 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <uuid>45797fd1-8963-4373-b547-4345ab32ac63</uuid>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <name>instance-0000004f</name>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1593821978</nova:name>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:05:50</nova:creationTime>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <nova:port uuid="481cb0ff-1134-4a04-83a5-1209b2f32b86">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <entry name="serial">45797fd1-8963-4373-b547-4345ab32ac63</entry>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <entry name="uuid">45797fd1-8963-4373-b547-4345ab32ac63</entry>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/45797fd1-8963-4373-b547-4345ab32ac63_disk">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/45797fd1-8963-4373-b547-4345ab32ac63_disk.config">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7a:51:43"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <target dev="tap481cb0ff-11"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/console.log" append="off"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:05:52 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:05:52 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:05:52 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:05:52 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.109 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Preparing to wait for external event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.110 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.110 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.110 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.111 252257 DEBUG nova.virt.libvirt.vif [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1593821978',display_name='tempest-ServerDiskConfigTestJSON-server-1593821978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1593821978',id=79,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-p1vyt0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:47Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=45797fd1-8963-4373-b547-4345ab32ac63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.111 252257 DEBUG nova.network.os_vif_util [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.111 252257 DEBUG nova.network.os_vif_util [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.112 252257 DEBUG os_vif [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.112 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.113 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.113 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.117 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.117 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481cb0ff-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.117 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap481cb0ff-11, col_values=(('external_ids', {'iface-id': '481cb0ff-1134-4a04-83a5-1209b2f32b86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:51:43', 'vm-uuid': '45797fd1-8963-4373-b547-4345ab32ac63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.118 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539563 NetworkManager[48981]: <info>  [1764403552.1202] manager: (tap481cb0ff-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.126 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.127 252257 INFO os_vif [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11')#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.208 252257 DEBUG nova.network.neutron [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Updated VIF entry in instance network info cache for port 481cb0ff-1134-4a04-83a5-1209b2f32b86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.209 252257 DEBUG nova.network.neutron [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Updating instance_info_cache with network_info: [{"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.261 252257 DEBUG oslo_concurrency.lockutils [req-cf8ffee7-64c3-4102-92ad-70ca68bce015 req-95c98942-3db8-40d8-936b-14ce8711fb62 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-45797fd1-8963-4373-b547-4345ab32ac63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.283 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.284 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.284 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:7a:51:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.285 252257 INFO nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Using config drive#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.316 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:05:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.979 252257 INFO nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Creating config drive at /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config#033[00m
Nov 29 03:05:52 np0005539563 nova_compute[252253]: 2025-11-29 08:05:52.985 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiv3fsx3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.136 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiv3fsx3j" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.173 252257 DEBUG nova.storage.rbd_utils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.180 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config 45797fd1-8963-4373-b547-4345ab32ac63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.363 252257 DEBUG oslo_concurrency.processutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config 45797fd1-8963-4373-b547-4345ab32ac63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.365 252257 INFO nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deleting local config drive /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:53 np0005539563 kernel: tap481cb0ff-11: entered promiscuous mode
Nov 29 03:05:53 np0005539563 NetworkManager[48981]: <info>  [1764403553.4228] manager: (tap481cb0ff-11): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 03:05:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:53Z|00274|binding|INFO|Claiming lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 for this chassis.
Nov 29 03:05:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:53Z|00275|binding|INFO|481cb0ff-1134-4a04-83a5-1209b2f32b86: Claiming fa:16:3e:7a:51:43 10.100.0.9
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.423 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.426 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.437 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:51:43 10.100.0.9'], port_security=['fa:16:3e:7a:51:43 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '45797fd1-8963-4373-b547-4345ab32ac63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=481cb0ff-1134-4a04-83a5-1209b2f32b86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.439 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 481cb0ff-1134-4a04-83a5-1209b2f32b86 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.441 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:05:53 np0005539563 systemd-udevd[302111]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:53 np0005539563 systemd-machined[213024]: New machine qemu-32-instance-0000004f.
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.459 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e94daa6d-08ac-4765-a5ca-6f5bda9c4870]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.460 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:53 np0005539563 NetworkManager[48981]: <info>  [1764403553.4610] device (tap481cb0ff-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:53 np0005539563 NetworkManager[48981]: <info>  [1764403553.4618] device (tap481cb0ff-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.462 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.462 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[46a6826d-4eca-40a8-a197-7ea5d3fc81b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.463 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b2746d25-1a86-4c86-96e6-4749a4cc076d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.474 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf708f7-8fb2-4825-882b-a5e9740a2de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 systemd[1]: Started Virtual Machine qemu-32-instance-0000004f.
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.490 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0a2e01-ae73-434d-9642-9ccbbc030229]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.491 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:53Z|00276|binding|INFO|Setting lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 ovn-installed in OVS
Nov 29 03:05:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:53Z|00277|binding|INFO|Setting lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 up in Southbound
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.503 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.520 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8042b82c-9bc1-4041-9b50-babcde3f9fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.524 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[685878ed-40fc-4bc2-999a-fbf11af7e38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 NetworkManager[48981]: <info>  [1764403553.5259] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 03:05:53 np0005539563 systemd-udevd[302114]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.558 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8a8d37-0ce5-4ca1-a475-42a99b6618b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.562 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[30ccdd7d-c6a6-4db7-b746-6abfe70c31a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 NetworkManager[48981]: <info>  [1764403553.5871] device (tap8665acc6-10): carrier: link connected
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.592 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ba72df64-2e94-4899-b439-c3bb3b79e1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.613 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9072feab-52c1-491e-a6ac-f3466b34f0b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652136, 'reachable_time': 29962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302144, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.637 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8e824a-edd7-46d8-812a-6f3554ddc636]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652136, 'tstamp': 652136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302145, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.655 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52a93e47-19ff-4753-8874-5a5bcf73bb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652136, 'reachable_time': 29962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302146, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.681 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[77e6b2a8-3cfa-4577-b810-58aca7e5c094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.736 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aee530a7-f10f-49c7-afa2-86406ca4c7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.738 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.739 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.740 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:53 np0005539563 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:05:53 np0005539563 NetworkManager[48981]: <info>  [1764403553.7439] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.745 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.746 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:05:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:53.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:05:53Z|00278|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.751 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.752 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[25c15ca7-e24f-41ee-8b46-5690ffdc5bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.754 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:05:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:05:53.755 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.812 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.884 252257 DEBUG nova.compute.manager [req-d49939c2-b443-452b-b7bf-40255437901a req-e39c0437-f586-419b-a32c-d75b01b0adc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.885 252257 DEBUG oslo_concurrency.lockutils [req-d49939c2-b443-452b-b7bf-40255437901a req-e39c0437-f586-419b-a32c-d75b01b0adc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.885 252257 DEBUG oslo_concurrency.lockutils [req-d49939c2-b443-452b-b7bf-40255437901a req-e39c0437-f586-419b-a32c-d75b01b0adc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.885 252257 DEBUG oslo_concurrency.lockutils [req-d49939c2-b443-452b-b7bf-40255437901a req-e39c0437-f586-419b-a32c-d75b01b0adc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.885 252257 DEBUG nova.compute.manager [req-d49939c2-b443-452b-b7bf-40255437901a req-e39c0437-f586-419b-a32c-d75b01b0adc2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Processing event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.961 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403553.9604747, 45797fd1-8963-4373-b547-4345ab32ac63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.961 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Started (Lifecycle Event)
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.965 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.969 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.974 252257 INFO nova.virt.libvirt.driver [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance spawned successfully.
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.974 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:05:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.6 MiB/s wr, 120 op/s
Nov 29 03:05:53 np0005539563 nova_compute[252253]: 2025-11-29 08:05:53.998 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.004 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.007 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.008 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.008 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.008 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.009 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.009 252257 DEBUG nova.virt.libvirt.driver [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.049 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.050 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403553.960782, 45797fd1-8963-4373-b547-4345ab32ac63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.050 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Paused (Lifecycle Event)
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.095 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.098 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403553.968279, 45797fd1-8963-4373-b547-4345ab32ac63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.098 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Resumed (Lifecycle Event)
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.108 252257 INFO nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Took 6.33 seconds to spawn the instance on the hypervisor.
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.108 252257 DEBUG nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.118 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.120 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.146 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:05:54 np0005539563 podman[302220]: 2025-11-29 08:05:54.1495904 +0000 UTC m=+0.051259400 container create a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.179 252257 INFO nova.compute.manager [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Took 7.24 seconds to build instance.
Nov 29 03:05:54 np0005539563 systemd[1]: Started libpod-conmon-a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980.scope.
Nov 29 03:05:54 np0005539563 nova_compute[252253]: 2025-11-29 08:05:54.196 252257 DEBUG oslo_concurrency.lockutils [None req-ba3ae43a-c5c7-4b03-aaaf-a8535d35131d 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:05:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce9f71caba0d3497a67225b595401c9e3701bb163f786723e67c21408948fee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:54 np0005539563 podman[302220]: 2025-11-29 08:05:54.125930929 +0000 UTC m=+0.027599949 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:54 np0005539563 podman[302220]: 2025-11-29 08:05:54.233226386 +0000 UTC m=+0.134895416 container init a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:05:54 np0005539563 podman[302220]: 2025-11-29 08:05:54.240198474 +0000 UTC m=+0.141867484 container start a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:05:54 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [NOTICE]   (302239) : New worker (302241) forked
Nov 29 03:05:54 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [NOTICE]   (302239) : Loading success.
Nov 29 03:05:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:54.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.117 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403540.114896, dad9cb14-03c2-4419-8c77-78fdd0ff117f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.117 252257 INFO nova.compute.manager [-] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] VM Stopped (Lifecycle Event)
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.486 252257 DEBUG nova.compute.manager [None req-d0c5db22-f08e-4799-a842-35885bb011a1 - - - - - -] [instance: dad9cb14-03c2-4419-8c77-78fdd0ff117f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:55.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.997 252257 DEBUG nova.compute.manager [req-0e39ca35-22ae-4593-8888-a585faac0a57 req-d64d81ee-b210-4250-a9c1-b1a60c5f808d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.997 252257 DEBUG oslo_concurrency.lockutils [req-0e39ca35-22ae-4593-8888-a585faac0a57 req-d64d81ee-b210-4250-a9c1-b1a60c5f808d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.997 252257 DEBUG oslo_concurrency.lockutils [req-0e39ca35-22ae-4593-8888-a585faac0a57 req-d64d81ee-b210-4250-a9c1-b1a60c5f808d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.998 252257 DEBUG oslo_concurrency.lockutils [req-0e39ca35-22ae-4593-8888-a585faac0a57 req-d64d81ee-b210-4250-a9c1-b1a60c5f808d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.998 252257 DEBUG nova.compute.manager [req-0e39ca35-22ae-4593-8888-a585faac0a57 req-d64d81ee-b210-4250-a9c1-b1a60c5f808d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] No waiting events found dispatching network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:05:55 np0005539563 nova_compute[252253]: 2025-11-29 08:05:55.998 252257 WARNING nova.compute.manager [req-0e39ca35-22ae-4593-8888-a585faac0a57 req-d64d81ee-b210-4250-a9c1-b1a60c5f808d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received unexpected event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 for instance with vm_state active and task_state None.
Nov 29 03:05:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:56 np0005539563 nova_compute[252253]: 2025-11-29 08:05:56.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:57 np0005539563 nova_compute[252253]: 2025-11-29 08:05:57.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:57.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Nov 29 03:05:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:58.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:58 np0005539563 nova_compute[252253]: 2025-11-29 08:05:58.751 252257 INFO nova.compute.manager [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Rebuilding instance
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.099 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.115 252257 DEBUG nova.compute.manager [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.184 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.200 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.214 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.238 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.259 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:05:59 np0005539563 nova_compute[252253]: 2025-11-29 08:05:59.262 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:05:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:05:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:59.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 29 03:06:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:00.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:01 np0005539563 nova_compute[252253]: 2025-11-29 08:06:01.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:01.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 29 03:06:02 np0005539563 nova_compute[252253]: 2025-11-29 08:06:02.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:02.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:03.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 598 KiB/s wr, 148 op/s
Nov 29 03:06:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:04.911 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:04.913 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Nov 29 03:06:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:06 np0005539563 nova_compute[252253]: 2025-11-29 08:06:06.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:06.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:07 np0005539563 nova_compute[252253]: 2025-11-29 08:06:07.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:07.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:07Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:51:43 10.100.0.9
Nov 29 03:06:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:07Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:51:43 10.100.0.9
Nov 29 03:06:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 85 B/s wr, 92 op/s
Nov 29 03:06:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:09 np0005539563 nova_compute[252253]: 2025-11-29 08:06:09.313 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:06:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:09.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 137 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 191 KiB/s wr, 106 op/s
Nov 29 03:06:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:10.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:11 np0005539563 nova_compute[252253]: 2025-11-29 08:06:11.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539563 kernel: tap481cb0ff-11 (unregistering): left promiscuous mode
Nov 29 03:06:11 np0005539563 NetworkManager[48981]: <info>  [1764403571.7250] device (tap481cb0ff-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:11 np0005539563 nova_compute[252253]: 2025-11-29 08:06:11.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:11Z|00279|binding|INFO|Releasing lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 from this chassis (sb_readonly=0)
Nov 29 03:06:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:11Z|00280|binding|INFO|Setting lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 down in Southbound
Nov 29 03:06:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:11Z|00281|binding|INFO|Removing iface tap481cb0ff-11 ovn-installed in OVS
Nov 29 03:06:11 np0005539563 nova_compute[252253]: 2025-11-29 08:06:11.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:11 np0005539563 nova_compute[252253]: 2025-11-29 08:06:11.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539563 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 29 03:06:11 np0005539563 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004f.scope: Consumed 13.789s CPU time.
Nov 29 03:06:11 np0005539563 systemd-machined[213024]: Machine qemu-32-instance-0000004f terminated.
Nov 29 03:06:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:11.978 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:51:43 10.100.0.9'], port_security=['fa:16:3e:7a:51:43 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '45797fd1-8963-4373-b547-4345ab32ac63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=481cb0ff-1134-4a04-83a5-1209b2f32b86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:11.980 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 481cb0ff-1134-4a04-83a5-1209b2f32b86 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:06:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:11.981 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:11.983 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[57a1f998-362b-4698-9245-c8b73bbac6f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:11.983 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:06:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 188 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 170 op/s
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.160 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [NOTICE]   (302239) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:12 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [NOTICE]   (302239) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:12 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [WARNING]  (302239) : Exiting Master process...
Nov 29 03:06:12 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [ALERT]    (302239) : Current worker (302241) exited with code 143 (Terminated)
Nov 29 03:06:12 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302235]: [WARNING]  (302239) : All workers exited. Exiting... (0)
Nov 29 03:06:12 np0005539563 systemd[1]: libpod-a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980.scope: Deactivated successfully.
Nov 29 03:06:12 np0005539563 conmon[302235]: conmon a02474e15545f99305c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980.scope/container/memory.events
Nov 29 03:06:12 np0005539563 podman[302344]: 2025-11-29 08:06:12.181480497 +0000 UTC m=+0.061369434 container died a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:06:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2ce9f71caba0d3497a67225b595401c9e3701bb163f786723e67c21408948fee-merged.mount: Deactivated successfully.
Nov 29 03:06:12 np0005539563 podman[302344]: 2025-11-29 08:06:12.225834348 +0000 UTC m=+0.105723265 container cleanup a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:12 np0005539563 systemd[1]: libpod-conmon-a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980.scope: Deactivated successfully.
Nov 29 03:06:12 np0005539563 podman[302400]: 2025-11-29 08:06:12.291135597 +0000 UTC m=+0.042479091 container remove a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:12 np0005539563 podman[302367]: 2025-11-29 08:06:12.292811033 +0000 UTC m=+0.068239310 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:12 np0005539563 podman[302359]: 2025-11-29 08:06:12.293005448 +0000 UTC m=+0.083253516 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.297 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b18589b8-ba18-4d11-a130-7209d929667e]: (4, ('Sat Nov 29 08:06:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980)\na02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980\nSat Nov 29 08:06:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (a02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980)\na02474e15545f99305c2c036ee58acf28c06939bf4a20838b9bfea8876530980\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.298 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[67bdd658-c4ae-4ce3-86b8-0cf07d78ffff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.299 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.300 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539563 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.318 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.320 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3cee0275-2269-4ab2-8c3d-2ab8f273735b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 podman[302369]: 2025-11-29 08:06:12.324973494 +0000 UTC m=+0.106612569 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.329 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.334 252257 INFO nova.virt.libvirt.driver [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance destroyed successfully.#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.338 252257 INFO nova.virt.libvirt.driver [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance destroyed successfully.#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.338 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd34c8c-9a10-4e18-b7cf-3701c36000d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.339 252257 DEBUG nova.virt.libvirt.vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1593821978',display_name='tempest-ServerDiskConfigTestJSON-server-1593821978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1593821978',id=79,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-p1vyt0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:57Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=45797fd1-8963-4373-b547-4345ab32ac63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.339 252257 DEBUG nova.network.os_vif_util [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.339 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec49fe2-d5c4-4c91-8db2-1924c263abcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.340 252257 DEBUG nova.network.os_vif_util [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.340 252257 DEBUG os_vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.342 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.342 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481cb0ff-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.346 252257 INFO os_vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11')#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.353 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa6ad6a-8009-4d32-a605-9ba734b4903d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652128, 'reachable_time': 40561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302453, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.357 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:12.357 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[ca142ad6-05db-4fe5-9f5f-91a2f7c23a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.516 252257 DEBUG nova.compute.manager [req-0c007f0f-f035-4ef8-9911-0980087d1efb req-577fdfc6-039e-45d3-8fbd-17466638cf41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-unplugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.517 252257 DEBUG oslo_concurrency.lockutils [req-0c007f0f-f035-4ef8-9911-0980087d1efb req-577fdfc6-039e-45d3-8fbd-17466638cf41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.517 252257 DEBUG oslo_concurrency.lockutils [req-0c007f0f-f035-4ef8-9911-0980087d1efb req-577fdfc6-039e-45d3-8fbd-17466638cf41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.517 252257 DEBUG oslo_concurrency.lockutils [req-0c007f0f-f035-4ef8-9911-0980087d1efb req-577fdfc6-039e-45d3-8fbd-17466638cf41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.517 252257 DEBUG nova.compute.manager [req-0c007f0f-f035-4ef8-9911-0980087d1efb req-577fdfc6-039e-45d3-8fbd-17466638cf41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] No waiting events found dispatching network-vif-unplugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.517 252257 WARNING nova.compute.manager [req-0c007f0f-f035-4ef8-9911-0980087d1efb req-577fdfc6-039e-45d3-8fbd-17466638cf41 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received unexpected event network-vif-unplugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:06:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:06:12
Nov 29 03:06:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:06:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:06:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'backups']
Nov 29 03:06:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.974 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deleting instance files /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63_del#033[00m
Nov 29 03:06:12 np0005539563 nova_compute[252253]: 2025-11-29 08:06:12.975 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deletion of /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63_del complete#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.161 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.161 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Creating image(s)#033[00m
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.199 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.242 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.281 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.285 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.286 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:13 np0005539563 nova_compute[252253]: 2025-11-29 08:06:13.553 252257 DEBUG nova.virt.libvirt.imagebackend [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/ed489666-5fa2-4ea4-8005-7a7505ac1b78/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/ed489666-5fa2-4ea4-8005-7a7505ac1b78/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:06:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:13.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:06:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 191 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 532 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Nov 29 03:06:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:14 np0005539563 nova_compute[252253]: 2025-11-29 08:06:14.943 252257 DEBUG nova.compute.manager [req-df389c0a-2d79-4de1-882e-a9092aeb62e4 req-a93fa3e6-722f-4b69-8f0a-190589c97e13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:14 np0005539563 nova_compute[252253]: 2025-11-29 08:06:14.944 252257 DEBUG oslo_concurrency.lockutils [req-df389c0a-2d79-4de1-882e-a9092aeb62e4 req-a93fa3e6-722f-4b69-8f0a-190589c97e13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:14 np0005539563 nova_compute[252253]: 2025-11-29 08:06:14.944 252257 DEBUG oslo_concurrency.lockutils [req-df389c0a-2d79-4de1-882e-a9092aeb62e4 req-a93fa3e6-722f-4b69-8f0a-190589c97e13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:14 np0005539563 nova_compute[252253]: 2025-11-29 08:06:14.945 252257 DEBUG oslo_concurrency.lockutils [req-df389c0a-2d79-4de1-882e-a9092aeb62e4 req-a93fa3e6-722f-4b69-8f0a-190589c97e13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:14 np0005539563 nova_compute[252253]: 2025-11-29 08:06:14.945 252257 DEBUG nova.compute.manager [req-df389c0a-2d79-4de1-882e-a9092aeb62e4 req-a93fa3e6-722f-4b69-8f0a-190589c97e13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] No waiting events found dispatching network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:14 np0005539563 nova_compute[252253]: 2025-11-29 08:06:14.946 252257 WARNING nova.compute.manager [req-df389c0a-2d79-4de1-882e-a9092aeb62e4 req-a93fa3e6-722f-4b69-8f0a-190589c97e13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received unexpected event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.236 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.319 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.321 252257 DEBUG nova.virt.images [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] ed489666-5fa2-4ea4-8005-7a7505ac1b78 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.322 252257 DEBUG nova.privsep.utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.323 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.553 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.part /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.559 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.624 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.625 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.652 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:15 np0005539563 nova_compute[252253]: 2025-11-29 08:06:15.655 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 45797fd1-8963-4373-b547-4345ab32ac63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:15.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 121 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.017 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 45797fd1-8963-4373-b547-4345ab32ac63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.108 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.235 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.235 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Ensure instance console log exists: /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.236 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.236 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.237 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.239 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Start _get_guest_xml network_info=[{"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.243 252257 WARNING nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.251 252257 DEBUG nova.virt.libvirt.host [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.252 252257 DEBUG nova.virt.libvirt.host [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.255 252257 DEBUG nova.virt.libvirt.host [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.256 252257 DEBUG nova.virt.libvirt.host [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.257 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.257 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.257 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.258 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.258 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.258 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.258 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.258 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.259 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.259 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.259 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.259 252257 DEBUG nova.virt.hardware [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.260 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.274 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.612 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721166876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.690 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.715 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:16 np0005539563 nova_compute[252253]: 2025-11-29 08:06:16.718 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636843643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.156 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.158 252257 DEBUG nova.virt.libvirt.vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1593821978',display_name='tempest-ServerDiskConfigTestJSON-server-1593821978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1593821978',id=79,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-p1vyt0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempes
t-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:13Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=45797fd1-8963-4373-b547-4345ab32ac63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.158 252257 DEBUG nova.network.os_vif_util [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.159 252257 DEBUG nova.network.os_vif_util [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.162 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <uuid>45797fd1-8963-4373-b547-4345ab32ac63</uuid>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <name>instance-0000004f</name>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1593821978</nova:name>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:06:16</nova:creationTime>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <nova:port uuid="481cb0ff-1134-4a04-83a5-1209b2f32b86">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <entry name="serial">45797fd1-8963-4373-b547-4345ab32ac63</entry>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <entry name="uuid">45797fd1-8963-4373-b547-4345ab32ac63</entry>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/45797fd1-8963-4373-b547-4345ab32ac63_disk">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/45797fd1-8963-4373-b547-4345ab32ac63_disk.config">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7a:51:43"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <target dev="tap481cb0ff-11"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/console.log" append="off"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:06:17 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:06:17 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:06:17 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:06:17 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.163 252257 DEBUG nova.compute.manager [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Preparing to wait for external event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.163 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.163 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.163 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.164 252257 DEBUG nova.virt.libvirt.vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1593821978',display_name='tempest-ServerDiskConfigTestJSON-server-1593821978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1593821978',id=79,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-p1vyt0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:13Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=45797fd1-8963-4373-b547-4345ab32ac63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.164 252257 DEBUG nova.network.os_vif_util [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.164 252257 DEBUG nova.network.os_vif_util [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.165 252257 DEBUG os_vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.166 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.166 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.168 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481cb0ff-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.168 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap481cb0ff-11, col_values=(('external_ids', {'iface-id': '481cb0ff-1134-4a04-83a5-1209b2f32b86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:51:43', 'vm-uuid': '45797fd1-8963-4373-b547-4345ab32ac63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539563 NetworkManager[48981]: <info>  [1764403577.1709] manager: (tap481cb0ff-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.174 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.175 252257 INFO os_vif [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11')#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.283 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.284 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.284 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:7a:51:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.284 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Using config drive#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.349 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.375 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.417 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'keypairs' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:17.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.793 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Creating config drive at /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.803 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zimw8y8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.962 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zimw8y8" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 121 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 29 03:06:17 np0005539563 nova_compute[252253]: 2025-11-29 08:06:17.997 252257 DEBUG nova.storage.rbd_utils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 45797fd1-8963-4373-b547-4345ab32ac63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.001 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config 45797fd1-8963-4373-b547-4345ab32ac63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.214 252257 DEBUG oslo_concurrency.processutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config 45797fd1-8963-4373-b547-4345ab32ac63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.215 252257 INFO nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deleting local config drive /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:18 np0005539563 kernel: tap481cb0ff-11: entered promiscuous mode
Nov 29 03:06:18 np0005539563 NetworkManager[48981]: <info>  [1764403578.2823] manager: (tap481cb0ff-11): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 03:06:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:18Z|00282|binding|INFO|Claiming lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 for this chassis.
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:18Z|00283|binding|INFO|481cb0ff-1134-4a04-83a5-1209b2f32b86: Claiming fa:16:3e:7a:51:43 10.100.0.9
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.292 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:51:43 10.100.0.9'], port_security=['fa:16:3e:7a:51:43 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '45797fd1-8963-4373-b547-4345ab32ac63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=481cb0ff-1134-4a04-83a5-1209b2f32b86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.293 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 481cb0ff-1134-4a04-83a5-1209b2f32b86 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.294 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:06:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:18Z|00284|binding|INFO|Setting lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 ovn-installed in OVS
Nov 29 03:06:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:18Z|00285|binding|INFO|Setting lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 up in Southbound
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.303 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.306 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c14aa818-3003-4558-8df6-01c7604b27f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.306 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.309 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.309 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9cb7e3-1705-4516-9cab-328e61a079c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.311 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[62e6deb0-00b2-471f-8f5d-8c09c8c026e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 systemd-machined[213024]: New machine qemu-33-instance-0000004f.
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.327 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7d8db5-5fa3-4fd1-8df2-64b78711b50a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 systemd[1]: Started Virtual Machine qemu-33-instance-0000004f.
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.342 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcae626-0512-45f2-a14b-b3d485572ff1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 systemd-udevd[302793]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:18 np0005539563 NetworkManager[48981]: <info>  [1764403578.3612] device (tap481cb0ff-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:18 np0005539563 NetworkManager[48981]: <info>  [1764403578.3620] device (tap481cb0ff-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.384 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed81573-5810-4305-b1d4-072c3dfd1526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 NetworkManager[48981]: <info>  [1764403578.3911] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.389 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[47a9555f-4179-4ce0-9195-c6870d1ceaad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.418 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2ae258-40f9-4b34-a0df-95bfa2bbe62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.421 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ba14745c-2827-46e3-9ba3-492f03324b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 NetworkManager[48981]: <info>  [1764403578.4402] device (tap8665acc6-10): carrier: link connected
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.445 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[565f9d48-3184-4f96-8509-5549b11eccd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.460 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3a578c-c064-4553-9794-48a8f275b8ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654621, 'reachable_time': 21914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302823, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.475 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b4dd6b10-b8d0-4701-9a4c-cb96192c33d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654621, 'tstamp': 654621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302824, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.488 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[86d408aa-7f5b-472d-a70b-f350e213986a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654621, 'reachable_time': 21914, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302825, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.510 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e97077-aac1-4093-941d-adf4fa0b8a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.565 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2bca8a-a553-4bd9-9018-f0ae03448c81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.567 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.567 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.568 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539563 NetworkManager[48981]: <info>  [1764403578.5700] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 03:06:18 np0005539563 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.571 252257 DEBUG nova.compute.manager [req-2d19c018-b9e7-43b0-8155-c8d45a2d9403 req-b0af0293-f2e6-47e3-86f9-b7d93210d160 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.572 252257 DEBUG oslo_concurrency.lockutils [req-2d19c018-b9e7-43b0-8155-c8d45a2d9403 req-b0af0293-f2e6-47e3-86f9-b7d93210d160 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.572 252257 DEBUG oslo_concurrency.lockutils [req-2d19c018-b9e7-43b0-8155-c8d45a2d9403 req-b0af0293-f2e6-47e3-86f9-b7d93210d160 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.572 252257 DEBUG oslo_concurrency.lockutils [req-2d19c018-b9e7-43b0-8155-c8d45a2d9403 req-b0af0293-f2e6-47e3-86f9-b7d93210d160 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.573 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.572 252257 DEBUG nova.compute.manager [req-2d19c018-b9e7-43b0-8155-c8d45a2d9403 req-b0af0293-f2e6-47e3-86f9-b7d93210d160 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Processing event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.573 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:18Z|00286|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.589 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.591 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.592 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d20d01a8-5acd-4425-9570-8514d22b01f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.592 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:18.593 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.839 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 45797fd1-8963-4373-b547-4345ab32ac63 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.840 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403578.8385584, 45797fd1-8963-4373-b547-4345ab32ac63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.840 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.842 252257 DEBUG nova.compute.manager [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.845 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.850 252257 INFO nova.virt.libvirt.driver [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance spawned successfully.#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.850 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.860 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.863 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.884 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.884 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.885 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.885 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.885 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.886 252257 DEBUG nova.virt.libvirt.driver [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.889 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.889 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403578.8386345, 45797fd1-8963-4373-b547-4345ab32ac63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.890 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.947 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.951 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403578.8449101, 45797fd1-8963-4373-b547-4345ab32ac63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.952 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:18 np0005539563 podman[302896]: 2025-11-29 08:06:18.958423546 +0000 UTC m=+0.055171226 container create a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.972 252257 DEBUG nova.compute.manager [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.981 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:18 np0005539563 nova_compute[252253]: 2025-11-29 08:06:18.987 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:18 np0005539563 systemd[1]: Started libpod-conmon-a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6.scope.
Nov 29 03:06:19 np0005539563 nova_compute[252253]: 2025-11-29 08:06:19.016 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:06:19 np0005539563 podman[302896]: 2025-11-29 08:06:18.924415245 +0000 UTC m=+0.021162935 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54cd8961a078f60adf80a19f39528d6ee56f842eea4b497d52782dd67554ed3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:19 np0005539563 podman[302896]: 2025-11-29 08:06:19.048554508 +0000 UTC m=+0.145302208 container init a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:19 np0005539563 podman[302896]: 2025-11-29 08:06:19.053462151 +0000 UTC m=+0.150209791 container start a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:06:19 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [NOTICE]   (302913) : New worker (302915) forked
Nov 29 03:06:19 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [NOTICE]   (302913) : Loading success.
Nov 29 03:06:19 np0005539563 nova_compute[252253]: 2025-11-29 08:06:19.082 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:19 np0005539563 nova_compute[252253]: 2025-11-29 08:06:19.082 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:19 np0005539563 nova_compute[252253]: 2025-11-29 08:06:19.083 252257 DEBUG nova.objects.instance [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:06:19 np0005539563 nova_compute[252253]: 2025-11-29 08:06:19.140 252257 DEBUG oslo_concurrency.lockutils [None req-7673337a-a348-45ca-b50d-8027bf70283f 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:19.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 138 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Nov 29 03:06:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:20.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:20 np0005539563 nova_compute[252253]: 2025-11-29 08:06:20.813 252257 DEBUG nova.compute.manager [req-50862d1f-9f66-4449-988d-44e0a216b0aa req-aa71d7a0-e1b6-479c-b824-4cb6fe98283e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:20 np0005539563 nova_compute[252253]: 2025-11-29 08:06:20.813 252257 DEBUG oslo_concurrency.lockutils [req-50862d1f-9f66-4449-988d-44e0a216b0aa req-aa71d7a0-e1b6-479c-b824-4cb6fe98283e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:20 np0005539563 nova_compute[252253]: 2025-11-29 08:06:20.814 252257 DEBUG oslo_concurrency.lockutils [req-50862d1f-9f66-4449-988d-44e0a216b0aa req-aa71d7a0-e1b6-479c-b824-4cb6fe98283e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:20 np0005539563 nova_compute[252253]: 2025-11-29 08:06:20.814 252257 DEBUG oslo_concurrency.lockutils [req-50862d1f-9f66-4449-988d-44e0a216b0aa req-aa71d7a0-e1b6-479c-b824-4cb6fe98283e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:20 np0005539563 nova_compute[252253]: 2025-11-29 08:06:20.814 252257 DEBUG nova.compute.manager [req-50862d1f-9f66-4449-988d-44e0a216b0aa req-aa71d7a0-e1b6-479c-b824-4cb6fe98283e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] No waiting events found dispatching network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:20 np0005539563 nova_compute[252253]: 2025-11-29 08:06:20.814 252257 WARNING nova.compute.manager [req-50862d1f-9f66-4449-988d-44e0a216b0aa req-aa71d7a0-e1b6-479c-b824-4cb6fe98283e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received unexpected event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:21 np0005539563 nova_compute[252253]: 2025-11-29 08:06:21.638 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.9 MiB/s wr, 229 op/s
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.121 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.123 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.123 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.123 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.124 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.125 252257 INFO nova.compute.manager [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Terminating instance#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.126 252257 DEBUG nova.compute.manager [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:22 np0005539563 kernel: tap481cb0ff-11 (unregistering): left promiscuous mode
Nov 29 03:06:22 np0005539563 NetworkManager[48981]: <info>  [1764403582.1646] device (tap481cb0ff-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.169 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.177 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:22Z|00287|binding|INFO|Releasing lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 from this chassis (sb_readonly=0)
Nov 29 03:06:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:22Z|00288|binding|INFO|Setting lport 481cb0ff-1134-4a04-83a5-1209b2f32b86 down in Southbound
Nov 29 03:06:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:22Z|00289|binding|INFO|Removing iface tap481cb0ff-11 ovn-installed in OVS
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.178 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.185 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:51:43 10.100.0.9'], port_security=['fa:16:3e:7a:51:43 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '45797fd1-8963-4373-b547-4345ab32ac63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=481cb0ff-1134-4a04-83a5-1209b2f32b86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.186 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 481cb0ff-1134-4a04-83a5-1209b2f32b86 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.188 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.189 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfc44cb-54ab-4d7a-9e81-91c61b9586c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.189 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 29 03:06:22 np0005539563 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004f.scope: Consumed 3.880s CPU time.
Nov 29 03:06:22 np0005539563 systemd-machined[213024]: Machine qemu-33-instance-0000004f terminated.
Nov 29 03:06:22 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [NOTICE]   (302913) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:22 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [NOTICE]   (302913) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:22 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [WARNING]  (302913) : Exiting Master process...
Nov 29 03:06:22 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [ALERT]    (302913) : Current worker (302915) exited with code 143 (Terminated)
Nov 29 03:06:22 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[302909]: [WARNING]  (302913) : All workers exited. Exiting... (0)
Nov 29 03:06:22 np0005539563 systemd[1]: libpod-a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6.scope: Deactivated successfully.
Nov 29 03:06:22 np0005539563 conmon[302909]: conmon a987892b93247497865b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6.scope/container/memory.events
Nov 29 03:06:22 np0005539563 podman[302949]: 2025-11-29 08:06:22.325668007 +0000 UTC m=+0.045141154 container died a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.352 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-54cd8961a078f60adf80a19f39528d6ee56f842eea4b497d52782dd67554ed3a-merged.mount: Deactivated successfully.
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.360 252257 INFO nova.virt.libvirt.driver [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Instance destroyed successfully.#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.361 252257 DEBUG nova.objects.instance [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 45797fd1-8963-4373-b547-4345ab32ac63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:22 np0005539563 podman[302949]: 2025-11-29 08:06:22.368393054 +0000 UTC m=+0.087866181 container cleanup a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.377 252257 DEBUG nova.virt.libvirt.vif [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:05:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1593821978',display_name='tempest-ServerDiskConfigTestJSON-server-1593821978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1593821978',id=79,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-p1vyt0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:19Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=45797fd1-8963-4373-b547-4345ab32ac63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.378 252257 DEBUG nova.network.os_vif_util [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "address": "fa:16:3e:7a:51:43", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap481cb0ff-11", "ovs_interfaceid": "481cb0ff-1134-4a04-83a5-1209b2f32b86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.380 252257 DEBUG nova.network.os_vif_util [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.381 252257 DEBUG os_vif [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 systemd[1]: libpod-conmon-a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6.scope: Deactivated successfully.
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.385 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481cb0ff-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.387 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.388 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.390 252257 INFO os_vif [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:51:43,bridge_name='br-int',has_traffic_filtering=True,id=481cb0ff-1134-4a04-83a5-1209b2f32b86,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap481cb0ff-11')#033[00m
Nov 29 03:06:22 np0005539563 podman[302986]: 2025-11-29 08:06:22.424296969 +0000 UTC m=+0.035275696 container remove a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.429 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[23df40b0-19f9-4a48-b45b-3c27ffc66970]: (4, ('Sat Nov 29 08:06:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6)\na987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6\nSat Nov 29 08:06:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (a987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6)\na987892b93247497865b24217a64c45d3f84f1c5212485978b0cf029e6e122a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.430 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a5e0ee-bffa-4664-9c12-e626c461a719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.431 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.449 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c1600457-74c3-4c7c-be45-ef74e27da462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.462 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[31344257-edfb-44b5-b65f-84a9c8bd192e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.463 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[64f39d5a-7bbf-4db5-b9b4-81d73ac2f09f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.475 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aeecbb-cdde-4764-b081-004605f1290c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654614, 'reachable_time': 24471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303016, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.477 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:22.478 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5975a449-cf39-4ade-b668-b0119edcc4da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:22 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:06:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:22.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.812 252257 INFO nova.virt.libvirt.driver [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deleting instance files /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63_del#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.813 252257 INFO nova.virt.libvirt.driver [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deletion of /var/lib/nova/instances/45797fd1-8963-4373-b547-4345ab32ac63_del complete#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.883 252257 INFO nova.compute.manager [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.884 252257 DEBUG oslo.service.loopingcall [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.884 252257 DEBUG nova.compute.manager [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:22 np0005539563 nova_compute[252253]: 2025-11-29 08:06:22.885 252257 DEBUG nova.network.neutron [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:23 np0005539563 nova_compute[252253]: 2025-11-29 08:06:23.039 252257 DEBUG nova.compute.manager [req-bd537737-167e-4293-acbc-032bb51c6667 req-d399de5d-a90e-4e19-84ab-a12dd9546bef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-unplugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:23 np0005539563 nova_compute[252253]: 2025-11-29 08:06:23.040 252257 DEBUG oslo_concurrency.lockutils [req-bd537737-167e-4293-acbc-032bb51c6667 req-d399de5d-a90e-4e19-84ab-a12dd9546bef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:23 np0005539563 nova_compute[252253]: 2025-11-29 08:06:23.040 252257 DEBUG oslo_concurrency.lockutils [req-bd537737-167e-4293-acbc-032bb51c6667 req-d399de5d-a90e-4e19-84ab-a12dd9546bef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:23 np0005539563 nova_compute[252253]: 2025-11-29 08:06:23.040 252257 DEBUG oslo_concurrency.lockutils [req-bd537737-167e-4293-acbc-032bb51c6667 req-d399de5d-a90e-4e19-84ab-a12dd9546bef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:23 np0005539563 nova_compute[252253]: 2025-11-29 08:06:23.040 252257 DEBUG nova.compute.manager [req-bd537737-167e-4293-acbc-032bb51c6667 req-d399de5d-a90e-4e19-84ab-a12dd9546bef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] No waiting events found dispatching network-vif-unplugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:23 np0005539563 nova_compute[252253]: 2025-11-29 08:06:23.041 252257 DEBUG nova.compute.manager [req-bd537737-167e-4293-acbc-032bb51c6667 req-d399de5d-a90e-4e19-84ab-a12dd9546bef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-unplugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031661490321980273 of space, bias 1.0, pg target 0.9498447096594081 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:06:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:23.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 171 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Nov 29 03:06:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:24.206 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:24.206 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.206 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.228 252257 DEBUG nova.network.neutron [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.250 252257 INFO nova.compute.manager [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.304 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.305 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.390 252257 DEBUG oslo_concurrency.processutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 29 03:06:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 03:06:24 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.577 252257 DEBUG nova.compute.manager [req-23681d44-9513-4478-b7cb-4eb249ee4195 req-d37cec3b-8bf1-4a58-9eaa-78f5a58db799 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-deleted-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:24.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1245065104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.804 252257 DEBUG oslo_concurrency.processutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.812 252257 DEBUG nova.compute.provider_tree [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.832 252257 DEBUG nova.scheduler.client.report [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.861 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:24 np0005539563 nova_compute[252253]: 2025-11-29 08:06:24.934 252257 INFO nova.scheduler.client.report [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocations for instance 45797fd1-8963-4373-b547-4345ab32ac63#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.012 252257 DEBUG oslo_concurrency.lockutils [None req-59646ce6-fe95-4485-896c-f344b85294b5 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.144 252257 DEBUG nova.compute.manager [req-8ef4e620-8d79-4d66-a215-442bd949163b req-a5602a13-ae1b-4222-887a-19cf34ffdfe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.144 252257 DEBUG oslo_concurrency.lockutils [req-8ef4e620-8d79-4d66-a215-442bd949163b req-a5602a13-ae1b-4222-887a-19cf34ffdfe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "45797fd1-8963-4373-b547-4345ab32ac63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.145 252257 DEBUG oslo_concurrency.lockutils [req-8ef4e620-8d79-4d66-a215-442bd949163b req-a5602a13-ae1b-4222-887a-19cf34ffdfe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.145 252257 DEBUG oslo_concurrency.lockutils [req-8ef4e620-8d79-4d66-a215-442bd949163b req-a5602a13-ae1b-4222-887a-19cf34ffdfe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "45797fd1-8963-4373-b547-4345ab32ac63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.146 252257 DEBUG nova.compute.manager [req-8ef4e620-8d79-4d66-a215-442bd949163b req-a5602a13-ae1b-4222-887a-19cf34ffdfe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] No waiting events found dispatching network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.146 252257 WARNING nova.compute.manager [req-8ef4e620-8d79-4d66-a215-442bd949163b req-a5602a13-ae1b-4222-887a-19cf34ffdfe4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Received unexpected event network-vif-plugged-481cb0ff-1134-4a04-83a5-1209b2f32b86 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.531 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.531 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.559 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.632 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.633 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.641 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.641 252257 INFO nova.compute.claims [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:06:25 np0005539563 nova_compute[252253]: 2025-11-29 08:06:25.746 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 213 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 259 op/s
Nov 29 03:06:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664529898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.181 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.186 252257 DEBUG nova.compute.provider_tree [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.219 252257 DEBUG nova.scheduler.client.report [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.247 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.248 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.370 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.371 252257 DEBUG nova.network.neutron [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.429 252257 INFO nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.456 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:06:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.560 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.561 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.561 252257 INFO nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Creating image(s)#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.587 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.614 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.638 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.642 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.706 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.707 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.707 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.708 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.734 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.737 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:26 np0005539563 nova_compute[252253]: 2025-11-29 08:06:26.781 252257 DEBUG nova.policy [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a7b61623f854cf59636f192ab8af005', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.006 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.087 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.200 252257 DEBUG nova.objects.instance [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.222 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.223 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Ensure instance console log exists: /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.223 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.223 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.224 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:27 np0005539563 nova_compute[252253]: 2025-11-29 08:06:27.388 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 213 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 259 op/s
Nov 29 03:06:28 np0005539563 nova_compute[252253]: 2025-11-29 08:06:28.000 252257 DEBUG nova.network.neutron [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Successfully created port: e4e1a07e-ccab-41ce-8316-f943d1063180 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:06:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:28.208 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.042 252257 DEBUG nova.network.neutron [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Successfully updated port: e4e1a07e-ccab-41ce-8316-f943d1063180 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.068 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.069 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.069 252257 DEBUG nova.network.neutron [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.359 252257 DEBUG nova.compute.manager [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-changed-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.360 252257 DEBUG nova.compute.manager [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Refreshing instance network info cache due to event network-changed-e4e1a07e-ccab-41ce-8316-f943d1063180. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.360 252257 DEBUG oslo_concurrency.lockutils [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:29 np0005539563 nova_compute[252253]: 2025-11-29 08:06:29.656 252257 DEBUG nova.network.neutron [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:29.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 221 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.8 MiB/s wr, 270 op/s
Nov 29 03:06:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 29 03:06:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 03:06:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 29 03:06:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:30.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.511095) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591511229, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2124, "num_deletes": 259, "total_data_size": 3596644, "memory_usage": 3662408, "flush_reason": "Manual Compaction"}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591569513, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3530624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35305, "largest_seqno": 37428, "table_properties": {"data_size": 3521047, "index_size": 5943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20532, "raw_average_key_size": 20, "raw_value_size": 3501591, "raw_average_value_size": 3501, "num_data_blocks": 258, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403397, "oldest_key_time": 1764403397, "file_creation_time": 1764403591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 58450 microseconds, and 10426 cpu microseconds.
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.569598) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3530624 bytes OK
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.569637) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.573078) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.573104) EVENT_LOG_v1 {"time_micros": 1764403591573099, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.573132) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3587877, prev total WAL file size 3587877, number of live WAL files 2.
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.574301) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303131' seq:72057594037927935, type:22 .. '6C6F676D0031323634' seq:0, type:0; will stop at (end)
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3447KB)], [74(9361KB)]
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591574409, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13116329, "oldest_snapshot_seqno": -1}
Nov 29 03:06:31 np0005539563 nova_compute[252253]: 2025-11-29 08:06:31.641 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 7024 keys, 12955088 bytes, temperature: kUnknown
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591743395, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12955088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12905217, "index_size": 31199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 180677, "raw_average_key_size": 25, "raw_value_size": 12776634, "raw_average_value_size": 1818, "num_data_blocks": 1250, "num_entries": 7024, "num_filter_entries": 7024, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.743674) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12955088 bytes
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.746686) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.6 rd, 76.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.1 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.7) OK, records in: 7561, records dropped: 537 output_compression: NoCompression
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.746773) EVENT_LOG_v1 {"time_micros": 1764403591746719, "job": 42, "event": "compaction_finished", "compaction_time_micros": 169088, "compaction_time_cpu_micros": 29529, "output_level": 6, "num_output_files": 1, "total_output_size": 12955088, "num_input_records": 7561, "num_output_records": 7024, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591748080, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403591751363, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.574138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:31.751452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 259 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.2 MiB/s wr, 311 op/s
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.391 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:32.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.782 252257 DEBUG nova.network.neutron [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Updating instance_info_cache with network_info: [{"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.906 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.906 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance network_info: |[{"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.906 252257 DEBUG oslo_concurrency.lockutils [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.907 252257 DEBUG nova.network.neutron [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Refreshing network info cache for port e4e1a07e-ccab-41ce-8316-f943d1063180 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.909 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Start _get_guest_xml network_info=[{"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.914 252257 WARNING nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.963 252257 DEBUG nova.virt.libvirt.host [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.965 252257 DEBUG nova.virt.libvirt.host [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.977 252257 DEBUG nova.virt.libvirt.host [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.978 252257 DEBUG nova.virt.libvirt.host [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.980 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.981 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.982 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.982 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.983 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.984 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.985 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.986 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.986 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.987 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.988 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.988 252257 DEBUG nova.virt.hardware [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:32 np0005539563 nova_compute[252253]: 2025-11-29 08:06:32.993 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086621362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:33 np0005539563 nova_compute[252253]: 2025-11-29 08:06:33.550 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:33 np0005539563 nova_compute[252253]: 2025-11-29 08:06:33.581 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:33 np0005539563 nova_compute[252253]: 2025-11-29 08:06:33.586 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:33 np0005539563 nova_compute[252253]: 2025-11-29 08:06:33.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:33 np0005539563 nova_compute[252253]: 2025-11-29 08:06:33.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:06:33 np0005539563 nova_compute[252253]: 2025-11-29 08:06:33.757 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:06:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:33.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234292158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 235 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 261 op/s
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.006 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.009 252257 DEBUG nova.virt.libvirt.vif [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1671913756',display_name='tempest-ServerDiskConfigTestJSON-server-1671913756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1671913756',id=82,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-lonfocmu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:26Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=12d0faaa-a957-464c-ae56-3e90d6fd248c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.010 252257 DEBUG nova.network.os_vif_util [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.011 252257 DEBUG nova.network.os_vif_util [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.013 252257 DEBUG nova.objects.instance [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.056 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <uuid>12d0faaa-a957-464c-ae56-3e90d6fd248c</uuid>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <name>instance-00000052</name>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1671913756</nova:name>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:06:32</nova:creationTime>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <nova:port uuid="e4e1a07e-ccab-41ce-8316-f943d1063180">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <entry name="serial">12d0faaa-a957-464c-ae56-3e90d6fd248c</entry>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <entry name="uuid">12d0faaa-a957-464c-ae56-3e90d6fd248c</entry>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/12d0faaa-a957-464c-ae56-3e90d6fd248c_disk">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:f2:2b:73"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <target dev="tape4e1a07e-cc"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/console.log" append="off"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:06:34 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:06:34 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:06:34 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:06:34 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.058 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Preparing to wait for external event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.059 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.060 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.061 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.062 252257 DEBUG nova.virt.libvirt.vif [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1671913756',display_name='tempest-ServerDiskConfigTestJSON-server-1671913756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1671913756',id=82,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-lonfocmu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:26Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=12d0faaa-a957-464c-ae56-3e90d6fd248c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.063 252257 DEBUG nova.network.os_vif_util [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.065 252257 DEBUG nova.network.os_vif_util [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.066 252257 DEBUG os_vif [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.067 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.069 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.070 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.074 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.075 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4e1a07e-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.076 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4e1a07e-cc, col_values=(('external_ids', {'iface-id': 'e4e1a07e-ccab-41ce-8316-f943d1063180', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:2b:73', 'vm-uuid': '12d0faaa-a957-464c-ae56-3e90d6fd248c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539563 NetworkManager[48981]: <info>  [1764403594.0795] manager: (tape4e1a07e-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.082 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.084 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.085 252257 INFO os_vif [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc')#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.162 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.162 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.163 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:f2:2b:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.164 252257 INFO nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Using config drive#033[00m
Nov 29 03:06:34 np0005539563 nova_compute[252253]: 2025-11-29 08:06:34.200 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:34.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.065 252257 INFO nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Creating config drive at /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.071 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsz8_03g5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.221 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsz8_03g5" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.262 252257 DEBUG nova.storage.rbd_utils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.267 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.468 252257 DEBUG oslo_concurrency.processutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.470 252257 INFO nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deleting local config drive /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:06:35 np0005539563 kernel: tape4e1a07e-cc: entered promiscuous mode
Nov 29 03:06:35 np0005539563 NetworkManager[48981]: <info>  [1764403595.5326] manager: (tape4e1a07e-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 03:06:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:35Z|00290|binding|INFO|Claiming lport e4e1a07e-ccab-41ce-8316-f943d1063180 for this chassis.
Nov 29 03:06:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:35Z|00291|binding|INFO|e4e1a07e-ccab-41ce-8316-f943d1063180: Claiming fa:16:3e:f2:2b:73 10.100.0.12
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.544 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:73 10.100.0.12'], port_security=['fa:16:3e:f2:2b:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '12d0faaa-a957-464c-ae56-3e90d6fd248c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e4e1a07e-ccab-41ce-8316-f943d1063180) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.545 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e4e1a07e-ccab-41ce-8316-f943d1063180 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.547 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.559 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:35Z|00292|binding|INFO|Setting lport e4e1a07e-ccab-41ce-8316-f943d1063180 ovn-installed in OVS
Nov 29 03:06:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:35Z|00293|binding|INFO|Setting lport e4e1a07e-ccab-41ce-8316-f943d1063180 up in Southbound
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.559 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[358ea5e9-e39c-4501-b653-5c97013390fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.561 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.561 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:35 np0005539563 systemd-udevd[303419]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.563 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.567 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.563 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1f895425-8364-4657-bd9a-72baadcd8f90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.567 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[edbc2532-b2ee-4c65-99ab-f45c6567a778]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 NetworkManager[48981]: <info>  [1764403595.5807] device (tape4e1a07e-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:35 np0005539563 NetworkManager[48981]: <info>  [1764403595.5815] device (tape4e1a07e-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.580 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea3489-7152-4edd-a3b2-7a84aa18db3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 systemd-machined[213024]: New machine qemu-34-instance-00000052.
Nov 29 03:06:35 np0005539563 systemd[1]: Started Virtual Machine qemu-34-instance-00000052.
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.603 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[14ae3c4e-c0f9-4ea0-9a91-a1b46cd77ac7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.630 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d4920a6b-5c17-4e8e-97ae-6bcb803afa06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.634 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6049d68b-4d96-4d7e-922d-38d660a3920c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 NetworkManager[48981]: <info>  [1764403595.6370] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.677 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fddd6cec-3ce8-426c-99cb-8d099bb9c57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.680 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2334b649-dbba-4008-abd1-3a68e7837e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 NetworkManager[48981]: <info>  [1764403595.7040] device (tap8665acc6-10): carrier: link connected
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.707 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[96d83ef6-b073-417f-afbb-da67ae91ff61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.727 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[88b2fb6a-919d-489a-af7e-144a17886a73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656347, 'reachable_time': 36347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303454, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.741 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[76e1dcaa-3532-4f8b-9e3a-ab7959020e20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656347, 'tstamp': 656347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303455, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.760 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d28494d1-dd68-43d5-9e4e-5297318254bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656347, 'reachable_time': 36347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303456, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.788 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6105164a-894c-4be6-9de3-3021e1e03d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.844 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffb53d9-fdef-4f72-acb8-699df3061dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.845 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.846 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.846 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:35 np0005539563 NetworkManager[48981]: <info>  [1764403595.8493] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 03:06:35 np0005539563 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.851 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:35Z|00294|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.851 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539563 nova_compute[252253]: 2025-11-29 08:06:35.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.868 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.869 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[91fc549e-f454-49cc-856d-5ad3ac33d87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.870 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:06:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:35.872 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:06:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 180 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.103 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403596.1033847, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.105 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.152 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.156 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403596.1034904, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.156 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.176 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.179 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.234 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:36 np0005539563 podman[303530]: 2025-11-29 08:06:36.238327199 +0000 UTC m=+0.057859218 container create 5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:36 np0005539563 systemd[1]: Started libpod-conmon-5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a.scope.
Nov 29 03:06:36 np0005539563 podman[303530]: 2025-11-29 08:06:36.211817771 +0000 UTC m=+0.031349780 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957ca6c2dbf8e7485417581f4e22aedb0480eff0b8742c70c07b3bacdcce8ea9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:36 np0005539563 podman[303530]: 2025-11-29 08:06:36.320997159 +0000 UTC m=+0.140529168 container init 5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:06:36 np0005539563 podman[303530]: 2025-11-29 08:06:36.325990304 +0000 UTC m=+0.145522293 container start 5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:06:36 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [NOTICE]   (303549) : New worker (303551) forked
Nov 29 03:06:36 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [NOTICE]   (303549) : Loading success.
Nov 29 03:06:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.642 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:36.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.752 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.772 252257 DEBUG nova.network.neutron [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Updated VIF entry in instance network info cache for port e4e1a07e-ccab-41ce-8316-f943d1063180. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.772 252257 DEBUG nova.network.neutron [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Updating instance_info_cache with network_info: [{"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:36 np0005539563 nova_compute[252253]: 2025-11-29 08:06:36.816 252257 DEBUG oslo_concurrency.lockutils [req-f9265637-bbb4-42a6-93d8-ae43aea73e40 req-4ee3691a-c374-48f2-9a27-ecfec776f743 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.359 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403582.3593967, 45797fd1-8963-4373-b547-4345ab32ac63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.360 252257 INFO nova.compute.manager [-] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.381 252257 DEBUG nova.compute.manager [None req-903ca952-d2ea-4491-bda9-8739bc526017 - - - - - -] [instance: 45797fd1-8963-4373-b547-4345ab32ac63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.402 252257 DEBUG nova.compute.manager [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.402 252257 DEBUG oslo_concurrency.lockutils [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.402 252257 DEBUG oslo_concurrency.lockutils [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.402 252257 DEBUG oslo_concurrency.lockutils [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.403 252257 DEBUG nova.compute.manager [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Processing event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.403 252257 DEBUG nova.compute.manager [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.403 252257 DEBUG oslo_concurrency.lockutils [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.403 252257 DEBUG oslo_concurrency.lockutils [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.404 252257 DEBUG oslo_concurrency.lockutils [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.404 252257 DEBUG nova.compute.manager [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] No waiting events found dispatching network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.404 252257 WARNING nova.compute.manager [req-26ddb573-e5e9-44f4-891a-359db3407fc8 req-46447d0a-4738-4c92-b98a-86220e6106de 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received unexpected event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.404 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.410 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403597.4099975, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.411 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.413 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.416 252257 INFO nova.virt.libvirt.driver [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance spawned successfully.#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.416 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.441 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.448 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.452 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.452 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.453 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.453 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.454 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.454 252257 DEBUG nova.virt.libvirt.driver [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.480 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.510 252257 INFO nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Took 10.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.511 252257 DEBUG nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.579 252257 INFO nova.compute.manager [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Took 11.96 seconds to build instance.#033[00m
Nov 29 03:06:37 np0005539563 nova_compute[252253]: 2025-11-29 08:06:37.604 252257 DEBUG oslo_concurrency.lockutils [None req-2b764815-d0bb-4ab1-b22c-ae2b25666ca6 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.840460) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597840493, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 317, "num_deletes": 251, "total_data_size": 116076, "memory_usage": 123416, "flush_reason": "Manual Compaction"}
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597844822, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 114908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37429, "largest_seqno": 37745, "table_properties": {"data_size": 112839, "index_size": 233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5269, "raw_average_key_size": 18, "raw_value_size": 108801, "raw_average_value_size": 383, "num_data_blocks": 10, "num_entries": 284, "num_filter_entries": 284, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403591, "oldest_key_time": 1764403591, "file_creation_time": 1764403597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 4416 microseconds, and 1141 cpu microseconds.
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.844871) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 114908 bytes OK
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.844892) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.846811) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.846835) EVENT_LOG_v1 {"time_micros": 1764403597846828, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.846852) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 113846, prev total WAL file size 113846, number of live WAL files 2.
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.847322) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(112KB)], [77(12MB)]
Nov 29 03:06:37 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403597847388, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13069996, "oldest_snapshot_seqno": -1}
Nov 29 03:06:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 180 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6798 keys, 11020994 bytes, temperature: kUnknown
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403598017028, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11020994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10974468, "index_size": 28430, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 176692, "raw_average_key_size": 25, "raw_value_size": 10851437, "raw_average_value_size": 1596, "num_data_blocks": 1126, "num_entries": 6798, "num_filter_entries": 6798, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403597, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.017593) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11020994 bytes
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.020567) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.9 rd, 64.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.4 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(209.7) write-amplify(95.9) OK, records in: 7308, records dropped: 510 output_compression: NoCompression
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.020608) EVENT_LOG_v1 {"time_micros": 1764403598020588, "job": 44, "event": "compaction_finished", "compaction_time_micros": 169951, "compaction_time_cpu_micros": 51488, "output_level": 6, "num_output_files": 1, "total_output_size": 11020994, "num_input_records": 7308, "num_output_records": 6798, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403598021184, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403598026249, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:37.847210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.026501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.026508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.026512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.026515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:06:38.026518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:06:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:38.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:39 np0005539563 nova_compute[252253]: 2025-11-29 08:06:39.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:39 np0005539563 nova_compute[252253]: 2025-11-29 08:06:39.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 180 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 141 op/s
Nov 29 03:06:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:40.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 29 03:06:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 03:06:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 29 03:06:41 np0005539563 nova_compute[252253]: 2025-11-29 08:06:41.645 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:41 np0005539563 nova_compute[252253]: 2025-11-29 08:06:41.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:41 np0005539563 nova_compute[252253]: 2025-11-29 08:06:41.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:06:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 31 KiB/s wr, 125 op/s
Nov 29 03:06:42 np0005539563 podman[303563]: 2025-11-29 08:06:42.506134706 +0000 UTC m=+0.054814686 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:06:42 np0005539563 podman[303564]: 2025-11-29 08:06:42.542265585 +0000 UTC m=+0.083191965 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:42 np0005539563 podman[303565]: 2025-11-29 08:06:42.567496449 +0000 UTC m=+0.103122925 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:42.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:42 np0005539563 nova_compute[252253]: 2025-11-29 08:06:42.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:42 np0005539563 nova_compute[252253]: 2025-11-29 08:06:42.822 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:42 np0005539563 nova_compute[252253]: 2025-11-29 08:06:42.823 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:42 np0005539563 nova_compute[252253]: 2025-11-29 08:06:42.823 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:42 np0005539563 nova_compute[252253]: 2025-11-29 08:06:42.823 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:06:42 np0005539563 nova_compute[252253]: 2025-11-29 08:06:42.823 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364908919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.283 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.397 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.398 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.572 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.574 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4365MB free_disk=20.946491241455078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.574 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.575 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7d46b096-797a-4402-8f5b-d7045c8bc342 does not exist
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 054158ac-0c28-4c7a-bfd0-3d0a1ad4dc8b does not exist
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b215f47e-d382-40da-bd51-f552e1640f40 does not exist
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:06:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.761 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 12d0faaa-a957-464c-ae56-3e90d6fd248c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.762 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:06:43 np0005539563 nova_compute[252253]: 2025-11-29 08:06:43.762 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:06:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 138 op/s
Nov 29 03:06:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:44 np0005539563 nova_compute[252253]: 2025-11-29 08:06:44.082 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.168908426 +0000 UTC m=+0.063165732 container create de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:06:44 np0005539563 systemd[1]: Started libpod-conmon-de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c.scope.
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.125972313 +0000 UTC m=+0.020229639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.256826378 +0000 UTC m=+0.151083684 container init de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.263575031 +0000 UTC m=+0.157832337 container start de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.266988863 +0000 UTC m=+0.161246169 container attach de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:06:44 np0005539563 blissful_raman[303935]: 167 167
Nov 29 03:06:44 np0005539563 systemd[1]: libpod-de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c.scope: Deactivated successfully.
Nov 29 03:06:44 np0005539563 conmon[303935]: conmon de252753cbe03ceb36c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c.scope/container/memory.events
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.271191997 +0000 UTC m=+0.165449303 container died de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:06:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a0a3c497a930e6267599f0f17ecb391824d8e650cc215a98fa664a3d9abf5790-merged.mount: Deactivated successfully.
Nov 29 03:06:44 np0005539563 podman[303919]: 2025-11-29 08:06:44.309863555 +0000 UTC m=+0.204120861 container remove de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:06:44 np0005539563 systemd[1]: libpod-conmon-de252753cbe03ceb36c92101da7749fd4620b02e78a2e5097a22138e1b1c793c.scope: Deactivated successfully.
Nov 29 03:06:44 np0005539563 podman[303960]: 2025-11-29 08:06:44.463625611 +0000 UTC m=+0.037534048 container create a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:06:44 np0005539563 systemd[1]: Started libpod-conmon-a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a.scope.
Nov 29 03:06:44 np0005539563 nova_compute[252253]: 2025-11-29 08:06:44.505 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7c24babd32b70aa62214a4f847e08277e007efe69f5b272591e1a5c1f9b619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7c24babd32b70aa62214a4f847e08277e007efe69f5b272591e1a5c1f9b619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7c24babd32b70aa62214a4f847e08277e007efe69f5b272591e1a5c1f9b619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7c24babd32b70aa62214a4f847e08277e007efe69f5b272591e1a5c1f9b619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7c24babd32b70aa62214a4f847e08277e007efe69f5b272591e1a5c1f9b619/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:44 np0005539563 podman[303960]: 2025-11-29 08:06:44.44662479 +0000 UTC m=+0.020533247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:44 np0005539563 podman[303960]: 2025-11-29 08:06:44.547076812 +0000 UTC m=+0.120985309 container init a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:06:44 np0005539563 podman[303960]: 2025-11-29 08:06:44.559627922 +0000 UTC m=+0.133536359 container start a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:06:44 np0005539563 podman[303960]: 2025-11-29 08:06:44.563024774 +0000 UTC m=+0.136933211 container attach a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:06:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429887533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:44 np0005539563 nova_compute[252253]: 2025-11-29 08:06:44.937 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:06:44 np0005539563 nova_compute[252253]: 2025-11-29 08:06:44.945 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:06:44 np0005539563 nova_compute[252253]: 2025-11-29 08:06:44.974 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:06:45 np0005539563 nova_compute[252253]: 2025-11-29 08:06:45.002 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:06:45 np0005539563 nova_compute[252253]: 2025-11-29 08:06:45.002 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:45 np0005539563 friendly_ritchie[303976]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:06:45 np0005539563 friendly_ritchie[303976]: --> relative data size: 1.0
Nov 29 03:06:45 np0005539563 friendly_ritchie[303976]: --> All data devices are unavailable
Nov 29 03:06:45 np0005539563 systemd[1]: libpod-a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a.scope: Deactivated successfully.
Nov 29 03:06:45 np0005539563 podman[304014]: 2025-11-29 08:06:45.400547206 +0000 UTC m=+0.020926459 container died a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:06:45 np0005539563 nova_compute[252253]: 2025-11-29 08:06:45.516 252257 INFO nova.compute.manager [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Rebuilding instance
Nov 29 03:06:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9f7c24babd32b70aa62214a4f847e08277e007efe69f5b272591e1a5c1f9b619-merged.mount: Deactivated successfully.
Nov 29 03:06:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:45 np0005539563 podman[304014]: 2025-11-29 08:06:45.854634218 +0000 UTC m=+0.475013451 container remove a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:06:45 np0005539563 systemd[1]: libpod-conmon-a66957a19e4e01ec83308956d6422d44a2e9fa31f4ae4eb90e0f179f7c48e88a.scope: Deactivated successfully.
Nov 29 03:06:45 np0005539563 nova_compute[252253]: 2025-11-29 08:06:45.880 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:45 np0005539563 nova_compute[252253]: 2025-11-29 08:06:45.912 252257 DEBUG nova.compute.manager [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:06:45 np0005539563 nova_compute[252253]: 2025-11-29 08:06:45.978 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 30 KiB/s wr, 177 op/s
Nov 29 03:06:46 np0005539563 nova_compute[252253]: 2025-11-29 08:06:46.006 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:46 np0005539563 nova_compute[252253]: 2025-11-29 08:06:46.028 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:46 np0005539563 nova_compute[252253]: 2025-11-29 08:06:46.046 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:46 np0005539563 nova_compute[252253]: 2025-11-29 08:06:46.075 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:06:46 np0005539563 nova_compute[252253]: 2025-11-29 08:06:46.080 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:06:46 np0005539563 podman[304171]: 2025-11-29 08:06:46.483163227 +0000 UTC m=+0.040745694 container create 6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:06:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:46 np0005539563 systemd[1]: Started libpod-conmon-6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b.scope.
Nov 29 03:06:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:46 np0005539563 podman[304171]: 2025-11-29 08:06:46.467191245 +0000 UTC m=+0.024773742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:46 np0005539563 nova_compute[252253]: 2025-11-29 08:06:46.648 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:46.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:46 np0005539563 podman[304171]: 2025-11-29 08:06:46.809684324 +0000 UTC m=+0.367266811 container init 6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:06:46 np0005539563 podman[304171]: 2025-11-29 08:06:46.816404766 +0000 UTC m=+0.373987233 container start 6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 03:06:46 np0005539563 friendly_chatterjee[304188]: 167 167
Nov 29 03:06:46 np0005539563 systemd[1]: libpod-6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b.scope: Deactivated successfully.
Nov 29 03:06:46 np0005539563 podman[304171]: 2025-11-29 08:06:46.827507087 +0000 UTC m=+0.385089574 container attach 6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:06:46 np0005539563 podman[304171]: 2025-11-29 08:06:46.827939198 +0000 UTC m=+0.385521685 container died 6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.004 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.005 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.007 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.026 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.027 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.027 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.028 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1060873f002253b4a5837e3f69d489ebc0a8aa712020f7d5fc7300e8f2b2e7f5-merged.mount: Deactivated successfully.
Nov 29 03:06:47 np0005539563 podman[304171]: 2025-11-29 08:06:47.264740293 +0000 UTC m=+0.822322780 container remove 6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:06:47 np0005539563 systemd[1]: libpod-conmon-6453e0e8240d35c998b1a8b45181f4168808cbde1934f1ac99d8d634b6a8332b.scope: Deactivated successfully.
Nov 29 03:06:47 np0005539563 podman[304261]: 2025-11-29 08:06:47.519501705 +0000 UTC m=+0.052655117 container create 0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:06:47 np0005539563 systemd[1]: Started libpod-conmon-0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f.scope.
Nov 29 03:06:47 np0005539563 podman[304261]: 2025-11-29 08:06:47.501494468 +0000 UTC m=+0.034647900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5f103a7cd11b9cca719e57edcc18a1fac7698a24a517e533b0eaaf18922e84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5f103a7cd11b9cca719e57edcc18a1fac7698a24a517e533b0eaaf18922e84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5f103a7cd11b9cca719e57edcc18a1fac7698a24a517e533b0eaaf18922e84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5f103a7cd11b9cca719e57edcc18a1fac7698a24a517e533b0eaaf18922e84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:47 np0005539563 podman[304261]: 2025-11-29 08:06:47.622675221 +0000 UTC m=+0.155828663 container init 0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:47 np0005539563 podman[304261]: 2025-11-29 08:06:47.634778738 +0000 UTC m=+0.167932150 container start 0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:06:47 np0005539563 podman[304261]: 2025-11-29 08:06:47.638621703 +0000 UTC m=+0.171775145 container attach 0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:06:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:47Z|00295|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:06:47 np0005539563 nova_compute[252253]: 2025-11-29 08:06:47.908 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 181 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 30 KiB/s wr, 177 op/s
Nov 29 03:06:48 np0005539563 objective_lalande[304277]: {
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:    "0": [
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:        {
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "devices": [
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "/dev/loop3"
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            ],
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "lv_name": "ceph_lv0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "lv_size": "7511998464",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "name": "ceph_lv0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "tags": {
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.cluster_name": "ceph",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.crush_device_class": "",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.encrypted": "0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.osd_id": "0",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.type": "block",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:                "ceph.vdo": "0"
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            },
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "type": "block",
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:            "vg_name": "ceph_vg0"
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:        }
Nov 29 03:06:48 np0005539563 objective_lalande[304277]:    ]
Nov 29 03:06:48 np0005539563 objective_lalande[304277]: }
Nov 29 03:06:48 np0005539563 systemd[1]: libpod-0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f.scope: Deactivated successfully.
Nov 29 03:06:48 np0005539563 podman[304261]: 2025-11-29 08:06:48.43501866 +0000 UTC m=+0.968172082 container died 0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:06:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ba5f103a7cd11b9cca719e57edcc18a1fac7698a24a517e533b0eaaf18922e84-merged.mount: Deactivated successfully.
Nov 29 03:06:48 np0005539563 podman[304261]: 2025-11-29 08:06:48.500073632 +0000 UTC m=+1.033227054 container remove 0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:06:48 np0005539563 systemd[1]: libpod-conmon-0dbedc4d0be41dec9ab356532a6f45053c37c632f298dd13fac31e9b60b68c0f.scope: Deactivated successfully.
Nov 29 03:06:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.084 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.188010981 +0000 UTC m=+0.047474077 container create 15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:06:49 np0005539563 systemd[1]: Started libpod-conmon-15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b.scope.
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.167685301 +0000 UTC m=+0.027148407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.290178779 +0000 UTC m=+0.149641875 container init 15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.298206177 +0000 UTC m=+0.157669263 container start 15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.302049211 +0000 UTC m=+0.161512307 container attach 15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shannon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:06:49 np0005539563 cranky_shannon[304456]: 167 167
Nov 29 03:06:49 np0005539563 systemd[1]: libpod-15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b.scope: Deactivated successfully.
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.304984201 +0000 UTC m=+0.164447277 container died 15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:06:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-07f9e9b0535b83744a2dd19bec1bffa5352797f9de14e3f0f3b44e5088fdc52f-merged.mount: Deactivated successfully.
Nov 29 03:06:49 np0005539563 podman[304439]: 2025-11-29 08:06:49.350626257 +0000 UTC m=+0.210089333 container remove 15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:06:49 np0005539563 systemd[1]: libpod-conmon-15ef78156c207312ac447350c73e13b87be93a8e996086cd6fd6d1f27a10102b.scope: Deactivated successfully.
Nov 29 03:06:49 np0005539563 podman[304480]: 2025-11-29 08:06:49.562132857 +0000 UTC m=+0.076480693 container create 85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:06:49 np0005539563 systemd[1]: Started libpod-conmon-85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651.scope.
Nov 29 03:06:49 np0005539563 podman[304480]: 2025-11-29 08:06:49.533102901 +0000 UTC m=+0.047450777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:06:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd59679d1e943bed7e2adea194a9080e792eadc3403a2909bd4b1d7ea5f538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd59679d1e943bed7e2adea194a9080e792eadc3403a2909bd4b1d7ea5f538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd59679d1e943bed7e2adea194a9080e792eadc3403a2909bd4b1d7ea5f538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd59679d1e943bed7e2adea194a9080e792eadc3403a2909bd4b1d7ea5f538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:49 np0005539563 podman[304480]: 2025-11-29 08:06:49.666891116 +0000 UTC m=+0.181239002 container init 85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:06:49 np0005539563 podman[304480]: 2025-11-29 08:06:49.677998587 +0000 UTC m=+0.192346383 container start 85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:06:49 np0005539563 podman[304480]: 2025-11-29 08:06:49.682371005 +0000 UTC m=+0.196718851 container attach 85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.823 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Updating instance_info_cache with network_info: [{"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:49.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.846 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-12d0faaa-a957-464c-ae56-3e90d6fd248c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.846 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.846 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.847 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:49 np0005539563 nova_compute[252253]: 2025-11-29 08:06:49.847 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 173 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 31 KiB/s wr, 205 op/s
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]: {
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:        "osd_id": 0,
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:        "type": "bluestore"
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]:    }
Nov 29 03:06:50 np0005539563 eloquent_dijkstra[304496]: }
Nov 29 03:06:50 np0005539563 systemd[1]: libpod-85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651.scope: Deactivated successfully.
Nov 29 03:06:50 np0005539563 podman[304517]: 2025-11-29 08:06:50.653349932 +0000 UTC m=+0.023046465 container died 85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:06:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:50.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b3fd59679d1e943bed7e2adea194a9080e792eadc3403a2909bd4b1d7ea5f538-merged.mount: Deactivated successfully.
Nov 29 03:06:50 np0005539563 podman[304517]: 2025-11-29 08:06:50.71897984 +0000 UTC m=+0.088676363 container remove 85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:06:50 np0005539563 systemd[1]: libpod-conmon-85472885e5f591f61b181838e6b7b2ba901eb2dc7a4c22d1c63f239130a7c651.scope: Deactivated successfully.
Nov 29 03:06:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:06:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:06:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 99edafab-22be-441a-b377-4f6fdea74820 does not exist
Nov 29 03:06:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e6040a13-7b64-4951-b0be-c6cad4a79a8b does not exist
Nov 29 03:06:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5aab98b7-b62e-4032-85e3-86f15bc4a11b does not exist
Nov 29 03:06:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:50Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:2b:73 10.100.0.12
Nov 29 03:06:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:50Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:2b:73 10.100.0.12
Nov 29 03:06:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:51 np0005539563 nova_compute[252253]: 2025-11-29 08:06:51.649 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:06:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:51.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 152 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Nov 29 03:06:52 np0005539563 nova_compute[252253]: 2025-11-29 08:06:52.516 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:52 np0005539563 nova_compute[252253]: 2025-11-29 08:06:52.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:52 np0005539563 nova_compute[252253]: 2025-11-29 08:06:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:52 np0005539563 nova_compute[252253]: 2025-11-29 08:06:52.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:06:52 np0005539563 nova_compute[252253]: 2025-11-29 08:06:52.706 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:53.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 158 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Nov 29 03:06:54 np0005539563 nova_compute[252253]: 2025-11-29 08:06:54.090 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 29 03:06:56 np0005539563 nova_compute[252253]: 2025-11-29 08:06:56.132 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:06:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:56 np0005539563 nova_compute[252253]: 2025-11-29 08:06:56.654 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:56.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:06:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:06:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 761 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 29 03:06:58 np0005539563 kernel: tape4e1a07e-cc (unregistering): left promiscuous mode
Nov 29 03:06:58 np0005539563 NetworkManager[48981]: <info>  [1764403618.4500] device (tape4e1a07e-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:58Z|00296|binding|INFO|Releasing lport e4e1a07e-ccab-41ce-8316-f943d1063180 from this chassis (sb_readonly=0)
Nov 29 03:06:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:58Z|00297|binding|INFO|Setting lport e4e1a07e-ccab-41ce-8316-f943d1063180 down in Southbound
Nov 29 03:06:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:06:58Z|00298|binding|INFO|Removing iface tape4e1a07e-cc ovn-installed in OVS
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.471 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:73 10.100.0.12'], port_security=['fa:16:3e:f2:2b:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '12d0faaa-a957-464c-ae56-3e90d6fd248c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e4e1a07e-ccab-41ce-8316-f943d1063180) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.473 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e4e1a07e-ccab-41ce-8316-f943d1063180 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.474 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.475 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed847d6-c818-4f6d-9c82-fa020e306c5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.476 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.493 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 29 03:06:58 np0005539563 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000052.scope: Consumed 14.536s CPU time.
Nov 29 03:06:58 np0005539563 systemd-machined[213024]: Machine qemu-34-instance-00000052 terminated.
Nov 29 03:06:58 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [NOTICE]   (303549) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:58 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [NOTICE]   (303549) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:58 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [WARNING]  (303549) : Exiting Master process...
Nov 29 03:06:58 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [ALERT]    (303549) : Current worker (303551) exited with code 143 (Terminated)
Nov 29 03:06:58 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[303545]: [WARNING]  (303549) : All workers exited. Exiting... (0)
Nov 29 03:06:58 np0005539563 systemd[1]: libpod-5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a.scope: Deactivated successfully.
Nov 29 03:06:58 np0005539563 podman[304610]: 2025-11-29 08:06:58.633249177 +0000 UTC m=+0.054063356 container died 5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:06:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-957ca6c2dbf8e7485417581f4e22aedb0480eff0b8742c70c07b3bacdcce8ea9-merged.mount: Deactivated successfully.
Nov 29 03:06:58 np0005539563 podman[304610]: 2025-11-29 08:06:58.673256291 +0000 UTC m=+0.094070480 container cleanup 5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:58.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:58 np0005539563 systemd[1]: libpod-conmon-5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a.scope: Deactivated successfully.
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.702 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 podman[304638]: 2025-11-29 08:06:58.757607836 +0000 UTC m=+0.050530680 container remove 5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.765 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[165f65c3-21cb-4909-b66e-97ed6a7769ad]: (4, ('Sat Nov 29 08:06:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a)\n5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a\nSat Nov 29 08:06:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a)\n5721d28425bc31430264a8f14b9e65386897ef34c7f3aa2eb48f4d445927ce7a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.769 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[576ba6de-4861-4f2e-b27e-a82380ee3d73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.771 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.774 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.795 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.800 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f873f929-b897-4a45-b4b5-28d48f7f3782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.823 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e589f0de-8212-4976-b2aa-f6f85963622e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.825 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8f124056-d961-4d21-8f37-c8ed461385b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.853 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7ef390-7a0e-416d-b72c-d69a23a82399]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656339, 'reachable_time': 22020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304664, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.857 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:06:58.858 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[662da2b1-86d2-449d-8c05-935e8341e8ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:58 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.878 252257 DEBUG nova.compute.manager [req-23404c70-8659-4a16-9200-8b2ee6eff4c3 req-b214cfaa-7884-4e63-965c-00c71b415c93 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-unplugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.878 252257 DEBUG oslo_concurrency.lockutils [req-23404c70-8659-4a16-9200-8b2ee6eff4c3 req-b214cfaa-7884-4e63-965c-00c71b415c93 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.879 252257 DEBUG oslo_concurrency.lockutils [req-23404c70-8659-4a16-9200-8b2ee6eff4c3 req-b214cfaa-7884-4e63-965c-00c71b415c93 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.879 252257 DEBUG oslo_concurrency.lockutils [req-23404c70-8659-4a16-9200-8b2ee6eff4c3 req-b214cfaa-7884-4e63-965c-00c71b415c93 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.879 252257 DEBUG nova.compute.manager [req-23404c70-8659-4a16-9200-8b2ee6eff4c3 req-b214cfaa-7884-4e63-965c-00c71b415c93 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] No waiting events found dispatching network-vif-unplugged-e4e1a07e-ccab-41ce-8316-f943d1063180 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:58 np0005539563 nova_compute[252253]: 2025-11-29 08:06:58.880 252257 WARNING nova.compute.manager [req-23404c70-8659-4a16-9200-8b2ee6eff4c3 req-b214cfaa-7884-4e63-965c-00c71b415c93 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received unexpected event network-vif-unplugged-e4e1a07e-ccab-41ce-8316-f943d1063180 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.093 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.147 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.154 252257 INFO nova.virt.libvirt.driver [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance destroyed successfully.#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.162 252257 INFO nova.virt.libvirt.driver [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance destroyed successfully.#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.164 252257 DEBUG nova.virt.libvirt.vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1671913756',display_name='tempest-ServerDiskConfigTestJSON-server-1671913756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1671913756',id=82,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-lonfocmu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:44Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=12d0faaa-a957-464c-ae56-3e90d6fd248c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.165 252257 DEBUG nova.network.os_vif_util [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.166 252257 DEBUG nova.network.os_vif_util [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.167 252257 DEBUG os_vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.169 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.170 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4e1a07e-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.176 252257 INFO os_vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc')#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.661 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deleting instance files /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c_del#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.662 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deletion of /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c_del complete#033[00m
Nov 29 03:06:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:06:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.970 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.971 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Creating image(s)#033[00m
Nov 29 03:06:59 np0005539563 nova_compute[252253]: 2025-11-29 08:06:59.998 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 167 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 761 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.035 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.082 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.088 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.197 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.199 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.200 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.201 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.241 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.247 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.565 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.665 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:00.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.813 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.814 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Ensure instance console log exists: /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.814 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.815 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.815 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.818 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Start _get_guest_xml network_info=[{"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.823 252257 WARNING nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.831 252257 DEBUG nova.virt.libvirt.host [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.831 252257 DEBUG nova.virt.libvirt.host [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.835 252257 DEBUG nova.virt.libvirt.host [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.835 252257 DEBUG nova.virt.libvirt.host [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.837 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.837 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.837 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.838 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.838 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.838 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.838 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.838 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.839 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.839 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.839 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.839 252257 DEBUG nova.virt.hardware [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.840 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:00 np0005539563 nova_compute[252253]: 2025-11-29 08:07:00.864 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.078 252257 DEBUG nova.compute.manager [req-670f4e87-da4e-4356-b83b-88d3748ce2e6 req-56281f7c-fa9f-4b73-ab00-e18660b37518 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.079 252257 DEBUG oslo_concurrency.lockutils [req-670f4e87-da4e-4356-b83b-88d3748ce2e6 req-56281f7c-fa9f-4b73-ab00-e18660b37518 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.079 252257 DEBUG oslo_concurrency.lockutils [req-670f4e87-da4e-4356-b83b-88d3748ce2e6 req-56281f7c-fa9f-4b73-ab00-e18660b37518 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.080 252257 DEBUG oslo_concurrency.lockutils [req-670f4e87-da4e-4356-b83b-88d3748ce2e6 req-56281f7c-fa9f-4b73-ab00-e18660b37518 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.080 252257 DEBUG nova.compute.manager [req-670f4e87-da4e-4356-b83b-88d3748ce2e6 req-56281f7c-fa9f-4b73-ab00-e18660b37518 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] No waiting events found dispatching network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.080 252257 WARNING nova.compute.manager [req-670f4e87-da4e-4356-b83b-88d3748ce2e6 req-56281f7c-fa9f-4b73-ab00-e18660b37518 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received unexpected event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:07:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542637573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.324 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.371 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.378 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.659 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4141054727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.963 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.967 252257 DEBUG nova.virt.libvirt.vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1671913756',display_name='tempest-ServerDiskConfigTestJSON-server-1671913756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1671913756',id=82,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-lonfocmu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=12d0faaa-a957-464c-ae56-3e90d6fd248c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.968 252257 DEBUG nova.network.os_vif_util [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.970 252257 DEBUG nova.network.os_vif_util [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.976 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <uuid>12d0faaa-a957-464c-ae56-3e90d6fd248c</uuid>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <name>instance-00000052</name>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1671913756</nova:name>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:07:00</nova:creationTime>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <nova:port uuid="e4e1a07e-ccab-41ce-8316-f943d1063180">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <entry name="serial">12d0faaa-a957-464c-ae56-3e90d6fd248c</entry>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <entry name="uuid">12d0faaa-a957-464c-ae56-3e90d6fd248c</entry>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/12d0faaa-a957-464c-ae56-3e90d6fd248c_disk">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:f2:2b:73"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <target dev="tape4e1a07e-cc"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/console.log" append="off"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:07:01 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:07:01 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:07:01 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:07:01 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.977 252257 DEBUG nova.compute.manager [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Preparing to wait for external event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.977 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.978 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.978 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.980 252257 DEBUG nova.virt.libvirt.vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1671913756',display_name='tempest-ServerDiskConfigTestJSON-server-1671913756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1671913756',id=82,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-lonfocmu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:59Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=12d0faaa-a957-464c-ae56-3e90d6fd248c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.980 252257 DEBUG nova.network.os_vif_util [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.981 252257 DEBUG nova.network.os_vif_util [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.982 252257 DEBUG os_vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.983 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.983 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.984 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.987 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.988 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4e1a07e-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.988 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4e1a07e-cc, col_values=(('external_ids', {'iface-id': 'e4e1a07e-ccab-41ce-8316-f943d1063180', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:2b:73', 'vm-uuid': '12d0faaa-a957-464c-ae56-3e90d6fd248c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.991 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:01 np0005539563 NetworkManager[48981]: <info>  [1764403621.9924] manager: (tape4e1a07e-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.995 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:01 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.997 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:01.999 252257 INFO os_vif [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc')#033[00m
Nov 29 03:07:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 101 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.077 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.077 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.077 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:f2:2b:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.078 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Using config drive#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.113 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.142 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.183 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'keypairs' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.761 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Creating config drive at /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.768 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprfcbd6kv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.926 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprfcbd6kv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.978 252257 DEBUG nova.storage.rbd_utils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:02 np0005539563 nova_compute[252253]: 2025-11-29 08:07:02.987 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.233 252257 DEBUG oslo_concurrency.processutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config 12d0faaa-a957-464c-ae56-3e90d6fd248c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.234 252257 INFO nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deleting local config drive /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:03 np0005539563 kernel: tape4e1a07e-cc: entered promiscuous mode
Nov 29 03:07:03 np0005539563 NetworkManager[48981]: <info>  [1764403623.3012] manager: (tape4e1a07e-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 03:07:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:03Z|00299|binding|INFO|Claiming lport e4e1a07e-ccab-41ce-8316-f943d1063180 for this chassis.
Nov 29 03:07:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:03Z|00300|binding|INFO|e4e1a07e-ccab-41ce-8316-f943d1063180: Claiming fa:16:3e:f2:2b:73 10.100.0.12
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.354 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:03Z|00301|binding|INFO|Setting lport e4e1a07e-ccab-41ce-8316-f943d1063180 ovn-installed in OVS
Nov 29 03:07:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:03Z|00302|binding|INFO|Setting lport e4e1a07e-ccab-41ce-8316-f943d1063180 up in Southbound
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.370 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:73 10.100.0.12'], port_security=['fa:16:3e:f2:2b:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '12d0faaa-a957-464c-ae56-3e90d6fd248c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e4e1a07e-ccab-41ce-8316-f943d1063180) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.373 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.374 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e4e1a07e-ccab-41ce-8316-f943d1063180 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.377 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:07:03 np0005539563 systemd-udevd[304988]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:03 np0005539563 systemd-machined[213024]: New machine qemu-35-instance-00000052.
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.389 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[000442e5-96f0-4b94-8e65-e01a0a1921b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.391 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.394 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.394 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[55ce6aef-5e25-4cd4-afdd-5de18917630b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.396 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e05ccc35-191c-489e-b58a-f2d61bbc2643]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 systemd[1]: Started Virtual Machine qemu-35-instance-00000052.
Nov 29 03:07:03 np0005539563 NetworkManager[48981]: <info>  [1764403623.4073] device (tape4e1a07e-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:03 np0005539563 NetworkManager[48981]: <info>  [1764403623.4093] device (tape4e1a07e-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.414 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5433da-d6ee-4ac7-a946-cb9d2c75b4bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.431 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b93f50c9-6333-49a5-a4a7-74301bc2e6f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.464 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fca4fae0-b61f-4e61-951b-e1faf3689370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.469 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fed1f1ee-0913-4b21-9d4f-6c45ee1d224b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 NetworkManager[48981]: <info>  [1764403623.4703] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.508 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2a482279-6214-4f66-85a5-46095f43b99a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.512 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[819ed05b-a723-4b7f-870c-f9850b03f56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 NetworkManager[48981]: <info>  [1764403623.5440] device (tap8665acc6-10): carrier: link connected
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.553 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4afc114e-c194-4436-bf42-9212ce8c5b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.570 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4c946a90-1c3a-4d43-97bd-3bf5d41fa5a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659131, 'reachable_time': 38984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305022, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.584 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4f057aa7-e717-4b89-9e33-4734c791a17f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659131, 'tstamp': 659131}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305023, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.601 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4f13a0fd-be9b-4ae2-9dff-5145143635fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659131, 'reachable_time': 38984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305024, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.636 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6d5a62-c784-449e-a994-7d48d318a315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.715 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fe7345-8c91-436a-9bfb-da4b0d469a98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.718 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.718 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.718 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.720 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:03 np0005539563 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:07:03 np0005539563 NetworkManager[48981]: <info>  [1764403623.7233] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.724 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.726 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:03Z|00303|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.727 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:03 np0005539563 nova_compute[252253]: 2025-11-29 08:07:03.750 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.752 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.753 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf782d8-48e8-4bb1-9df7-87ef6ef31b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.754 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:03.755 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 106 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 1.1 MiB/s wr, 65 op/s
Nov 29 03:07:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/289195649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:04 np0005539563 podman[305074]: 2025-11-29 08:07:04.171696503 +0000 UTC m=+0.059396851 container create 1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:07:04 np0005539563 systemd[1]: Started libpod-conmon-1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310.scope.
Nov 29 03:07:04 np0005539563 podman[305074]: 2025-11-29 08:07:04.144126555 +0000 UTC m=+0.031826933 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5f74632d1c24762ae57724c7d68d5d3b5cc724b6505213964e43aac91bacfa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:04 np0005539563 podman[305074]: 2025-11-29 08:07:04.270618252 +0000 UTC m=+0.158318640 container init 1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:04 np0005539563 podman[305074]: 2025-11-29 08:07:04.279622346 +0000 UTC m=+0.167322704 container start 1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.283 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 12d0faaa-a957-464c-ae56-3e90d6fd248c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.283 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403624.281753, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.283 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:04 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [NOTICE]   (305117) : New worker (305119) forked
Nov 29 03:07:04 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [NOTICE]   (305117) : Loading success.
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.334 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.339 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403624.282109, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.340 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.584 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.589 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.633 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:07:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.763 252257 DEBUG nova.compute.manager [req-93567374-25cd-4180-95d3-0be7e67af77d req-3e8fb75c-c8af-4275-b2d9-2630ccafdcef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.764 252257 DEBUG oslo_concurrency.lockutils [req-93567374-25cd-4180-95d3-0be7e67af77d req-3e8fb75c-c8af-4275-b2d9-2630ccafdcef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.764 252257 DEBUG oslo_concurrency.lockutils [req-93567374-25cd-4180-95d3-0be7e67af77d req-3e8fb75c-c8af-4275-b2d9-2630ccafdcef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.765 252257 DEBUG oslo_concurrency.lockutils [req-93567374-25cd-4180-95d3-0be7e67af77d req-3e8fb75c-c8af-4275-b2d9-2630ccafdcef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.765 252257 DEBUG nova.compute.manager [req-93567374-25cd-4180-95d3-0be7e67af77d req-3e8fb75c-c8af-4275-b2d9-2630ccafdcef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Processing event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.767 252257 DEBUG nova.compute.manager [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.772 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.773 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403624.7727277, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.773 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.779 252257 INFO nova.virt.libvirt.driver [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance spawned successfully.#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.780 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.802 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.811 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.821 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.821 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.822 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.822 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.823 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.823 252257 DEBUG nova.virt.libvirt.driver [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.832 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.884 252257 DEBUG nova.compute.manager [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:04.913 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:04.915 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.982 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.982 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:04 np0005539563 nova_compute[252253]: 2025-11-29 08:07:04.983 252257 DEBUG nova.objects.instance [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:07:05 np0005539563 nova_compute[252253]: 2025-11-29 08:07:05.052 252257 DEBUG oslo_concurrency.lockutils [None req-f8e75630-f6fa-463a-a54a-e8bd2282c8d3 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 169 KiB/s rd, 1.9 MiB/s wr, 85 op/s
Nov 29 03:07:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.695 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.926 252257 DEBUG nova.compute.manager [req-f405d91c-6f31-4913-8b0d-2fc9342c3c9d req-fd7d4121-cc9e-42c7-8655-6b0ef5cb4dc5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.926 252257 DEBUG oslo_concurrency.lockutils [req-f405d91c-6f31-4913-8b0d-2fc9342c3c9d req-fd7d4121-cc9e-42c7-8655-6b0ef5cb4dc5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.927 252257 DEBUG oslo_concurrency.lockutils [req-f405d91c-6f31-4913-8b0d-2fc9342c3c9d req-fd7d4121-cc9e-42c7-8655-6b0ef5cb4dc5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.928 252257 DEBUG oslo_concurrency.lockutils [req-f405d91c-6f31-4913-8b0d-2fc9342c3c9d req-fd7d4121-cc9e-42c7-8655-6b0ef5cb4dc5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.929 252257 DEBUG nova.compute.manager [req-f405d91c-6f31-4913-8b0d-2fc9342c3c9d req-fd7d4121-cc9e-42c7-8655-6b0ef5cb4dc5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] No waiting events found dispatching network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.929 252257 WARNING nova.compute.manager [req-f405d91c-6f31-4913-8b0d-2fc9342c3c9d req-fd7d4121-cc9e-42c7-8655-6b0ef5cb4dc5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received unexpected event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:07:06 np0005539563 nova_compute[252253]: 2025-11-29 08:07:06.990 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.463 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.463 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.465 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.465 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.466 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.468 252257 INFO nova.compute.manager [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Terminating instance#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.470 252257 DEBUG nova.compute.manager [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:07:08 np0005539563 kernel: tape4e1a07e-cc (unregistering): left promiscuous mode
Nov 29 03:07:08 np0005539563 NetworkManager[48981]: <info>  [1764403628.5209] device (tape4e1a07e-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:08Z|00304|binding|INFO|Releasing lport e4e1a07e-ccab-41ce-8316-f943d1063180 from this chassis (sb_readonly=0)
Nov 29 03:07:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:08Z|00305|binding|INFO|Setting lport e4e1a07e-ccab-41ce-8316-f943d1063180 down in Southbound
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.528 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:08Z|00306|binding|INFO|Removing iface tape4e1a07e-cc ovn-installed in OVS
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.531 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.537 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:2b:73 10.100.0.12'], port_security=['fa:16:3e:f2:2b:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '12d0faaa-a957-464c-ae56-3e90d6fd248c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e4e1a07e-ccab-41ce-8316-f943d1063180) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.539 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e4e1a07e-ccab-41ce-8316-f943d1063180 in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.541 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.543 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[09828080-2587-4b90-b2ed-70a71e4c9f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.544 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.555 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 29 03:07:08 np0005539563 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000052.scope: Consumed 4.538s CPU time.
Nov 29 03:07:08 np0005539563 systemd-machined[213024]: Machine qemu-35-instance-00000052 terminated.
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [NOTICE]   (305117) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:08 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [NOTICE]   (305117) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:08 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [WARNING]  (305117) : Exiting Master process...
Nov 29 03:07:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:08 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [ALERT]    (305117) : Current worker (305119) exited with code 143 (Terminated)
Nov 29 03:07:08 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305112]: [WARNING]  (305117) : All workers exited. Exiting... (0)
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.697 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 systemd[1]: libpod-1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310.scope: Deactivated successfully.
Nov 29 03:07:08 np0005539563 podman[305205]: 2025-11-29 08:07:08.706189657 +0000 UTC m=+0.050973712 container died 1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.712 252257 INFO nova.virt.libvirt.driver [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Instance destroyed successfully.#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.712 252257 DEBUG nova.objects.instance [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'resources' on Instance uuid 12d0faaa-a957-464c-ae56-3e90d6fd248c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6a5f74632d1c24762ae57724c7d68d5d3b5cc724b6505213964e43aac91bacfa-merged.mount: Deactivated successfully.
Nov 29 03:07:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:08 np0005539563 podman[305205]: 2025-11-29 08:07:08.750989631 +0000 UTC m=+0.095773686 container cleanup 1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:08 np0005539563 systemd[1]: libpod-conmon-1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310.scope: Deactivated successfully.
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.773 252257 DEBUG nova.virt.libvirt.vif [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1671913756',display_name='tempest-ServerDiskConfigTestJSON-server-1671913756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1671913756',id=82,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-lonfocmu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:05Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=12d0faaa-a957-464c-ae56-3e90d6fd248c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.774 252257 DEBUG nova.network.os_vif_util [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "e4e1a07e-ccab-41ce-8316-f943d1063180", "address": "fa:16:3e:f2:2b:73", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e1a07e-cc", "ovs_interfaceid": "e4e1a07e-ccab-41ce-8316-f943d1063180", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.775 252257 DEBUG nova.network.os_vif_util [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.775 252257 DEBUG os_vif [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.776 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.776 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4e1a07e-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.779 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.781 252257 INFO os_vif [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:2b:73,bridge_name='br-int',has_traffic_filtering=True,id=e4e1a07e-ccab-41ce-8316-f943d1063180,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e1a07e-cc')#033[00m
Nov 29 03:07:08 np0005539563 podman[305242]: 2025-11-29 08:07:08.833469726 +0000 UTC m=+0.056505772 container remove 1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.839 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa13b1e7-d103-4c36-9639-bae0b91cc9c5]: (4, ('Sat Nov 29 08:07:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310)\n1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310\nSat Nov 29 08:07:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310)\n1d23df7737a01c58d55b14c8a26f8936477f770af5c1df0dffc8e3f83d2cd310\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.840 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e9156d2a-6901-47e4-99b8-a51e42da47d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.841 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.843 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:07:08 np0005539563 nova_compute[252253]: 2025-11-29 08:07:08.858 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.860 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[84f13ed9-f745-465b-8125-5b8540b78e40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.881 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4af73b-7efe-4c78-b671-b5911f11fdbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.883 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0aaf69b4-7f1f-44a9-a3e2-35ae58b9e77d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.899 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[67f8eb20-0bb9-4fbe-a63a-ee2d1d94424f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659122, 'reachable_time': 32424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305279, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.901 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:07:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:08.901 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[526700f5-7935-4f62-a462-966652a82683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.104 252257 DEBUG nova.compute.manager [req-85a447ed-3fe8-425f-ae8a-0a3e673f1ba3 req-aa9e34ec-d13d-468a-8de9-69fb67e99842 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-unplugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.105 252257 DEBUG oslo_concurrency.lockutils [req-85a447ed-3fe8-425f-ae8a-0a3e673f1ba3 req-aa9e34ec-d13d-468a-8de9-69fb67e99842 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.105 252257 DEBUG oslo_concurrency.lockutils [req-85a447ed-3fe8-425f-ae8a-0a3e673f1ba3 req-aa9e34ec-d13d-468a-8de9-69fb67e99842 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.105 252257 DEBUG oslo_concurrency.lockutils [req-85a447ed-3fe8-425f-ae8a-0a3e673f1ba3 req-aa9e34ec-d13d-468a-8de9-69fb67e99842 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.105 252257 DEBUG nova.compute.manager [req-85a447ed-3fe8-425f-ae8a-0a3e673f1ba3 req-aa9e34ec-d13d-468a-8de9-69fb67e99842 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] No waiting events found dispatching network-vif-unplugged-e4e1a07e-ccab-41ce-8316-f943d1063180 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.106 252257 DEBUG nova.compute.manager [req-85a447ed-3fe8-425f-ae8a-0a3e673f1ba3 req-aa9e34ec-d13d-468a-8de9-69fb67e99842 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-unplugged-e4e1a07e-ccab-41ce-8316-f943d1063180 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.273 252257 INFO nova.virt.libvirt.driver [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deleting instance files /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c_del#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.275 252257 INFO nova.virt.libvirt.driver [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deletion of /var/lib/nova/instances/12d0faaa-a957-464c-ae56-3e90d6fd248c_del complete#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.357 252257 INFO nova.compute.manager [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.358 252257 DEBUG oslo.service.loopingcall [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.359 252257 DEBUG nova.compute.manager [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:07:09 np0005539563 nova_compute[252253]: 2025-11-29 08:07:09.359 252257 DEBUG nova.network.neutron [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:07:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 134 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 771 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.221 252257 DEBUG nova.network.neutron [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.253 252257 INFO nova.compute.manager [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Took 0.89 seconds to deallocate network for instance.#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.296 252257 DEBUG nova.compute.manager [req-d695c66c-922a-409f-8f0b-5ceed6aad3a7 req-5ed54ff5-5a97-4f1e-bc94-62eccbe940ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-deleted-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.335 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.336 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.443 252257 DEBUG oslo_concurrency.processutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/858972445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.880 252257 DEBUG oslo_concurrency.processutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.887 252257 DEBUG nova.compute.provider_tree [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.909 252257 DEBUG nova.scheduler.client.report [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:10 np0005539563 nova_compute[252253]: 2025-11-29 08:07:10.944 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.002 252257 INFO nova.scheduler.client.report [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocations for instance 12d0faaa-a957-464c-ae56-3e90d6fd248c#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.128 252257 DEBUG oslo_concurrency.lockutils [None req-32978316-905c-4631-b3be-4fdc81b5e178 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.241 252257 DEBUG nova.compute.manager [req-94a85fc2-524c-40ef-b9e9-d6e576c08220 req-bd8a6f78-708c-455a-84b1-f364161bd6b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.242 252257 DEBUG oslo_concurrency.lockutils [req-94a85fc2-524c-40ef-b9e9-d6e576c08220 req-bd8a6f78-708c-455a-84b1-f364161bd6b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.242 252257 DEBUG oslo_concurrency.lockutils [req-94a85fc2-524c-40ef-b9e9-d6e576c08220 req-bd8a6f78-708c-455a-84b1-f364161bd6b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.243 252257 DEBUG oslo_concurrency.lockutils [req-94a85fc2-524c-40ef-b9e9-d6e576c08220 req-bd8a6f78-708c-455a-84b1-f364161bd6b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "12d0faaa-a957-464c-ae56-3e90d6fd248c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.243 252257 DEBUG nova.compute.manager [req-94a85fc2-524c-40ef-b9e9-d6e576c08220 req-bd8a6f78-708c-455a-84b1-f364161bd6b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] No waiting events found dispatching network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.243 252257 WARNING nova.compute.manager [req-94a85fc2-524c-40ef-b9e9-d6e576c08220 req-bd8a6f78-708c-455a-84b1-f364161bd6b1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Received unexpected event network-vif-plugged-e4e1a07e-ccab-41ce-8316-f943d1063180 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:07:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:11 np0005539563 nova_compute[252253]: 2025-11-29 08:07:11.697 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:11.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 135 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 171 op/s
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.143 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.144 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.164 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.513 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.513 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.520 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.520 252257 INFO nova.compute.claims [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:07:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:12.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:12 np0005539563 nova_compute[252253]: 2025-11-29 08:07:12.739 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:07:12
Nov 29 03:07:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:07:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:07:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'vms', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 03:07:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:13 np0005539563 podman[305305]: 2025-11-29 08:07:13.533802795 +0000 UTC m=+0.085815726 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:07:13 np0005539563 nova_compute[252253]: 2025-11-29 08:07:13.532 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:13 np0005539563 podman[305306]: 2025-11-29 08:07:13.542884361 +0000 UTC m=+0.082347763 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:07:13 np0005539563 podman[305307]: 2025-11-29 08:07:13.600477901 +0000 UTC m=+0.146189062 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:07:13 np0005539563 nova_compute[252253]: 2025-11-29 08:07:13.779 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:07:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:07:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548749727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 128 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Nov 29 03:07:14 np0005539563 nova_compute[252253]: 2025-11-29 08:07:14.023 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:14 np0005539563 nova_compute[252253]: 2025-11-29 08:07:14.029 252257 DEBUG nova.compute.provider_tree [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:14 np0005539563 nova_compute[252253]: 2025-11-29 08:07:14.249 252257 DEBUG nova.scheduler.client.report [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:14.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:14 np0005539563 nova_compute[252253]: 2025-11-29 08:07:14.729 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:14 np0005539563 nova_compute[252253]: 2025-11-29 08:07:14.730 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.299 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.299 252257 DEBUG nova.network.neutron [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.353 252257 INFO nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.508 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.635 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.637 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.638 252257 INFO nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Creating image(s)#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.667 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.701 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.727 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.731 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.799 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.800 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.801 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.801 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.834 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.838 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:15.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:15 np0005539563 nova_compute[252253]: 2025-11-29 08:07:15.987 252257 DEBUG nova.policy [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a7b61623f854cf59636f192ab8af005', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:07:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.341 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.408 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] resizing rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:07:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.572 252257 DEBUG nova.objects.instance [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.607 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.607 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Ensure instance console log exists: /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.608 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.609 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.609 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:16 np0005539563 nova_compute[252253]: 2025-11-29 08:07:16.700 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:16.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:17 np0005539563 nova_compute[252253]: 2025-11-29 08:07:17.828 252257 DEBUG nova.network.neutron [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Successfully created port: 18497509-f640-42ef-b25c-ac9f121ce0db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:17.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Nov 29 03:07:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:18.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:18 np0005539563 nova_compute[252253]: 2025-11-29 08:07:18.781 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 149 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.101 252257 DEBUG nova.network.neutron [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Successfully updated port: 18497509-f640-42ef-b25c-ac9f121ce0db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.124 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.125 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.125 252257 DEBUG nova.network.neutron [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.238 252257 DEBUG nova.compute.manager [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-changed-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.239 252257 DEBUG nova.compute.manager [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Refreshing instance network info cache due to event network-changed-18497509-f640-42ef-b25c-ac9f121ce0db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.240 252257 DEBUG oslo_concurrency.lockutils [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:20 np0005539563 nova_compute[252253]: 2025-11-29 08:07:20.538 252257 DEBUG nova.network.neutron [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:20.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.720 252257 DEBUG nova.network.neutron [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.723 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.737 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.738 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance network_info: |[{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.738 252257 DEBUG oslo_concurrency.lockutils [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.738 252257 DEBUG nova.network.neutron [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Refreshing network info cache for port 18497509-f640-42ef-b25c-ac9f121ce0db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.740 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Start _get_guest_xml network_info=[{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.746 252257 WARNING nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.751 252257 DEBUG nova.virt.libvirt.host [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.752 252257 DEBUG nova.virt.libvirt.host [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.758 252257 DEBUG nova.virt.libvirt.host [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.758 252257 DEBUG nova.virt.libvirt.host [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.759 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.759 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.760 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.760 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.760 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.760 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.761 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.761 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.761 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.761 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.762 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.762 252257 DEBUG nova.virt.hardware [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:21 np0005539563 nova_compute[252253]: 2025-11-29 08:07:21.764 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:21.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 121 op/s
Nov 29 03:07:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1407093463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.285 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.331 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.337 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:22.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425703125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.788 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.791 252257 DEBUG nova.virt.libvirt.vif [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:15Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.792 252257 DEBUG nova.network.os_vif_util [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.793 252257 DEBUG nova.network.os_vif_util [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.794 252257 DEBUG nova.objects.instance [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.818 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <uuid>a28f7dd6-9c8c-46f4-9ce0-7d40194d9749</uuid>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <name>instance-00000055</name>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1335756739</nova:name>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:07:21</nova:creationTime>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:user uuid="5a7b61623f854cf59636f192ab8af005">tempest-ServerDiskConfigTestJSON-904422786-project-member</nova:user>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:project uuid="750bde86c9c7473fbf7f0a6a3b16cec1">tempest-ServerDiskConfigTestJSON-904422786</nova:project>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <nova:port uuid="18497509-f640-42ef-b25c-ac9f121ce0db">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <entry name="serial">a28f7dd6-9c8c-46f4-9ce0-7d40194d9749</entry>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <entry name="uuid">a28f7dd6-9c8c-46f4-9ce0-7d40194d9749</entry>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:49:09:70"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <target dev="tap18497509-f6"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/console.log" append="off"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:07:22 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:07:22 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:07:22 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:07:22 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.819 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Preparing to wait for external event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.820 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.820 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.820 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.821 252257 DEBUG nova.virt.libvirt.vif [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:15Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.821 252257 DEBUG nova.network.os_vif_util [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.822 252257 DEBUG nova.network.os_vif_util [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.822 252257 DEBUG os_vif [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.823 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.824 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.827 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.827 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18497509-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.827 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18497509-f6, col_values=(('external_ids', {'iface-id': '18497509-f640-42ef-b25c-ac9f121ce0db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:09:70', 'vm-uuid': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.875 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:22 np0005539563 NetworkManager[48981]: <info>  [1764403642.8768] manager: (tap18497509-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.879 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.884 252257 INFO os_vif [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6')#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.947 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.948 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.948 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] No VIF found with MAC fa:16:3e:49:09:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.949 252257 INFO nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Using config drive#033[00m
Nov 29 03:07:22 np0005539563 nova_compute[252253]: 2025-11-29 08:07:22.984 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.329 252257 DEBUG nova.network.neutron [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updated VIF entry in instance network info cache for port 18497509-f640-42ef-b25c-ac9f121ce0db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.329 252257 DEBUG nova.network.neutron [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019776617462311558 of space, bias 1.0, pg target 0.5932985238693468 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.363 252257 DEBUG oslo_concurrency.lockutils [req-f24df0ac-30ed-4b1c-998c-9c52bb2f2ce4 req-c21b2cb6-7d04-428e-8d29-37d4e682fa4e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.710 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403628.7095149, 12d0faaa-a957-464c-ae56-3e90d6fd248c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.711 252257 INFO nova.compute.manager [-] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.732 252257 DEBUG nova.compute.manager [None req-abe441c8-2e01-4346-a875-fa1bb774de9a - - - - - -] [instance: 12d0faaa-a957-464c-ae56-3e90d6fd248c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.809 252257 INFO nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Creating config drive at /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/disk.config#033[00m
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.815 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_zybqrka execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:23.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:23 np0005539563 nova_compute[252253]: 2025-11-29 08:07:23.959 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_zybqrka" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 39 op/s
Nov 29 03:07:24 np0005539563 nova_compute[252253]: 2025-11-29 08:07:24.015 252257 DEBUG nova.storage.rbd_utils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] rbd image a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:24 np0005539563 nova_compute[252253]: 2025-11-29 08:07:24.021 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/disk.config a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:24 np0005539563 nova_compute[252253]: 2025-11-29 08:07:24.997 252257 DEBUG oslo_concurrency.processutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/disk.config a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:24.999 252257 INFO nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Deleting local config drive /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/disk.config because it was imported into RBD.#033[00m
Nov 29 03:07:25 np0005539563 kernel: tap18497509-f6: entered promiscuous mode
Nov 29 03:07:25 np0005539563 NetworkManager[48981]: <info>  [1764403645.0902] manager: (tap18497509-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 03:07:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:25Z|00307|binding|INFO|Claiming lport 18497509-f640-42ef-b25c-ac9f121ce0db for this chassis.
Nov 29 03:07:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:25Z|00308|binding|INFO|18497509-f640-42ef-b25c-ac9f121ce0db: Claiming fa:16:3e:49:09:70 10.100.0.7
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.109 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.116 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:09:70 10.100.0.7'], port_security=['fa:16:3e:49:09:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=18497509-f640-42ef-b25c-ac9f121ce0db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.118 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 18497509-f640-42ef-b25c-ac9f121ce0db in datapath 8665acc6-1650-4878-8ffd-84f079f13741 bound to our chassis#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.121 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8665acc6-1650-4878-8ffd-84f079f13741#033[00m
Nov 29 03:07:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:25Z|00309|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db ovn-installed in OVS
Nov 29 03:07:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:25Z|00310|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db up in Southbound
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.137 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[20827b00-e92e-41b2-b1af-d5411b1a1847]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.139 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8665acc6-11 in ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.149 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8665acc6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.149 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7f795e62-4167-4ec8-b34d-a8ee0a5a498b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.150 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[86b7b8be-8d8e-436a-b00f-8c129c0272a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 systemd-machined[213024]: New machine qemu-36-instance-00000055.
Nov 29 03:07:25 np0005539563 systemd[1]: Started Virtual Machine qemu-36-instance-00000055.
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.166 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a038d267-4ed0-4204-b7cd-56b94db11d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 systemd-udevd[305698]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:25 np0005539563 NetworkManager[48981]: <info>  [1764403645.1880] device (tap18497509-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:25 np0005539563 NetworkManager[48981]: <info>  [1764403645.1896] device (tap18497509-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.205 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2431afe9-2081-48a6-829d-2c8071994af9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.244 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[36885a3e-d870-467d-bf37-092ddc125550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 NetworkManager[48981]: <info>  [1764403645.2510] manager: (tap8665acc6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.251 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[91402e06-8652-4e58-a140-8747341eb8c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 systemd-udevd[305701]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.308 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[07083329-f739-4dcf-ae78-e57834b00ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.314 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf02c3f-fa48-426a-9fd7-1888146cf7a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 NetworkManager[48981]: <info>  [1764403645.3525] device (tap8665acc6-10): carrier: link connected
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.359 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8c7f73-b81f-4a9c-b2f9-a781bb8e1804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.383 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f7e236-9d3f-448e-a4a5-06f799a71084]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661312, 'reachable_time': 27216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305729, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.407 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9afab078-17af-4554-826c-41d93a70ddff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:2248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661312, 'tstamp': 661312}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305730, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.430 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81307b51-3630-4576-b386-7599c5b1ed91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8665acc6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:22:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661312, 'reachable_time': 27216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305731, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.445 252257 DEBUG nova.compute.manager [req-84a853d2-6638-47b5-9454-a100740c419b req-423b7d04-6af3-4577-8656-041b5abb8227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.446 252257 DEBUG oslo_concurrency.lockutils [req-84a853d2-6638-47b5-9454-a100740c419b req-423b7d04-6af3-4577-8656-041b5abb8227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.446 252257 DEBUG oslo_concurrency.lockutils [req-84a853d2-6638-47b5-9454-a100740c419b req-423b7d04-6af3-4577-8656-041b5abb8227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.446 252257 DEBUG oslo_concurrency.lockutils [req-84a853d2-6638-47b5-9454-a100740c419b req-423b7d04-6af3-4577-8656-041b5abb8227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.447 252257 DEBUG nova.compute.manager [req-84a853d2-6638-47b5-9454-a100740c419b req-423b7d04-6af3-4577-8656-041b5abb8227 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Processing event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.474 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f26ebb-d096-4bbd-971e-bf845f0cac5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.551 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.574 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f7eafe-3139-405b-86aa-cc7328587f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.576 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.577 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.577 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8665acc6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:25 np0005539563 NetworkManager[48981]: <info>  [1764403645.5805] manager: (tap8665acc6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.580 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539563 kernel: tap8665acc6-10: entered promiscuous mode
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.585 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8665acc6-10, col_values=(('external_ids', {'iface-id': 'e0f892e1-f1e8-4b29-8918-6cd036b9e8e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.585 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:25Z|00311|binding|INFO|Releasing lport e0f892e1-f1e8-4b29-8918-6cd036b9e8e0 from this chassis (sb_readonly=0)
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.588 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.589 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1d0697-c5d8-4fee-97ec-15551b56c398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.590 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8665acc6-1650-4878-8ffd-84f079f13741.pid.haproxy
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8665acc6-1650-4878-8ffd-84f079f13741
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:25.591 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'env', 'PROCESS_TAG=haproxy-8665acc6-1650-4878-8ffd-84f079f13741', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8665acc6-1650-4878-8ffd-84f079f13741.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.591 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.592 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:25 np0005539563 nova_compute[252253]: 2025-11-29 08:07:25.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:25.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 41 op/s
Nov 29 03:07:26 np0005539563 podman[305781]: 2025-11-29 08:07:26.008984002 +0000 UTC m=+0.045770781 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:26.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:26 np0005539563 podman[305781]: 2025-11-29 08:07:26.784777661 +0000 UTC m=+0.821564390 container create 2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:07:26 np0005539563 systemd[1]: Started libpod-conmon-2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa.scope.
Nov 29 03:07:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50eee6c6e71009710678c8d75e319205f51287354c8e6c17800b2d54f46396af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:26 np0005539563 podman[305781]: 2025-11-29 08:07:26.897014552 +0000 UTC m=+0.933801271 container init 2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:26 np0005539563 podman[305781]: 2025-11-29 08:07:26.903432396 +0000 UTC m=+0.940219085 container start 2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:07:26 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [NOTICE]   (305831) : New worker (305850) forked
Nov 29 03:07:26 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [NOTICE]   (305831) : Loading success.
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.975 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.977 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403646.9765081, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.977 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Started (Lifecycle Event)
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.986 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.996 252257 INFO nova.virt.libvirt.driver [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance spawned successfully.
Nov 29 03:07:26 np0005539563 nova_compute[252253]: 2025-11-29 08:07:26.997 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.005 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.010 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.022 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.023 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.024 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.025 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.025 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.026 252257 DEBUG nova.virt.libvirt.driver [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.031 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.031 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403646.9769945, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.032 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Paused (Lifecycle Event)
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.077 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.083 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403646.98516, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.084 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Resumed (Lifecycle Event)
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.105 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.110 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.118 252257 INFO nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Took 11.48 seconds to spawn the instance on the hypervisor.
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.119 252257 DEBUG nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.158 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.204 252257 INFO nova.compute.manager [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Took 14.77 seconds to build instance.
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.227 252257 DEBUG oslo_concurrency.lockutils [None req-7dbecf43-6365-41f8-bf02-467e2c54375b 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.229 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.229 252257 INFO nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.230 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.520 252257 DEBUG nova.compute.manager [req-8ae58a18-0f44-4eee-a108-e484ae420153 req-abc4f804-39d4-4c6a-af49-2a222c7894e9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.522 252257 DEBUG oslo_concurrency.lockutils [req-8ae58a18-0f44-4eee-a108-e484ae420153 req-abc4f804-39d4-4c6a-af49-2a222c7894e9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.523 252257 DEBUG oslo_concurrency.lockutils [req-8ae58a18-0f44-4eee-a108-e484ae420153 req-abc4f804-39d4-4c6a-af49-2a222c7894e9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.524 252257 DEBUG oslo_concurrency.lockutils [req-8ae58a18-0f44-4eee-a108-e484ae420153 req-abc4f804-39d4-4c6a-af49-2a222c7894e9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.525 252257 DEBUG nova.compute.manager [req-8ae58a18-0f44-4eee-a108-e484ae420153 req-abc4f804-39d4-4c6a-af49-2a222c7894e9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.526 252257 WARNING nova.compute.manager [req-8ae58a18-0f44-4eee-a108-e484ae420153 req-abc4f804-39d4-4c6a-af49-2a222c7894e9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state None.
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.876 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:27.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:27 np0005539563 nova_compute[252253]: 2025-11-29 08:07:27.979 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:27.983 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:07:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:27.985 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:07:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 03:07:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:29.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 180 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 29 03:07:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:31 np0005539563 nova_compute[252253]: 2025-11-29 08:07:31.729 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 146 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 156 op/s
Nov 29 03:07:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:32.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:32 np0005539563 nova_compute[252253]: 2025-11-29 08:07:32.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 134 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 26 KiB/s wr, 167 op/s
Nov 29 03:07:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:34.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:35 np0005539563 nova_compute[252253]: 2025-11-29 08:07:35.042 252257 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:07:35 np0005539563 nova_compute[252253]: 2025-11-29 08:07:35.043 252257 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:07:35 np0005539563 nova_compute[252253]: 2025-11-29 08:07:35.043 252257 DEBUG nova.network.neutron [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:07:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:35.987 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 26 KiB/s wr, 168 op/s
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.444 252257 DEBUG nova.network.neutron [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:07:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.693 252257 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.714 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:07:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:36.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.730 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.863 252257 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.863 252257 DEBUG nova.virt.libvirt.volume.remotefs [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Creating file /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/0a0b5bf51a4a4e6f9104885a17ab314d.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 03:07:36 np0005539563 nova_compute[252253]: 2025-11-29 08:07:36.864 252257 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/0a0b5bf51a4a4e6f9104885a17ab314d.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.347 252257 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/0a0b5bf51a4a4e6f9104885a17ab314d.tmp" returned: 1 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.348 252257 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749/0a0b5bf51a4a4e6f9104885a17ab314d.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.348 252257 DEBUG nova.virt.libvirt.volume.remotefs [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Creating directory /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.349 252257 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.603 252257 DEBUG oslo_concurrency.processutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.607 252257 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:07:37 np0005539563 nova_compute[252253]: 2025-11-29 08:07:37.885 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 164 op/s
Nov 29 03:07:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:39.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 145 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 351 KiB/s wr, 181 op/s
Nov 29 03:07:40 np0005539563 nova_compute[252253]: 2025-11-29 08:07:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:40Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:09:70 10.100.0.7
Nov 29 03:07:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:40Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:09:70 10.100.0.7
Nov 29 03:07:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:41 np0005539563 nova_compute[252253]: 2025-11-29 08:07:41.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:41 np0005539563 nova_compute[252253]: 2025-11-29 08:07:41.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:07:41 np0005539563 nova_compute[252253]: 2025-11-29 08:07:41.731 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:41.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 190 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 211 op/s
Nov 29 03:07:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:42.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:42 np0005539563 nova_compute[252253]: 2025-11-29 08:07:42.888 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:43 np0005539563 nova_compute[252253]: 2025-11-29 08:07:43.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:43 np0005539563 nova_compute[252253]: 2025-11-29 08:07:43.862 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:43 np0005539563 nova_compute[252253]: 2025-11-29 08:07:43.862 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:43 np0005539563 nova_compute[252253]: 2025-11-29 08:07:43.862 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:43 np0005539563 nova_compute[252253]: 2025-11-29 08:07:43.863 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:07:43 np0005539563 nova_compute[252253]: 2025-11-29 08:07:43.863 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:43.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 206 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 125 op/s
Nov 29 03:07:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395507505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.322 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:44 np0005539563 podman[305920]: 2025-11-29 08:07:44.425915874 +0000 UTC m=+0.058734042 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.426 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.426 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:07:44 np0005539563 podman[305921]: 2025-11-29 08:07:44.431333961 +0000 UTC m=+0.061828796 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:44 np0005539563 podman[305922]: 2025-11-29 08:07:44.465815495 +0000 UTC m=+0.092705873 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.600 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.602 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4365MB free_disk=20.922687530517578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.602 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.603 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:44.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:44 np0005539563 nova_compute[252253]: 2025-11-29 08:07:44.977 252257 INFO nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating resource usage from migration 475e32d2-8c8f-4918-aa47-f1847e144c60#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.027 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Migration 475e32d2-8c8f-4918-aa47-f1847e144c60 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.027 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.027 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.044 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.227 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.228 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.248 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.280 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.331 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123183881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.764 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.769 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.787 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.817 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:07:45 np0005539563 nova_compute[252253]: 2025-11-29 08:07:45.817 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Nov 29 03:07:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:46 np0005539563 nova_compute[252253]: 2025-11-29 08:07:46.734 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:46.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.660 252257 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.818 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.819 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.819 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.838 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.838 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.839 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.839 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:47 np0005539563 nova_compute[252253]: 2025-11-29 08:07:47.897 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:47.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Nov 29 03:07:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:49.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:49 np0005539563 kernel: tap18497509-f6 (unregistering): left promiscuous mode
Nov 29 03:07:49 np0005539563 NetworkManager[48981]: <info>  [1764403669.9642] device (tap18497509-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:49 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:49Z|00312|binding|INFO|Releasing lport 18497509-f640-42ef-b25c-ac9f121ce0db from this chassis (sb_readonly=0)
Nov 29 03:07:49 np0005539563 nova_compute[252253]: 2025-11-29 08:07:49.978 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:49 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:49Z|00313|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db down in Southbound
Nov 29 03:07:49 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:49Z|00314|binding|INFO|Removing iface tap18497509-f6 ovn-installed in OVS
Nov 29 03:07:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:49.989 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:09:70 10.100.0.7'], port_security=['fa:16:3e:49:09:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=18497509-f640-42ef-b25c-ac9f121ce0db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:49.991 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 18497509-f640-42ef-b25c-ac9f121ce0db in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:07:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:49.995 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:49.996 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[00b18cb4-d383-487a-8362-52cd1dd0ee4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:49.997 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 namespace which is not needed anymore#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:49.998 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 29 03:07:50 np0005539563 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000055.scope: Consumed 16.068s CPU time.
Nov 29 03:07:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Nov 29 03:07:50 np0005539563 systemd-machined[213024]: Machine qemu-36-instance-00000055 terminated.
Nov 29 03:07:50 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [NOTICE]   (305831) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:50 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [NOTICE]   (305831) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:50 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [WARNING]  (305831) : Exiting Master process...
Nov 29 03:07:50 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [ALERT]    (305831) : Current worker (305850) exited with code 143 (Terminated)
Nov 29 03:07:50 np0005539563 neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741[305817]: [WARNING]  (305831) : All workers exited. Exiting... (0)
Nov 29 03:07:50 np0005539563 systemd[1]: libpod-2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa.scope: Deactivated successfully.
Nov 29 03:07:50 np0005539563 podman[306079]: 2025-11-29 08:07:50.141387697 +0000 UTC m=+0.041747163 container died 2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:07:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-50eee6c6e71009710678c8d75e319205f51287354c8e6c17800b2d54f46396af-merged.mount: Deactivated successfully.
Nov 29 03:07:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:50 np0005539563 podman[306079]: 2025-11-29 08:07:50.178684098 +0000 UTC m=+0.079043574 container cleanup 2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:07:50 np0005539563 kernel: tap18497509-f6: entered promiscuous mode
Nov 29 03:07:50 np0005539563 NetworkManager[48981]: <info>  [1764403670.1893] manager: (tap18497509-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00315|binding|INFO|Claiming lport 18497509-f640-42ef-b25c-ac9f121ce0db for this chassis.
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00316|binding|INFO|18497509-f640-42ef-b25c-ac9f121ce0db: Claiming fa:16:3e:49:09:70 10.100.0.7
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 kernel: tap18497509-f6 (unregistering): left promiscuous mode
Nov 29 03:07:50 np0005539563 systemd[1]: libpod-conmon-2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa.scope: Deactivated successfully.
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.197 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:09:70 10.100.0.7'], port_security=['fa:16:3e:49:09:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=18497509-f640-42ef-b25c-ac9f121ce0db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00317|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db ovn-installed in OVS
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00318|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db up in Southbound
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00319|binding|INFO|Releasing lport 18497509-f640-42ef-b25c-ac9f121ce0db from this chassis (sb_readonly=1)
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00320|if_status|INFO|Dropped 13 log messages in last 381 seconds (most recently, 381 seconds ago) due to excessive rate
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00321|if_status|INFO|Not setting lport 18497509-f640-42ef-b25c-ac9f121ce0db down as sb is readonly
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00322|binding|INFO|Removing iface tap18497509-f6 ovn-installed in OVS
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00323|binding|INFO|Releasing lport 18497509-f640-42ef-b25c-ac9f121ce0db from this chassis (sb_readonly=0)
Nov 29 03:07:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:07:50Z|00324|binding|INFO|Setting lport 18497509-f640-42ef-b25c-ac9f121ce0db down in Southbound
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.228 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.236 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:09:70 10.100.0.7'], port_security=['fa:16:3e:49:09:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a28f7dd6-9c8c-46f4-9ce0-7d40194d9749', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8665acc6-1650-4878-8ffd-84f079f13741', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '750bde86c9c7473fbf7f0a6a3b16cec1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b143d91-a9e2-433e-a887-8851c4d95ae6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14735bae-f089-4bfd-bad1-f5ab455915a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=18497509-f640-42ef-b25c-ac9f121ce0db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:50 np0005539563 podman[306112]: 2025-11-29 08:07:50.25851584 +0000 UTC m=+0.053901031 container remove 2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.266 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[468949e2-e615-412f-95b7-428cb74273ff]: (4, ('Sat Nov 29 08:07:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa)\n2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa\nSat Nov 29 08:07:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 (2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa)\n2155fd139d5ca59aac91bb7e10c12eebcbf61a0f498aee03930b0d15120f2faa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.268 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[41b7992b-35b0-47f8-8eae-0e714444e222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.270 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8665acc6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.271 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 kernel: tap8665acc6-10: left promiscuous mode
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.290 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcfc830-db17-4ded-b1df-51db92541b29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.301 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3727a359-5645-4bc3-94c3-f6a8db685520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.302 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[07397637-d743-4896-bd13-8ce394a29fdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.316 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bfceb2e0-bad1-4a0b-9063-26c36dc1715f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661300, 'reachable_time': 37534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306136, 'error': None, 'target': 'ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8665acc6\x2d1650\x2d4878\x2d8ffd\x2d84f079f13741.mount: Deactivated successfully.
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.319 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8665acc6-1650-4878-8ffd-84f079f13741 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.319 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e2002005-c523-4f11-99b2-610d89b82d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.320 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 18497509-f640-42ef-b25c-ac9f121ce0db in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.321 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.322 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a72dacaf-c8ad-4208-b64b-47379da8bd2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.322 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 18497509-f640-42ef-b25c-ac9f121ce0db in datapath 8665acc6-1650-4878-8ffd-84f079f13741 unbound from our chassis#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.323 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8665acc6-1650-4878-8ffd-84f079f13741, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:07:50.324 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e198d450-a249-4100-a0be-e43b593348eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.515 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.532 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.533 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.533 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.534 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.534 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.665 252257 DEBUG nova.compute.manager [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.666 252257 DEBUG oslo_concurrency.lockutils [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.666 252257 DEBUG oslo_concurrency.lockutils [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.666 252257 DEBUG oslo_concurrency.lockutils [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.666 252257 DEBUG nova.compute.manager [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.667 252257 WARNING nova.compute.manager [req-004d758e-507a-40f8-b6ee-96e5fdee8756 req-47ce9015-c8ac-49ff-a427-d6160a8f9790 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:07:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:50.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.762 252257 INFO nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.769 252257 INFO nova.virt.libvirt.driver [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Instance destroyed successfully.#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.770 252257 DEBUG nova.virt.libvirt.vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',ima
ge_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:32Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.771 252257 DEBUG nova.network.os_vif_util [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "vif_mac": "fa:16:3e:49:09:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.772 252257 DEBUG nova.network.os_vif_util [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.773 252257 DEBUG os_vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.775 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.776 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18497509-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.777 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.780 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.782 252257 INFO os_vif [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6')#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.785 252257 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.785 252257 DEBUG nova.virt.libvirt.driver [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:07:50 np0005539563 nova_compute[252253]: 2025-11-29 08:07:50.949 252257 DEBUG neutronclient.v2_0.client [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 18497509-f640-42ef-b25c-ac9f121ce0db for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:07:51 np0005539563 nova_compute[252253]: 2025-11-29 08:07:51.075 252257 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:51 np0005539563 nova_compute[252253]: 2025-11-29 08:07:51.076 252257 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:51 np0005539563 nova_compute[252253]: 2025-11-29 08:07:51.077 252257 DEBUG oslo_concurrency.lockutils [None req-b131500e-2dc0-42d3-af25-5d9d3d8d5e3e 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:51 np0005539563 nova_compute[252253]: 2025-11-29 08:07:51.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:51.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 231 op/s
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.147 252257 DEBUG nova.compute.manager [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.147 252257 DEBUG oslo_concurrency.lockutils [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.147 252257 DEBUG oslo_concurrency.lockutils [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.148 252257 DEBUG oslo_concurrency.lockutils [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.148 252257 DEBUG nova.compute.manager [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.148 252257 WARNING nova.compute.manager [req-b64e109d-7a4e-4ecf-9ff2-52830d8eacd9 req-d7080491-d13c-472c-9b0a-e91af6d3ac19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-unplugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:52 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 14b49b6f-4e77-4853-854e-f12b077500af does not exist
Nov 29 03:07:52 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 846a04fd-9c7b-45f1-a957-1200e4a59555 does not exist
Nov 29 03:07:52 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ea1cb64c-254c-4697-982e-e30e6b823f4f does not exist
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:07:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.753 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.753 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.754 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.754 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.754 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.755 252257 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.755 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.755 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.756 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.756 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.756 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.756 252257 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.757 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.757 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.757 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.758 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.758 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.758 252257 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.759 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.759 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.760 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.760 252257 DEBUG oslo_concurrency.lockutils [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.760 252257 DEBUG nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:52 np0005539563 nova_compute[252253]: 2025-11-29 08:07:52.761 252257 WARNING nova.compute.manager [req-53dcead2-3da3-4ae4-aecf-e7cf72126d27 req-3666dd75-a479-4706-9e99-181593614739 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.049234171 +0000 UTC m=+0.063525632 container create 1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:07:53 np0005539563 systemd[1]: Started libpod-conmon-1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830.scope.
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.022205189 +0000 UTC m=+0.036496730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.139422664 +0000 UTC m=+0.153714185 container init 1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.156181218 +0000 UTC m=+0.170472689 container start 1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.15993451 +0000 UTC m=+0.174226061 container attach 1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:07:53 np0005539563 brave_kowalevski[306427]: 167 167
Nov 29 03:07:53 np0005539563 systemd[1]: libpod-1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830.scope: Deactivated successfully.
Nov 29 03:07:53 np0005539563 conmon[306427]: conmon 1864f222bbe627e91977 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830.scope/container/memory.events
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.168082021 +0000 UTC m=+0.182373512 container died 1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:07:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0ba94f007fd377332d4bcb30e71450aee16b673a4da6eb808956e467c9f26424-merged.mount: Deactivated successfully.
Nov 29 03:07:53 np0005539563 podman[306411]: 2025-11-29 08:07:53.233948785 +0000 UTC m=+0.248240246 container remove 1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:07:53 np0005539563 systemd[1]: libpod-conmon-1864f222bbe627e91977da58a152f1733300b9eba43da5090971236d4ced0830.scope: Deactivated successfully.
Nov 29 03:07:53 np0005539563 podman[306452]: 2025-11-29 08:07:53.409992715 +0000 UTC m=+0.054991121 container create e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:07:53 np0005539563 systemd[1]: Started libpod-conmon-e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c.scope.
Nov 29 03:07:53 np0005539563 podman[306452]: 2025-11-29 08:07:53.390497697 +0000 UTC m=+0.035496123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d030bee2632c00838d0ab0ef8f283cac5175a08161ce52d818cb593bf331fce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d030bee2632c00838d0ab0ef8f283cac5175a08161ce52d818cb593bf331fce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d030bee2632c00838d0ab0ef8f283cac5175a08161ce52d818cb593bf331fce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d030bee2632c00838d0ab0ef8f283cac5175a08161ce52d818cb593bf331fce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d030bee2632c00838d0ab0ef8f283cac5175a08161ce52d818cb593bf331fce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:53 np0005539563 podman[306452]: 2025-11-29 08:07:53.517135568 +0000 UTC m=+0.162134014 container init e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:07:53 np0005539563 podman[306452]: 2025-11-29 08:07:53.529893624 +0000 UTC m=+0.174892040 container start e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goodall, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:07:53 np0005539563 podman[306452]: 2025-11-29 08:07:53.533186113 +0000 UTC m=+0.178184569 container attach e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goodall, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:07:53 np0005539563 nova_compute[252253]: 2025-11-29 08:07:53.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 29 03:07:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 03:07:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 29 03:07:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 214 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 152 KiB/s wr, 177 op/s
Nov 29 03:07:54 np0005539563 nova_compute[252253]: 2025-11-29 08:07:54.243 252257 DEBUG nova.compute.manager [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-changed-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:54 np0005539563 nova_compute[252253]: 2025-11-29 08:07:54.244 252257 DEBUG nova.compute.manager [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Refreshing instance network info cache due to event network-changed-18497509-f640-42ef-b25c-ac9f121ce0db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:54 np0005539563 nova_compute[252253]: 2025-11-29 08:07:54.244 252257 DEBUG oslo_concurrency.lockutils [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:54 np0005539563 nova_compute[252253]: 2025-11-29 08:07:54.244 252257 DEBUG oslo_concurrency.lockutils [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:54 np0005539563 nova_compute[252253]: 2025-11-29 08:07:54.244 252257 DEBUG nova.network.neutron [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Refreshing network info cache for port 18497509-f640-42ef-b25c-ac9f121ce0db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:54 np0005539563 cool_goodall[306468]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:07:54 np0005539563 cool_goodall[306468]: --> relative data size: 1.0
Nov 29 03:07:54 np0005539563 cool_goodall[306468]: --> All data devices are unavailable
Nov 29 03:07:54 np0005539563 systemd[1]: libpod-e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c.scope: Deactivated successfully.
Nov 29 03:07:54 np0005539563 podman[306452]: 2025-11-29 08:07:54.454494325 +0000 UTC m=+1.099492761 container died e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goodall, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:07:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9d030bee2632c00838d0ab0ef8f283cac5175a08161ce52d818cb593bf331fce-merged.mount: Deactivated successfully.
Nov 29 03:07:54 np0005539563 podman[306452]: 2025-11-29 08:07:54.515925409 +0000 UTC m=+1.160923825 container remove e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:07:54 np0005539563 systemd[1]: libpod-conmon-e802636a0d3a7a3648ecf38f5176e076f7a2d8569de5b23449a7c2bf2aa2277c.scope: Deactivated successfully.
Nov 29 03:07:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3084548911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:54.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.190762773 +0000 UTC m=+0.038424113 container create 291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:07:55 np0005539563 systemd[1]: Started libpod-conmon-291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad.scope.
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.173486735 +0000 UTC m=+0.021148095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.286499186 +0000 UTC m=+0.134160526 container init 291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.293235599 +0000 UTC m=+0.140896939 container start 291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.296477856 +0000 UTC m=+0.144139206 container attach 291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:07:55 np0005539563 pensive_wozniak[306650]: 167 167
Nov 29 03:07:55 np0005539563 systemd[1]: libpod-291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad.scope: Deactivated successfully.
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.299823658 +0000 UTC m=+0.147485048 container died 291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:07:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-95ecacf5d6241a69c4d359419f0dfa6ad98f91abefe09567a0c77f04f130dc77-merged.mount: Deactivated successfully.
Nov 29 03:07:55 np0005539563 podman[306634]: 2025-11-29 08:07:55.347012735 +0000 UTC m=+0.194674075 container remove 291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wozniak, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:07:55 np0005539563 systemd[1]: libpod-conmon-291e37ee27b57b9e2807a40c9439a1d451209b2b76695bcd94e2026d067513ad.scope: Deactivated successfully.
Nov 29 03:07:55 np0005539563 podman[306673]: 2025-11-29 08:07:55.521770361 +0000 UTC m=+0.048617779 container create b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:07:55 np0005539563 systemd[1]: Started libpod-conmon-b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b.scope.
Nov 29 03:07:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:55 np0005539563 podman[306673]: 2025-11-29 08:07:55.499704603 +0000 UTC m=+0.026552051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d104b4cf89edc4780e0795f733f3448eb12459ac1f010dedb2341366f084be07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d104b4cf89edc4780e0795f733f3448eb12459ac1f010dedb2341366f084be07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d104b4cf89edc4780e0795f733f3448eb12459ac1f010dedb2341366f084be07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d104b4cf89edc4780e0795f733f3448eb12459ac1f010dedb2341366f084be07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:55 np0005539563 podman[306673]: 2025-11-29 08:07:55.605633463 +0000 UTC m=+0.132480881 container init b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:07:55 np0005539563 podman[306673]: 2025-11-29 08:07:55.617010981 +0000 UTC m=+0.143858389 container start b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:07:55 np0005539563 podman[306673]: 2025-11-29 08:07:55.620929777 +0000 UTC m=+0.147777205 container attach b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.778 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.802 252257 DEBUG nova.compute.manager [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.803 252257 DEBUG oslo_concurrency.lockutils [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.803 252257 DEBUG oslo_concurrency.lockutils [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.804 252257 DEBUG oslo_concurrency.lockutils [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.804 252257 DEBUG nova.compute.manager [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:55 np0005539563 nova_compute[252253]: 2025-11-29 08:07:55.804 252257 WARNING nova.compute.manager [req-cdd73bf9-7256-4711-9f7d-e1bf90f6d88d req-fe46e03d-daa2-4a2d-ac46-f445913c0e39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:07:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:55.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:56 np0005539563 nova_compute[252253]: 2025-11-29 08:07:56.001 252257 DEBUG nova.network.neutron [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updated VIF entry in instance network info cache for port 18497509-f640-42ef-b25c-ac9f121ce0db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:07:56 np0005539563 nova_compute[252253]: 2025-11-29 08:07:56.002 252257 DEBUG nova.network.neutron [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:07:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 118 op/s
Nov 29 03:07:56 np0005539563 nova_compute[252253]: 2025-11-29 08:07:56.029 252257 DEBUG oslo_concurrency.lockutils [req-0a4002b8-7483-45e1-8ba2-968d0fec6a6b req-9a03dea9-7d54-49d5-8c3d-d47fdfba6347 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]: {
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:    "0": [
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:        {
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "devices": [
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "/dev/loop3"
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            ],
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "lv_name": "ceph_lv0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "lv_size": "7511998464",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "name": "ceph_lv0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "tags": {
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.cluster_name": "ceph",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.crush_device_class": "",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.encrypted": "0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.osd_id": "0",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.type": "block",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:                "ceph.vdo": "0"
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            },
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "type": "block",
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:            "vg_name": "ceph_vg0"
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:        }
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]:    ]
Nov 29 03:07:56 np0005539563 peaceful_shtern[306689]: }
Nov 29 03:07:56 np0005539563 systemd[1]: libpod-b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b.scope: Deactivated successfully.
Nov 29 03:07:56 np0005539563 podman[306673]: 2025-11-29 08:07:56.395499333 +0000 UTC m=+0.922346731 container died b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:07:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d104b4cf89edc4780e0795f733f3448eb12459ac1f010dedb2341366f084be07-merged.mount: Deactivated successfully.
Nov 29 03:07:56 np0005539563 podman[306673]: 2025-11-29 08:07:56.454656106 +0000 UTC m=+0.981503504 container remove b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shtern, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:07:56 np0005539563 systemd[1]: libpod-conmon-b7815f9b078538ece321b7908b801c71d149356f1983297edfd894dd03dda29b.scope: Deactivated successfully.
Nov 29 03:07:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:56.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:56 np0005539563 nova_compute[252253]: 2025-11-29 08:07:56.794 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.135823951 +0000 UTC m=+0.042705948 container create 4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 03:07:57 np0005539563 systemd[1]: Started libpod-conmon-4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a.scope.
Nov 29 03:07:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.114196315 +0000 UTC m=+0.021078332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.214290587 +0000 UTC m=+0.121172614 container init 4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.222596212 +0000 UTC m=+0.129478219 container start 4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.226127397 +0000 UTC m=+0.133009424 container attach 4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:07:57 np0005539563 elated_elbakyan[306868]: 167 167
Nov 29 03:07:57 np0005539563 systemd[1]: libpod-4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a.scope: Deactivated successfully.
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.229026816 +0000 UTC m=+0.135908843 container died 4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4779145d7965b91c32f40903ac7b19508216cde3af26769452751fdd14a07af6-merged.mount: Deactivated successfully.
Nov 29 03:07:57 np0005539563 podman[306852]: 2025-11-29 08:07:57.271131767 +0000 UTC m=+0.178013764 container remove 4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:07:57 np0005539563 systemd[1]: libpod-conmon-4854c9510564acb003193fd1addf9423f721cff97d6f8663dbec652b0b84c54a.scope: Deactivated successfully.
Nov 29 03:07:57 np0005539563 podman[306893]: 2025-11-29 08:07:57.466008297 +0000 UTC m=+0.055589247 container create 88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:07:57 np0005539563 systemd[1]: Started libpod-conmon-88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86.scope.
Nov 29 03:07:57 np0005539563 podman[306893]: 2025-11-29 08:07:57.440789174 +0000 UTC m=+0.030370134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:07:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c68e94da2db5e38144d64818a5af11f287939dd069f6e41454018917424e126/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c68e94da2db5e38144d64818a5af11f287939dd069f6e41454018917424e126/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c68e94da2db5e38144d64818a5af11f287939dd069f6e41454018917424e126/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c68e94da2db5e38144d64818a5af11f287939dd069f6e41454018917424e126/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:57 np0005539563 podman[306893]: 2025-11-29 08:07:57.56285283 +0000 UTC m=+0.152433760 container init 88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:07:57 np0005539563 podman[306893]: 2025-11-29 08:07:57.574031713 +0000 UTC m=+0.163612623 container start 88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:07:57 np0005539563 podman[306893]: 2025-11-29 08:07:57.577555599 +0000 UTC m=+0.167136529 container attach 88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mcclintock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 03:07:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:07:57 np0005539563 nova_compute[252253]: 2025-11-29 08:07:57.964 252257 DEBUG nova.compute.manager [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:57 np0005539563 nova_compute[252253]: 2025-11-29 08:07:57.968 252257 DEBUG oslo_concurrency.lockutils [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:57 np0005539563 nova_compute[252253]: 2025-11-29 08:07:57.968 252257 DEBUG oslo_concurrency.lockutils [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:57 np0005539563 nova_compute[252253]: 2025-11-29 08:07:57.969 252257 DEBUG oslo_concurrency.lockutils [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:57 np0005539563 nova_compute[252253]: 2025-11-29 08:07:57.969 252257 DEBUG nova.compute.manager [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] No waiting events found dispatching network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:57 np0005539563 nova_compute[252253]: 2025-11-29 08:07:57.969 252257 WARNING nova.compute.manager [req-05cee2c2-fbfa-4f7b-8210-fd3e2bf344e7 req-ac0b5972-2edf-4cf4-9323-c8becfb5cbf4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Received unexpected event network-vif-plugged-18497509-f640-42ef-b25c-ac9f121ce0db for instance with vm_state resized and task_state None.
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.002 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.003 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.004 252257 DEBUG nova.compute.manager [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Going to confirm migration 12 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 03:07:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 118 op/s
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]: {
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:        "osd_id": 0,
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:        "type": "bluestore"
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]:    }
Nov 29 03:07:58 np0005539563 quizzical_mcclintock[306909]: }
Nov 29 03:07:58 np0005539563 systemd[1]: libpod-88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86.scope: Deactivated successfully.
Nov 29 03:07:58 np0005539563 conmon[306909]: conmon 88c60de083eb19361a8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86.scope/container/memory.events
Nov 29 03:07:58 np0005539563 podman[306893]: 2025-11-29 08:07:58.441211359 +0000 UTC m=+1.030792269 container died 88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 03:07:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2c68e94da2db5e38144d64818a5af11f287939dd069f6e41454018917424e126-merged.mount: Deactivated successfully.
Nov 29 03:07:58 np0005539563 podman[306893]: 2025-11-29 08:07:58.496143757 +0000 UTC m=+1.085724717 container remove 88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:07:58 np0005539563 systemd[1]: libpod-conmon-88c60de083eb19361a8ac5983732eed84cbfcd5a7fa9a491f220dd0682ecec86.scope: Deactivated successfully.
Nov 29 03:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b0f036b5-696e-4a41-a070-cac217421f3a does not exist
Nov 29 03:07:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0e4c3d46-afdb-47a0-9c1c-17bc584c5214 does not exist
Nov 29 03:07:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 877fd1f9-6000-4077-8f8c-cf00cad84a91 does not exist
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.742 252257 DEBUG neutronclient.v2_0.client [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 18497509-f640-42ef-b25c-ac9f121ce0db for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.743 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.743 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquired lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.743 252257 DEBUG nova.network.neutron [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:07:58 np0005539563 nova_compute[252253]: 2025-11-29 08:07:58.744 252257 DEBUG nova.objects.instance [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'info_cache' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:07:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:58.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:07:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:07:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:07:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:59.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 247 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 147 op/s
Nov 29 03:08:00 np0005539563 nova_compute[252253]: 2025-11-29 08:08:00.146 252257 DEBUG nova.network.neutron [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Updating instance_info_cache with network_info: [{"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:00 np0005539563 nova_compute[252253]: 2025-11-29 08:08:00.172 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Releasing lock "refresh_cache-a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:00 np0005539563 nova_compute[252253]: 2025-11-29 08:08:00.173 252257 DEBUG nova.objects.instance [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lazy-loading 'migration_context' on Instance uuid a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:00 np0005539563 nova_compute[252253]: 2025-11-29 08:08:00.278 252257 DEBUG nova.storage.rbd_utils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] removing snapshot(nova-resize) on rbd image(a28f7dd6-9c8c-46f4-9ce0-7d40194d9749_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:08:00 np0005539563 nova_compute[252253]: 2025-11-29 08:08:00.785 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:00.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 29 03:08:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 03:08:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.184 252257 DEBUG nova.virt.libvirt.vif [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1335756739',display_name='tempest-ServerDiskConfigTestJSON-server-1335756739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1335756739',id=85,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='750bde86c9c7473fbf7f0a6a3b16cec1',ramdisk_id='',reservation_id='r-mr1u36f8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-904422786',owner_user_name='tempest-ServerDiskConfigTestJSON-904422786-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:56Z,user_data=None,user_id='5a7b61623f854cf59636f192ab8af005',uuid=a28f7dd6-9c8c-46f4-9ce0-7d40194d9749,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.185 252257 DEBUG nova.network.os_vif_util [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converting VIF {"id": "18497509-f640-42ef-b25c-ac9f121ce0db", "address": "fa:16:3e:49:09:70", "network": {"id": "8665acc6-1650-4878-8ffd-84f079f13741", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1218253424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "750bde86c9c7473fbf7f0a6a3b16cec1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18497509-f6", "ovs_interfaceid": "18497509-f640-42ef-b25c-ac9f121ce0db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.186 252257 DEBUG nova.network.os_vif_util [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.187 252257 DEBUG os_vif [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.190 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18497509-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.191 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.194 252257 INFO os_vif [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:09:70,bridge_name='br-int',has_traffic_filtering=True,id=18497509-f640-42ef-b25c-ac9f121ce0db,network=Network(8665acc6-1650-4878-8ffd-84f079f13741),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18497509-f6')#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.194 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.195 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.271 252257 DEBUG oslo_concurrency.processutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964345231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.752 252257 DEBUG oslo_concurrency.processutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.759 252257 DEBUG nova.compute.provider_tree [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.778 252257 DEBUG nova.scheduler.client.report [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.798 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.816 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.934 252257 INFO nova.scheduler.client.report [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Deleted allocation for migration 475e32d2-8c8f-4918-aa47-f1847e144c60#033[00m
Nov 29 03:08:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:01.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:01 np0005539563 nova_compute[252253]: 2025-11-29 08:08:01.993 252257 DEBUG oslo_concurrency.lockutils [None req-484cf5cb-9a87-4afc-863d-936f5b95ab49 5a7b61623f854cf59636f192ab8af005 750bde86c9c7473fbf7f0a6a3b16cec1 - - default default] Lock "a28f7dd6-9c8c-46f4-9ce0-7d40194d9749" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 279 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.2 MiB/s wr, 315 op/s
Nov 29 03:08:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:03.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 280 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 269 op/s
Nov 29 03:08:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:04.913 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:05 np0005539563 nova_compute[252253]: 2025-11-29 08:08:05.205 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403670.2036712, a28f7dd6-9c8c-46f4-9ce0-7d40194d9749 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:05 np0005539563 nova_compute[252253]: 2025-11-29 08:08:05.205 252257 INFO nova.compute.manager [-] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:05 np0005539563 nova_compute[252253]: 2025-11-29 08:08:05.309 252257 DEBUG nova.compute.manager [None req-5c476ba9-df50-4378-88fd-ce0a50ec9ad1 - - - - - -] [instance: a28f7dd6-9c8c-46f4-9ce0-7d40194d9749] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:05 np0005539563 nova_compute[252253]: 2025-11-29 08:08:05.788 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 315 MiB data, 870 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 254 op/s
Nov 29 03:08:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 29 03:08:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 03:08:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 29 03:08:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:06.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:06 np0005539563 nova_compute[252253]: 2025-11-29 08:08:06.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 315 MiB data, 870 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 253 op/s
Nov 29 03:08:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:08.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:09.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.4 MiB/s wr, 53 op/s
Nov 29 03:08:10 np0005539563 nova_compute[252253]: 2025-11-29 08:08:10.790 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:10.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:11 np0005539563 nova_compute[252253]: 2025-11-29 08:08:11.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:11.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 278 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 03:08:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:08:12
Nov 29 03:08:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:08:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:08:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Nov 29 03:08:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:08:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:08:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 247 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Nov 29 03:08:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:15 np0005539563 podman[307109]: 2025-11-29 08:08:15.537339991 +0000 UTC m=+0.078776045 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 03:08:15 np0005539563 podman[307110]: 2025-11-29 08:08:15.566726277 +0000 UTC m=+0.111467651 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:08:15 np0005539563 podman[307111]: 2025-11-29 08:08:15.595521307 +0000 UTC m=+0.126552320 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:08:15 np0005539563 nova_compute[252253]: 2025-11-29 08:08:15.791 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:15.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 282 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Nov 29 03:08:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:16 np0005539563 nova_compute[252253]: 2025-11-29 08:08:16.804 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8696 writes, 38K keys, 8693 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 8696 writes, 8693 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1608 writes, 7277 keys, 1608 commit groups, 1.0 writes per commit group, ingest: 10.63 MB, 0.02 MB/s#012Interval WAL: 1608 writes, 1608 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.5      5.27              0.18        22    0.240       0      0       0.0       0.0#012  L6      1/0   10.51 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8     19.4     16.1     11.89              0.66        21    0.566    120K    12K       0.0       0.0#012 Sum      1/0   10.51 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     13.5     14.1     17.16              0.83        43    0.399    120K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.0     42.7     43.7      1.43              0.22        10    0.143     36K   3116       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     19.4     16.1     11.89              0.66        21    0.566    120K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.5      5.26              0.18        21    0.251       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.049, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.24 GB write, 0.07 MB/s write, 0.23 GB read, 0.06 MB/s read, 17.2 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 27.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000274 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1548,26.97 MB,8.87156%) FilterBlock(44,350.61 KB,0.112629%) IndexBlock(44,594.84 KB,0.191086%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:08:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:17.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 282 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 163 op/s
Nov 29 03:08:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 282 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Nov 29 03:08:20 np0005539563 nova_compute[252253]: 2025-11-29 08:08:20.801 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:20.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:21 np0005539563 nova_compute[252253]: 2025-11-29 08:08:21.807 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 247 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Nov 29 03:08:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:08:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003169784105713754 of space, bias 1.0, pg target 0.9509352317141262 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021695936278615297 of space, bias 1.0, pg target 0.650878088358459 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:08:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:23.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 247 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Nov 29 03:08:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:25 np0005539563 nova_compute[252253]: 2025-11-29 08:08:25.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:25.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 247 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 205 op/s
Nov 29 03:08:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:26 np0005539563 nova_compute[252253]: 2025-11-29 08:08:26.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:27.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:08:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2609822175' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:08:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:08:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2609822175' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:08:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 247 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 439 KiB/s wr, 121 op/s
Nov 29 03:08:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:28.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:28.837 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:28.838 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:08:28 np0005539563 nova_compute[252253]: 2025-11-29 08:08:28.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:29.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 230 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 439 KiB/s wr, 144 op/s
Nov 29 03:08:30 np0005539563 nova_compute[252253]: 2025-11-29 08:08:30.806 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:31 np0005539563 nova_compute[252253]: 2025-11-29 08:08:31.811 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:31.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 167 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 139 op/s
Nov 29 03:08:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:33.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 167 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 114 op/s
Nov 29 03:08:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:08:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093215457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:08:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:08:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093215457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:08:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:34.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:34.840 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:35 np0005539563 nova_compute[252253]: 2025-11-29 08:08:35.856 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:35.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 187 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Nov 29 03:08:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:36 np0005539563 nova_compute[252253]: 2025-11-29 08:08:36.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:36 np0005539563 nova_compute[252253]: 2025-11-29 08:08:36.812 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:36.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:37.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 187 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 03:08:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:38.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:39.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 195 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 2.0 MiB/s wr, 105 op/s
Nov 29 03:08:40 np0005539563 nova_compute[252253]: 2025-11-29 08:08:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:40 np0005539563 nova_compute[252253]: 2025-11-29 08:08:40.859 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:41 np0005539563 nova_compute[252253]: 2025-11-29 08:08:41.815 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 200 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.346 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.347 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.367 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.473 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.473 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.490 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.491 252257 INFO nova.compute.claims [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:08:42 np0005539563 nova_compute[252253]: 2025-11-29 08:08:42.666 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3296093370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.106 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.115 252257 DEBUG nova.compute.provider_tree [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.140 252257 DEBUG nova.scheduler.client.report [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.170 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.172 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.231 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.232 252257 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.262 252257 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.293 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.459 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.460 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.461 252257 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Creating image(s)#033[00m
Nov 29 03:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823946930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.492 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:08:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3823946930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.526 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.564 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.569 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.642 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.644 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.645 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.645 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.687 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.692 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 94df285b-7fb5-486d-9242-0743b6edc562_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.729 252257 DEBUG nova.policy [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8306d30b5b844909866bec7b9c8242d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e860226190f4eb8971376b16032da1b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.734 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:43 np0005539563 nova_compute[252253]: 2025-11-29 08:08:43.735 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:08:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 200 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.139 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 94df285b-7fb5-486d-9242-0743b6edc562_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.221 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] resizing rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.542 252257 DEBUG nova.objects.instance [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'migration_context' on Instance uuid 94df285b-7fb5-486d-9242-0743b6edc562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.565 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.566 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Ensure instance console log exists: /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.566 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.566 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.567 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.683 252257 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Successfully created port: cdddb514-c416-42d6-b6bb-5210934cf16e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.704 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.705 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:08:44 np0005539563 nova_compute[252253]: 2025-11-29 08:08:44.705 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:44.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314600850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.122 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.312 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.313 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=20.94293212890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.313 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.314 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.443 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 94df285b-7fb5-486d-9242-0743b6edc562 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.444 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.444 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.487 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.731 252257 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Successfully updated port: cdddb514-c416-42d6-b6bb-5210934cf16e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.753 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "refresh_cache-94df285b-7fb5-486d-9242-0743b6edc562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.754 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquired lock "refresh_cache-94df285b-7fb5-486d-9242-0743b6edc562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.754 252257 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.834 252257 DEBUG nova.compute.manager [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-changed-cdddb514-c416-42d6-b6bb-5210934cf16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.835 252257 DEBUG nova.compute.manager [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Refreshing instance network info cache due to event network-changed-cdddb514-c416-42d6-b6bb-5210934cf16e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.835 252257 DEBUG oslo_concurrency.lockutils [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-94df285b-7fb5-486d-9242-0743b6edc562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.898 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539563 nova_compute[252253]: 2025-11-29 08:08:45.953 252257 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:08:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609851083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:46 np0005539563 nova_compute[252253]: 2025-11-29 08:08:46.007 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:46.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:46 np0005539563 nova_compute[252253]: 2025-11-29 08:08:46.012 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:46 np0005539563 nova_compute[252253]: 2025-11-29 08:08:46.027 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:08:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 217 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 4.9 MiB/s wr, 173 op/s
Nov 29 03:08:46 np0005539563 nova_compute[252253]: 2025-11-29 08:08:46.064 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:08:46 np0005539563 nova_compute[252253]: 2025-11-29 08:08:46.064 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:46 np0005539563 podman[307474]: 2025-11-29 08:08:46.543925471 +0000 UTC m=+0.093736860 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:08:46 np0005539563 podman[307475]: 2025-11-29 08:08:46.545384301 +0000 UTC m=+0.090521334 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:08:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:46 np0005539563 podman[307476]: 2025-11-29 08:08:46.552543864 +0000 UTC m=+0.095349234 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:08:46 np0005539563 nova_compute[252253]: 2025-11-29 08:08:46.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:46.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.065 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.389 252257 DEBUG nova.network.neutron [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Updating instance_info_cache with network_info: [{"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.425 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Releasing lock "refresh_cache-94df285b-7fb5-486d-9242-0743b6edc562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.426 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Instance network_info: |[{"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.427 252257 DEBUG oslo_concurrency.lockutils [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-94df285b-7fb5-486d-9242-0743b6edc562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.428 252257 DEBUG nova.network.neutron [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Refreshing network info cache for port cdddb514-c416-42d6-b6bb-5210934cf16e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.433 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Start _get_guest_xml network_info=[{"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.439 252257 WARNING nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.444 252257 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.446 252257 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.450 252257 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.451 252257 DEBUG nova.virt.libvirt.host [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.453 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.454 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.454 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.455 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.455 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.456 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.456 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.457 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.457 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.458 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.458 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.459 252257 DEBUG nova.virt.hardware [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.465 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257482976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.923 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.969 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:47 np0005539563 nova_compute[252253]: 2025-11-29 08:08:47.976 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 217 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Nov 29 03:08:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185130790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.403 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.404 252257 DEBUG nova.virt.libvirt.vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1330031541',display_name='tempest-tempest.common.compute-instance-1330031541-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1330031541-2',id=90,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-01l5df52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:43Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=94df285b-7fb5-486d-9242-0743b6edc562,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.405 252257 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.406 252257 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.407 252257 DEBUG nova.objects.instance [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 94df285b-7fb5-486d-9242-0743b6edc562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.424 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <uuid>94df285b-7fb5-486d-9242-0743b6edc562</uuid>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <name>instance-0000005a</name>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:name>tempest-tempest.common.compute-instance-1330031541-2</nova:name>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:08:47</nova:creationTime>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:user uuid="f8306d30b5b844909866bec7b9c8242d">tempest-MultipleCreateTestJSON-36900569-project-member</nova:user>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:project uuid="8e860226190f4eb8971376b16032da1b">tempest-MultipleCreateTestJSON-36900569</nova:project>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <nova:port uuid="cdddb514-c416-42d6-b6bb-5210934cf16e">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <entry name="serial">94df285b-7fb5-486d-9242-0743b6edc562</entry>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <entry name="uuid">94df285b-7fb5-486d-9242-0743b6edc562</entry>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/94df285b-7fb5-486d-9242-0743b6edc562_disk">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/94df285b-7fb5-486d-9242-0743b6edc562_disk.config">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:bf:c0:f9"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <target dev="tapcdddb514-c4"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/console.log" append="off"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:08:48 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:08:48 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:08:48 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:08:48 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.425 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Preparing to wait for external event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.426 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.426 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.426 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.427 252257 DEBUG nova.virt.libvirt.vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1330031541',display_name='tempest-tempest.common.compute-instance-1330031541-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1330031541-2',id=90,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-01l5df52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempe
st-MultipleCreateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:43Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=94df285b-7fb5-486d-9242-0743b6edc562,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.427 252257 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.428 252257 DEBUG nova.network.os_vif_util [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.428 252257 DEBUG os_vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.429 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.429 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.430 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.434 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.434 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdddb514-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.435 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcdddb514-c4, col_values=(('external_ids', {'iface-id': 'cdddb514-c416-42d6-b6bb-5210934cf16e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:c0:f9', 'vm-uuid': '94df285b-7fb5-486d-9242-0743b6edc562'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:48 np0005539563 NetworkManager[48981]: <info>  [1764403728.4487] manager: (tapcdddb514-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.449 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.452 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.454 252257 INFO os_vif [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4')#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.504 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.505 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.506 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No VIF found with MAC fa:16:3e:bf:c0:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.507 252257 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Using config drive#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.546 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.707 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.707 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:08:48 np0005539563 nova_compute[252253]: 2025-11-29 08:08:48.708 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:48.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.017 252257 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Creating config drive at /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/disk.config#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.023 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_gsenht4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.171 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_gsenht4" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.198 252257 DEBUG nova.storage.rbd_utils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image 94df285b-7fb5-486d-9242-0743b6edc562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.202 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/disk.config 94df285b-7fb5-486d-9242-0743b6edc562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.411 252257 DEBUG oslo_concurrency.processutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/disk.config 94df285b-7fb5-486d-9242-0743b6edc562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.413 252257 INFO nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Deleting local config drive /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562/disk.config because it was imported into RBD.#033[00m
Nov 29 03:08:49 np0005539563 kernel: tapcdddb514-c4: entered promiscuous mode
Nov 29 03:08:49 np0005539563 NetworkManager[48981]: <info>  [1764403729.4712] manager: (tapcdddb514-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.863 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.866 252257 DEBUG nova.network.neutron [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Updated VIF entry in instance network info cache for port cdddb514-c416-42d6-b6bb-5210934cf16e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.867 252257 DEBUG nova.network.neutron [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Updating instance_info_cache with network_info: [{"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.896 252257 DEBUG oslo_concurrency.lockutils [req-55874f24-1b13-4fa0-a239-fe868dcd086e req-57be3cc1-3e30-4e9a-a98f-d5bf17f8d591 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-94df285b-7fb5-486d-9242-0743b6edc562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:49 np0005539563 nova_compute[252253]: 2025-11-29 08:08:49.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:49Z|00325|binding|INFO|Claiming lport cdddb514-c416-42d6-b6bb-5210934cf16e for this chassis.
Nov 29 03:08:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:49Z|00326|binding|INFO|cdddb514-c416-42d6-b6bb-5210934cf16e: Claiming fa:16:3e:bf:c0:f9 10.100.0.10
Nov 29 03:08:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.019 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:c0:f9 10.100.0.10'], port_security=['fa:16:3e:bf:c0:f9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '94df285b-7fb5-486d-9242-0743b6edc562', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=cdddb514-c416-42d6-b6bb-5210934cf16e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.020 158990 INFO neutron.agent.ovn.metadata.agent [-] Port cdddb514-c416-42d6-b6bb-5210934cf16e in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 bound to our chassis#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.023 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05#033[00m
Nov 29 03:08:50 np0005539563 systemd-udevd[307721]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.037 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d444de9-5604-4b84-b6fe-4676a55c2c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.038 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a4a6f7c-91 in ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.040 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a4a6f7c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.040 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a71ef5-a5f6-4f13-81cc-c4131f340ed6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.041 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[da44c234-fc4d-49a3-bbce-a5d7d19fde07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 systemd-machined[213024]: New machine qemu-37-instance-0000005a.
Nov 29 03:08:50 np0005539563 NetworkManager[48981]: <info>  [1764403730.0488] device (tapcdddb514-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:08:50 np0005539563 NetworkManager[48981]: <info>  [1764403730.0498] device (tapcdddb514-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:08:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 213 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 219 KiB/s rd, 3.9 MiB/s wr, 139 op/s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.060 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[98388e5b-ba14-43a7-a02b-d89d92f47769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 systemd[1]: Started Virtual Machine qemu-37-instance-0000005a.
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.091 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[21640310-01c5-4d92-a736-9cc2c06beaf9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.107 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:50Z|00327|binding|INFO|Setting lport cdddb514-c416-42d6-b6bb-5210934cf16e ovn-installed in OVS
Nov 29 03:08:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:50Z|00328|binding|INFO|Setting lport cdddb514-c416-42d6-b6bb-5210934cf16e up in Southbound
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.128 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[845d822f-5706-4704-abf2-b95d4aebfde9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 NetworkManager[48981]: <info>  [1764403730.1343] manager: (tap6a4a6f7c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.133 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6241f8ca-64e1-455e-a559-6a3eae088536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 systemd-udevd[307727]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.183 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[67fb791d-0cf4-4ac6-b257-29f337229785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.186 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1055f5ca-1878-4fea-9c54-90bb21373c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 NetworkManager[48981]: <info>  [1764403730.2139] device (tap6a4a6f7c-90): carrier: link connected
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.221 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[29ea3734-9576-4840-bc56-d911fa5a3dd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.240 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cf94fb31-c62c-4aeb-9baa-3b62d54ac993]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669798, 'reachable_time': 33120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307756, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.258 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8302df53-b896-4280-a707-ccc03f371b5a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:ede0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669798, 'tstamp': 669798}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307757, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.278 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[92690e72-09c6-477c-aa0e-4a825dd6629c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669798, 'reachable_time': 33120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307758, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.309 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c4f98b-ec1c-42ae-92ca-7cb06663b5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.376 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9eceaa-5fc5-4c63-9d3d-bd62b896bc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.378 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.379 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.379 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a4a6f7c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.381 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 NetworkManager[48981]: <info>  [1764403730.3822] manager: (tap6a4a6f7c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 29 03:08:50 np0005539563 kernel: tap6a4a6f7c-90: entered promiscuous mode
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.387 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a4a6f7c-90, col_values=(('external_ids', {'iface-id': 'b10f5520-b53f-45d0-9de3-4af0dc481ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.389 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:50Z|00329|binding|INFO|Releasing lport b10f5520-b53f-45d0-9de3-4af0dc481ad3 from this chassis (sb_readonly=0)
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.389 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.391 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.392 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d766967d-b241-492e-9df5-8691a57332b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.393 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:08:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:50.394 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'env', 'PROCESS_TAG=haproxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.480 252257 DEBUG nova.compute.manager [req-6423a855-23cb-4234-ba21-1bcdf3333925 req-a702bd1a-6d07-48b3-986f-2b78550fc443 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.481 252257 DEBUG oslo_concurrency.lockutils [req-6423a855-23cb-4234-ba21-1bcdf3333925 req-a702bd1a-6d07-48b3-986f-2b78550fc443 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.481 252257 DEBUG oslo_concurrency.lockutils [req-6423a855-23cb-4234-ba21-1bcdf3333925 req-a702bd1a-6d07-48b3-986f-2b78550fc443 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.481 252257 DEBUG oslo_concurrency.lockutils [req-6423a855-23cb-4234-ba21-1bcdf3333925 req-a702bd1a-6d07-48b3-986f-2b78550fc443 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:50 np0005539563 nova_compute[252253]: 2025-11-29 08:08:50.481 252257 DEBUG nova.compute.manager [req-6423a855-23cb-4234-ba21-1bcdf3333925 req-a702bd1a-6d07-48b3-986f-2b78550fc443 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Processing event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:08:50 np0005539563 podman[307790]: 2025-11-29 08:08:50.810654781 +0000 UTC m=+0.063368208 container create 265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:50 np0005539563 systemd[1]: Started libpod-conmon-265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209.scope.
Nov 29 03:08:50 np0005539563 podman[307790]: 2025-11-29 08:08:50.776034194 +0000 UTC m=+0.028747621 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:08:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:08:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ceee4b6ded03ecac99814e63922e5b1f7f6a0e27fa8b96aca4c7f7d453252b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:50 np0005539563 podman[307790]: 2025-11-29 08:08:50.931441354 +0000 UTC m=+0.184154851 container init 265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:08:50 np0005539563 podman[307790]: 2025-11-29 08:08:50.938395623 +0000 UTC m=+0.191109060 container start 265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:08:50 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [NOTICE]   (307811) : New worker (307813) forked
Nov 29 03:08:50 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [NOTICE]   (307811) : Loading success.
Nov 29 03:08:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 29 03:08:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 03:08:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.511 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.513 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403731.5121598, 94df285b-7fb5-486d-9242-0743b6edc562 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.514 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] VM Started (Lifecycle Event)#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.519 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.525 252257 INFO nova.virt.libvirt.driver [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Instance spawned successfully.#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.526 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.533 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.538 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.546 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.546 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.547 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.547 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.548 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.548 252257 DEBUG nova.virt.libvirt.driver [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.573 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.574 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403731.512295, 94df285b-7fb5-486d-9242-0743b6edc562 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.574 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.625 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.630 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403731.5167928, 94df285b-7fb5-486d-9242-0743b6edc562 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.630 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.670 252257 INFO nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Took 8.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.670 252257 DEBUG nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.671 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.683 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.718 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.740 252257 INFO nova.compute.manager [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Took 9.30 seconds to build instance.#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.759 252257 DEBUG oslo_concurrency.lockutils [None req-b7e079f9-0a47-4430-880e-2e26cbb68084 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:51 np0005539563 nova_compute[252253]: 2025-11-29 08:08:51.819 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 213 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Nov 29 03:08:52 np0005539563 nova_compute[252253]: 2025-11-29 08:08:52.647 252257 DEBUG nova.compute.manager [req-821a31d6-eed1-4bd4-aa50-a188d0aca3e5 req-6a6d6a55-f456-49a3-8c16-6e2bbcbb56d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:52 np0005539563 nova_compute[252253]: 2025-11-29 08:08:52.648 252257 DEBUG oslo_concurrency.lockutils [req-821a31d6-eed1-4bd4-aa50-a188d0aca3e5 req-6a6d6a55-f456-49a3-8c16-6e2bbcbb56d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:52 np0005539563 nova_compute[252253]: 2025-11-29 08:08:52.649 252257 DEBUG oslo_concurrency.lockutils [req-821a31d6-eed1-4bd4-aa50-a188d0aca3e5 req-6a6d6a55-f456-49a3-8c16-6e2bbcbb56d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:52 np0005539563 nova_compute[252253]: 2025-11-29 08:08:52.649 252257 DEBUG oslo_concurrency.lockutils [req-821a31d6-eed1-4bd4-aa50-a188d0aca3e5 req-6a6d6a55-f456-49a3-8c16-6e2bbcbb56d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:52 np0005539563 nova_compute[252253]: 2025-11-29 08:08:52.650 252257 DEBUG nova.compute.manager [req-821a31d6-eed1-4bd4-aa50-a188d0aca3e5 req-6a6d6a55-f456-49a3-8c16-6e2bbcbb56d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] No waiting events found dispatching network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:52 np0005539563 nova_compute[252253]: 2025-11-29 08:08:52.650 252257 WARNING nova.compute.manager [req-821a31d6-eed1-4bd4-aa50-a188d0aca3e5 req-6a6d6a55-f456-49a3-8c16-6e2bbcbb56d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received unexpected event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:08:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:52.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:08:53 np0005539563 nova_compute[252253]: 2025-11-29 08:08:53.449 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:54.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 214 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Nov 29 03:08:54 np0005539563 nova_compute[252253]: 2025-11-29 08:08:54.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:08:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:54.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.603 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.604 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.604 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.605 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.606 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.608 252257 INFO nova.compute.manager [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Terminating instance#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.610 252257 DEBUG nova.compute.manager [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:55 np0005539563 kernel: tapcdddb514-c4 (unregistering): left promiscuous mode
Nov 29 03:08:55 np0005539563 NetworkManager[48981]: <info>  [1764403735.6581] device (tapcdddb514-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.665 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:55Z|00330|binding|INFO|Releasing lport cdddb514-c416-42d6-b6bb-5210934cf16e from this chassis (sb_readonly=0)
Nov 29 03:08:55 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:55Z|00331|binding|INFO|Setting lport cdddb514-c416-42d6-b6bb-5210934cf16e down in Southbound
Nov 29 03:08:55 np0005539563 ovn_controller[148841]: 2025-11-29T08:08:55Z|00332|binding|INFO|Removing iface tapcdddb514-c4 ovn-installed in OVS
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.681 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:c0:f9 10.100.0.10'], port_security=['fa:16:3e:bf:c0:f9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '94df285b-7fb5-486d-9242-0743b6edc562', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=cdddb514-c416-42d6-b6bb-5210934cf16e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.683 158990 INFO neutron.agent.ovn.metadata.agent [-] Port cdddb514-c416-42d6-b6bb-5210934cf16e in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 unbound from our chassis#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.686 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.688 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[21ab715d-86af-4555-8674-79e5e3d888ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.690 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace which is not needed anymore#033[00m
Nov 29 03:08:55 np0005539563 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 29 03:08:55 np0005539563 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005a.scope: Consumed 5.673s CPU time.
Nov 29 03:08:55 np0005539563 systemd-machined[213024]: Machine qemu-37-instance-0000005a terminated.
Nov 29 03:08:55 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [NOTICE]   (307811) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:55 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [NOTICE]   (307811) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:55 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [WARNING]  (307811) : Exiting Master process...
Nov 29 03:08:55 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [WARNING]  (307811) : Exiting Master process...
Nov 29 03:08:55 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [ALERT]    (307811) : Current worker (307813) exited with code 143 (Terminated)
Nov 29 03:08:55 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[307807]: [WARNING]  (307811) : All workers exited. Exiting... (0)
Nov 29 03:08:55 np0005539563 systemd[1]: libpod-265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209.scope: Deactivated successfully.
Nov 29 03:08:55 np0005539563 podman[307891]: 2025-11-29 08:08:55.820406774 +0000 UTC m=+0.049447570 container died 265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:08:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.846 252257 INFO nova.virt.libvirt.driver [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Instance destroyed successfully.#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.848 252257 DEBUG nova.objects.instance [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'resources' on Instance uuid 94df285b-7fb5-486d-9242-0743b6edc562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4ceee4b6ded03ecac99814e63922e5b1f7f6a0e27fa8b96aca4c7f7d453252b2-merged.mount: Deactivated successfully.
Nov 29 03:08:55 np0005539563 podman[307891]: 2025-11-29 08:08:55.862425292 +0000 UTC m=+0.091466078 container cleanup 265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.863 252257 DEBUG nova.virt.libvirt.vif [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1330031541',display_name='tempest-tempest.common.compute-instance-1330031541-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1330031541-2',id=90,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T08:08:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-01l5df52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:51Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=94df285b-7fb5-486d-9242-0743b6edc562,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.864 252257 DEBUG nova.network.os_vif_util [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "cdddb514-c416-42d6-b6bb-5210934cf16e", "address": "fa:16:3e:bf:c0:f9", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdddb514-c4", "ovs_interfaceid": "cdddb514-c416-42d6-b6bb-5210934cf16e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.865 252257 DEBUG nova.network.os_vif_util [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.866 252257 DEBUG os_vif [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.868 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdddb514-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.871 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.872 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.874 252257 INFO os_vif [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c0:f9,bridge_name='br-int',has_traffic_filtering=True,id=cdddb514-c416-42d6-b6bb-5210934cf16e,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdddb514-c4')#033[00m
Nov 29 03:08:55 np0005539563 systemd[1]: libpod-conmon-265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209.scope: Deactivated successfully.
Nov 29 03:08:55 np0005539563 podman[307928]: 2025-11-29 08:08:55.929108009 +0000 UTC m=+0.045877454 container remove 265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.934 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[91c47ca8-a77c-4d67-9267-0afe2e39109f]: (4, ('Sat Nov 29 08:08:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209)\n265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209\nSat Nov 29 08:08:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209)\n265206f0a5f26e88dfd7989e743b5d59f7052c23b04fd31b0bbeaec009f9c209\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.936 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3dccb1-49b0-42cb-bf5d-b99e74ed1f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.937 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:55 np0005539563 kernel: tap6a4a6f7c-90: left promiscuous mode
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.940 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 nova_compute[252253]: 2025-11-29 08:08:55.953 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.956 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d5876f28-a87e-4798-a3dc-ba1ce13f77fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.979 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[101c2cba-0f43-48ee-92e2-25d04599c0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.980 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52f32895-5b07-4b54-afe3-ee28345c69ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.994 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8f45bf25-6e59-43a2-b4be-e989a8193421]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669789, 'reachable_time': 37859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307965, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.997 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:08:55.997 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[53e3bd63-1826-4c0c-8694-cc3396af9058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:55 np0005539563 systemd[1]: run-netns-ovnmeta\x2d6a4a6f7c\x2d9da4\x2d4d0a\x2db32b\x2d578ab4776e05.mount: Deactivated successfully.
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.001 252257 DEBUG nova.compute.manager [req-fc2d17b4-ff61-4656-bdf8-f84a5f35279a req-0804c002-67e9-479d-a31a-03c4ebb2bca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-vif-unplugged-cdddb514-c416-42d6-b6bb-5210934cf16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.002 252257 DEBUG oslo_concurrency.lockutils [req-fc2d17b4-ff61-4656-bdf8-f84a5f35279a req-0804c002-67e9-479d-a31a-03c4ebb2bca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.002 252257 DEBUG oslo_concurrency.lockutils [req-fc2d17b4-ff61-4656-bdf8-f84a5f35279a req-0804c002-67e9-479d-a31a-03c4ebb2bca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.002 252257 DEBUG oslo_concurrency.lockutils [req-fc2d17b4-ff61-4656-bdf8-f84a5f35279a req-0804c002-67e9-479d-a31a-03c4ebb2bca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.002 252257 DEBUG nova.compute.manager [req-fc2d17b4-ff61-4656-bdf8-f84a5f35279a req-0804c002-67e9-479d-a31a-03c4ebb2bca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] No waiting events found dispatching network-vif-unplugged-cdddb514-c416-42d6-b6bb-5210934cf16e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.003 252257 DEBUG nova.compute.manager [req-fc2d17b4-ff61-4656-bdf8-f84a5f35279a req-0804c002-67e9-479d-a31a-03c4ebb2bca3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-vif-unplugged-cdddb514-c416-42d6-b6bb-5210934cf16e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:08:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 214 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.0 MiB/s wr, 256 op/s
Nov 29 03:08:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:56.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.992 252257 INFO nova.virt.libvirt.driver [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Deleting instance files /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562_del#033[00m
Nov 29 03:08:56 np0005539563 nova_compute[252253]: 2025-11-29 08:08:56.993 252257 INFO nova.virt.libvirt.driver [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Deletion of /var/lib/nova/instances/94df285b-7fb5-486d-9242-0743b6edc562_del complete#033[00m
Nov 29 03:08:57 np0005539563 nova_compute[252253]: 2025-11-29 08:08:57.057 252257 INFO nova.compute.manager [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Took 1.45 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:57 np0005539563 nova_compute[252253]: 2025-11-29 08:08:57.057 252257 DEBUG oslo.service.loopingcall [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:57 np0005539563 nova_compute[252253]: 2025-11-29 08:08:57.058 252257 DEBUG nova.compute.manager [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:57 np0005539563 nova_compute[252253]: 2025-11-29 08:08:57.058 252257 DEBUG nova.network.neutron [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 214 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.0 MiB/s wr, 256 op/s
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.111 252257 DEBUG nova.compute.manager [req-e0b8d139-73f1-4608-9f6c-88063b0446a8 req-d4927cee-373d-4441-99ef-ab95f1eb3fe8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.111 252257 DEBUG oslo_concurrency.lockutils [req-e0b8d139-73f1-4608-9f6c-88063b0446a8 req-d4927cee-373d-4441-99ef-ab95f1eb3fe8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "94df285b-7fb5-486d-9242-0743b6edc562-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.111 252257 DEBUG oslo_concurrency.lockutils [req-e0b8d139-73f1-4608-9f6c-88063b0446a8 req-d4927cee-373d-4441-99ef-ab95f1eb3fe8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.112 252257 DEBUG oslo_concurrency.lockutils [req-e0b8d139-73f1-4608-9f6c-88063b0446a8 req-d4927cee-373d-4441-99ef-ab95f1eb3fe8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.112 252257 DEBUG nova.compute.manager [req-e0b8d139-73f1-4608-9f6c-88063b0446a8 req-d4927cee-373d-4441-99ef-ab95f1eb3fe8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] No waiting events found dispatching network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.112 252257 WARNING nova.compute.manager [req-e0b8d139-73f1-4608-9f6c-88063b0446a8 req-d4927cee-373d-4441-99ef-ab95f1eb3fe8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received unexpected event network-vif-plugged-cdddb514-c416-42d6-b6bb-5210934cf16e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:08:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 29 03:08:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 03:08:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.167 252257 DEBUG nova.network.neutron [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.197 252257 INFO nova.compute.manager [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.249 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.250 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.310 252257 DEBUG oslo_concurrency.processutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3954315581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.776 252257 DEBUG oslo_concurrency.processutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.783 252257 DEBUG nova.compute.provider_tree [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.821 252257 DEBUG nova.scheduler.client.report [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:08:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:08:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:58.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.856 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.909 252257 INFO nova.scheduler.client.report [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Deleted allocations for instance 94df285b-7fb5-486d-9242-0743b6edc562
Nov 29 03:08:58 np0005539563 nova_compute[252253]: 2025-11-29 08:08:58.990 252257 DEBUG oslo_concurrency.lockutils [None req-ced696b2-a3e7-47f3-9749-945093272a4e f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "94df285b-7fb5-486d-9242-0743b6edc562" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:59 np0005539563 nova_compute[252253]: 2025-11-29 08:08:59.771 252257 DEBUG nova.compute.manager [req-c80ef090-a62f-4a8b-92b9-ddee7afc339b req-da07f070-97e7-46e0-ac92-29c99f0e16f3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Received event network-vif-deleted-cdddb514-c416-42d6-b6bb-5210934cf16e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 191 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 18 KiB/s wr, 309 op/s
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ef3d26c2-7541-41f5-b1ab-c41144673ff5 does not exist
Nov 29 03:09:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5f885f47-791e-408c-a53b-b52c213c2028 does not exist
Nov 29 03:09:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4eabd917-efc3-432e-a924-5318df6e9d85 does not exist
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:09:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:09:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:00.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:00 np0005539563 nova_compute[252253]: 2025-11-29 08:09:00.870 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:00 np0005539563 podman[308265]: 2025-11-29 08:09:00.874491858 +0000 UTC m=+0.036439758 container create 3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:09:00 np0005539563 systemd[1]: Started libpod-conmon-3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6.scope.
Nov 29 03:09:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:00 np0005539563 podman[308265]: 2025-11-29 08:09:00.858533295 +0000 UTC m=+0.020481205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:00 np0005539563 podman[308265]: 2025-11-29 08:09:00.956086868 +0000 UTC m=+0.118034778 container init 3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:09:00 np0005539563 podman[308265]: 2025-11-29 08:09:00.962238745 +0000 UTC m=+0.124186635 container start 3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclaren, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:00 np0005539563 podman[308265]: 2025-11-29 08:09:00.965564845 +0000 UTC m=+0.127512735 container attach 3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:09:00 np0005539563 goofy_mclaren[308283]: 167 167
Nov 29 03:09:00 np0005539563 systemd[1]: libpod-3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6.scope: Deactivated successfully.
Nov 29 03:09:00 np0005539563 podman[308265]: 2025-11-29 08:09:00.96758617 +0000 UTC m=+0.129534060 container died 3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:09:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-81f9e77b3e5fd3c8b17e62adef71ca13460b7bf0835fdc4988b46506d30087e7-merged.mount: Deactivated successfully.
Nov 29 03:09:01 np0005539563 podman[308265]: 2025-11-29 08:09:01.00823206 +0000 UTC m=+0.170179960 container remove 3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:01 np0005539563 systemd[1]: libpod-conmon-3ce1b992e622dfca06b785d92b50b9270d0f50025e46340c2cc8bbb0f07dcfb6.scope: Deactivated successfully.
Nov 29 03:09:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:01 np0005539563 podman[308306]: 2025-11-29 08:09:01.190369885 +0000 UTC m=+0.055522955 container create 6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lederberg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:01 np0005539563 systemd[1]: Started libpod-conmon-6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e.scope.
Nov 29 03:09:01 np0005539563 podman[308306]: 2025-11-29 08:09:01.163775235 +0000 UTC m=+0.028928385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a2bf96cc015b40279f0e03df96abc96a4909f3c9b50ac59a99e69379d618a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a2bf96cc015b40279f0e03df96abc96a4909f3c9b50ac59a99e69379d618a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a2bf96cc015b40279f0e03df96abc96a4909f3c9b50ac59a99e69379d618a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a2bf96cc015b40279f0e03df96abc96a4909f3c9b50ac59a99e69379d618a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a2bf96cc015b40279f0e03df96abc96a4909f3c9b50ac59a99e69379d618a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:01 np0005539563 podman[308306]: 2025-11-29 08:09:01.298845674 +0000 UTC m=+0.163998744 container init 6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:09:01 np0005539563 podman[308306]: 2025-11-29 08:09:01.30716895 +0000 UTC m=+0.172322030 container start 6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:09:01 np0005539563 podman[308306]: 2025-11-29 08:09:01.311363373 +0000 UTC m=+0.176516433 container attach 6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:09:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:01 np0005539563 nova_compute[252253]: 2025-11-29 08:09:01.875 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 121 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 19 KiB/s wr, 348 op/s
Nov 29 03:09:02 np0005539563 stupefied_lederberg[308322]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:09:02 np0005539563 stupefied_lederberg[308322]: --> relative data size: 1.0
Nov 29 03:09:02 np0005539563 stupefied_lederberg[308322]: --> All data devices are unavailable
Nov 29 03:09:02 np0005539563 systemd[1]: libpod-6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e.scope: Deactivated successfully.
Nov 29 03:09:02 np0005539563 podman[308306]: 2025-11-29 08:09:02.099199869 +0000 UTC m=+0.964352919 container died 6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:09:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e1a2bf96cc015b40279f0e03df96abc96a4909f3c9b50ac59a99e69379d618a6-merged.mount: Deactivated successfully.
Nov 29 03:09:02 np0005539563 podman[308306]: 2025-11-29 08:09:02.152337739 +0000 UTC m=+1.017490779 container remove 6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:09:02 np0005539563 systemd[1]: libpod-conmon-6c02b944a984763455452a2bbf3a315ff04579cb75e742204c52226f631ea54e.scope: Deactivated successfully.
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.164 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.166 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.186 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.287 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.287 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.295 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.295 252257 INFO nova.compute.claims [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.465 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.751959234 +0000 UTC m=+0.049672937 container create 33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:09:02 np0005539563 systemd[1]: Started libpod-conmon-33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3.scope.
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.788 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.789 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.729980959 +0000 UTC m=+0.027694742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.833947866 +0000 UTC m=+0.131661569 container init 33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.839 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.840462152 +0000 UTC m=+0.138175845 container start 33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:09:02 np0005539563 awesome_visvesvaraya[308526]: 167 167
Nov 29 03:09:02 np0005539563 systemd[1]: libpod-33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3.scope: Deactivated successfully.
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.846963339 +0000 UTC m=+0.144677062 container attach 33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.847706178 +0000 UTC m=+0.145419881 container died 33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:09:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:02.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-38fc2bd3ea5167815ceec45c4e50bf394773a572625f9eada6482c07941afce9-merged.mount: Deactivated successfully.
Nov 29 03:09:02 np0005539563 podman[308510]: 2025-11-29 08:09:02.88172402 +0000 UTC m=+0.179437713 container remove 33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:09:02 np0005539563 systemd[1]: libpod-conmon-33e59e5c898b50833c1a4a0a03332aef21e080999178f0071252a8f2cadc60d3.scope: Deactivated successfully.
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.926 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878870364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.950 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.957 252257 DEBUG nova.compute.provider_tree [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.973 252257 DEBUG nova.scheduler.client.report [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.995 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.995 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:02 np0005539563 nova_compute[252253]: 2025-11-29 08:09:02.997 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.003 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.004 252257 INFO nova.compute.claims [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:09:03 np0005539563 podman[308554]: 2025-11-29 08:09:03.063072574 +0000 UTC m=+0.040092718 container create 80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.078 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.079 252257 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:03 np0005539563 systemd[1]: Started libpod-conmon-80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959.scope.
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.112 252257 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.134 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:03 np0005539563 podman[308554]: 2025-11-29 08:09:03.044330505 +0000 UTC m=+0.021350669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d685a80244ac8f58077e6deb677914d5d3b9db5979964dfac89be1aa171950cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d685a80244ac8f58077e6deb677914d5d3b9db5979964dfac89be1aa171950cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d685a80244ac8f58077e6deb677914d5d3b9db5979964dfac89be1aa171950cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d685a80244ac8f58077e6deb677914d5d3b9db5979964dfac89be1aa171950cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:03 np0005539563 podman[308554]: 2025-11-29 08:09:03.156092584 +0000 UTC m=+0.133112778 container init 80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:09:03 np0005539563 podman[308554]: 2025-11-29 08:09:03.161584552 +0000 UTC m=+0.138604696 container start 80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:03 np0005539563 podman[308554]: 2025-11-29 08:09:03.164407199 +0000 UTC m=+0.141427353 container attach 80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.196 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.298 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.300 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.300 252257 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Creating image(s)#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.336 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.363 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.392 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.395 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.426 252257 DEBUG nova.policy [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8306d30b5b844909866bec7b9c8242d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e860226190f4eb8971376b16032da1b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.468 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.469 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.469 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.470 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.501 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.506 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ca260730-278e-41c5-9aae-4825e9497dcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1084199128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.609 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.618 252257 DEBUG nova.compute.provider_tree [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.647 252257 DEBUG nova.scheduler.client.report [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.685 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.686 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.759 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.759 252257 DEBUG nova.network.neutron [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.782 252257 INFO nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.797 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.811 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ca260730-278e-41c5-9aae-4825e9497dcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]: {
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:    "0": [
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:        {
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "devices": [
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "/dev/loop3"
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            ],
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "lv_name": "ceph_lv0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "lv_size": "7511998464",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "name": "ceph_lv0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "tags": {
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.922 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] resizing rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.cluster_name": "ceph",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.crush_device_class": "",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.encrypted": "0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.osd_id": "0",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.type": "block",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:                "ceph.vdo": "0"
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            },
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "type": "block",
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:            "vg_name": "ceph_vg0"
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:        }
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]:    ]
Nov 29 03:09:03 np0005539563 jovial_haibt[308571]: }
Nov 29 03:09:03 np0005539563 systemd[1]: libpod-80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959.scope: Deactivated successfully.
Nov 29 03:09:03 np0005539563 podman[308554]: 2025-11-29 08:09:03.947283499 +0000 UTC m=+0.924303693 container died 80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.976 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.978 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:03 np0005539563 nova_compute[252253]: 2025-11-29 08:09:03.978 252257 INFO nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Creating image(s)#033[00m
Nov 29 03:09:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d685a80244ac8f58077e6deb677914d5d3b9db5979964dfac89be1aa171950cd-merged.mount: Deactivated successfully.
Nov 29 03:09:04 np0005539563 podman[308554]: 2025-11-29 08:09:04.011510049 +0000 UTC m=+0.988530193 container remove 80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.018 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:04 np0005539563 systemd[1]: libpod-conmon-80fdf0a6eae307f363ebc9d281e9716483e714e5dc1c66b9183630484a526959.scope: Deactivated successfully.
Nov 29 03:09:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:04.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.059 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 102 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.3 KiB/s wr, 325 op/s
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.093 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.098 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.130 252257 DEBUG nova.policy [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59f60b6ae5304ccbbe873550b6e62e81', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a717411af66b4c23a4cc35a3803ff3b6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.209 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.210 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.211 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.211 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.244 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.251 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ec7136d6-4735-49a5-b788-f051bf09a83d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.310 252257 DEBUG nova.objects.instance [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'migration_context' on Instance uuid ca260730-278e-41c5-9aae-4825e9497dcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.404 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.405 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Ensure instance console log exists: /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.406 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.407 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.407 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.618 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ec7136d6-4735-49a5-b788-f051bf09a83d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.697 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] resizing rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.744592391 +0000 UTC m=+0.034602708 container create 16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:09:04 np0005539563 systemd[1]: Started libpod-conmon-16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5.scope.
Nov 29 03:09:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.804 252257 DEBUG nova.objects.instance [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lazy-loading 'migration_context' on Instance uuid ec7136d6-4735-49a5-b788-f051bf09a83d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.806531599 +0000 UTC m=+0.096541936 container init 16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.811921666 +0000 UTC m=+0.101931983 container start 16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.814938667 +0000 UTC m=+0.104949004 container attach 16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:09:04 np0005539563 festive_colden[309100]: 167 167
Nov 29 03:09:04 np0005539563 systemd[1]: libpod-16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5.scope: Deactivated successfully.
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.816475349 +0000 UTC m=+0.106485666 container died 16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.817 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.818 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Ensure instance console log exists: /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.819 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.819 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:04 np0005539563 nova_compute[252253]: 2025-11-29 08:09:04.819 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.730234963 +0000 UTC m=+0.020245300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-86d56bde31e6e1c1c4562d49e0472a0d9634e9b94f9844cb75db741ffaff8819-merged.mount: Deactivated successfully.
Nov 29 03:09:04 np0005539563 podman[309050]: 2025-11-29 08:09:04.855158118 +0000 UTC m=+0.145168435 container remove 16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:09:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:04.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:04 np0005539563 systemd[1]: libpod-conmon-16a52c89eadc25fcec04fd6c24d42e4d6d0c0cdf7a26b6fc10a7347b7ba901e5.scope: Deactivated successfully.
Nov 29 03:09:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:05 np0005539563 podman[309125]: 2025-11-29 08:09:05.015354398 +0000 UTC m=+0.043779368 container create 452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:05 np0005539563 systemd[1]: Started libpod-conmon-452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c.scope.
Nov 29 03:09:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:05 np0005539563 podman[309125]: 2025-11-29 08:09:04.995838549 +0000 UTC m=+0.024263499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec121faad8c70c47ea03b3345bcf614bce95d0bb7d4cb52f4745b88695be5eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:05 np0005539563 nova_compute[252253]: 2025-11-29 08:09:05.091 252257 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Successfully created port: fed1216e-fe85-481a-93a1-18cdd79832c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec121faad8c70c47ea03b3345bcf614bce95d0bb7d4cb52f4745b88695be5eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec121faad8c70c47ea03b3345bcf614bce95d0bb7d4cb52f4745b88695be5eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec121faad8c70c47ea03b3345bcf614bce95d0bb7d4cb52f4745b88695be5eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:05 np0005539563 podman[309125]: 2025-11-29 08:09:05.109872278 +0000 UTC m=+0.138297208 container init 452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:09:05 np0005539563 podman[309125]: 2025-11-29 08:09:05.116305583 +0000 UTC m=+0.144730523 container start 452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:05 np0005539563 podman[309125]: 2025-11-29 08:09:05.120132576 +0000 UTC m=+0.148557506 container attach 452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:09:05 np0005539563 nova_compute[252253]: 2025-11-29 08:09:05.922 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]: {
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:        "osd_id": 0,
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:        "type": "bluestore"
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]:    }
Nov 29 03:09:05 np0005539563 laughing_driscoll[309140]: }
Nov 29 03:09:05 np0005539563 systemd[1]: libpod-452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c.scope: Deactivated successfully.
Nov 29 03:09:05 np0005539563 podman[309125]: 2025-11-29 08:09:05.993983732 +0000 UTC m=+1.022408692 container died 452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:09:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ec121faad8c70c47ea03b3345bcf614bce95d0bb7d4cb52f4745b88695be5eb9-merged.mount: Deactivated successfully.
Nov 29 03:09:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:06 np0005539563 podman[309125]: 2025-11-29 08:09:06.057619986 +0000 UTC m=+1.086044916 container remove 452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:09:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 148 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.6 MiB/s wr, 218 op/s
Nov 29 03:09:06 np0005539563 systemd[1]: libpod-conmon-452f7df12b5987afa7ed8b693b6bc63bb50de0a65fa7dc7e2a95828a83d00e7c.scope: Deactivated successfully.
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 879176d4-74e4-4201-8a75-57c6f62de947 does not exist
Nov 29 03:09:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 81a36891-a358-4936-b85f-7f7c7539d830 does not exist
Nov 29 03:09:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cacb5a9c-c0cc-49c0-9f5d-1b35b2f0690a does not exist
Nov 29 03:09:06 np0005539563 nova_compute[252253]: 2025-11-29 08:09:06.168 252257 DEBUG nova.network.neutron [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Successfully created port: 84902e3f-6e9d-45fd-88b9-3e367b4e1870 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 03:09:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 29 03:09:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:06.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:06 np0005539563 nova_compute[252253]: 2025-11-29 08:09:06.876 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.243 252257 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Successfully updated port: fed1216e-fe85-481a-93a1-18cdd79832c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.267 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "refresh_cache-ca260730-278e-41c5-9aae-4825e9497dcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.268 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquired lock "refresh_cache-ca260730-278e-41c5-9aae-4825e9497dcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.268 252257 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.416 252257 DEBUG nova.compute.manager [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-changed-fed1216e-fe85-481a-93a1-18cdd79832c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.416 252257 DEBUG nova.compute.manager [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Refreshing instance network info cache due to event network-changed-fed1216e-fe85-481a-93a1-18cdd79832c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.417 252257 DEBUG oslo_concurrency.lockutils [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ca260730-278e-41c5-9aae-4825e9497dcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.565 252257 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.899 252257 DEBUG nova.network.neutron [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Successfully updated port: 84902e3f-6e9d-45fd-88b9-3e367b4e1870 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.945 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.946 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquired lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:07 np0005539563 nova_compute[252253]: 2025-11-29 08:09:07.946 252257 DEBUG nova.network.neutron [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:08.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 148 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 4.6 MiB/s wr, 186 op/s
Nov 29 03:09:08 np0005539563 nova_compute[252253]: 2025-11-29 08:09:08.197 252257 DEBUG nova.network.neutron [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:08.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.064 252257 DEBUG nova.network.neutron [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Updating instance_info_cache with network_info: [{"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.096 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Releasing lock "refresh_cache-ca260730-278e-41c5-9aae-4825e9497dcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.097 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Instance network_info: |[{"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.098 252257 DEBUG oslo_concurrency.lockutils [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ca260730-278e-41c5-9aae-4825e9497dcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.099 252257 DEBUG nova.network.neutron [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Refreshing network info cache for port fed1216e-fe85-481a-93a1-18cdd79832c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.104 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Start _get_guest_xml network_info=[{"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.112 252257 WARNING nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.121 252257 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.122 252257 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.130 252257 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.131 252257 DEBUG nova.virt.libvirt.host [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.133 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.133 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.134 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.135 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.135 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.136 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.136 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.136 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.137 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.137 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.138 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.138 252257 DEBUG nova.virt.hardware [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.143 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.521 252257 DEBUG nova.compute.manager [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-changed-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.521 252257 DEBUG nova.compute.manager [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Refreshing instance network info cache due to event network-changed-84902e3f-6e9d-45fd-88b9-3e367b4e1870. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.522 252257 DEBUG oslo_concurrency.lockutils [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450969542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.587 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.619 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.622 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.852 252257 DEBUG nova.network.neutron [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Updating instance_info_cache with network_info: [{"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.874 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Releasing lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.875 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance network_info: |[{"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.875 252257 DEBUG oslo_concurrency.lockutils [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.876 252257 DEBUG nova.network.neutron [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Refreshing network info cache for port 84902e3f-6e9d-45fd-88b9-3e367b4e1870 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.878 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Start _get_guest_xml network_info=[{"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.882 252257 WARNING nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.886 252257 DEBUG nova.virt.libvirt.host [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.887 252257 DEBUG nova.virt.libvirt.host [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.890 252257 DEBUG nova.virt.libvirt.host [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.890 252257 DEBUG nova.virt.libvirt.host [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.891 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.891 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.892 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.892 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.892 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.892 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.893 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.893 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.893 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.893 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.893 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.894 252257 DEBUG nova.virt.hardware [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:09 np0005539563 nova_compute[252253]: 2025-11-29 08:09:09.896 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288648728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:10.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.056 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.058 252257 DEBUG nova.virt.libvirt.vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1728114398',display_name='tempest-MultipleCreateTestJSON-server-1728114398-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1728114398-1',id=91,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-cjh5wutt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreat
eTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:03Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=ca260730-278e-41c5-9aae-4825e9497dcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.059 252257 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.060 252257 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.062 252257 DEBUG nova.objects.instance [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'pci_devices' on Instance uuid ca260730-278e-41c5-9aae-4825e9497dcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 186 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 6.9 MiB/s wr, 199 op/s
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.078 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <uuid>ca260730-278e-41c5-9aae-4825e9497dcc</uuid>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <name>instance-0000005b</name>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:name>tempest-MultipleCreateTestJSON-server-1728114398-1</nova:name>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:09:09</nova:creationTime>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:user uuid="f8306d30b5b844909866bec7b9c8242d">tempest-MultipleCreateTestJSON-36900569-project-member</nova:user>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:project uuid="8e860226190f4eb8971376b16032da1b">tempest-MultipleCreateTestJSON-36900569</nova:project>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:port uuid="fed1216e-fe85-481a-93a1-18cdd79832c2">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="serial">ca260730-278e-41c5-9aae-4825e9497dcc</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="uuid">ca260730-278e-41c5-9aae-4825e9497dcc</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ca260730-278e-41c5-9aae-4825e9497dcc_disk">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ca260730-278e-41c5-9aae-4825e9497dcc_disk.config">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:b9:97:ae"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <target dev="tapfed1216e-fe"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/console.log" append="off"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:09:10 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:09:10 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.080 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Preparing to wait for external event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.081 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.082 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.082 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.083 252257 DEBUG nova.virt.libvirt.vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1728114398',display_name='tempest-MultipleCreateTestJSON-server-1728114398-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1728114398-1',id=91,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-cjh5wutt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:03Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=ca260730-278e-41c5-9aae-4825e9497dcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.084 252257 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.085 252257 DEBUG nova.network.os_vif_util [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.085 252257 DEBUG os_vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.087 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.087 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.088 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.094 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfed1216e-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.095 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfed1216e-fe, col_values=(('external_ids', {'iface-id': 'fed1216e-fe85-481a-93a1-18cdd79832c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:97:ae', 'vm-uuid': 'ca260730-278e-41c5-9aae-4825e9497dcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.097 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 NetworkManager[48981]: <info>  [1764403750.0979] manager: (tapfed1216e-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.099 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.105 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.107 252257 INFO os_vif [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe')#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.200 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.200 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.201 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] No VIF found with MAC fa:16:3e:b9:97:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.201 252257 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Using config drive#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.231 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963408332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.337 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.363 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.367 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801509104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.804 252257 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Creating config drive at /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/disk.config#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.815 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2zkz1b_g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.854 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403735.8444903, 94df285b-7fb5-486d-9242-0743b6edc562 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.855 252257 INFO nova.compute.manager [-] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.861 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.864 252257 DEBUG nova.network.neutron [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Updated VIF entry in instance network info cache for port fed1216e-fe85-481a-93a1-18cdd79832c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.865 252257 DEBUG nova.network.neutron [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Updating instance_info_cache with network_info: [{"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.869 252257 DEBUG nova.virt.libvirt.vif [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-65778651',display_name='tempest-InstanceActionsTestJSON-server-65778651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-65778651',id=93,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a717411af66b4c23a4cc35a3803ff3b6',ramdisk_id='',reservation_id='r-ypatsxef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1179662766',owner_user_name='tempest-InstanceActionsTestJSON-1179662766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:03Z,user_data=None,user_id='59f60b6ae5304ccbbe873550b6e62e81',uuid=ec7136d6-4735-49a5-b788-f051bf09a83d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.870 252257 DEBUG nova.network.os_vif_util [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converting VIF {"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.871 252257 DEBUG nova.network.os_vif_util [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.874 252257 DEBUG nova.objects.instance [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec7136d6-4735-49a5-b788-f051bf09a83d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.892 252257 DEBUG nova.compute.manager [None req-08bb7d3f-040d-4f3f-9a66-8ae1d6affffb - - - - - -] [instance: 94df285b-7fb5-486d-9242-0743b6edc562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.898 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <uuid>ec7136d6-4735-49a5-b788-f051bf09a83d</uuid>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <name>instance-0000005d</name>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:name>tempest-InstanceActionsTestJSON-server-65778651</nova:name>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:09:09</nova:creationTime>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:user uuid="59f60b6ae5304ccbbe873550b6e62e81">tempest-InstanceActionsTestJSON-1179662766-project-member</nova:user>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:project uuid="a717411af66b4c23a4cc35a3803ff3b6">tempest-InstanceActionsTestJSON-1179662766</nova:project>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <nova:port uuid="84902e3f-6e9d-45fd-88b9-3e367b4e1870">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="serial">ec7136d6-4735-49a5-b788-f051bf09a83d</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="uuid">ec7136d6-4735-49a5-b788-f051bf09a83d</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ec7136d6-4735-49a5-b788-f051bf09a83d_disk">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:6d:a2:98"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <target dev="tap84902e3f-6e"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/console.log" append="off"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:09:10 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:09:10 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:09:10 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:09:10 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.900 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Preparing to wait for external event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.901 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.901 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.902 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.903 252257 DEBUG nova.virt.libvirt.vif [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-65778651',display_name='tempest-InstanceActionsTestJSON-server-65778651',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-65778651',id=93,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a717411af66b4c23a4cc35a3803ff3b6',ramdisk_id='',reservation_id='r-ypatsxef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1179662766',owner_user_name='tempest-Instanc
eActionsTestJSON-1179662766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:03Z,user_data=None,user_id='59f60b6ae5304ccbbe873550b6e62e81',uuid=ec7136d6-4735-49a5-b788-f051bf09a83d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.904 252257 DEBUG nova.network.os_vif_util [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converting VIF {"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.905 252257 DEBUG nova.network.os_vif_util [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.906 252257 DEBUG os_vif [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.908 252257 DEBUG oslo_concurrency.lockutils [req-6a6c12f7-32dc-4c23-b2c3-71f924258c0f req-983b0bd3-2395-4367-806e-4829e6825d90 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ca260730-278e-41c5-9aae-4825e9497dcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.909 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.909 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.909 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.912 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.913 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84902e3f-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.913 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84902e3f-6e, col_values=(('external_ids', {'iface-id': '84902e3f-6e9d-45fd-88b9-3e367b4e1870', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:a2:98', 'vm-uuid': 'ec7136d6-4735-49a5-b788-f051bf09a83d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.915 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 NetworkManager[48981]: <info>  [1764403750.9164] manager: (tap84902e3f-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.917 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.925 252257 INFO os_vif [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e')#033[00m
Nov 29 03:09:10 np0005539563 nova_compute[252253]: 2025-11-29 08:09:10.967 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2zkz1b_g" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.003 252257 DEBUG nova.storage.rbd_utils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] rbd image ca260730-278e-41c5-9aae-4825e9497dcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.008 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/disk.config ca260730-278e-41c5-9aae-4825e9497dcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.048 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.049 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.049 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] No VIF found with MAC fa:16:3e:6d:a2:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.050 252257 INFO nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Using config drive#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.083 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.095 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.096 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.126 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.187 252257 DEBUG oslo_concurrency.processutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/disk.config ca260730-278e-41c5-9aae-4825e9497dcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.188 252257 INFO nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Deleting local config drive /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:11 np0005539563 NetworkManager[48981]: <info>  [1764403751.2468] manager: (tapfed1216e-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.246 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.247 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:11 np0005539563 kernel: tapfed1216e-fe: entered promiscuous mode
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.251 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:11Z|00333|binding|INFO|Claiming lport fed1216e-fe85-481a-93a1-18cdd79832c2 for this chassis.
Nov 29 03:09:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:11Z|00334|binding|INFO|fed1216e-fe85-481a-93a1-18cdd79832c2: Claiming fa:16:3e:b9:97:ae 10.100.0.11
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.259 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.259 252257 INFO nova.compute.claims [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.268 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:97:ae 10.100.0.11'], port_security=['fa:16:3e:b9:97:ae 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ca260730-278e-41c5-9aae-4825e9497dcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=fed1216e-fe85-481a-93a1-18cdd79832c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.272 158990 INFO neutron.agent.ovn.metadata.agent [-] Port fed1216e-fe85-481a-93a1-18cdd79832c2 in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 bound to our chassis#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.276 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05#033[00m
Nov 29 03:09:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:11Z|00335|binding|INFO|Setting lport fed1216e-fe85-481a-93a1-18cdd79832c2 ovn-installed in OVS
Nov 29 03:09:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:11Z|00336|binding|INFO|Setting lport fed1216e-fe85-481a-93a1-18cdd79832c2 up in Southbound
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.279 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 systemd-machined[213024]: New machine qemu-38-instance-0000005b.
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.291 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[51bb4e90-ea7a-4728-87a7-93fe3e8749f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.294 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a4a6f7c-91 in ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.296 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a4a6f7c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.296 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f53e3572-6a30-4538-b17a-0336d8078e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.298 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2cbff7-c586-4f46-96e5-671316ca32ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 systemd[1]: Started Virtual Machine qemu-38-instance-0000005b.
Nov 29 03:09:11 np0005539563 systemd-udevd[309500]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.314 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c0eb1209-f7d6-4d7a-8757-ff23a5f9b709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 NetworkManager[48981]: <info>  [1764403751.3300] device (tapfed1216e-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:11 np0005539563 NetworkManager[48981]: <info>  [1764403751.3312] device (tapfed1216e-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.331 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7b246e5d-0fb3-4920-91ca-aaaabd990ad8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.371 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[93535f12-627a-4cf6-be35-a7426f9127a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 NetworkManager[48981]: <info>  [1764403751.3772] manager: (tap6a4a6f7c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.377 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0cfba7-7961-4fa9-85d1-720eb8baeb50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.400 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.415 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6f539d6c-b23e-4716-b882-bee314e97a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.419 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7f85da0d-0e10-464c-9366-be49dfadbf94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 NetworkManager[48981]: <info>  [1764403751.4476] device (tap6a4a6f7c-90): carrier: link connected
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.454 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3df2ca-b6df-4f7e-bf86-afc5cef7df77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.475 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a99a02-75dc-4b0d-a867-3ea853cf303d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671922, 'reachable_time': 34373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309535, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.489 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3896b9ed-30b5-4db6-8de4-b26939ce92f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:ede0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671922, 'tstamp': 671922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309536, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.506 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc1f8fe-0232-4418-96f5-202d3512b2b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a4a6f7c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:ed:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671922, 'reachable_time': 34373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309537, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.514 252257 INFO nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Creating config drive at /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/disk.config#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.519 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp384eys10 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.539 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7ada1d2f-c3b9-43a3-ab17-07ab1858d582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.610 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf22d9b-e256-4e2f-825a-27879aa5abb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.611 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.611 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.612 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a4a6f7c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:11 np0005539563 kernel: tap6a4a6f7c-90: entered promiscuous mode
Nov 29 03:09:11 np0005539563 NetworkManager[48981]: <info>  [1764403751.6147] manager: (tap6a4a6f7c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.613 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.619 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.621 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a4a6f7c-90, col_values=(('external_ids', {'iface-id': 'b10f5520-b53f-45d0-9de3-4af0dc481ad3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.621 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:11Z|00337|binding|INFO|Releasing lport b10f5520-b53f-45d0-9de3-4af0dc481ad3 from this chassis (sb_readonly=0)
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.645 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.649 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.650 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.651 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2433aaad-3173-4c76-8569-37fe676e44a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.651 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.pid.haproxy
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:11.653 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'env', 'PROCESS_TAG=haproxy-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a4a6f7c-9da4-4d0a-b32b-578ab4776e05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.673 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp384eys10" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.715 252257 DEBUG nova.storage.rbd_utils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] rbd image ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.726 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/disk.config ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.807 252257 DEBUG nova.network.neutron [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Updated VIF entry in instance network info cache for port 84902e3f-6e9d-45fd-88b9-3e367b4e1870. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.808 252257 DEBUG nova.network.neutron [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Updating instance_info_cache with network_info: [{"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.819 252257 DEBUG oslo_concurrency.lockutils [req-a1c07125-965d-40b4-9458-36f2f925c421 req-4512405d-7a13-4da9-8f95-a1c11d0e3c56 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74092610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.878 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.881 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.889 252257 DEBUG nova.compute.provider_tree [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.928 252257 DEBUG nova.scheduler.client.report [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.957 252257 DEBUG oslo_concurrency.processutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/disk.config ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.958 252257 INFO nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Deleting local config drive /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.960 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:11 np0005539563 nova_compute[252253]: 2025-11-29 08:09:11.961 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.013 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.013 252257 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:12 np0005539563 kernel: tap84902e3f-6e: entered promiscuous mode
Nov 29 03:09:12 np0005539563 NetworkManager[48981]: <info>  [1764403752.0170] manager: (tap84902e3f-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Nov 29 03:09:12 np0005539563 systemd-udevd[309527]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.018 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:12Z|00338|binding|INFO|Claiming lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 for this chassis.
Nov 29 03:09:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:12Z|00339|binding|INFO|84902e3f-6e9d-45fd-88b9-3e367b4e1870: Claiming fa:16:3e:6d:a2:98 10.100.0.9
Nov 29 03:09:12 np0005539563 NetworkManager[48981]: <info>  [1764403752.0315] device (tap84902e3f-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:12 np0005539563 NetworkManager[48981]: <info>  [1764403752.0328] device (tap84902e3f-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.035 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:a2:98 10.100.0.9'], port_security=['fa:16:3e:6d:a2:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ec7136d6-4735-49a5-b788-f051bf09a83d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-953f39e8-83db-4773-801b-104c949c136d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a717411af66b4c23a4cc35a3803ff3b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ac20701-06f7-4870-8879-14fab3d0b2e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92eecc45-e1ef-47f4-9b35-addf481d012c, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=84902e3f-6e9d-45fd-88b9-3e367b4e1870) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.048 252257 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:12.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:12 np0005539563 systemd-machined[213024]: New machine qemu-39-instance-0000005d.
Nov 29 03:09:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 227 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 8.5 MiB/s wr, 173 op/s
Nov 29 03:09:12 np0005539563 systemd[1]: Started Virtual Machine qemu-39-instance-0000005d.
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.076 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:12 np0005539563 podman[309675]: 2025-11-29 08:09:12.083245712 +0000 UTC m=+0.094099760 container create f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.084 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403752.0841956, ca260730-278e-41c5-9aae-4825e9497dcc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.084 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:12 np0005539563 podman[309675]: 2025-11-29 08:09:12.012579157 +0000 UTC m=+0.023433225 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.118 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:12Z|00340|binding|INFO|Setting lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 ovn-installed in OVS
Nov 29 03:09:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:12Z|00341|binding|INFO|Setting lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 up in Southbound
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 systemd[1]: Started libpod-conmon-f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3.scope.
Nov 29 03:09:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139fff69ce8398d303f11a8e84738c2930ece2a874dbf86f5957a3413821440a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:12 np0005539563 podman[309675]: 2025-11-29 08:09:12.222479864 +0000 UTC m=+0.233333942 container init f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:09:12 np0005539563 podman[309675]: 2025-11-29 08:09:12.228518989 +0000 UTC m=+0.239373037 container start f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:09:12 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [NOTICE]   (309730) : New worker (309736) forked
Nov 29 03:09:12 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [NOTICE]   (309730) : Loading success.
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.277 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.281 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403752.0843399, ca260730-278e-41c5-9aae-4825e9497dcc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.281 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.300 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 84902e3f-6e9d-45fd-88b9-3e367b4e1870 in datapath 953f39e8-83db-4773-801b-104c949c136d unbound from our chassis#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.302 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 953f39e8-83db-4773-801b-104c949c136d#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.305 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.309 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.313 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[21dc05d3-568d-45d7-956c-b84e2d4adcf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.314 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap953f39e8-81 in ovnmeta-953f39e8-83db-4773-801b-104c949c136d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.316 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap953f39e8-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.316 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[800b0533-3910-42b4-8bcf-194daeed5137]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.318 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f53a0ab3-dd03-4f27-83f3-c61c7c7a2a8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.333 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[ae82b96d-7a5a-4245-a52a-a24d79f0b093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.334 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.358 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4de6b161-4f8e-4f69-b787-76deeff6620f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.360 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.361 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.361 252257 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Creating image(s)#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.389 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.394 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[435960bc-c536-42af-bd5f-1f1570575c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.400 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dac8d467-9a11-4b26-bb8d-2804f30d05b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 NetworkManager[48981]: <info>  [1764403752.4009] manager: (tap953f39e8-80): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.427 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.428 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3a3e08-5be7-41c3-9567-d5c85132bfb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.431 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bd508018-980d-4513-9752-9a70a03b86c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.454 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:12 np0005539563 NetworkManager[48981]: <info>  [1764403752.4584] device (tap953f39e8-80): carrier: link connected
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.458 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.463 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2aac812a-6577-4a76-a917-96e5b83ede61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.478 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[91d3f19e-e466-46a7-bec4-3154d73833d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap953f39e8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:7f:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672023, 'reachable_time': 44310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309833, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.482 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403752.4067585, ec7136d6-4735-49a5-b788-f051bf09a83d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.482 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.493 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1de0b9-7ef2-422c-a501-d550b42987b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:7f3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672023, 'tstamp': 672023}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309835, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.509 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3c239d5c-27ed-4b77-bfdd-6923e991e034]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap953f39e8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:7f:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672023, 'reachable_time': 44310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309836, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.512 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.517 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403752.4069133, ec7136d6-4735-49a5-b788-f051bf09a83d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.519 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.524 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.525 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.525 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.526 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.540 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee621ea-ef29-4598-b726-ce6658ce64fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.553 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.560 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2cab3642-38d5-47c8-82d7-93f626786383_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.601 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.608 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.608 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[78e0bc62-0c53-414c-aa53-f056e24a9261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.610 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap953f39e8-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.610 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.611 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap953f39e8-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:12 np0005539563 NetworkManager[48981]: <info>  [1764403752.6485] manager: (tap953f39e8-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Nov 29 03:09:12 np0005539563 kernel: tap953f39e8-80: entered promiscuous mode
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.648 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.651 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap953f39e8-80, col_values=(('external_ids', {'iface-id': 'f4046fb2-2f0e-4fb2-89d3-7261e94c38a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:12Z|00342|binding|INFO|Releasing lport f4046fb2-2f0e-4fb2-89d3-7261e94c38a3 from this chassis (sb_readonly=0)
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.654 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/953f39e8-83db-4773-801b-104c949c136d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/953f39e8-83db-4773-801b-104c949c136d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.654 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.654 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.655 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3342c2dd-b3d3-4f2b-b941-a373b4271508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.656 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-953f39e8-83db-4773-801b-104c949c136d
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/953f39e8-83db-4773-801b-104c949c136d.pid.haproxy
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 953f39e8-83db-4773-801b-104c949c136d
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:12.657 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'env', 'PROCESS_TAG=haproxy-953f39e8-83db-4773-801b-104c949c136d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/953f39e8-83db-4773-801b-104c949c136d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.669 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:09:12
Nov 29 03:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 29 03:09:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:09:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:12.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:12 np0005539563 nova_compute[252253]: 2025-11-29 08:09:12.952 252257 DEBUG nova.policy [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95361d3a276f4d7f81e9f9a4bcafd2ea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.066 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 2cab3642-38d5-47c8-82d7-93f626786383_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:13 np0005539563 podman[309909]: 2025-11-29 08:09:12.996366142 +0000 UTC m=+0.020406454 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.138 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] resizing rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:13 np0005539563 podman[309909]: 2025-11-29 08:09:13.157318962 +0000 UTC m=+0.181359254 container create a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:13 np0005539563 systemd[1]: Started libpod-conmon-a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a.scope.
Nov 29 03:09:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9d0251934ea6a5d06b7bea7a43e5c9aeb0dc4fd4803e54eaf300506c12f8a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:13 np0005539563 podman[309909]: 2025-11-29 08:09:13.256472038 +0000 UTC m=+0.280512390 container init a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:13 np0005539563 podman[309909]: 2025-11-29 08:09:13.266546942 +0000 UTC m=+0.290587234 container start a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:09:13 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [NOTICE]   (309982) : New worker (309991) forked
Nov 29 03:09:13 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [NOTICE]   (309982) : Loading success.
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.324 252257 DEBUG nova.objects.instance [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'migration_context' on Instance uuid 2cab3642-38d5-47c8-82d7-93f626786383 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.348 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.348 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Ensure instance console log exists: /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.349 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.349 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.349 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:09:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.955 252257 DEBUG nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.956 252257 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.956 252257 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.957 252257 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.957 252257 DEBUG nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Processing event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.957 252257 DEBUG nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.957 252257 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.958 252257 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.958 252257 DEBUG oslo_concurrency.lockutils [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.958 252257 DEBUG nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] No waiting events found dispatching network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.959 252257 WARNING nova.compute.manager [req-9e3cf005-0b7c-4f15-a898-f037b40e8b78 req-123748b8-5826-4586-a325-73e5eba3f1db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received unexpected event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.959 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.967 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.967 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403753.9670756, ca260730-278e-41c5-9aae-4825e9497dcc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.968 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.974 252257 INFO nova.virt.libvirt.driver [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Instance spawned successfully.#033[00m
Nov 29 03:09:13 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.976 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:13.999 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.006 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.010 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.011 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.011 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.012 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.012 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.013 252257 DEBUG nova.virt.libvirt.driver [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.047 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.050 252257 DEBUG nova.compute.manager [req-84ab2484-d8cd-414b-8682-ad96384a7307 req-1502289d-0281-4caa-ba36-2fcc1b54b308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.050 252257 DEBUG oslo_concurrency.lockutils [req-84ab2484-d8cd-414b-8682-ad96384a7307 req-1502289d-0281-4caa-ba36-2fcc1b54b308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.050 252257 DEBUG oslo_concurrency.lockutils [req-84ab2484-d8cd-414b-8682-ad96384a7307 req-1502289d-0281-4caa-ba36-2fcc1b54b308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.050 252257 DEBUG oslo_concurrency.lockutils [req-84ab2484-d8cd-414b-8682-ad96384a7307 req-1502289d-0281-4caa-ba36-2fcc1b54b308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.051 252257 DEBUG nova.compute.manager [req-84ab2484-d8cd-414b-8682-ad96384a7307 req-1502289d-0281-4caa-ba36-2fcc1b54b308 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Processing event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.051 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:14.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.055 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.055 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403754.0551355, ec7136d6-4735-49a5-b788-f051bf09a83d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.055 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.061 252257 INFO nova.virt.libvirt.driver [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance spawned successfully.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.062 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 259 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 762 KiB/s rd, 9.5 MiB/s wr, 205 op/s
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.096 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.101 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.102 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.102 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.103 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.103 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.103 252257 DEBUG nova.virt.libvirt.driver [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.110 252257 INFO nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Took 10.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.110 252257 DEBUG nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.112 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.163 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.252 252257 INFO nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Took 10.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.252 252257 DEBUG nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.254 252257 INFO nova.compute.manager [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Took 11.99 seconds to build instance.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.296 252257 DEBUG oslo_concurrency.lockutils [None req-15b0fc1b-37fe-4ea5-8dfa-2cc8e725cb1b f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.352 252257 INFO nova.compute.manager [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Took 11.45 seconds to build instance.#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.380 252257 DEBUG oslo_concurrency.lockutils [None req-9032083c-d8df-48c1-9126-ab8a74f784c3 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:14 np0005539563 nova_compute[252253]: 2025-11-29 08:09:14.651 252257 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Successfully created port: 7df15ed0-1865-4623-916d-c3dcbbbe020e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:14.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:15 np0005539563 ceph-mgr[74636]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 03:09:15 np0005539563 nova_compute[252253]: 2025-11-29 08:09:15.916 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:16.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 354 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 10 MiB/s wr, 314 op/s
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.079 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.080 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.080 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.081 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.081 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.083 252257 INFO nova.compute.manager [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Terminating instance#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.084 252257 DEBUG nova.compute.manager [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:09:16 np0005539563 kernel: tapfed1216e-fe (unregistering): left promiscuous mode
Nov 29 03:09:16 np0005539563 NetworkManager[48981]: <info>  [1764403756.1679] device (tapfed1216e-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:16Z|00343|binding|INFO|Releasing lport fed1216e-fe85-481a-93a1-18cdd79832c2 from this chassis (sb_readonly=0)
Nov 29 03:09:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:16Z|00344|binding|INFO|Setting lport fed1216e-fe85-481a-93a1-18cdd79832c2 down in Southbound
Nov 29 03:09:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:16Z|00345|binding|INFO|Removing iface tapfed1216e-fe ovn-installed in OVS
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.231 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.235 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:97:ae 10.100.0.11'], port_security=['fa:16:3e:b9:97:ae 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ca260730-278e-41c5-9aae-4825e9497dcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e860226190f4eb8971376b16032da1b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd8dc1d4-70a8-4fbe-bcb1-1a2eb3ad39c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aee2888b-87dd-4143-b028-b945f3d151f3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=fed1216e-fe85-481a-93a1-18cdd79832c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.236 158990 INFO neutron.agent.ovn.metadata.agent [-] Port fed1216e-fe85-481a-93a1-18cdd79832c2 in datapath 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 unbound from our chassis#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.238 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.239 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[61abdd70-5b5f-4466-906c-44751681d56e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.239 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 namespace which is not needed anymore#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539563 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005b.scope: Consumed 2.900s CPU time.
Nov 29 03:09:16 np0005539563 systemd-machined[213024]: Machine qemu-38-instance-0000005b terminated.
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.288 252257 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Successfully updated port: 7df15ed0-1865-4623-916d-c3dcbbbe020e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.319 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "refresh_cache-2cab3642-38d5-47c8-82d7-93f626786383" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.320 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquired lock "refresh_cache-2cab3642-38d5-47c8-82d7-93f626786383" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.320 252257 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.329 252257 INFO nova.virt.libvirt.driver [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Instance destroyed successfully.#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.331 252257 DEBUG nova.objects.instance [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lazy-loading 'resources' on Instance uuid ca260730-278e-41c5-9aae-4825e9497dcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.337 252257 DEBUG nova.compute.manager [req-23c54d4d-b4a3-4ae8-825e-7b21bd52dcbc req-4639ad9e-641a-4fb7-b1e8-27034fa5f75d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.338 252257 DEBUG oslo_concurrency.lockutils [req-23c54d4d-b4a3-4ae8-825e-7b21bd52dcbc req-4639ad9e-641a-4fb7-b1e8-27034fa5f75d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.338 252257 DEBUG oslo_concurrency.lockutils [req-23c54d4d-b4a3-4ae8-825e-7b21bd52dcbc req-4639ad9e-641a-4fb7-b1e8-27034fa5f75d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.338 252257 DEBUG oslo_concurrency.lockutils [req-23c54d4d-b4a3-4ae8-825e-7b21bd52dcbc req-4639ad9e-641a-4fb7-b1e8-27034fa5f75d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.338 252257 DEBUG nova.compute.manager [req-23c54d4d-b4a3-4ae8-825e-7b21bd52dcbc req-4639ad9e-641a-4fb7-b1e8-27034fa5f75d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.339 252257 WARNING nova.compute.manager [req-23c54d4d-b4a3-4ae8-825e-7b21bd52dcbc req-4639ad9e-641a-4fb7-b1e8-27034fa5f75d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received unexpected event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:16 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [NOTICE]   (309730) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:16 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [NOTICE]   (309730) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:16 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [WARNING]  (309730) : Exiting Master process...
Nov 29 03:09:16 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [WARNING]  (309730) : Exiting Master process...
Nov 29 03:09:16 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [ALERT]    (309730) : Current worker (309736) exited with code 143 (Terminated)
Nov 29 03:09:16 np0005539563 neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05[309712]: [WARNING]  (309730) : All workers exited. Exiting... (0)
Nov 29 03:09:16 np0005539563 systemd[1]: libpod-f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539563 podman[310044]: 2025-11-29 08:09:16.381161378 +0000 UTC m=+0.045549765 container died f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.391 252257 DEBUG nova.virt.libvirt.vif [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1728114398',display_name='tempest-MultipleCreateTestJSON-server-1728114398-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1728114398-1',id=91,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e860226190f4eb8971376b16032da1b',ramdisk_id='',reservation_id='r-cjh5wutt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-36900569',owner_user_name='tempest-MultipleCreateTestJSON-36900569-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:14Z,user_data=None,user_id='f8306d30b5b844909866bec7b9c8242d',uuid=ca260730-278e-41c5-9aae-4825e9497dcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.392 252257 DEBUG nova.network.os_vif_util [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converting VIF {"id": "fed1216e-fe85-481a-93a1-18cdd79832c2", "address": "fa:16:3e:b9:97:ae", "network": {"id": "6a4a6f7c-9da4-4d0a-b32b-578ab4776e05", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-724757681-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e860226190f4eb8971376b16032da1b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed1216e-fe", "ovs_interfaceid": "fed1216e-fe85-481a-93a1-18cdd79832c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.392 252257 DEBUG nova.network.os_vif_util [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.393 252257 DEBUG os_vif [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.394 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.395 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed1216e-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.396 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.401 252257 INFO os_vif [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:97:ae,bridge_name='br-int',has_traffic_filtering=True,id=fed1216e-fe85-481a-93a1-18cdd79832c2,network=Network(6a4a6f7c-9da4-4d0a-b32b-578ab4776e05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed1216e-fe')#033[00m
Nov 29 03:09:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-139fff69ce8398d303f11a8e84738c2930ece2a874dbf86f5957a3413821440a-merged.mount: Deactivated successfully.
Nov 29 03:09:16 np0005539563 podman[310044]: 2025-11-29 08:09:16.426520606 +0000 UTC m=+0.090908993 container cleanup f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:16 np0005539563 systemd[1]: libpod-conmon-f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.451 252257 DEBUG nova.compute.manager [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-changed-7df15ed0-1865-4623-916d-c3dcbbbe020e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.451 252257 DEBUG nova.compute.manager [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Refreshing instance network info cache due to event network-changed-7df15ed0-1865-4623-916d-c3dcbbbe020e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.452 252257 DEBUG oslo_concurrency.lockutils [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-2cab3642-38d5-47c8-82d7-93f626786383" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:16 np0005539563 podman[310090]: 2025-11-29 08:09:16.507016188 +0000 UTC m=+0.061827856 container remove f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.513 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb540fd-e68d-4e46-b27e-b925b53ad5c0]: (4, ('Sat Nov 29 08:09:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3)\nf4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3\nSat Nov 29 08:09:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 (f4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3)\nf4413f97975f6fd32570a409a1a830be59cd71885c0a12dfdfd71a0d07afb0e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.514 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b673931a-3dfd-483d-96a1-175332ca26a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.515 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a4a6f7c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.516 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 kernel: tap6a4a6f7c-90: left promiscuous mode
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.538 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.540 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4ceeb9-dc92-4580-8bb2-21f8a74f0057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.548 252257 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.554 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed3f83c-bd66-4ca8-8e05-992e72cc03f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.555 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0efc4cd8-e6bd-45ff-8465-eeee4978c429]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.571 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f6ad09-1ef4-46a1-99e1-962c8d0c2c40]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671913, 'reachable_time': 28655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310108, 'error': None, 'target': 'ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 systemd[1]: run-netns-ovnmeta\x2d6a4a6f7c\x2d9da4\x2d4d0a\x2db32b\x2d578ab4776e05.mount: Deactivated successfully.
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.579 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a4a6f7c-9da4-4d0a-b32b-578ab4776e05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:16.579 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a89acf-97c4-4696-ab2d-a323651944c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:16 np0005539563 podman[310109]: 2025-11-29 08:09:16.654821412 +0000 UTC m=+0.056865712 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:09:16 np0005539563 podman[310110]: 2025-11-29 08:09:16.660795444 +0000 UTC m=+0.062435623 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:16 np0005539563 podman[310111]: 2025-11-29 08:09:16.685984927 +0000 UTC m=+0.084512151 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 03:09:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.880 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.890 252257 DEBUG oslo_concurrency.lockutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.891 252257 DEBUG oslo_concurrency.lockutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.891 252257 INFO nova.compute.manager [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Rebooting instance#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.908 252257 DEBUG oslo_concurrency.lockutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.908 252257 DEBUG oslo_concurrency.lockutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquired lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.909 252257 DEBUG nova.network.neutron [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.987 252257 INFO nova.virt.libvirt.driver [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Deleting instance files /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc_del#033[00m
Nov 29 03:09:16 np0005539563 nova_compute[252253]: 2025-11-29 08:09:16.988 252257 INFO nova.virt.libvirt.driver [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Deletion of /var/lib/nova/instances/ca260730-278e-41c5-9aae-4825e9497dcc_del complete#033[00m
Nov 29 03:09:17 np0005539563 nova_compute[252253]: 2025-11-29 08:09:17.047 252257 INFO nova.compute.manager [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:09:17 np0005539563 nova_compute[252253]: 2025-11-29 08:09:17.048 252257 DEBUG oslo.service.loopingcall [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:09:17 np0005539563 nova_compute[252253]: 2025-11-29 08:09:17.048 252257 DEBUG nova.compute.manager [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:09:17 np0005539563 nova_compute[252253]: 2025-11-29 08:09:17.049 252257 DEBUG nova.network.neutron [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:09:17 np0005539563 nova_compute[252253]: 2025-11-29 08:09:17.977 252257 DEBUG nova.network.neutron [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Updating instance_info_cache with network_info: [{"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.005 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Releasing lock "refresh_cache-2cab3642-38d5-47c8-82d7-93f626786383" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.005 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Instance network_info: |[{"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.006 252257 DEBUG oslo_concurrency.lockutils [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-2cab3642-38d5-47c8-82d7-93f626786383" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.006 252257 DEBUG nova.network.neutron [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Refreshing network info cache for port 7df15ed0-1865-4623-916d-c3dcbbbe020e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.008 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Start _get_guest_xml network_info=[{"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.012 252257 WARNING nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.017 252257 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.018 252257 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.025 252257 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.025 252257 DEBUG nova.virt.libvirt.host [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.026 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.027 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.027 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.027 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.027 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.028 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.028 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.028 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.029 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.029 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.029 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.029 252257 DEBUG nova.virt.hardware [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.032 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:18.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 354 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.9 MiB/s wr, 273 op/s
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.329 252257 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-vif-unplugged-fed1216e-fe85-481a-93a1-18cdd79832c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.329 252257 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.330 252257 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.330 252257 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.330 252257 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] No waiting events found dispatching network-vif-unplugged-fed1216e-fe85-481a-93a1-18cdd79832c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.331 252257 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-vif-unplugged-fed1216e-fe85-481a-93a1-18cdd79832c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.331 252257 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.331 252257 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.331 252257 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.331 252257 DEBUG oslo_concurrency.lockutils [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.332 252257 DEBUG nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] No waiting events found dispatching network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.332 252257 WARNING nova.compute.manager [req-2f41b9e1-9be7-421c-9424-962e9745bf40 req-1aceb9a8-cf8e-4776-abd1-210d7de0e93b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received unexpected event network-vif-plugged-fed1216e-fe85-481a-93a1-18cdd79832c2 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:09:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077723992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.478 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.500 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.503 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:18.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.957 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.961 252257 DEBUG nova.virt.libvirt.vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-3',id=97,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:12Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=2cab3642-38d5-47c8-82d7-93f626786383,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.961 252257 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.963 252257 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:18 np0005539563 nova_compute[252253]: 2025-11-29 08:09:18.966 252257 DEBUG nova.objects.instance [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2cab3642-38d5-47c8-82d7-93f626786383 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.006 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <uuid>2cab3642-38d5-47c8-82d7-93f626786383</uuid>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <name>instance-00000061</name>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:name>tempest-ListServersNegativeTestJSON-server-313770805-3</nova:name>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:09:18</nova:creationTime>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:user uuid="95361d3a276f4d7f81e9f9a4bcafd2ea">tempest-ListServersNegativeTestJSON-1935238201-project-member</nova:user>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:project uuid="e3e18973b82a4071bdc187ede8c1afb8">tempest-ListServersNegativeTestJSON-1935238201</nova:project>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <nova:port uuid="7df15ed0-1865-4623-916d-c3dcbbbe020e">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <entry name="serial">2cab3642-38d5-47c8-82d7-93f626786383</entry>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <entry name="uuid">2cab3642-38d5-47c8-82d7-93f626786383</entry>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/2cab3642-38d5-47c8-82d7-93f626786383_disk">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/2cab3642-38d5-47c8-82d7-93f626786383_disk.config">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:f7:4c:ea"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <target dev="tap7df15ed0-18"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/console.log" append="off"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:09:19 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:09:19 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:09:19 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:09:19 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.008 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Preparing to wait for external event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.009 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.009 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.010 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.014 252257 DEBUG nova.virt.libvirt.vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-3',id=97,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:12Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=2cab3642-38d5-47c8-82d7-93f626786383,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.015 252257 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.015 252257 DEBUG nova.network.os_vif_util [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.016 252257 DEBUG os_vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.016 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.017 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.017 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.020 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.020 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7df15ed0-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.021 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7df15ed0-18, col_values=(('external_ids', {'iface-id': '7df15ed0-1865-4623-916d-c3dcbbbe020e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:4c:ea', 'vm-uuid': '2cab3642-38d5-47c8-82d7-93f626786383'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.023 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539563 NetworkManager[48981]: <info>  [1764403759.0243] manager: (tap7df15ed0-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.025 252257 DEBUG nova.network.neutron [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.027 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.030 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.031 252257 INFO os_vif [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18')#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.098 252257 INFO nova.compute.manager [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Took 2.05 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.109 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.110 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.110 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] No VIF found with MAC fa:16:3e:f7:4c:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.110 252257 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Using config drive#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.131 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.190 252257 DEBUG nova.compute.manager [req-9f0273e6-f775-4e70-aa89-02d21d451f3d req-b0710b6a-2f31-4d85-89b7-0419f11fc8b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Received event network-vif-deleted-fed1216e-fe85-481a-93a1-18cdd79832c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.212 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.213 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:19 np0005539563 nova_compute[252253]: 2025-11-29 08:09:19.568 252257 DEBUG oslo_concurrency.processutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188377218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.047 252257 DEBUG oslo_concurrency.processutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.053 252257 DEBUG nova.compute.provider_tree [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:20.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 344 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 8.6 MiB/s wr, 335 op/s
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.068 252257 DEBUG nova.scheduler.client.report [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.095 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.139 252257 INFO nova.scheduler.client.report [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Deleted allocations for instance ca260730-278e-41c5-9aae-4825e9497dcc#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.228 252257 DEBUG oslo_concurrency.lockutils [None req-c240bcd0-1839-46d0-979d-0a764e587406 f8306d30b5b844909866bec7b9c8242d 8e860226190f4eb8971376b16032da1b - - default default] Lock "ca260730-278e-41c5-9aae-4825e9497dcc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.278 252257 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Creating config drive at /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/disk.config#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.284 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd75ziz7_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.424 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd75ziz7_" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.452 252257 DEBUG nova.storage.rbd_utils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] rbd image 2cab3642-38d5-47c8-82d7-93f626786383_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.456 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/disk.config 2cab3642-38d5-47c8-82d7-93f626786383_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.638 252257 DEBUG oslo_concurrency.processutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/disk.config 2cab3642-38d5-47c8-82d7-93f626786383_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.639 252257 INFO nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Deleting local config drive /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.650 252257 DEBUG nova.network.neutron [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Updating instance_info_cache with network_info: [{"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.676 252257 DEBUG oslo_concurrency.lockutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Releasing lock "refresh_cache-ec7136d6-4735-49a5-b788-f051bf09a83d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.679 252257 DEBUG nova.compute.manager [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:20 np0005539563 kernel: tap7df15ed0-18: entered promiscuous mode
Nov 29 03:09:20 np0005539563 NetworkManager[48981]: <info>  [1764403760.6898] manager: (tap7df15ed0-18): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00346|binding|INFO|Claiming lport 7df15ed0-1865-4623-916d-c3dcbbbe020e for this chassis.
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00347|binding|INFO|7df15ed0-1865-4623-916d-c3dcbbbe020e: Claiming fa:16:3e:f7:4c:ea 10.100.0.12
Nov 29 03:09:20 np0005539563 systemd-udevd[310328]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.730 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.733 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:4c:ea 10.100.0.12'], port_security=['fa:16:3e:f7:4c:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2cab3642-38d5-47c8-82d7-93f626786383', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7df15ed0-1865-4623-916d-c3dcbbbe020e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.734 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7df15ed0-1865-4623-916d-c3dcbbbe020e in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 bound to our chassis#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.735 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04#033[00m
Nov 29 03:09:20 np0005539563 NetworkManager[48981]: <info>  [1764403760.7474] device (tap7df15ed0-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:20 np0005539563 NetworkManager[48981]: <info>  [1764403760.7489] device (tap7df15ed0-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.750 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fe820801-e2c6-4158-8ee8-f850cf847102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.751 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c0a06e3-81 in ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.753 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c0a06e3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.753 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f12fba30-8fc0-4b10-a5c7-925f87bc1c0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.754 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[325d241e-da5a-49e1-a963-07de30c54809]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 systemd-machined[213024]: New machine qemu-40-instance-00000061.
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.767 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[17e441bc-80e2-428a-bbca-b0253f4f2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 systemd[1]: Started Virtual Machine qemu-40-instance-00000061.
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.792 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[27701cd7-2d27-43cd-91df-8f9ffc0dd69a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.813 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00348|binding|INFO|Setting lport 7df15ed0-1865-4623-916d-c3dcbbbe020e ovn-installed in OVS
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00349|binding|INFO|Setting lport 7df15ed0-1865-4623-916d-c3dcbbbe020e up in Southbound
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.817 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539563 kernel: tap84902e3f-6e (unregistering): left promiscuous mode
Nov 29 03:09:20 np0005539563 NetworkManager[48981]: <info>  [1764403760.8332] device (tap84902e3f-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.832 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c855d955-ea0b-44d1-aa4b-2505f8971378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.839 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea4b15-1865-4092-a86f-13fc8215792e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 NetworkManager[48981]: <info>  [1764403760.8408] manager: (tap4c0a06e3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Nov 29 03:09:20 np0005539563 systemd-udevd[310333]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00350|binding|INFO|Releasing lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 from this chassis (sb_readonly=0)
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00351|binding|INFO|Setting lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 down in Southbound
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.845 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:20Z|00352|binding|INFO|Removing iface tap84902e3f-6e ovn-installed in OVS
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.852 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:a2:98 10.100.0.9'], port_security=['fa:16:3e:6d:a2:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ec7136d6-4735-49a5-b788-f051bf09a83d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-953f39e8-83db-4773-801b-104c949c136d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a717411af66b4c23a4cc35a3803ff3b6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ac20701-06f7-4870-8879-14fab3d0b2e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92eecc45-e1ef-47f4-9b35-addf481d012c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=84902e3f-6e9d-45fd-88b9-3e367b4e1870) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:20 np0005539563 nova_compute[252253]: 2025-11-29 08:09:20.869 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.874 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9a04c342-836a-4d8b-b5bc-0f28e174669c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.876 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e81a96fd-e0fa-4879-b10e-66f9ff2e4a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 29 03:09:20 np0005539563 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Consumed 7.227s CPU time.
Nov 29 03:09:20 np0005539563 systemd-machined[213024]: Machine qemu-39-instance-0000005d terminated.
Nov 29 03:09:20 np0005539563 NetworkManager[48981]: <info>  [1764403760.9004] device (tap4c0a06e3-80): carrier: link connected
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.908 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[44b70266-1251-4560-95e5-c9738eb94828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.926 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[087de883-cdd3-4185-bce5-5b4fb6961e6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c0a06e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672867, 'reachable_time': 21004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310371, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.942 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[95b8cc69-ef39-4b4f-89c9-d85a90c4dfb8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:42d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672867, 'tstamp': 672867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310379, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.959 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[94d8d6ec-a217-42e3-a174-be0b141af1bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c0a06e3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672867, 'reachable_time': 21004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310388, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:20.989 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8d42111e-034f-40c6-a9f6-0f7ff976f5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 NetworkManager[48981]: <info>  [1764403761.0108] manager: (tap84902e3f-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.016 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.021 252257 INFO nova.virt.libvirt.driver [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance destroyed successfully.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.024 252257 DEBUG nova.objects.instance [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lazy-loading 'resources' on Instance uuid ec7136d6-4735-49a5-b788-f051bf09a83d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.054 252257 DEBUG nova.virt.libvirt.vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-65778651',display_name='tempest-InstanceActionsTestJSON-server-65778651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-65778651',id=93,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a717411af66b4c23a4cc35a3803ff3b6',ramdisk_id='',reservation_id='r-ypatsxef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1179662766',owner_user_name='tempest-InstanceActionsTestJSON-1179662766-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:20Z,user_data=None,user_id='59f60b6ae5304ccbbe873550b6e62e81',uuid=ec7136d6-4735-49a5-b788-f051bf09a83d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.055 252257 DEBUG nova.network.os_vif_util [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converting VIF {"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.056 252257 DEBUG nova.network.os_vif_util [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.057 252257 DEBUG os_vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.060 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.060 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84902e3f-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.063 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.065 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d7751939-bc31-48c3-af8a-4681e062f453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.067 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c0a06e3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.067 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.067 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c0a06e3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.068 252257 DEBUG nova.compute.manager [req-986b7dfc-eeea-4e60-ab52-e972c42995cc req-927b2f34-63b1-43a6-b53d-c71bf8eedf1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:21 np0005539563 kernel: tap4c0a06e3-80: entered promiscuous mode
Nov 29 03:09:21 np0005539563 NetworkManager[48981]: <info>  [1764403761.0694] manager: (tap4c0a06e3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.069 252257 DEBUG oslo_concurrency.lockutils [req-986b7dfc-eeea-4e60-ab52-e972c42995cc req-927b2f34-63b1-43a6-b53d-c71bf8eedf1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.069 252257 DEBUG oslo_concurrency.lockutils [req-986b7dfc-eeea-4e60-ab52-e972c42995cc req-927b2f34-63b1-43a6-b53d-c71bf8eedf1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.071 252257 DEBUG oslo_concurrency.lockutils [req-986b7dfc-eeea-4e60-ab52-e972c42995cc req-927b2f34-63b1-43a6-b53d-c71bf8eedf1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.071 252257 DEBUG nova.compute.manager [req-986b7dfc-eeea-4e60-ab52-e972c42995cc req-927b2f34-63b1-43a6-b53d-c71bf8eedf1b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Processing event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.073 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c0a06e3-80, col_values=(('external_ids', {'iface-id': '25db3838-7764-409c-8606-f0c90f681664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:21Z|00353|binding|INFO|Releasing lport 25db3838-7764-409c-8606-f0c90f681664 from this chassis (sb_readonly=0)
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.076 252257 INFO os_vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e')#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.084 252257 DEBUG nova.virt.libvirt.driver [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Start _get_guest_xml network_info=[{"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.087 252257 WARNING nova.virt.libvirt.driver [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.090 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.091 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.092 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[230374c6-1816-46bd-89bf-b1ee964839d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.092 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/4c0a06e3-8d77-4f81-85b4-47e57dafff04.pid.haproxy
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 4c0a06e3-8d77-4f81-85b4-47e57dafff04
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.093 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'env', 'PROCESS_TAG=haproxy-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c0a06e3-8d77-4f81-85b4-47e57dafff04.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.096 252257 DEBUG nova.compute.manager [req-e941392c-f4a9-4208-9222-1cb8f027accc req-090798a0-4e07-422d-acd3-cc9b4df0fa6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-unplugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.097 252257 DEBUG oslo_concurrency.lockutils [req-e941392c-f4a9-4208-9222-1cb8f027accc req-090798a0-4e07-422d-acd3-cc9b4df0fa6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.097 252257 DEBUG oslo_concurrency.lockutils [req-e941392c-f4a9-4208-9222-1cb8f027accc req-090798a0-4e07-422d-acd3-cc9b4df0fa6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.097 252257 DEBUG oslo_concurrency.lockutils [req-e941392c-f4a9-4208-9222-1cb8f027accc req-090798a0-4e07-422d-acd3-cc9b4df0fa6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.098 252257 DEBUG nova.compute.manager [req-e941392c-f4a9-4208-9222-1cb8f027accc req-090798a0-4e07-422d-acd3-cc9b4df0fa6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-unplugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.098 252257 WARNING nova.compute.manager [req-e941392c-f4a9-4208-9222-1cb8f027accc req-090798a0-4e07-422d-acd3-cc9b4df0fa6d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received unexpected event network-vif-unplugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.099 252257 DEBUG nova.virt.libvirt.host [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.100 252257 DEBUG nova.virt.libvirt.host [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.103 252257 DEBUG nova.virt.libvirt.host [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.104 252257 DEBUG nova.virt.libvirt.host [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.105 252257 DEBUG nova.virt.libvirt.driver [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.106 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.106 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.107 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.107 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.108 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.108 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.108 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.109 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.109 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.109 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.110 252257 DEBUG nova.virt.hardware [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.110 252257 DEBUG nova.objects.instance [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ec7136d6-4735-49a5-b788-f051bf09a83d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.138 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403761.137704, 2cab3642-38d5-47c8-82d7-93f626786383 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.138 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.141 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.145 252257 DEBUG oslo_concurrency.processutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.180 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.182 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.186 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.195 252257 INFO nova.virt.libvirt.driver [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Instance spawned successfully.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.195 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.218 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.219 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403761.1379678, 2cab3642-38d5-47c8-82d7-93f626786383 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.219 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.227 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.228 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.228 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.229 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.229 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.229 252257 DEBUG nova.virt.libvirt.driver [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.236 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.240 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403761.144898, 2cab3642-38d5-47c8-82d7-93f626786383 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.240 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.287 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.293 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.301 252257 DEBUG nova.network.neutron [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Updated VIF entry in instance network info cache for port 7df15ed0-1865-4623-916d-c3dcbbbe020e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.302 252257 DEBUG nova.network.neutron [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Updating instance_info_cache with network_info: [{"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.330 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.333 252257 DEBUG oslo_concurrency.lockutils [req-cb88687c-e114-4872-8545-616361de77be req-e4d1cacf-8bd0-4022-8fae-674a8b564439 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-2cab3642-38d5-47c8-82d7-93f626786383" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.347 252257 INFO nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Took 8.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.348 252257 DEBUG nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.413 252257 INFO nova.compute.manager [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Took 10.21 seconds to build instance.#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.429 252257 DEBUG oslo_concurrency.lockutils [None req-0b00be25-40f0-49be-bfec-68590e1662cb 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:21 np0005539563 podman[310477]: 2025-11-29 08:09:21.461021909 +0000 UTC m=+0.050643754 container create 4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:09:21 np0005539563 systemd[1]: Started libpod-conmon-4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d.scope.
Nov 29 03:09:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:21 np0005539563 podman[310477]: 2025-11-29 08:09:21.4359644 +0000 UTC m=+0.025586585 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aee2f92641600ebb1bceaba3da9e63a425fb210b8d8fd28c197f7628ee6b73d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:21 np0005539563 podman[310477]: 2025-11-29 08:09:21.547315057 +0000 UTC m=+0.136936912 container init 4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:09:21 np0005539563 podman[310477]: 2025-11-29 08:09:21.553278518 +0000 UTC m=+0.142900363 container start 4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:09:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [NOTICE]   (310497) : New worker (310499) forked
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [NOTICE]   (310497) : Loading success.
Nov 29 03:09:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168310815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.592 252257 DEBUG oslo_concurrency.processutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.627 252257 DEBUG oslo_concurrency.processutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.636 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 84902e3f-6e9d-45fd-88b9-3e367b4e1870 in datapath 953f39e8-83db-4773-801b-104c949c136d unbound from our chassis#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.638 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 953f39e8-83db-4773-801b-104c949c136d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.639 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[138edd78-a3c7-4b56-a073-94f940121b16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.639 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-953f39e8-83db-4773-801b-104c949c136d namespace which is not needed anymore#033[00m
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [NOTICE]   (309982) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [NOTICE]   (309982) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [WARNING]  (309982) : Exiting Master process...
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [WARNING]  (309982) : Exiting Master process...
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [ALERT]    (309982) : Current worker (309991) exited with code 143 (Terminated)
Nov 29 03:09:21 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[309978]: [WARNING]  (309982) : All workers exited. Exiting... (0)
Nov 29 03:09:21 np0005539563 systemd[1]: libpod-a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a.scope: Deactivated successfully.
Nov 29 03:09:21 np0005539563 podman[310546]: 2025-11-29 08:09:21.787780292 +0000 UTC m=+0.055324900 container died a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7c9d0251934ea6a5d06b7bea7a43e5c9aeb0dc4fd4803e54eaf300506c12f8a3-merged.mount: Deactivated successfully.
Nov 29 03:09:21 np0005539563 podman[310546]: 2025-11-29 08:09:21.834471007 +0000 UTC m=+0.102015595 container cleanup a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:09:21 np0005539563 systemd[1]: libpod-conmon-a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a.scope: Deactivated successfully.
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 podman[310591]: 2025-11-29 08:09:21.897857074 +0000 UTC m=+0.043105799 container remove a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.903 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52807990-52cf-4a20-b36f-8684b2ea589c]: (4, ('Sat Nov 29 08:09:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d (a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a)\na8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a\nSat Nov 29 08:09:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d (a8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a)\na8a96b0140718ca5a5667f474928ad55d77934eefa726cd7a75b5b9cfaa7b16a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.904 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[969ed719-cd7e-4df0-8e50-aa419d47ec49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.905 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap953f39e8-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.907 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 kernel: tap953f39e8-80: left promiscuous mode
Nov 29 03:09:21 np0005539563 nova_compute[252253]: 2025-11-29 08:09:21.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.931 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f5f24e-1205-40d4-bf25-b2721bf18795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.950 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6273ce04-c1c3-456f-ac74-b31a3b9a381c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.951 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6b1a44-cd66-4bcd-8312-8f618773566d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.964 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4af5c9bd-77c2-4a06-ad54-755148de9086]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672016, 'reachable_time': 28646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310606, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.967 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-953f39e8-83db-4773-801b-104c949c136d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:21.967 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a9752c07-70b7-4f24-a1e1-6e571974fd21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:21 np0005539563 systemd[1]: run-netns-ovnmeta\x2d953f39e8\x2d83db\x2d4773\x2d801b\x2d104c949c136d.mount: Deactivated successfully.
Nov 29 03:09:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848419887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:22.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.066 252257 DEBUG oslo_concurrency.processutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.067 252257 DEBUG nova.virt.libvirt.vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-65778651',display_name='tempest-InstanceActionsTestJSON-server-65778651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-65778651',id=93,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a717411af66b4c23a4cc35a3803ff3b6',ramdisk_id='',reservation_id='r-ypatsxef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1179662766',owner_user_name='tempest-InstanceActionsTestJSON-1179662766-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:20Z,user_data=None,user_id='59f60b6ae5304ccbbe873550b6e62e81',uuid=ec7136d6-4735-49a5-b788-f051bf09a83d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.068 252257 DEBUG nova.network.os_vif_util [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converting VIF {"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.068 252257 DEBUG nova.network.os_vif_util [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.7 MiB/s wr, 458 op/s
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.071 252257 DEBUG nova.objects.instance [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec7136d6-4735-49a5-b788-f051bf09a83d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.088 252257 DEBUG nova.virt.libvirt.driver [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <uuid>ec7136d6-4735-49a5-b788-f051bf09a83d</uuid>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <name>instance-0000005d</name>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:name>tempest-InstanceActionsTestJSON-server-65778651</nova:name>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:09:21</nova:creationTime>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:user uuid="59f60b6ae5304ccbbe873550b6e62e81">tempest-InstanceActionsTestJSON-1179662766-project-member</nova:user>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:project uuid="a717411af66b4c23a4cc35a3803ff3b6">tempest-InstanceActionsTestJSON-1179662766</nova:project>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <nova:port uuid="84902e3f-6e9d-45fd-88b9-3e367b4e1870">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <entry name="serial">ec7136d6-4735-49a5-b788-f051bf09a83d</entry>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <entry name="uuid">ec7136d6-4735-49a5-b788-f051bf09a83d</entry>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ec7136d6-4735-49a5-b788-f051bf09a83d_disk">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ec7136d6-4735-49a5-b788-f051bf09a83d_disk.config">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:6d:a2:98"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <target dev="tap84902e3f-6e"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d/console.log" append="off"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:09:22 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:09:22 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:09:22 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:09:22 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.089 252257 DEBUG nova.virt.libvirt.driver [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.089 252257 DEBUG nova.virt.libvirt.driver [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.090 252257 DEBUG nova.virt.libvirt.vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-65778651',display_name='tempest-InstanceActionsTestJSON-server-65778651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-65778651',id=93,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='a717411af66b4c23a4cc35a3803ff3b6',ramdisk_id='',reservation_id='r-ypatsxef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1179662766',owner_user_name='tempest-InstanceActionsTestJSON-1179662766-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:20Z,user_data=None,user_id='59f60b6ae5304ccbbe873550b6e62e81',uuid=ec7136d6-4735-49a5-b788-f051bf09a83d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.091 252257 DEBUG nova.network.os_vif_util [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converting VIF {"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.091 252257 DEBUG nova.network.os_vif_util [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.091 252257 DEBUG os_vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.092 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.093 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.095 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.095 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84902e3f-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.096 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84902e3f-6e, col_values=(('external_ids', {'iface-id': '84902e3f-6e9d-45fd-88b9-3e367b4e1870', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:a2:98', 'vm-uuid': 'ec7136d6-4735-49a5-b788-f051bf09a83d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.097 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.0982] manager: (tap84902e3f-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.099 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.105 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.105 252257 INFO os_vif [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e')#033[00m
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.1703] manager: (tap84902e3f-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Nov 29 03:09:22 np0005539563 systemd-udevd[310357]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:22 np0005539563 kernel: tap84902e3f-6e: entered promiscuous mode
Nov 29 03:09:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:22Z|00354|binding|INFO|Claiming lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 for this chassis.
Nov 29 03:09:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:22Z|00355|binding|INFO|84902e3f-6e9d-45fd-88b9-3e367b4e1870: Claiming fa:16:3e:6d:a2:98 10.100.0.9
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.175 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.183 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:a2:98 10.100.0.9'], port_security=['fa:16:3e:6d:a2:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ec7136d6-4735-49a5-b788-f051bf09a83d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-953f39e8-83db-4773-801b-104c949c136d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a717411af66b4c23a4cc35a3803ff3b6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5ac20701-06f7-4870-8879-14fab3d0b2e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92eecc45-e1ef-47f4-9b35-addf481d012c, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=84902e3f-6e9d-45fd-88b9-3e367b4e1870) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.184 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 84902e3f-6e9d-45fd-88b9-3e367b4e1870 in datapath 953f39e8-83db-4773-801b-104c949c136d bound to our chassis#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.185 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 953f39e8-83db-4773-801b-104c949c136d#033[00m
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.1883] device (tap84902e3f-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.1899] device (tap84902e3f-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.195 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cce1290e-cdb1-4305-a841-6db7d4d1e037]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.196 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap953f39e8-81 in ovnmeta-953f39e8-83db-4773-801b-104c949c136d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.197 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap953f39e8-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.197 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e248dcb4-f553-4a1a-a306-5639b79b8381]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.198 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ba88e-37d9-4dda-826e-d400ef9baf1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:22Z|00356|binding|INFO|Setting lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 ovn-installed in OVS
Nov 29 03:09:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:22Z|00357|binding|INFO|Setting lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 up in Southbound
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.200 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.201 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 systemd-machined[213024]: New machine qemu-41-instance-0000005d.
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.214 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[d7da1000-2a22-4120-8d53-def4f414ddf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 systemd[1]: Started Virtual Machine qemu-41-instance-0000005d.
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.244 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fea04c99-c1b4-4c6a-89a1-a8af2086924c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.284 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d01e99a7-7a4f-43d4-a646-1718dacb9acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.2930] manager: (tap953f39e8-80): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.292 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[84556c0f-4546-4e45-91da-9a1f741fd8a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.340 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[210844b8-b8ea-492a-929f-be16db774df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.344 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5922ac-ea18-4047-a47d-ff6c4c625889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.3826] device (tap953f39e8-80): carrier: link connected
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.395 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4d299422-6589-492c-96a4-06929297fec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.418 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[80f5866d-8cd3-46a5-a56e-fb45855ec239]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap953f39e8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:7f:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673015, 'reachable_time': 28259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310640, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.437 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[969ea917-cd1a-44c5-89e2-61652dfb7bb2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:7f3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673015, 'tstamp': 673015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310641, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.458 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[74919d81-e47f-4dc4-9db7-38ce014129f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap953f39e8-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:7f:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673015, 'reachable_time': 28259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310642, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.498 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a8dd3ad4-1a1e-4cb6-9d57-9f2a37047628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.597 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[40063f3c-8040-4245-9e01-4361cc28fef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.602 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap953f39e8-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.602 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.603 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap953f39e8-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 NetworkManager[48981]: <info>  [1764403762.6058] manager: (tap953f39e8-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Nov 29 03:09:22 np0005539563 kernel: tap953f39e8-80: entered promiscuous mode
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.613 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap953f39e8-80, col_values=(('external_ids', {'iface-id': 'f4046fb2-2f0e-4fb2-89d3-7261e94c38a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:22Z|00358|binding|INFO|Releasing lport f4046fb2-2f0e-4fb2-89d3-7261e94c38a3 from this chassis (sb_readonly=0)
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.648 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.651 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/953f39e8-83db-4773-801b-104c949c136d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/953f39e8-83db-4773-801b-104c949c136d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.652 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d597d74c-50f6-453b-93eb-e2fbd6882df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.653 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-953f39e8-83db-4773-801b-104c949c136d
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/953f39e8-83db-4773-801b-104c949c136d.pid.haproxy
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 953f39e8-83db-4773-801b-104c949c136d
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:22.656 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'env', 'PROCESS_TAG=haproxy-953f39e8-83db-4773-801b-104c949c136d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/953f39e8-83db-4773-801b-104c949c136d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.866 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for ec7136d6-4735-49a5-b788-f051bf09a83d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.867 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403762.8661134, ec7136d6-4735-49a5-b788-f051bf09a83d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.867 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.869 252257 DEBUG nova.compute.manager [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.874 252257 INFO nova.virt.libvirt.driver [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance rebooted successfully.#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.874 252257 DEBUG nova.compute.manager [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:22.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.907 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.910 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.941 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.942 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403762.868103, ec7136d6-4735-49a5-b788-f051bf09a83d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.942 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.949 252257 DEBUG oslo_concurrency.lockutils [None req-e85ff5b7-2f33-43aa-87b5-95df83ecdca0 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.966 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:22 np0005539563 nova_compute[252253]: 2025-11-29 08:09:22.969 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:23 np0005539563 podman[310715]: 2025-11-29 08:09:23.026225466 +0000 UTC m=+0.044979690 container create 5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:09:23 np0005539563 systemd[1]: Started libpod-conmon-5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d.scope.
Nov 29 03:09:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3ba73dc81bc91100d1fca9dabbab39fa82519e2ffaf176cb195d652124749c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:23 np0005539563 podman[310715]: 2025-11-29 08:09:23.002496952 +0000 UTC m=+0.021251196 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:23 np0005539563 podman[310715]: 2025-11-29 08:09:23.110395466 +0000 UTC m=+0.129149690 container init 5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:23 np0005539563 podman[310715]: 2025-11-29 08:09:23.126155373 +0000 UTC m=+0.144909587 container start 5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.160 252257 DEBUG nova.compute.manager [req-c9d0d1a4-36e3-458a-886e-cb1c746332a6 req-7d89cffa-04f7-4b48-93d9-852a2bd8b569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.160 252257 DEBUG oslo_concurrency.lockutils [req-c9d0d1a4-36e3-458a-886e-cb1c746332a6 req-7d89cffa-04f7-4b48-93d9-852a2bd8b569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.161 252257 DEBUG oslo_concurrency.lockutils [req-c9d0d1a4-36e3-458a-886e-cb1c746332a6 req-7d89cffa-04f7-4b48-93d9-852a2bd8b569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.161 252257 DEBUG oslo_concurrency.lockutils [req-c9d0d1a4-36e3-458a-886e-cb1c746332a6 req-7d89cffa-04f7-4b48-93d9-852a2bd8b569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.161 252257 DEBUG nova.compute.manager [req-c9d0d1a4-36e3-458a-886e-cb1c746332a6 req-7d89cffa-04f7-4b48-93d9-852a2bd8b569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] No waiting events found dispatching network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.162 252257 WARNING nova.compute.manager [req-c9d0d1a4-36e3-458a-886e-cb1c746332a6 req-7d89cffa-04f7-4b48-93d9-852a2bd8b569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received unexpected event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:23 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [NOTICE]   (310734) : New worker (310736) forked
Nov 29 03:09:23 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [NOTICE]   (310734) : Loading success.
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.207 252257 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.207 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.207 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.207 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.207 252257 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.208 252257 WARNING nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received unexpected event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.208 252257 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.208 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.209 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.210 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.210 252257 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.210 252257 WARNING nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received unexpected event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.211 252257 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.211 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.211 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.211 252257 DEBUG oslo_concurrency.lockutils [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.211 252257 DEBUG nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:23 np0005539563 nova_compute[252253]: 2025-11-29 08:09:23.211 252257 WARNING nova.compute.manager [req-1368bd59-f9a3-442a-b006-1183f42b509b req-d6582ff7-842f-4758-99c7-c79377a56efc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received unexpected event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004957695014423971 of space, bias 1.0, pg target 1.4873085043271914 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:09:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:24.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.4 MiB/s wr, 450 op/s
Nov 29 03:09:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:24.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.407 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.408 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.409 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.410 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.410 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.413 252257 INFO nova.compute.manager [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Terminating instance
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.415 252257 DEBUG nova.compute.manager [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:09:25 np0005539563 kernel: tap84902e3f-6e (unregistering): left promiscuous mode
Nov 29 03:09:25 np0005539563 NetworkManager[48981]: <info>  [1764403765.4583] device (tap84902e3f-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:25Z|00359|binding|INFO|Releasing lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 from this chassis (sb_readonly=0)
Nov 29 03:09:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:25Z|00360|binding|INFO|Setting lport 84902e3f-6e9d-45fd-88b9-3e367b4e1870 down in Southbound
Nov 29 03:09:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:25Z|00361|binding|INFO|Removing iface tap84902e3f-6e ovn-installed in OVS
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.475 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.478 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.485 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:a2:98 10.100.0.9'], port_security=['fa:16:3e:6d:a2:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ec7136d6-4735-49a5-b788-f051bf09a83d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-953f39e8-83db-4773-801b-104c949c136d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a717411af66b4c23a4cc35a3803ff3b6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5ac20701-06f7-4870-8879-14fab3d0b2e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92eecc45-e1ef-47f4-9b35-addf481d012c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=84902e3f-6e9d-45fd-88b9-3e367b4e1870) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.486 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 84902e3f-6e9d-45fd-88b9-3e367b4e1870 in datapath 953f39e8-83db-4773-801b-104c949c136d unbound from our chassis
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.488 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 953f39e8-83db-4773-801b-104c949c136d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.488 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0354b71a-b572-488b-80ca-55d05a314d9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.489 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-953f39e8-83db-4773-801b-104c949c136d namespace which is not needed anymore
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 29 03:09:25 np0005539563 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005d.scope: Consumed 3.290s CPU time.
Nov 29 03:09:25 np0005539563 systemd-machined[213024]: Machine qemu-41-instance-0000005d terminated.
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.655 252257 INFO nova.virt.libvirt.driver [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Instance destroyed successfully.
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.656 252257 DEBUG nova.objects.instance [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lazy-loading 'resources' on Instance uuid ec7136d6-4735-49a5-b788-f051bf09a83d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.676 252257 DEBUG nova.virt.libvirt.vif [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-65778651',display_name='tempest-InstanceActionsTestJSON-server-65778651',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-65778651',id=93,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a717411af66b4c23a4cc35a3803ff3b6',ramdisk_id='',reservation_id='r-ypatsxef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1179662766',owner_user_name='tempest-InstanceActionsTestJSON-1179662766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:22Z,user_data=None,user_id='59f60b6ae5304ccbbe873550b6e62e81',uuid=ec7136d6-4735-49a5-b788-f051bf09a83d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.676 252257 DEBUG nova.network.os_vif_util [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converting VIF {"id": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "address": "fa:16:3e:6d:a2:98", "network": {"id": "953f39e8-83db-4773-801b-104c949c136d", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2093835153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a717411af66b4c23a4cc35a3803ff3b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84902e3f-6e", "ovs_interfaceid": "84902e3f-6e9d-45fd-88b9-3e367b4e1870", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.677 252257 DEBUG nova.network.os_vif_util [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.677 252257 DEBUG os_vif [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:09:25 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [NOTICE]   (310734) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:25 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [NOTICE]   (310734) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:25 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [WARNING]  (310734) : Exiting Master process...
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.679 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84902e3f-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:25 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [ALERT]    (310734) : Current worker (310736) exited with code 143 (Terminated)
Nov 29 03:09:25 np0005539563 neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d[310730]: [WARNING]  (310734) : All workers exited. Exiting... (0)
Nov 29 03:09:25 np0005539563 systemd[1]: libpod-5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d.scope: Deactivated successfully.
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:09:25 np0005539563 podman[310770]: 2025-11-29 08:09:25.689425431 +0000 UTC m=+0.074500000 container died 5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.690 252257 INFO os_vif [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:a2:98,bridge_name='br-int',has_traffic_filtering=True,id=84902e3f-6e9d-45fd-88b9-3e367b4e1870,network=Network(953f39e8-83db-4773-801b-104c949c136d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84902e3f-6e')
Nov 29 03:09:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:25Z|00362|binding|INFO|Releasing lport f4046fb2-2f0e-4fb2-89d3-7261e94c38a3 from this chassis (sb_readonly=0)
Nov 29 03:09:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:25Z|00363|binding|INFO|Releasing lport 25db3838-7764-409c-8606-f0c90f681664 from this chassis (sb_readonly=0)
Nov 29 03:09:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5f3ba73dc81bc91100d1fca9dabbab39fa82519e2ffaf176cb195d652124749c-merged.mount: Deactivated successfully.
Nov 29 03:09:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:25 np0005539563 podman[310770]: 2025-11-29 08:09:25.740212817 +0000 UTC m=+0.125287386 container cleanup 5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:25 np0005539563 systemd[1]: libpod-conmon-5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d.scope: Deactivated successfully.
Nov 29 03:09:25 np0005539563 podman[310822]: 2025-11-29 08:09:25.80675987 +0000 UTC m=+0.044744863 container remove 5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.814 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f7b258-9e2e-47a6-8643-12d91a8dc3b2]: (4, ('Sat Nov 29 08:09:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d (5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d)\n5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d\nSat Nov 29 08:09:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-953f39e8-83db-4773-801b-104c949c136d (5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d)\n5848fbc82ea13306ca3d8a0940a9ddc671cceca4be686193e357eefbcb13c72d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.816 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81e1e35c-6c79-4ded-9ecf-b0c0508f39b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.817 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap953f39e8-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.819 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 kernel: tap953f39e8-80: left promiscuous mode
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.890 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec81e6a-0423-4423-a756-201b1dca7cf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.898 252257 DEBUG nova.compute.manager [req-c68a9fae-4dcd-4b6a-9054-22d9d9d7d8c1 req-da30caae-42a5-4f8e-8f6f-7a9e196615a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-unplugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.898 252257 DEBUG oslo_concurrency.lockutils [req-c68a9fae-4dcd-4b6a-9054-22d9d9d7d8c1 req-da30caae-42a5-4f8e-8f6f-7a9e196615a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.899 252257 DEBUG oslo_concurrency.lockutils [req-c68a9fae-4dcd-4b6a-9054-22d9d9d7d8c1 req-da30caae-42a5-4f8e-8f6f-7a9e196615a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.899 252257 DEBUG oslo_concurrency.lockutils [req-c68a9fae-4dcd-4b6a-9054-22d9d9d7d8c1 req-da30caae-42a5-4f8e-8f6f-7a9e196615a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.899 252257 DEBUG nova.compute.manager [req-c68a9fae-4dcd-4b6a-9054-22d9d9d7d8c1 req-da30caae-42a5-4f8e-8f6f-7a9e196615a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-unplugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.899 252257 DEBUG nova.compute.manager [req-c68a9fae-4dcd-4b6a-9054-22d9d9d7d8c1 req-da30caae-42a5-4f8e-8f6f-7a9e196615a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-unplugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.899 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 nova_compute[252253]: 2025-11-29 08:09:25.911 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.915 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1bbb0633-bb66-4b1f-969a-f7e20dabc915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.916 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[323c4445-be52-49eb-b21c-b84add66dc7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.934 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fa97c5f3-3e9e-48af-983a-4d5a96249351]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673004, 'reachable_time': 25031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310838, 'error': None, 'target': 'ovnmeta-953f39e8-83db-4773-801b-104c949c136d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.944 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-953f39e8-83db-4773-801b-104c949c136d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:09:25 np0005539563 systemd[1]: run-netns-ovnmeta\x2d953f39e8\x2d83db\x2d4773\x2d801b\x2d104c949c136d.mount: Deactivated successfully.
Nov 29 03:09:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:25.944 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[6f94bcb6-dd6e-4d3b-94e3-c67c26b5ab9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:26.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 233 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.6 MiB/s wr, 680 op/s
Nov 29 03:09:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.661 252257 INFO nova.virt.libvirt.driver [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Deleting instance files /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d_del
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.662 252257 INFO nova.virt.libvirt.driver [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Deletion of /var/lib/nova/instances/ec7136d6-4735-49a5-b788-f051bf09a83d_del complete
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.746 252257 INFO nova.compute.manager [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Took 1.33 seconds to destroy the instance on the hypervisor.
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.747 252257 DEBUG oslo.service.loopingcall [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.748 252257 DEBUG nova.compute.manager [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.748 252257 DEBUG nova.network.neutron [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:09:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:26.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:26 np0005539563 nova_compute[252253]: 2025-11-29 08:09:26.884 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:27 np0005539563 nova_compute[252253]: 2025-11-29 08:09:27.614 252257 DEBUG nova.network.neutron [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:27 np0005539563 nova_compute[252253]: 2025-11-29 08:09:27.640 252257 INFO nova.compute.manager [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Took 0.89 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:27 np0005539563 nova_compute[252253]: 2025-11-29 08:09:27.695 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:27 np0005539563 nova_compute[252253]: 2025-11-29 08:09:27.696 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:27 np0005539563 nova_compute[252253]: 2025-11-29 08:09:27.772 252257 DEBUG nova.compute.manager [req-490d35af-5ed2-4490-9678-8d02ebfb4148 req-38d3f8d0-b023-4015-9623-ea9f191a2878 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-deleted-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:27 np0005539563 nova_compute[252253]: 2025-11-29 08:09:27.794 252257 DEBUG oslo_concurrency.processutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 233 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 134 KiB/s wr, 496 op/s
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.091 252257 DEBUG nova.compute.manager [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.093 252257 DEBUG oslo_concurrency.lockutils [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.093 252257 DEBUG oslo_concurrency.lockutils [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.094 252257 DEBUG oslo_concurrency.lockutils [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.094 252257 DEBUG nova.compute.manager [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] No waiting events found dispatching network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.094 252257 WARNING nova.compute.manager [req-68fcb32b-a973-460c-9060-8f0d0bb7ede4 req-faec00be-a272-4bc2-9088-d1a5d495fd22 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Received unexpected event network-vif-plugged-84902e3f-6e9d-45fd-88b9-3e367b4e1870 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:09:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868038605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.306 252257 DEBUG oslo_concurrency.processutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.319 252257 DEBUG nova.compute.provider_tree [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.351 252257 DEBUG nova.scheduler.client.report [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.401 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.477 252257 INFO nova.scheduler.client.report [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Deleted allocations for instance ec7136d6-4735-49a5-b788-f051bf09a83d#033[00m
Nov 29 03:09:28 np0005539563 nova_compute[252253]: 2025-11-29 08:09:28.563 252257 DEBUG oslo_concurrency.lockutils [None req-c36ffc77-29cc-45d0-9502-ba5d38784567 59f60b6ae5304ccbbe873550b6e62e81 a717411af66b4c23a4cc35a3803ff3b6 - - default default] Lock "ec7136d6-4735-49a5-b788-f051bf09a83d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:28.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:29 np0005539563 nova_compute[252253]: 2025-11-29 08:09:29.321 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:29.323 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:29.324 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:09:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:29.327 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:30.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 205 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 135 KiB/s wr, 538 op/s
Nov 29 03:09:30 np0005539563 nova_compute[252253]: 2025-11-29 08:09:30.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:31 np0005539563 nova_compute[252253]: 2025-11-29 08:09:31.317 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403756.315597, ca260730-278e-41c5-9aae-4825e9497dcc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:31 np0005539563 nova_compute[252253]: 2025-11-29 08:09:31.317 252257 INFO nova.compute.manager [-] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:09:31 np0005539563 nova_compute[252253]: 2025-11-29 08:09:31.342 252257 DEBUG nova.compute.manager [None req-46c6fbd8-687e-465b-a4f5-bbcdf7cf6603 - - - - - -] [instance: ca260730-278e-41c5-9aae-4825e9497dcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:31 np0005539563 nova_compute[252253]: 2025-11-29 08:09:31.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.022 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.023 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.024 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.024 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.026 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.027 252257 INFO nova.compute.manager [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Terminating instance#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.029 252257 DEBUG nova.compute.manager [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:09:32 np0005539563 kernel: tap7df15ed0-18 (unregistering): left promiscuous mode
Nov 29 03:09:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 134 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 43 KiB/s wr, 503 op/s
Nov 29 03:09:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:32.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:32 np0005539563 NetworkManager[48981]: <info>  [1764403772.0842] device (tap7df15ed0-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:32Z|00364|binding|INFO|Releasing lport 7df15ed0-1865-4623-916d-c3dcbbbe020e from this chassis (sb_readonly=0)
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:32Z|00365|binding|INFO|Setting lport 7df15ed0-1865-4623-916d-c3dcbbbe020e down in Southbound
Nov 29 03:09:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:32Z|00366|binding|INFO|Removing iface tap7df15ed0-18 ovn-installed in OVS
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.097 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.102 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:4c:ea 10.100.0.12'], port_security=['fa:16:3e:f7:4c:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2cab3642-38d5-47c8-82d7-93f626786383', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3e18973b82a4071bdc187ede8c1afb8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1da66fc3-7f9f-49ea-a35d-351f9e777793', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ed36bb-bd1a-404c-bed2-6bc7af2884c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7df15ed0-1865-4623-916d-c3dcbbbe020e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.103 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7df15ed0-1865-4623-916d-c3dcbbbe020e in datapath 4c0a06e3-8d77-4f81-85b4-47e57dafff04 unbound from our chassis#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.104 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c0a06e3-8d77-4f81-85b4-47e57dafff04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.106 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[987fe6b8-8e94-4d28-8f3d-062f1d67eeda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.107 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 namespace which is not needed anymore#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.123 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 29 03:09:32 np0005539563 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000061.scope: Consumed 11.349s CPU time.
Nov 29 03:09:32 np0005539563 systemd-machined[213024]: Machine qemu-40-instance-00000061 terminated.
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.249 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.253 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.269 252257 INFO nova.virt.libvirt.driver [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Instance destroyed successfully.#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.270 252257 DEBUG nova.objects.instance [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lazy-loading 'resources' on Instance uuid 2cab3642-38d5-47c8-82d7-93f626786383 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.307 252257 DEBUG nova.virt.libvirt.vif [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-313770805',display_name='tempest-ListServersNegativeTestJSON-server-313770805-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-313770805-3',id=97,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-29T08:09:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3e18973b82a4071bdc187ede8c1afb8',ramdisk_id='',reservation_id='r-enaoqmph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1935238201',owner_user_name='tempest-ListServersNegativeTestJSON-1935238201-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:21Z,user_data=None,user_id='95361d3a276f4d7f81e9f9a4bcafd2ea',uuid=2cab3642-38d5-47c8-82d7-93f626786383,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.307 252257 DEBUG nova.network.os_vif_util [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converting VIF {"id": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "address": "fa:16:3e:f7:4c:ea", "network": {"id": "4c0a06e3-8d77-4f81-85b4-47e57dafff04", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-147553301-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3e18973b82a4071bdc187ede8c1afb8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7df15ed0-18", "ovs_interfaceid": "7df15ed0-1865-4623-916d-c3dcbbbe020e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.308 252257 DEBUG nova.network.os_vif_util [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.308 252257 DEBUG os_vif [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.310 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7df15ed0-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.311 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.314 252257 INFO os_vif [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:4c:ea,bridge_name='br-int',has_traffic_filtering=True,id=7df15ed0-1865-4623-916d-c3dcbbbe020e,network=Network(4c0a06e3-8d77-4f81-85b4-47e57dafff04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7df15ed0-18')#033[00m
Nov 29 03:09:32 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [NOTICE]   (310497) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:32 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [NOTICE]   (310497) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:32 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [WARNING]  (310497) : Exiting Master process...
Nov 29 03:09:32 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [ALERT]    (310497) : Current worker (310499) exited with code 143 (Terminated)
Nov 29 03:09:32 np0005539563 neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04[310493]: [WARNING]  (310497) : All workers exited. Exiting... (0)
Nov 29 03:09:32 np0005539563 systemd[1]: libpod-4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d.scope: Deactivated successfully.
Nov 29 03:09:32 np0005539563 podman[310940]: 2025-11-29 08:09:32.34748508 +0000 UTC m=+0.152537053 container died 4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3aee2f92641600ebb1bceaba3da9e63a425fb210b8d8fd28c197f7628ee6b73d-merged.mount: Deactivated successfully.
Nov 29 03:09:32 np0005539563 podman[310940]: 2025-11-29 08:09:32.808106771 +0000 UTC m=+0.613158744 container cleanup 4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:32 np0005539563 podman[310998]: 2025-11-29 08:09:32.865632009 +0000 UTC m=+0.036865789 container remove 4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.870 252257 DEBUG nova.compute.manager [req-ada4364e-7014-455a-b46f-85748be51be3 req-55c52a71-4799-4a41-8533-b8263657d9d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-vif-unplugged-7df15ed0-1865-4623-916d-c3dcbbbe020e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.870 252257 DEBUG oslo_concurrency.lockutils [req-ada4364e-7014-455a-b46f-85748be51be3 req-55c52a71-4799-4a41-8533-b8263657d9d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.871 252257 DEBUG oslo_concurrency.lockutils [req-ada4364e-7014-455a-b46f-85748be51be3 req-55c52a71-4799-4a41-8533-b8263657d9d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.871 252257 DEBUG oslo_concurrency.lockutils [req-ada4364e-7014-455a-b46f-85748be51be3 req-55c52a71-4799-4a41-8533-b8263657d9d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.871 252257 DEBUG nova.compute.manager [req-ada4364e-7014-455a-b46f-85748be51be3 req-55c52a71-4799-4a41-8533-b8263657d9d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] No waiting events found dispatching network-vif-unplugged-7df15ed0-1865-4623-916d-c3dcbbbe020e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.871 252257 DEBUG nova.compute.manager [req-ada4364e-7014-455a-b46f-85748be51be3 req-55c52a71-4799-4a41-8533-b8263657d9d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-vif-unplugged-7df15ed0-1865-4623-916d-c3dcbbbe020e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.873 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1ffcc6fc-4407-4617-a83b-61b7cca8b20d]: (4, ('Sat Nov 29 08:09:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 (4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d)\n4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d\nSat Nov 29 08:09:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 (4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d)\n4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.875 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf71c9a-162a-4546-bccd-7f5945a916fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.876 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c0a06e3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 kernel: tap4c0a06e3-80: left promiscuous mode
Nov 29 03:09:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:32.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:32 np0005539563 nova_compute[252253]: 2025-11-29 08:09:32.893 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.897 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[27d778ad-dc4e-4172-819b-725dcc445e7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.914 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4443225f-59ff-4e43-a766-f93146da1209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.915 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ccae6a35-b4bd-4887-994f-3108142f8478]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 systemd[1]: libpod-conmon-4398c627c8a9c08e13073fc7eb451d81875d5f6a8735f432cc2383ba18ca223d.scope: Deactivated successfully.
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.934 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[98779ca6-33fc-4f69-bbc2-65cb8610df4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672859, 'reachable_time': 36402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311014, 'error': None, 'target': 'ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:32 np0005539563 systemd[1]: run-netns-ovnmeta\x2d4c0a06e3\x2d8d77\x2d4f81\x2d85b4\x2d47e57dafff04.mount: Deactivated successfully.
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.937 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c0a06e3-8d77-4f81-85b4-47e57dafff04 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:32.937 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[97d0e5e3-7907-442d-b8fd-118c68dc1eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:33 np0005539563 nova_compute[252253]: 2025-11-29 08:09:33.938 252257 INFO nova.virt.libvirt.driver [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Deleting instance files /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383_del#033[00m
Nov 29 03:09:33 np0005539563 nova_compute[252253]: 2025-11-29 08:09:33.940 252257 INFO nova.virt.libvirt.driver [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Deletion of /var/lib/nova/instances/2cab3642-38d5-47c8-82d7-93f626786383_del complete#033[00m
Nov 29 03:09:34 np0005539563 nova_compute[252253]: 2025-11-29 08:09:34.018 252257 INFO nova.compute.manager [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Took 1.99 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:09:34 np0005539563 nova_compute[252253]: 2025-11-29 08:09:34.019 252257 DEBUG oslo.service.loopingcall [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:09:34 np0005539563 nova_compute[252253]: 2025-11-29 08:09:34.019 252257 DEBUG nova.compute.manager [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:09:34 np0005539563 nova_compute[252253]: 2025-11-29 08:09:34.019 252257 DEBUG nova.network.neutron [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:09:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 134 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 41 KiB/s wr, 368 op/s
Nov 29 03:09:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:34.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.023 252257 DEBUG nova.compute.manager [req-e976c53f-3926-498e-bb4f-209c090a1450 req-13573f7f-e41e-4d1c-99db-175c102708a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.024 252257 DEBUG oslo_concurrency.lockutils [req-e976c53f-3926-498e-bb4f-209c090a1450 req-13573f7f-e41e-4d1c-99db-175c102708a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "2cab3642-38d5-47c8-82d7-93f626786383-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.024 252257 DEBUG oslo_concurrency.lockutils [req-e976c53f-3926-498e-bb4f-209c090a1450 req-13573f7f-e41e-4d1c-99db-175c102708a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.024 252257 DEBUG oslo_concurrency.lockutils [req-e976c53f-3926-498e-bb4f-209c090a1450 req-13573f7f-e41e-4d1c-99db-175c102708a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.025 252257 DEBUG nova.compute.manager [req-e976c53f-3926-498e-bb4f-209c090a1450 req-13573f7f-e41e-4d1c-99db-175c102708a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] No waiting events found dispatching network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.025 252257 WARNING nova.compute.manager [req-e976c53f-3926-498e-bb4f-209c090a1450 req-13573f7f-e41e-4d1c-99db-175c102708a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received unexpected event network-vif-plugged-7df15ed0-1865-4623-916d-c3dcbbbe020e for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.611 252257 DEBUG nova.network.neutron [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.642 252257 INFO nova.compute.manager [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Took 1.62 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.718 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.719 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:35 np0005539563 nova_compute[252253]: 2025-11-29 08:09:35.771 252257 DEBUG oslo_concurrency.processutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 74 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 43 KiB/s wr, 392 op/s
Nov 29 03:09:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:36.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968305457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.212 252257 DEBUG oslo_concurrency.processutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.220 252257 DEBUG nova.compute.provider_tree [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.243 252257 DEBUG nova.scheduler.client.report [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.281 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.318 252257 INFO nova.scheduler.client.report [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Deleted allocations for instance 2cab3642-38d5-47c8-82d7-93f626786383#033[00m
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.445 252257 DEBUG oslo_concurrency.lockutils [None req-38c8d011-7799-4689-b539-e8ab7adc352e 95361d3a276f4d7f81e9f9a4bcafd2ea e3e18973b82a4071bdc187ede8c1afb8 - - default default] Lock "2cab3642-38d5-47c8-82d7-93f626786383" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:36 np0005539563 nova_compute[252253]: 2025-11-29 08:09:36.926 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:37 np0005539563 nova_compute[252253]: 2025-11-29 08:09:37.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:38 np0005539563 nova_compute[252253]: 2025-11-29 08:09:38.027 252257 DEBUG nova.compute.manager [req-65d2a0e6-df0b-484e-a691-306b5d92ef56 req-5973c5f3-b8e0-4ec9-8c0c-adc31c0d5360 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Received event network-vif-deleted-7df15ed0-1865-4623-916d-c3dcbbbe020e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 74 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 5.0 KiB/s wr, 131 op/s
Nov 29 03:09:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:38.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 5.0 KiB/s wr, 132 op/s
Nov 29 03:09:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:40.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:40 np0005539563 nova_compute[252253]: 2025-11-29 08:09:40.653 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403765.6532204, ec7136d6-4735-49a5-b788-f051bf09a83d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:40 np0005539563 nova_compute[252253]: 2025-11-29 08:09:40.654 252257 INFO nova.compute.manager [-] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:09:40 np0005539563 nova_compute[252253]: 2025-11-29 08:09:40.683 252257 DEBUG nova.compute.manager [None req-f3b886c5-f42a-48a9-8fc7-839fa5db70ca - - - - - -] [instance: ec7136d6-4735-49a5-b788-f051bf09a83d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:41 np0005539563 nova_compute[252253]: 2025-11-29 08:09:41.055 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:41 np0005539563 nova_compute[252253]: 2025-11-29 08:09:41.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 4.0 KiB/s wr, 91 op/s
Nov 29 03:09:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:42 np0005539563 nova_compute[252253]: 2025-11-29 08:09:42.313 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:42 np0005539563 nova_compute[252253]: 2025-11-29 08:09:42.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:42.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.281 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.282 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.307 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.449 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.449 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.456 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.456 252257 INFO nova.compute.claims [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:09:43 np0005539563 nova_compute[252253]: 2025-11-29 08:09:43.554 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991044372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.044 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.051 252257 DEBUG nova.compute.provider_tree [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.077 252257 DEBUG nova.scheduler.client.report [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Nov 29 03:09:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:44.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.108 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.109 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.195 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.195 252257 DEBUG nova.network.neutron [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.232 252257 INFO nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.254 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.390 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.392 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.392 252257 INFO nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Creating image(s)#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.420 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.448 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.481 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.485 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.547 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.548 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.548 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.549 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.579 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.586 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.763 252257 DEBUG nova.policy [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7c90fe1780904a6098015abc66b38d9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.856 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:44 np0005539563 nova_compute[252253]: 2025-11-29 08:09:44.930 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] resizing rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.034 252257 DEBUG nova.objects.instance [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.059 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.060 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Ensure instance console log exists: /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.060 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.060 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.061 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.677 252257 DEBUG nova.network.neutron [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Successfully created port: 58e652e8-a3e6-48fb-af53-e7057ad02f02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.712 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.712 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.712 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:09:45 np0005539563 nova_compute[252253]: 2025-11-29 08:09:45.713 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 69 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 979 KiB/s wr, 78 op/s
Nov 29 03:09:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:46.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167156216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.151 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.307 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.308 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4471MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.308 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.309 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.418 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.419 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.419 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.457 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.677892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786678042, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2124, "num_deletes": 254, "total_data_size": 3498047, "memory_usage": 3552016, "flush_reason": "Manual Compaction"}
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.736 252257 DEBUG nova.network.neutron [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Successfully updated port: 58e652e8-a3e6-48fb-af53-e7057ad02f02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.759 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.759 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquired lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.759 252257 DEBUG nova.network.neutron [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786773445, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2072046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37746, "largest_seqno": 39869, "table_properties": {"data_size": 2064997, "index_size": 3675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19650, "raw_average_key_size": 21, "raw_value_size": 2048914, "raw_average_value_size": 2234, "num_data_blocks": 163, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403598, "oldest_key_time": 1764403598, "file_creation_time": 1764403786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 95620 microseconds, and 8017 cpu microseconds.
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.773586) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2072046 bytes OK
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.773668) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.781167) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.781196) EVENT_LOG_v1 {"time_micros": 1764403786781189, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.781214) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3489237, prev total WAL file size 3489237, number of live WAL files 2.
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.782362) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323630' seq:72057594037927935, type:22 .. '6D6772737461740031353132' seq:0, type:0; will stop at (end)
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2023KB)], [80(10MB)]
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403786782429, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13093040, "oldest_snapshot_seqno": -1}
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923109898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.891 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.896 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:46.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.906 252257 DEBUG nova.compute.manager [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-changed-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.907 252257 DEBUG nova.compute.manager [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Refreshing instance network info cache due to event network-changed-58e652e8-a3e6-48fb-af53-e7057ad02f02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.907 252257 DEBUG oslo_concurrency.lockutils [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.934 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.967 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.967 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:46 np0005539563 nova_compute[252253]: 2025-11-29 08:09:46.972 252257 DEBUG nova.network.neutron [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7280 keys, 10600747 bytes, temperature: kUnknown
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403787155006, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10600747, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10553107, "index_size": 28328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 187311, "raw_average_key_size": 25, "raw_value_size": 10424078, "raw_average_value_size": 1431, "num_data_blocks": 1125, "num_entries": 7280, "num_filter_entries": 7280, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.155313) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10600747 bytes
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.159229) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.1 rd, 28.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.4) write-amplify(5.1) OK, records in: 7715, records dropped: 435 output_compression: NoCompression
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.159283) EVENT_LOG_v1 {"time_micros": 1764403787159249, "job": 46, "event": "compaction_finished", "compaction_time_micros": 372678, "compaction_time_cpu_micros": 26162, "output_level": 6, "num_output_files": 1, "total_output_size": 10600747, "num_input_records": 7715, "num_output_records": 7280, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403787160081, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403787162498, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:46.782216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.162873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.163207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.163210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.163212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:47.163213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:47 np0005539563 nova_compute[252253]: 2025-11-29 08:09:47.268 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403772.2679732, 2cab3642-38d5-47c8-82d7-93f626786383 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:47 np0005539563 nova_compute[252253]: 2025-11-29 08:09:47.269 252257 INFO nova.compute.manager [-] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:09:47 np0005539563 podman[311280]: 2025-11-29 08:09:47.272072301 +0000 UTC m=+0.076993667 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:47 np0005539563 podman[311279]: 2025-11-29 08:09:47.272646177 +0000 UTC m=+0.075265881 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:09:47 np0005539563 nova_compute[252253]: 2025-11-29 08:09:47.288 252257 DEBUG nova.compute.manager [None req-56a38bf7-6830-407f-880e-8adc84a88df9 - - - - - -] [instance: 2cab3642-38d5-47c8-82d7-93f626786383] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:47 np0005539563 nova_compute[252253]: 2025-11-29 08:09:47.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:47 np0005539563 podman[311281]: 2025-11-29 08:09:47.321775778 +0000 UTC m=+0.111524042 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:09:47 np0005539563 nova_compute[252253]: 2025-11-29 08:09:47.966 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 88 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:09:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.709 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.710 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.711 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:48.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:48 np0005539563 nova_compute[252253]: 2025-11-29 08:09:48.998 252257 DEBUG nova.network.neutron [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Updating instance_info_cache with network_info: [{"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.021 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Releasing lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.021 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance network_info: |[{"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.022 252257 DEBUG oslo_concurrency.lockutils [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.022 252257 DEBUG nova.network.neutron [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Refreshing network info cache for port 58e652e8-a3e6-48fb-af53-e7057ad02f02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.024 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Start _get_guest_xml network_info=[{"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.030 252257 WARNING nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.038 252257 DEBUG nova.virt.libvirt.host [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.038 252257 DEBUG nova.virt.libvirt.host [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.042 252257 DEBUG nova.virt.libvirt.host [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.042 252257 DEBUG nova.virt.libvirt.host [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.043 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.044 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.044 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.044 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.044 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.044 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.044 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.045 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.045 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.045 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.045 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.045 252257 DEBUG nova.virt.hardware [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.048 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315773696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.556 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.584 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:49 np0005539563 nova_compute[252253]: 2025-11-29 08:09:49.588 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 88 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.107 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.107 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:50.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.136 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.213 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.214 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.220 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.220 252257 INFO nova.compute.claims [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.403 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312907837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.590 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.001s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.592 252257 DEBUG nova.virt.libvirt.vif [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-934074715',display_name='tempest-ListServerFiltersTestJSON-instance-934074715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-934074715',id=98,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-7qudlxck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:44Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=3f3e4ffb-a9ea-48f0-b9b8-54e436335953,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.592 252257 DEBUG nova.network.os_vif_util [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.593 252257 DEBUG nova.network.os_vif_util [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.595 252257 DEBUG nova.objects.instance [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.611 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <uuid>3f3e4ffb-a9ea-48f0-b9b8-54e436335953</uuid>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <name>instance-00000062</name>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-934074715</nova:name>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:09:49</nova:creationTime>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:user uuid="7c90fe1780904a6098015abc66b38d9d">tempest-ListServerFiltersTestJSON-207904478-project-member</nova:user>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:project uuid="baca94adaa5145a6b9cef930bff28fa4">tempest-ListServerFiltersTestJSON-207904478</nova:project>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <nova:port uuid="58e652e8-a3e6-48fb-af53-e7057ad02f02">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <entry name="serial">3f3e4ffb-a9ea-48f0-b9b8-54e436335953</entry>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <entry name="uuid">3f3e4ffb-a9ea-48f0-b9b8-54e436335953</entry>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7c:d0:29"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <target dev="tap58e652e8-a3"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/console.log" append="off"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:09:50 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:09:50 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:09:50 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:09:50 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.612 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Preparing to wait for external event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.612 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.612 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.613 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.613 252257 DEBUG nova.virt.libvirt.vif [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-934074715',display_name='tempest-ListServerFiltersTestJSON-instance-934074715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-934074715',id=98,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-7qudlxck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name=
'tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:44Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=3f3e4ffb-a9ea-48f0-b9b8-54e436335953,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.614 252257 DEBUG nova.network.os_vif_util [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.614 252257 DEBUG nova.network.os_vif_util [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.614 252257 DEBUG os_vif [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.615 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.615 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.616 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.621 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.621 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58e652e8-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.622 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58e652e8-a3, col_values=(('external_ids', {'iface-id': '58e652e8-a3e6-48fb-af53-e7057ad02f02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:d0:29', 'vm-uuid': '3f3e4ffb-a9ea-48f0-b9b8-54e436335953'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.623 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539563 NetworkManager[48981]: <info>  [1764403790.6242] manager: (tap58e652e8-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.628 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.629 252257 INFO os_vif [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3')#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.698 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.698 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.698 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] No VIF found with MAC fa:16:3e:7c:d0:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.699 252257 INFO nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Using config drive#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.721 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5507704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.884 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.886 252257 DEBUG nova.network.neutron [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Updated VIF entry in instance network info cache for port 58e652e8-a3e6-48fb-af53-e7057ad02f02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.886 252257 DEBUG nova.network.neutron [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Updating instance_info_cache with network_info: [{"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.892 252257 DEBUG nova.compute.provider_tree [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:09:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.906 252257 DEBUG nova.scheduler.client.report [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.909 252257 DEBUG oslo_concurrency.lockutils [req-b9a46078-c0ba-4d29-aa2b-76ceccd12451 req-a67867ba-545e-47a0-9e09-fa151ec8e792 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.924 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.925 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.974 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.975 252257 DEBUG nova.network.neutron [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:09:50 np0005539563 nova_compute[252253]: 2025-11-29 08:09:50.998 252257 INFO nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.018 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.174 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.176 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.176 252257 INFO nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Creating image(s)#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.204 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.234 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.278 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.282 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.312 252257 DEBUG nova.policy [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6907cc56541a4a7a9e563fe7c11cf669', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5edfd39e548a47c3b5602c79308928e7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.318 252257 INFO nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Creating config drive at /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/disk.config#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.324 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7fpo0nq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.363 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.365 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.366 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.366 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.403 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.408 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 68205641-041c-4c36-8811-7c3107533161_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.474 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7fpo0nq" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.506 252257 DEBUG nova.storage.rbd_utils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] rbd image 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.511 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/disk.config 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:51 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.715 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 68205641-041c-4c36-8811-7c3107533161_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.757 252257 DEBUG oslo_concurrency.processutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/disk.config 3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.758 252257 INFO nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Deleting local config drive /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.805 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] resizing rbd image 68205641-041c-4c36-8811-7c3107533161_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:09:51 np0005539563 kernel: tap58e652e8-a3: entered promiscuous mode
Nov 29 03:09:51 np0005539563 NetworkManager[48981]: <info>  [1764403791.8407] manager: (tap58e652e8-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Nov 29 03:09:51 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:51Z|00367|binding|INFO|Claiming lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 for this chassis.
Nov 29 03:09:51 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:51Z|00368|binding|INFO|58e652e8-a3e6-48fb-af53-e7057ad02f02: Claiming fa:16:3e:7c:d0:29 10.100.0.4
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.851 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d0:29 10.100.0.4'], port_security=['fa:16:3e:7c:d0:29 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f3e4ffb-a9ea-48f0-b9b8-54e436335953', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=58e652e8-a3e6-48fb-af53-e7057ad02f02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.853 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 58e652e8-a3e6-48fb-af53-e7057ad02f02 in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 bound to our chassis#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.855 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.868 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a837c001-ae21-4a2c-8806-7f645d9c96ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.869 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a0b70e3-11 in ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.871 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a0b70e3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.871 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41afb4-9022-48ce-ac2f-42c39f66012e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.872 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0b06e9-d0b9-452a-9833-fd90c39ebd53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 systemd-machined[213024]: New machine qemu-42-instance-00000062.
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.884 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb23fc5-5295-485a-a987-ce401224b34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.891 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:51 np0005539563 systemd[1]: Started Virtual Machine qemu-42-instance-00000062.
Nov 29 03:09:51 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:51Z|00369|binding|INFO|Setting lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 ovn-installed in OVS
Nov 29 03:09:51 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:51Z|00370|binding|INFO|Setting lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 up in Southbound
Nov 29 03:09:51 np0005539563 nova_compute[252253]: 2025-11-29 08:09:51.896 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.900 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d820cd13-5f77-43cf-acc9-eb0041af7b9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 systemd-udevd[311705]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.929 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e89cb489-ddb7-44b1-ab6a-7db7cd0fd35f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 NetworkManager[48981]: <info>  [1764403791.9309] device (tap58e652e8-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:51 np0005539563 NetworkManager[48981]: <info>  [1764403791.9324] device (tap58e652e8-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:51 np0005539563 systemd-udevd[311712]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:51 np0005539563 NetworkManager[48981]: <info>  [1764403791.9355] manager: (tap9a0b70e3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.935 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[05222336-5e1b-4bff-a100-9160635f87b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.963 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[669a5e5e-f494-4edc-8bdb-0f013e5ae073]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.966 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cdac65-c5af-4b4c-84da-b2d6d209cfff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:51 np0005539563 NetworkManager[48981]: <info>  [1764403791.9921] device (tap9a0b70e3-10): carrier: link connected
Nov 29 03:09:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:51.996 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8a089164-9964-4c40-860d-2c9a05961496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.004 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.014 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e2aed793-d231-4c25-821a-dc5d2975b51e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675976, 'reachable_time': 23958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311752, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.015 252257 DEBUG nova.objects.instance [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 68205641-041c-4c36-8811-7c3107533161 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.026 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9aba24a5-24f1-43a9-b17c-94d3ddaf5099]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:e973'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675976, 'tstamp': 675976}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311753, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.041 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0a470091-54ff-4dbb-8df9-98cb47716a63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675976, 'reachable_time': 23958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311754, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.043 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.043 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Ensure instance console log exists: /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.044 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.044 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.044 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.066 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9aefc9-e56e-4cb8-bb11-6b31b6aadc05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 169 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.0 MiB/s wr, 67 op/s
Nov 29 03:09:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.134 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5fc5eb-7210-43ad-95ce-24241262e11e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.135 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.136 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.136 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a0b70e3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.137 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539563 NetworkManager[48981]: <info>  [1764403792.1383] manager: (tap9a0b70e3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Nov 29 03:09:52 np0005539563 kernel: tap9a0b70e3-10: entered promiscuous mode
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.141 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a0b70e3-10, col_values=(('external_ids', {'iface-id': '564ded89-d5cd-4ed0-aa20-e32de45b6125'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:52Z|00371|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.143 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.143 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.144 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e744d323-b04c-45dc-8097-02769610ea8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.144 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:52.145 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'env', 'PROCESS_TAG=haproxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.518 252257 DEBUG nova.compute.manager [req-39c6ce1a-3433-4c05-a0f1-7a4be4e009fb req-cf25279a-f3bc-4eea-8afc-0f2f7812a0dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.519 252257 DEBUG oslo_concurrency.lockutils [req-39c6ce1a-3433-4c05-a0f1-7a4be4e009fb req-cf25279a-f3bc-4eea-8afc-0f2f7812a0dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.520 252257 DEBUG oslo_concurrency.lockutils [req-39c6ce1a-3433-4c05-a0f1-7a4be4e009fb req-cf25279a-f3bc-4eea-8afc-0f2f7812a0dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.520 252257 DEBUG oslo_concurrency.lockutils [req-39c6ce1a-3433-4c05-a0f1-7a4be4e009fb req-cf25279a-f3bc-4eea-8afc-0f2f7812a0dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.521 252257 DEBUG nova.compute.manager [req-39c6ce1a-3433-4c05-a0f1-7a4be4e009fb req-cf25279a-f3bc-4eea-8afc-0f2f7812a0dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Processing event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:52 np0005539563 podman[311826]: 2025-11-29 08:09:52.525417441 +0000 UTC m=+0.052541014 container create 76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:52 np0005539563 systemd[1]: Started libpod-conmon-76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95.scope.
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.568 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.570 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403792.567704, 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.570 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.572 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.576 252257 INFO nova.virt.libvirt.driver [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance spawned successfully.#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.577 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737c7f97c1adce8b73035235ba88f070261f26e0684c0207f917e92a42d1ee98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:52 np0005539563 podman[311826]: 2025-11-29 08:09:52.497349991 +0000 UTC m=+0.024473594 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.608 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.609 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.609 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.609 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.610 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.610 252257 DEBUG nova.virt.libvirt.driver [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.617 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.620 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:52 np0005539563 podman[311826]: 2025-11-29 08:09:52.627402295 +0000 UTC m=+0.154525888 container init 76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:09:52 np0005539563 podman[311826]: 2025-11-29 08:09:52.636262795 +0000 UTC m=+0.163386368 container start 76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:09:52 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [NOTICE]   (311847) : New worker (311849) forked
Nov 29 03:09:52 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [NOTICE]   (311847) : Loading success.
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.660 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.661 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403792.568649, 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.661 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.681 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.685 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403792.572076, 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.685 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.688 252257 INFO nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Took 8.30 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.688 252257 DEBUG nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.702 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.705 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.731 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.750 252257 INFO nova.compute.manager [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Took 9.33 seconds to build instance.#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.768 252257 DEBUG oslo_concurrency.lockutils [None req-7ec8ba66-9641-42da-beed-f5105bb1dafa 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:52 np0005539563 nova_compute[252253]: 2025-11-29 08:09:52.795 252257 DEBUG nova.network.neutron [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Successfully created port: 08c9e340-aae6-460f-ad1b-a58326f5bc32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:09:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 201 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 6.2 MiB/s wr, 98 op/s
Nov 29 03:09:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:54.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.288 252257 DEBUG nova.network.neutron [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Successfully updated port: 08c9e340-aae6-460f-ad1b-a58326f5bc32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.329 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "refresh_cache-68205641-041c-4c36-8811-7c3107533161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.330 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquired lock "refresh_cache-68205641-041c-4c36-8811-7c3107533161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.331 252257 DEBUG nova.network.neutron [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.442 252257 DEBUG nova.compute.manager [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-changed-08c9e340-aae6-460f-ad1b-a58326f5bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.443 252257 DEBUG nova.compute.manager [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Refreshing instance network info cache due to event network-changed-08c9e340-aae6-460f-ad1b-a58326f5bc32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.444 252257 DEBUG oslo_concurrency.lockutils [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-68205641-041c-4c36-8811-7c3107533161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.627344) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794627444, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 337, "num_deletes": 251, "total_data_size": 159726, "memory_usage": 167480, "flush_reason": "Manual Compaction"}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794630176, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 158259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39870, "largest_seqno": 40206, "table_properties": {"data_size": 156127, "index_size": 296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5619, "raw_average_key_size": 18, "raw_value_size": 151842, "raw_average_value_size": 509, "num_data_blocks": 13, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403787, "oldest_key_time": 1764403787, "file_creation_time": 1764403794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 2850 microseconds, and 1263 cpu microseconds.
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.630214) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 158259 bytes OK
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.630229) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.631944) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.631957) EVENT_LOG_v1 {"time_micros": 1764403794631953, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.631975) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 157380, prev total WAL file size 157380, number of live WAL files 2.
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.632421) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(154KB)], [83(10MB)]
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794632506, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10759006, "oldest_snapshot_seqno": -1}
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.643 252257 DEBUG nova.network.neutron [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.681 252257 DEBUG nova.compute.manager [req-b6fb9efa-dbf8-4511-a180-665f478ecae2 req-6c2cf3aa-7eb6-4c71-afe4-e5adc4817a3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.681 252257 DEBUG oslo_concurrency.lockutils [req-b6fb9efa-dbf8-4511-a180-665f478ecae2 req-6c2cf3aa-7eb6-4c71-afe4-e5adc4817a3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.682 252257 DEBUG oslo_concurrency.lockutils [req-b6fb9efa-dbf8-4511-a180-665f478ecae2 req-6c2cf3aa-7eb6-4c71-afe4-e5adc4817a3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.682 252257 DEBUG oslo_concurrency.lockutils [req-b6fb9efa-dbf8-4511-a180-665f478ecae2 req-6c2cf3aa-7eb6-4c71-afe4-e5adc4817a3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.682 252257 DEBUG nova.compute.manager [req-b6fb9efa-dbf8-4511-a180-665f478ecae2 req-6c2cf3aa-7eb6-4c71-afe4-e5adc4817a3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:54 np0005539563 nova_compute[252253]: 2025-11-29 08:09:54.683 252257 WARNING nova.compute.manager [req-b6fb9efa-dbf8-4511-a180-665f478ecae2 req-6c2cf3aa-7eb6-4c71-afe4-e5adc4817a3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received unexpected event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7068 keys, 8780924 bytes, temperature: kUnknown
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794775578, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8780924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8736410, "index_size": 25714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183676, "raw_average_key_size": 25, "raw_value_size": 8612676, "raw_average_value_size": 1218, "num_data_blocks": 1007, "num_entries": 7068, "num_filter_entries": 7068, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.776056) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8780924 bytes
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.777461) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.1 rd, 61.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(123.5) write-amplify(55.5) OK, records in: 7578, records dropped: 510 output_compression: NoCompression
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.777495) EVENT_LOG_v1 {"time_micros": 1764403794777478, "job": 48, "event": "compaction_finished", "compaction_time_micros": 143189, "compaction_time_cpu_micros": 39253, "output_level": 6, "num_output_files": 1, "total_output_size": 8780924, "num_input_records": 7578, "num_output_records": 7068, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794777724, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403794781429, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.632291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.781609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.781621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.781625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.781628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:09:54.781631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:09:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:55 np0005539563 nova_compute[252253]: 2025-11-29 08:09:55.625 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:55 np0005539563 nova_compute[252253]: 2025-11-29 08:09:55.986 252257 DEBUG nova.network.neutron [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Updating instance_info_cache with network_info: [{"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.014 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Releasing lock "refresh_cache-68205641-041c-4c36-8811-7c3107533161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.015 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Instance network_info: |[{"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.016 252257 DEBUG oslo_concurrency.lockutils [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-68205641-041c-4c36-8811-7c3107533161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.017 252257 DEBUG nova.network.neutron [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Refreshing network info cache for port 08c9e340-aae6-460f-ad1b-a58326f5bc32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.022 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Start _get_guest_xml network_info=[{"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.030 252257 WARNING nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.038 252257 DEBUG nova.virt.libvirt.host [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.039 252257 DEBUG nova.virt.libvirt.host [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.051 252257 DEBUG nova.virt.libvirt.host [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.052 252257 DEBUG nova.virt.libvirt.host [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.054 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.055 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.056 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.057 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.057 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.058 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.058 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.059 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.059 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.059 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.060 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.060 252257 DEBUG nova.virt.hardware [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.066 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 8.9 MiB/s wr, 178 op/s
Nov 29 03:09:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:09:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:56.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:09:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3309264447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.508 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.538 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.542 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:56.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:56 np0005539563 nova_compute[252253]: 2025-11-29 08:09:56.934 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044706291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.045 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.048 252257 DEBUG nova.virt.libvirt.vif [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-80363138',display_name='tempest-InstanceActionsNegativeTestJSON-server-80363138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-80363138',id=102,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5edfd39e548a47c3b5602c79308928e7',ramdisk_id='',reservation_id='r-9edy8q6z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-599544848',owner_user
_name='tempest-InstanceActionsNegativeTestJSON-599544848-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:51Z,user_data=None,user_id='6907cc56541a4a7a9e563fe7c11cf669',uuid=68205641-041c-4c36-8811-7c3107533161,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.048 252257 DEBUG nova.network.os_vif_util [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Converting VIF {"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.050 252257 DEBUG nova.network.os_vif_util [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.052 252257 DEBUG nova.objects.instance [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68205641-041c-4c36-8811-7c3107533161 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.069 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <uuid>68205641-041c-4c36-8811-7c3107533161</uuid>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <name>instance-00000066</name>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-80363138</nova:name>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:09:56</nova:creationTime>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:user uuid="6907cc56541a4a7a9e563fe7c11cf669">tempest-InstanceActionsNegativeTestJSON-599544848-project-member</nova:user>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:project uuid="5edfd39e548a47c3b5602c79308928e7">tempest-InstanceActionsNegativeTestJSON-599544848</nova:project>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <nova:port uuid="08c9e340-aae6-460f-ad1b-a58326f5bc32">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <entry name="serial">68205641-041c-4c36-8811-7c3107533161</entry>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <entry name="uuid">68205641-041c-4c36-8811-7c3107533161</entry>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/68205641-041c-4c36-8811-7c3107533161_disk">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/68205641-041c-4c36-8811-7c3107533161_disk.config">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:8a:34:0c"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <target dev="tap08c9e340-aa"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/console.log" append="off"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:09:57 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:09:57 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:09:57 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:09:57 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.070 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Preparing to wait for external event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.070 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.070 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.071 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.072 252257 DEBUG nova.virt.libvirt.vif [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-80363138',display_name='tempest-InstanceActionsNegativeTestJSON-server-80363138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-80363138',id=102,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5edfd39e548a47c3b5602c79308928e7',ramdisk_id='',reservation_id='r-9edy8q6z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-599544848',
owner_user_name='tempest-InstanceActionsNegativeTestJSON-599544848-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:51Z,user_data=None,user_id='6907cc56541a4a7a9e563fe7c11cf669',uuid=68205641-041c-4c36-8811-7c3107533161,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.072 252257 DEBUG nova.network.os_vif_util [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Converting VIF {"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.073 252257 DEBUG nova.network.os_vif_util [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.074 252257 DEBUG os_vif [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.075 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.076 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.079 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08c9e340-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.080 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08c9e340-aa, col_values=(('external_ids', {'iface-id': '08c9e340-aae6-460f-ad1b-a58326f5bc32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:34:0c', 'vm-uuid': '68205641-041c-4c36-8811-7c3107533161'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:57 np0005539563 NetworkManager[48981]: <info>  [1764403797.0839] manager: (tap08c9e340-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.087 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.093 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.095 252257 INFO os_vif [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa')#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.163 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.163 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.163 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] No VIF found with MAC fa:16:3e:8a:34:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.164 252257 INFO nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Using config drive#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.192 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.835 252257 INFO nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Creating config drive at /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/disk.config#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.840 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4c4l2yv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:57 np0005539563 nova_compute[252253]: 2025-11-29 08:09:57.976 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4c4l2yv" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.010 252257 DEBUG nova.storage.rbd_utils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] rbd image 68205641-041c-4c36-8811-7c3107533161_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.013 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/disk.config 68205641-041c-4c36-8811-7c3107533161_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.9 MiB/s wr, 192 op/s
Nov 29 03:09:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.204 252257 DEBUG oslo_concurrency.processutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/disk.config 68205641-041c-4c36-8811-7c3107533161_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.206 252257 INFO nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Deleting local config drive /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161/disk.config because it was imported into RBD.#033[00m
Nov 29 03:09:58 np0005539563 kernel: tap08c9e340-aa: entered promiscuous mode
Nov 29 03:09:58 np0005539563 NetworkManager[48981]: <info>  [1764403798.2513] manager: (tap08c9e340-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.255 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:58Z|00372|binding|INFO|Claiming lport 08c9e340-aae6-460f-ad1b-a58326f5bc32 for this chassis.
Nov 29 03:09:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:58Z|00373|binding|INFO|08c9e340-aae6-460f-ad1b-a58326f5bc32: Claiming fa:16:3e:8a:34:0c 10.100.0.11
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.264 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:34:0c 10.100.0.11'], port_security=['fa:16:3e:8a:34:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '68205641-041c-4c36-8811-7c3107533161', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc9741ae-8266-4acd-abaa-3359510fdba9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5edfd39e548a47c3b5602c79308928e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ee72994-8aaa-4bdb-87ff-2a37693a2a04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=549d4639-829b-4377-996f-9d12f377204f, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=08c9e340-aae6-460f-ad1b-a58326f5bc32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.265 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 08c9e340-aae6-460f-ad1b-a58326f5bc32 in datapath fc9741ae-8266-4acd-abaa-3359510fdba9 bound to our chassis#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.266 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc9741ae-8266-4acd-abaa-3359510fdba9#033[00m
Nov 29 03:09:58 np0005539563 systemd-udevd[311997]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.279 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ee8c2f-17a4-47f1-a808-72ac34d744bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.281 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc9741ae-81 in ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.284 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc9741ae-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.284 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[245aea41-2cc2-4f02-ac28-02315b5c00d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.285 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9581f56d-2f18-4e55-8941-982bdd39ae95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 systemd-machined[213024]: New machine qemu-43-instance-00000066.
Nov 29 03:09:58 np0005539563 NetworkManager[48981]: <info>  [1764403798.2949] device (tap08c9e340-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:58 np0005539563 NetworkManager[48981]: <info>  [1764403798.2988] device (tap08c9e340-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.302 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[bd023099-641c-4f7a-871b-a2a272a102a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 systemd[1]: Started Virtual Machine qemu-43-instance-00000066.
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.326 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[05d2d7d3-08f2-41a5-893b-24148f7e3d29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.341 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:58Z|00374|binding|INFO|Setting lport 08c9e340-aae6-460f-ad1b-a58326f5bc32 ovn-installed in OVS
Nov 29 03:09:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:58Z|00375|binding|INFO|Setting lport 08c9e340-aae6-460f-ad1b-a58326f5bc32 up in Southbound
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.349 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.356 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b8bdd6-2d22-4024-a82b-3842a814843a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 NetworkManager[48981]: <info>  [1764403798.3671] manager: (tapfc9741ae-80): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.368 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eadf6fad-09a9-4ad8-a552-6b6f2d628c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.397 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4f818d33-da48-4e2f-b992-d5ac0e918f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.400 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5bc1bf-d6ff-4164-b3a1-f5ff4127bebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 NetworkManager[48981]: <info>  [1764403798.4230] device (tapfc9741ae-80): carrier: link connected
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.429 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[82b83f29-a378-4742-8c2b-e7385e600530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.445 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[de2baa05-50d9-41c4-b498-70889840be81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc9741ae-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:62:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676619, 'reachable_time': 42296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312030, 'error': None, 'target': 'ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.459 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[352f05c7-aabb-4bd8-a3bf-20be47a7571b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:62bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676619, 'tstamp': 676619}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312031, 'error': None, 'target': 'ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.474 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6d28a8-921a-4fd1-8d2b-0291b1dc4ab2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc9741ae-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:62:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676619, 'reachable_time': 42296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312032, 'error': None, 'target': 'ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.506 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6e111133-7dd9-4774-b0d2-2ff635c3a98e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.569 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ba880f1a-a33f-4055-8171-08533ae18dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.571 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc9741ae-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.571 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.572 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc9741ae-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:58 np0005539563 NetworkManager[48981]: <info>  [1764403798.5748] manager: (tapfc9741ae-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.574 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 kernel: tapfc9741ae-80: entered promiscuous mode
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.577 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.578 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc9741ae-80, col_values=(('external_ids', {'iface-id': 'c2a1fc8f-1a68-4ac9-876b-f3d5fe0bce33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.579 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:09:58Z|00376|binding|INFO|Releasing lport c2a1fc8f-1a68-4ac9-876b-f3d5fe0bce33 from this chassis (sb_readonly=0)
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.596 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc9741ae-8266-4acd-abaa-3359510fdba9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc9741ae-8266-4acd-abaa-3359510fdba9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.597 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a189ae1f-8edf-43ab-aca2-5aad07fa2a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.598 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-fc9741ae-8266-4acd-abaa-3359510fdba9
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/fc9741ae-8266-4acd-abaa-3359510fdba9.pid.haproxy
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID fc9741ae-8266-4acd-abaa-3359510fdba9
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:09:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:09:58.599 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9', 'env', 'PROCESS_TAG=haproxy-fc9741ae-8266-4acd-abaa-3359510fdba9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc9741ae-8266-4acd-abaa-3359510fdba9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.689 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403798.6889944, 68205641-041c-4c36-8811-7c3107533161 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.690 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.711 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.716 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403798.6893535, 68205641-041c-4c36-8811-7c3107533161 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.716 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.737 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.740 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.760 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:09:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:09:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:58.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.917 252257 DEBUG nova.network.neutron [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Updated VIF entry in instance network info cache for port 08c9e340-aae6-460f-ad1b-a58326f5bc32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.917 252257 DEBUG nova.network.neutron [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Updating instance_info_cache with network_info: [{"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:58 np0005539563 nova_compute[252253]: 2025-11-29 08:09:58.942 252257 DEBUG oslo_concurrency.lockutils [req-04411e50-5e10-4e1c-a4f1-ace6926e743a req-e02b5cfa-9da0-4ca9-b5df-3ecef8c05602 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-68205641-041c-4c36-8811-7c3107533161" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:58 np0005539563 podman[312107]: 2025-11-29 08:09:58.977165272 +0000 UTC m=+0.046864600 container create 9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:09:59 np0005539563 systemd[1]: Started libpod-conmon-9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e.scope.
Nov 29 03:09:59 np0005539563 podman[312107]: 2025-11-29 08:09:58.95050515 +0000 UTC m=+0.020204448 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:09:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/863bf4729b52c00bc6c3b63e8dfaf4d09b4ca5784969cec4847184b70d1f3290/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:59 np0005539563 podman[312107]: 2025-11-29 08:09:59.094404218 +0000 UTC m=+0.164103556 container init 9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:09:59 np0005539563 podman[312107]: 2025-11-29 08:09:59.104867272 +0000 UTC m=+0.174566600 container start 9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:09:59 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [NOTICE]   (312127) : New worker (312129) forked
Nov 29 03:09:59 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [NOTICE]   (312127) : Loading success.
Nov 29 03:10:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:10:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 273 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.1 MiB/s wr, 191 op/s
Nov 29 03:10:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:00.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:00.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.926 252257 DEBUG nova.compute.manager [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.926 252257 DEBUG oslo_concurrency.lockutils [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.926 252257 DEBUG oslo_concurrency.lockutils [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG oslo_concurrency.lockutils [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG nova.compute.manager [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Processing event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG nova.compute.manager [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG oslo_concurrency.lockutils [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG oslo_concurrency.lockutils [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG oslo_concurrency.lockutils [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.927 252257 DEBUG nova.compute.manager [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] No waiting events found dispatching network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.928 252257 WARNING nova.compute.manager [req-56fcd79f-2c28-4471-8f09-b23c680303d9 req-cef79328-3db3-4a3c-98a6-1d4370a27df0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received unexpected event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.928 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.943 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403800.9432821, 68205641-041c-4c36-8811-7c3107533161 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.943 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.945 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.949 252257 INFO nova.virt.libvirt.driver [-] [instance: 68205641-041c-4c36-8811-7c3107533161] Instance spawned successfully.#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.950 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.966 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.972 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.977 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.978 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.978 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.978 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.979 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:00 np0005539563 nova_compute[252253]: 2025-11-29 08:10:00.979 252257 DEBUG nova.virt.libvirt.driver [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:01 np0005539563 nova_compute[252253]: 2025-11-29 08:10:01.008 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:01 np0005539563 nova_compute[252253]: 2025-11-29 08:10:01.123 252257 INFO nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Took 9.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:10:01 np0005539563 nova_compute[252253]: 2025-11-29 08:10:01.123 252257 DEBUG nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:01 np0005539563 nova_compute[252253]: 2025-11-29 08:10:01.234 252257 INFO nova.compute.manager [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Took 11.04 seconds to build instance.#033[00m
Nov 29 03:10:01 np0005539563 nova_compute[252253]: 2025-11-29 08:10:01.272 252257 DEBUG oslo_concurrency.lockutils [None req-01425b76-2b20-4fd4-8ce0-429d4145cb50 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:01 np0005539563 nova_compute[252253]: 2025-11-29 08:10:01.937 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:02 np0005539563 nova_compute[252253]: 2025-11-29 08:10:02.081 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 274 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 7.2 MiB/s wr, 341 op/s
Nov 29 03:10:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.415 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.416 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.416 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.416 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.416 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.417 252257 INFO nova.compute.manager [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Terminating instance#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.418 252257 DEBUG nova.compute.manager [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:10:03 np0005539563 kernel: tap08c9e340-aa (unregistering): left promiscuous mode
Nov 29 03:10:03 np0005539563 NetworkManager[48981]: <info>  [1764403803.4616] device (tap08c9e340-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:03Z|00377|binding|INFO|Releasing lport 08c9e340-aae6-460f-ad1b-a58326f5bc32 from this chassis (sb_readonly=0)
Nov 29 03:10:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:03Z|00378|binding|INFO|Setting lport 08c9e340-aae6-460f-ad1b-a58326f5bc32 down in Southbound
Nov 29 03:10:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:03Z|00379|binding|INFO|Removing iface tap08c9e340-aa ovn-installed in OVS
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.470 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.478 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:34:0c 10.100.0.11'], port_security=['fa:16:3e:8a:34:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '68205641-041c-4c36-8811-7c3107533161', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc9741ae-8266-4acd-abaa-3359510fdba9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5edfd39e548a47c3b5602c79308928e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ee72994-8aaa-4bdb-87ff-2a37693a2a04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=549d4639-829b-4377-996f-9d12f377204f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=08c9e340-aae6-460f-ad1b-a58326f5bc32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.480 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 08c9e340-aae6-460f-ad1b-a58326f5bc32 in datapath fc9741ae-8266-4acd-abaa-3359510fdba9 unbound from our chassis#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.481 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc9741ae-8266-4acd-abaa-3359510fdba9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.482 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d9063893-c312-4699-b913-a2f20848fd2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.483 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9 namespace which is not needed anymore#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.489 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539563 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 29 03:10:03 np0005539563 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000066.scope: Consumed 2.945s CPU time.
Nov 29 03:10:03 np0005539563 systemd-machined[213024]: Machine qemu-43-instance-00000066 terminated.
Nov 29 03:10:03 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [NOTICE]   (312127) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:03 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [NOTICE]   (312127) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:03 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [WARNING]  (312127) : Exiting Master process...
Nov 29 03:10:03 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [ALERT]    (312127) : Current worker (312129) exited with code 143 (Terminated)
Nov 29 03:10:03 np0005539563 neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9[312123]: [WARNING]  (312127) : All workers exited. Exiting... (0)
Nov 29 03:10:03 np0005539563 systemd[1]: libpod-9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e.scope: Deactivated successfully.
Nov 29 03:10:03 np0005539563 podman[312164]: 2025-11-29 08:10:03.644390743 +0000 UTC m=+0.059667677 container died 9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.654 252257 INFO nova.virt.libvirt.driver [-] [instance: 68205641-041c-4c36-8811-7c3107533161] Instance destroyed successfully.#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.654 252257 DEBUG nova.objects.instance [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lazy-loading 'resources' on Instance uuid 68205641-041c-4c36-8811-7c3107533161 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-863bf4729b52c00bc6c3b63e8dfaf4d09b4ca5784969cec4847184b70d1f3290-merged.mount: Deactivated successfully.
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.679 252257 DEBUG nova.virt.libvirt.vif [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-80363138',display_name='tempest-InstanceActionsNegativeTestJSON-server-80363138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-80363138',id=102,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5edfd39e548a47c3b5602c79308928e7',ramdisk_id='',reservation_id='r-9edy8q6z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-599544848',owner_user_name='tempest-InstanceActionsNegativeTestJSON-599544848-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:01Z,user_data=None,user_id='6907cc56541a4a7a9e563fe7c11cf669',uuid=68205641-041c-4c36-8811-7c3107533161,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.681 252257 DEBUG nova.network.os_vif_util [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Converting VIF {"id": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "address": "fa:16:3e:8a:34:0c", "network": {"id": "fc9741ae-8266-4acd-abaa-3359510fdba9", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-313293522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5edfd39e548a47c3b5602c79308928e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08c9e340-aa", "ovs_interfaceid": "08c9e340-aae6-460f-ad1b-a58326f5bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.684 252257 DEBUG nova.network.os_vif_util [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.685 252257 DEBUG os_vif [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.687 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.688 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08c9e340-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:03 np0005539563 podman[312164]: 2025-11-29 08:10:03.689322101 +0000 UTC m=+0.104599035 container cleanup 9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.690 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.693 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.696 252257 INFO os_vif [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:34:0c,bridge_name='br-int',has_traffic_filtering=True,id=08c9e340-aae6-460f-ad1b-a58326f5bc32,network=Network(fc9741ae-8266-4acd-abaa-3359510fdba9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08c9e340-aa')#033[00m
Nov 29 03:10:03 np0005539563 systemd[1]: libpod-conmon-9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e.scope: Deactivated successfully.
Nov 29 03:10:03 np0005539563 podman[312202]: 2025-11-29 08:10:03.768195787 +0000 UTC m=+0.051783383 container remove 9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.774 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eb4f52-f663-4977-b81e-a68a137d2c0a]: (4, ('Sat Nov 29 08:10:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9 (9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e)\n9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e\nSat Nov 29 08:10:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9 (9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e)\n9f2aa7097ae03163505f0e49a0c51a8bfb037fb658a16a6c3b1032c4aefd5b7e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.776 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7d6357-0f6c-47e8-83bc-620f58f4d416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.777 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc9741ae-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.778 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539563 kernel: tapfc9741ae-80: left promiscuous mode
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.794 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.797 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c1635d2c-4333-4a31-bfe2-8c9ed402fbc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.807 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9ba6b1-5884-4a01-a7c2-d8d6ece5e0b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.809 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e2da094c-773e-460e-a81e-3ff36c46b62f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.822 252257 DEBUG nova.compute.manager [req-ec1e4cec-0ddb-4c90-851a-52af4b78d45a req-2ad73731-0bc8-47dd-bd90-ddde49d7e83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-vif-unplugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.823 252257 DEBUG oslo_concurrency.lockutils [req-ec1e4cec-0ddb-4c90-851a-52af4b78d45a req-2ad73731-0bc8-47dd-bd90-ddde49d7e83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.823 252257 DEBUG oslo_concurrency.lockutils [req-ec1e4cec-0ddb-4c90-851a-52af4b78d45a req-2ad73731-0bc8-47dd-bd90-ddde49d7e83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.824 252257 DEBUG oslo_concurrency.lockutils [req-ec1e4cec-0ddb-4c90-851a-52af4b78d45a req-2ad73731-0bc8-47dd-bd90-ddde49d7e83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.824 252257 DEBUG nova.compute.manager [req-ec1e4cec-0ddb-4c90-851a-52af4b78d45a req-2ad73731-0bc8-47dd-bd90-ddde49d7e83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] No waiting events found dispatching network-vif-unplugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:03 np0005539563 nova_compute[252253]: 2025-11-29 08:10:03.825 252257 DEBUG nova.compute.manager [req-ec1e4cec-0ddb-4c90-851a-52af4b78d45a req-2ad73731-0bc8-47dd-bd90-ddde49d7e83d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-vif-unplugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.827 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0b53a45b-3f00-4f7e-b019-62ad6a878c03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676612, 'reachable_time': 17370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312234, 'error': None, 'target': 'ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:03 np0005539563 systemd[1]: run-netns-ovnmeta\x2dfc9741ae\x2d8266\x2d4acd\x2dabaa\x2d3359510fdba9.mount: Deactivated successfully.
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.832 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc9741ae-8266-4acd-abaa-3359510fdba9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:03.832 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf699eb-36aa-4e7c-850c-af211e96dccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 274 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 344 op/s
Nov 29 03:10:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.153 252257 INFO nova.virt.libvirt.driver [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Deleting instance files /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161_del#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.154 252257 INFO nova.virt.libvirt.driver [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Deletion of /var/lib/nova/instances/68205641-041c-4c36-8811-7c3107533161_del complete#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.204 252257 INFO nova.compute.manager [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.205 252257 DEBUG oslo.service.loopingcall [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.205 252257 DEBUG nova.compute.manager [-] [instance: 68205641-041c-4c36-8811-7c3107533161] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.205 252257 DEBUG nova.network.neutron [-] [instance: 68205641-041c-4c36-8811-7c3107533161] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.911 252257 DEBUG nova.network.neutron [-] [instance: 68205641-041c-4c36-8811-7c3107533161] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:04.914 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:04.915 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:04.915 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.932 252257 INFO nova.compute.manager [-] [instance: 68205641-041c-4c36-8811-7c3107533161] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.978 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539563 nova_compute[252253]: 2025-11-29 08:10:04.979 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.061 252257 DEBUG oslo_concurrency.processutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338495937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.514 252257 DEBUG oslo_concurrency.processutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.520 252257 DEBUG nova.compute.provider_tree [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.536 252257 DEBUG nova.scheduler.client.report [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.555 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.576 252257 INFO nova.scheduler.client.report [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Deleted allocations for instance 68205641-041c-4c36-8811-7c3107533161#033[00m
Nov 29 03:10:05 np0005539563 nova_compute[252253]: 2025-11-29 08:10:05.671 252257 DEBUG oslo_concurrency.lockutils [None req-700d1e55-6630-44f5-94cf-3107ef1e6710 6907cc56541a4a7a9e563fe7c11cf669 5edfd39e548a47c3b5602c79308928e7 - - default default] Lock "68205641-041c-4c36-8811-7c3107533161" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.016 252257 DEBUG nova.compute.manager [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.018 252257 DEBUG oslo_concurrency.lockutils [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "68205641-041c-4c36-8811-7c3107533161-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.019 252257 DEBUG oslo_concurrency.lockutils [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.021 252257 DEBUG oslo_concurrency.lockutils [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "68205641-041c-4c36-8811-7c3107533161-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.021 252257 DEBUG nova.compute.manager [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] No waiting events found dispatching network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.022 252257 WARNING nova.compute.manager [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received unexpected event network-vif-plugged-08c9e340-aae6-460f-ad1b-a58326f5bc32 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.023 252257 DEBUG nova.compute.manager [req-6d238768-5aa7-4fec-93b5-5a97b2dfb85d req-6c27376e-8afc-42df-9c87-897fefa2f8dc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 68205641-041c-4c36-8811-7c3107533161] Received event network-vif-deleted-08c9e340-aae6-460f-ad1b-a58326f5bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 255 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 4.1 MiB/s wr, 456 op/s
Nov 29 03:10:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:06.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:06Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:d0:29 10.100.0.4
Nov 29 03:10:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:06Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:d0:29 10.100.0.4
Nov 29 03:10:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:06.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:06 np0005539563 nova_compute[252253]: 2025-11-29 08:10:06.937 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:07 np0005539563 podman[312436]: 2025-11-29 08:10:07.454096541 +0000 UTC m=+0.056125682 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:10:07 np0005539563 podman[312436]: 2025-11-29 08:10:07.547122971 +0000 UTC m=+0.149152252 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:10:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 269 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 2.7 MiB/s wr, 404 op/s
Nov 29 03:10:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:08.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:08 np0005539563 podman[312586]: 2025-11-29 08:10:08.233402525 +0000 UTC m=+0.071675952 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:10:08 np0005539563 podman[312586]: 2025-11-29 08:10:08.244393163 +0000 UTC m=+0.082666560 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:10:08 np0005539563 podman[312651]: 2025-11-29 08:10:08.565946075 +0000 UTC m=+0.074553441 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, name=keepalived, release=1793)
Nov 29 03:10:08 np0005539563 podman[312651]: 2025-11-29 08:10:08.580360975 +0000 UTC m=+0.088968371 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 03:10:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:10:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:10:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:08 np0005539563 nova_compute[252253]: 2025-11-29 08:10:08.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:08.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c1b7ca5d-4f1b-48f0-b018-ab5000a91b89 does not exist
Nov 29 03:10:09 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 31dea87f-39ca-4bdf-9564-2310fd33a759 does not exist
Nov 29 03:10:09 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 66835817-a0ee-4dad-a335-05bea04b4843 does not exist
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 269 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 2.7 MiB/s wr, 364 op/s
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.139888809 +0000 UTC m=+0.052746660 container create d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:10:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:10.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:10 np0005539563 systemd[1]: Started libpod-conmon-d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689.scope.
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.118233122 +0000 UTC m=+0.031090893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.24325599 +0000 UTC m=+0.156113781 container init d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.251899714 +0000 UTC m=+0.164757475 container start d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.256158859 +0000 UTC m=+0.169016730 container attach d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:10:10 np0005539563 optimistic_bouman[313020]: 167 167
Nov 29 03:10:10 np0005539563 systemd[1]: libpod-d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689.scope: Deactivated successfully.
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.262128672 +0000 UTC m=+0.174986423 container died d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9a3845efbfd316cb92f35efd4292132321b64be7f057c1cf4c79ca2dfbc13bed-merged.mount: Deactivated successfully.
Nov 29 03:10:10 np0005539563 podman[313004]: 2025-11-29 08:10:10.315334953 +0000 UTC m=+0.228192694 container remove d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:10:10 np0005539563 systemd[1]: libpod-conmon-d4cf748f79627fa264a67b7baa624d8f5869f9a92dd7ac31200393075439c689.scope: Deactivated successfully.
Nov 29 03:10:10 np0005539563 podman[313047]: 2025-11-29 08:10:10.561865693 +0000 UTC m=+0.048770953 container create ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:10:10 np0005539563 systemd[1]: Started libpod-conmon-ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74.scope.
Nov 29 03:10:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4733c4c6e1e9a37a86040ab34df1f17903799cf883cad67f8959efaadb78d0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4733c4c6e1e9a37a86040ab34df1f17903799cf883cad67f8959efaadb78d0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4733c4c6e1e9a37a86040ab34df1f17903799cf883cad67f8959efaadb78d0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4733c4c6e1e9a37a86040ab34df1f17903799cf883cad67f8959efaadb78d0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4733c4c6e1e9a37a86040ab34df1f17903799cf883cad67f8959efaadb78d0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:10 np0005539563 podman[313047]: 2025-11-29 08:10:10.542924899 +0000 UTC m=+0.029830199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:10 np0005539563 podman[313047]: 2025-11-29 08:10:10.646960668 +0000 UTC m=+0.133865958 container init ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:10:10 np0005539563 podman[313047]: 2025-11-29 08:10:10.657715249 +0000 UTC m=+0.144620519 container start ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:10:10 np0005539563 podman[313047]: 2025-11-29 08:10:10.661376968 +0000 UTC m=+0.148282238 container attach ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:10:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:10.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:11 np0005539563 elegant_feistel[313063]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:10:11 np0005539563 elegant_feistel[313063]: --> relative data size: 1.0
Nov 29 03:10:11 np0005539563 elegant_feistel[313063]: --> All data devices are unavailable
Nov 29 03:10:11 np0005539563 systemd[1]: libpod-ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74.scope: Deactivated successfully.
Nov 29 03:10:11 np0005539563 podman[313047]: 2025-11-29 08:10:11.486429402 +0000 UTC m=+0.973334702 container died ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:10:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b4733c4c6e1e9a37a86040ab34df1f17903799cf883cad67f8959efaadb78d0d-merged.mount: Deactivated successfully.
Nov 29 03:10:11 np0005539563 podman[313047]: 2025-11-29 08:10:11.566549523 +0000 UTC m=+1.053454783 container remove ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_feistel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:10:11 np0005539563 systemd[1]: libpod-conmon-ef9a69c405aff70abbc2e131b153b51e811cc29743163bf5533f9a329c0dee74.scope: Deactivated successfully.
Nov 29 03:10:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.711 252257 DEBUG oslo_concurrency.lockutils [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.714 252257 DEBUG oslo_concurrency.lockutils [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.714 252257 DEBUG nova.compute.manager [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.719 252257 DEBUG nova.compute.manager [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.720 252257 DEBUG nova.objects.instance [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'flavor' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.757 252257 DEBUG nova.virt.libvirt.driver [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:10:11 np0005539563 nova_compute[252253]: 2025-11-29 08:10:11.939 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:12Z|00380|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 03:10:12 np0005539563 nova_compute[252253]: 2025-11-29 08:10:12.093 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 334 MiB data, 905 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.0 MiB/s wr, 467 op/s
Nov 29 03:10:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.252781725 +0000 UTC m=+0.054430785 container create 38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:10:12 np0005539563 systemd[1]: Started libpod-conmon-38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2.scope.
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.225769644 +0000 UTC m=+0.027418784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.349879456 +0000 UTC m=+0.151528526 container init 38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gates, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.357859322 +0000 UTC m=+0.159508392 container start 38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.361725427 +0000 UTC m=+0.163374507 container attach 38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gates, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:10:12 np0005539563 laughing_gates[313246]: 167 167
Nov 29 03:10:12 np0005539563 systemd[1]: libpod-38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2.scope: Deactivated successfully.
Nov 29 03:10:12 np0005539563 conmon[313246]: conmon 38736a5b10062846cd99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2.scope/container/memory.events
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.364013769 +0000 UTC m=+0.165662819 container died 38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:10:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:12.368 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:12 np0005539563 nova_compute[252253]: 2025-11-29 08:10:12.368 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:12.370 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:10:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2b3e72c7c55e53899af50d32759177fd6dd7ca33e2759fa7e6b7265e0db3aff1-merged.mount: Deactivated successfully.
Nov 29 03:10:12 np0005539563 podman[313230]: 2025-11-29 08:10:12.411774183 +0000 UTC m=+0.213423233 container remove 38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:10:12 np0005539563 systemd[1]: libpod-conmon-38736a5b10062846cd9992a4a4645886181773b7ea989e61dfc9eeb5ecae3be2.scope: Deactivated successfully.
Nov 29 03:10:12 np0005539563 podman[313272]: 2025-11-29 08:10:12.609013416 +0000 UTC m=+0.057357784 container create 196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:10:12 np0005539563 systemd[1]: Started libpod-conmon-196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926.scope.
Nov 29 03:10:12 np0005539563 podman[313272]: 2025-11-29 08:10:12.581398448 +0000 UTC m=+0.029742906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88e4359e2983ddff2528ce7df92abd62fd2476ec7e3f2216f0832a0c39fcbdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88e4359e2983ddff2528ce7df92abd62fd2476ec7e3f2216f0832a0c39fcbdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88e4359e2983ddff2528ce7df92abd62fd2476ec7e3f2216f0832a0c39fcbdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88e4359e2983ddff2528ce7df92abd62fd2476ec7e3f2216f0832a0c39fcbdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:12 np0005539563 podman[313272]: 2025-11-29 08:10:12.694201305 +0000 UTC m=+0.142545733 container init 196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:12 np0005539563 podman[313272]: 2025-11-29 08:10:12.702374626 +0000 UTC m=+0.150719014 container start 196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:10:12 np0005539563 podman[313272]: 2025-11-29 08:10:12.705708887 +0000 UTC m=+0.154053315 container attach 196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:10:12
Nov 29 03:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control', '.rgw.root']
Nov 29 03:10:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:10:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:12.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]: {
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:    "0": [
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:        {
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "devices": [
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "/dev/loop3"
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            ],
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "lv_name": "ceph_lv0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "lv_size": "7511998464",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "name": "ceph_lv0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "tags": {
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.cluster_name": "ceph",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.crush_device_class": "",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.encrypted": "0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.osd_id": "0",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.type": "block",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:                "ceph.vdo": "0"
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            },
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "type": "block",
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:            "vg_name": "ceph_vg0"
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:        }
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]:    ]
Nov 29 03:10:13 np0005539563 exciting_bassi[313288]: }
Nov 29 03:10:13 np0005539563 systemd[1]: libpod-196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926.scope: Deactivated successfully.
Nov 29 03:10:13 np0005539563 podman[313272]: 2025-11-29 08:10:13.565486641 +0000 UTC m=+1.013831059 container died 196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a88e4359e2983ddff2528ce7df92abd62fd2476ec7e3f2216f0832a0c39fcbdd-merged.mount: Deactivated successfully.
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:10:13 np0005539563 podman[313272]: 2025-11-29 08:10:13.630562004 +0000 UTC m=+1.078906412 container remove 196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:10:13 np0005539563 systemd[1]: libpod-conmon-196c0a92e6896405c203a85384aef71bc058f03a65b6c42f8c04bd9fab7ce926.scope: Deactivated successfully.
Nov 29 03:10:13 np0005539563 nova_compute[252253]: 2025-11-29 08:10:13.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:10:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:10:14 np0005539563 kernel: tap58e652e8-a3 (unregistering): left promiscuous mode
Nov 29 03:10:14 np0005539563 NetworkManager[48981]: <info>  [1764403814.0511] device (tap58e652e8-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.055 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:14Z|00381|binding|INFO|Releasing lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 from this chassis (sb_readonly=0)
Nov 29 03:10:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:14Z|00382|binding|INFO|Setting lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 down in Southbound
Nov 29 03:10:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:14Z|00383|binding|INFO|Removing iface tap58e652e8-a3 ovn-installed in OVS
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.071 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d0:29 10.100.0.4'], port_security=['fa:16:3e:7c:d0:29 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f3e4ffb-a9ea-48f0-b9b8-54e436335953', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=58e652e8-a3e6-48fb-af53-e7057ad02f02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.072 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 58e652e8-a3e6-48fb-af53-e7057ad02f02 in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 unbound from our chassis#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.073 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.074 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ecab11bb-df59-4248-8bce-1b4340199c44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.074 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace which is not needed anymore#033[00m
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.080 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 352 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.0 MiB/s wr, 348 op/s
Nov 29 03:10:14 np0005539563 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 29 03:10:14 np0005539563 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000062.scope: Consumed 13.793s CPU time.
Nov 29 03:10:14 np0005539563 systemd-machined[213024]: Machine qemu-42-instance-00000062 terminated.
Nov 29 03:10:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:14 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [NOTICE]   (311847) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:14 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [NOTICE]   (311847) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:14 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [WARNING]  (311847) : Exiting Master process...
Nov 29 03:10:14 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [WARNING]  (311847) : Exiting Master process...
Nov 29 03:10:14 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [ALERT]    (311847) : Current worker (311849) exited with code 143 (Terminated)
Nov 29 03:10:14 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[311843]: [WARNING]  (311847) : All workers exited. Exiting... (0)
Nov 29 03:10:14 np0005539563 systemd[1]: libpod-76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95.scope: Deactivated successfully.
Nov 29 03:10:14 np0005539563 podman[313437]: 2025-11-29 08:10:14.237150749 +0000 UTC m=+0.056189454 container died 76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:10:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-737c7f97c1adce8b73035235ba88f070261f26e0684c0207f917e92a42d1ee98-merged.mount: Deactivated successfully.
Nov 29 03:10:14 np0005539563 podman[313437]: 2025-11-29 08:10:14.281220892 +0000 UTC m=+0.100259597 container cleanup 76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:14 np0005539563 systemd[1]: libpod-conmon-76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95.scope: Deactivated successfully.
Nov 29 03:10:14 np0005539563 podman[313496]: 2025-11-29 08:10:14.375703603 +0000 UTC m=+0.056719358 container remove 76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.382 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9b61a47a-f336-4290-92f1-33d423e3fe7a]: (4, ('Sat Nov 29 08:10:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95)\n76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95\nSat Nov 29 08:10:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95)\n76b42102236cf65ac51f1f9f615167e3cc25aae02679c5cc14201aee4c19ab95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.384 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[76ffb47e-9ed2-4f99-a7e7-782fe2babdc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.384 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.386 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539563 kernel: tap9a0b70e3-10: left promiscuous mode
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.414 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.416 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e57c459c-7474-4314-8d40-b6e689387c7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.42839309 +0000 UTC m=+0.050712315 container create 591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.434 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe847bc-bdf6-4120-b26e-c2e73fcd80f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.435 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d3644109-ca62-4443-bbaf-aa06f1679cf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.452 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[98ab6b69-f39a-4e26-97db-86adb6d8e1d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675969, 'reachable_time': 33480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313543, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.454 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:14.454 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[97148875-a821-488b-b5fa-d94945e3be8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:14 np0005539563 systemd[1]: run-netns-ovnmeta\x2d9a0b70e3\x2d1894\x2d47e1\x2dbc43\x2d1721fdb1c9d6.mount: Deactivated successfully.
Nov 29 03:10:14 np0005539563 systemd[1]: Started libpod-conmon-591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca.scope.
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.40919448 +0000 UTC m=+0.031513715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.535141802 +0000 UTC m=+0.157461067 container init 591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_carver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.54356115 +0000 UTC m=+0.165880375 container start 591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_carver, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.547478126 +0000 UTC m=+0.169797381 container attach 591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:14 np0005539563 systemd[1]: libpod-591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca.scope: Deactivated successfully.
Nov 29 03:10:14 np0005539563 dazzling_carver[313546]: 167 167
Nov 29 03:10:14 np0005539563 conmon[313546]: conmon 591afec72e52bde15741 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca.scope/container/memory.events
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.551537437 +0000 UTC m=+0.173856672 container died 591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-457f045f5d5b8d296431101d823f86e8ad6ea0cb3c87ea9538f205d752b540f7-merged.mount: Deactivated successfully.
Nov 29 03:10:14 np0005539563 podman[313524]: 2025-11-29 08:10:14.594998764 +0000 UTC m=+0.217317989 container remove 591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:10:14 np0005539563 systemd[1]: libpod-conmon-591afec72e52bde15741ab9650c24dcaf3491a708c2fd07b89a535363664d6ca.scope: Deactivated successfully.
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.782 252257 INFO nova.virt.libvirt.driver [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.793 252257 INFO nova.virt.libvirt.driver [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance destroyed successfully.#033[00m
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.795 252257 DEBUG nova.objects.instance [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:14 np0005539563 podman[313569]: 2025-11-29 08:10:14.805934209 +0000 UTC m=+0.053143840 container create 8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jennings, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.814 252257 DEBUG nova.compute.manager [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:14 np0005539563 podman[313569]: 2025-11-29 08:10:14.788476396 +0000 UTC m=+0.035686047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:14 np0005539563 nova_compute[252253]: 2025-11-29 08:10:14.904 252257 DEBUG oslo_concurrency.lockutils [None req-d23424be-b5c7-4b18-88eb-706dcfcb0532 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:14 np0005539563 systemd[1]: Started libpod-conmon-8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d.scope.
Nov 29 03:10:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:14.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab134afb866a27242d14dc58d4e42e3fce8506da1c8c92153dd5dc0db2a78e83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab134afb866a27242d14dc58d4e42e3fce8506da1c8c92153dd5dc0db2a78e83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab134afb866a27242d14dc58d4e42e3fce8506da1c8c92153dd5dc0db2a78e83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab134afb866a27242d14dc58d4e42e3fce8506da1c8c92153dd5dc0db2a78e83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:14 np0005539563 podman[313569]: 2025-11-29 08:10:14.973468471 +0000 UTC m=+0.220678122 container init 8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jennings, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:10:14 np0005539563 podman[313569]: 2025-11-29 08:10:14.988078257 +0000 UTC m=+0.235287888 container start 8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jennings, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:14 np0005539563 podman[313569]: 2025-11-29 08:10:14.991971192 +0000 UTC m=+0.239180843 container attach 8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:15 np0005539563 nova_compute[252253]: 2025-11-29 08:10:15.087 252257 DEBUG nova.compute.manager [req-efecfb04-06a1-40c2-97c4-1edad91dd837 req-2f2c9784-b84c-40c3-b6a5-66041085af19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-unplugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:15 np0005539563 nova_compute[252253]: 2025-11-29 08:10:15.089 252257 DEBUG oslo_concurrency.lockutils [req-efecfb04-06a1-40c2-97c4-1edad91dd837 req-2f2c9784-b84c-40c3-b6a5-66041085af19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:15 np0005539563 nova_compute[252253]: 2025-11-29 08:10:15.089 252257 DEBUG oslo_concurrency.lockutils [req-efecfb04-06a1-40c2-97c4-1edad91dd837 req-2f2c9784-b84c-40c3-b6a5-66041085af19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:15 np0005539563 nova_compute[252253]: 2025-11-29 08:10:15.090 252257 DEBUG oslo_concurrency.lockutils [req-efecfb04-06a1-40c2-97c4-1edad91dd837 req-2f2c9784-b84c-40c3-b6a5-66041085af19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:15 np0005539563 nova_compute[252253]: 2025-11-29 08:10:15.090 252257 DEBUG nova.compute.manager [req-efecfb04-06a1-40c2-97c4-1edad91dd837 req-2f2c9784-b84c-40c3-b6a5-66041085af19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-unplugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:15 np0005539563 nova_compute[252253]: 2025-11-29 08:10:15.090 252257 WARNING nova.compute.manager [req-efecfb04-06a1-40c2-97c4-1edad91dd837 req-2f2c9784-b84c-40c3-b6a5-66041085af19 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received unexpected event network-vif-unplugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]: {
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:        "osd_id": 0,
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:        "type": "bluestore"
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]:    }
Nov 29 03:10:15 np0005539563 admiring_jennings[313587]: }
Nov 29 03:10:15 np0005539563 systemd[1]: libpod-8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d.scope: Deactivated successfully.
Nov 29 03:10:15 np0005539563 podman[313569]: 2025-11-29 08:10:15.904925185 +0000 UTC m=+1.152134856 container died 8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:10:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ab134afb866a27242d14dc58d4e42e3fce8506da1c8c92153dd5dc0db2a78e83-merged.mount: Deactivated successfully.
Nov 29 03:10:15 np0005539563 podman[313569]: 2025-11-29 08:10:15.992903558 +0000 UTC m=+1.240113219 container remove 8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:16 np0005539563 systemd[1]: libpod-conmon-8bf45da98f74bcdbb2a4366dbedc170b97366d8dcf1bfe589b9742223efeaa5d.scope: Deactivated successfully.
Nov 29 03:10:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:10:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:10:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a87512fc-0edf-4f57-9899-2423667474db does not exist
Nov 29 03:10:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3b697384-4b74-4a8f-bf5c-78447df35f9b does not exist
Nov 29 03:10:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f3e35dff-27ac-435b-a130-7e3376d96fc7 does not exist
Nov 29 03:10:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 385 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.7 MiB/s wr, 378 op/s
Nov 29 03:10:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:16.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:16 np0005539563 nova_compute[252253]: 2025-11-29 08:10:16.942 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.236 252257 DEBUG nova.compute.manager [req-54574763-0b0b-4b9e-ae68-b12e37504da6 req-f4f6740a-c68f-4aae-aaac-d3ad8fc5a14e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.236 252257 DEBUG oslo_concurrency.lockutils [req-54574763-0b0b-4b9e-ae68-b12e37504da6 req-f4f6740a-c68f-4aae-aaac-d3ad8fc5a14e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.236 252257 DEBUG oslo_concurrency.lockutils [req-54574763-0b0b-4b9e-ae68-b12e37504da6 req-f4f6740a-c68f-4aae-aaac-d3ad8fc5a14e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.237 252257 DEBUG oslo_concurrency.lockutils [req-54574763-0b0b-4b9e-ae68-b12e37504da6 req-f4f6740a-c68f-4aae-aaac-d3ad8fc5a14e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.237 252257 DEBUG nova.compute.manager [req-54574763-0b0b-4b9e-ae68-b12e37504da6 req-f4f6740a-c68f-4aae-aaac-d3ad8fc5a14e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.237 252257 WARNING nova.compute.manager [req-54574763-0b0b-4b9e-ae68-b12e37504da6 req-f4f6740a-c68f-4aae-aaac-d3ad8fc5a14e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received unexpected event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.491 252257 DEBUG nova.objects.instance [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'flavor' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.512 252257 DEBUG oslo_concurrency.lockutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.512 252257 DEBUG oslo_concurrency.lockutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquired lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.512 252257 DEBUG nova.network.neutron [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:17 np0005539563 nova_compute[252253]: 2025-11-29 08:10:17.512 252257 DEBUG nova.objects.instance [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'info_cache' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:17 np0005539563 podman[313673]: 2025-11-29 08:10:17.532533953 +0000 UTC m=+0.082014913 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:17 np0005539563 podman[313674]: 2025-11-29 08:10:17.535979226 +0000 UTC m=+0.087405249 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:10:17 np0005539563 podman[313675]: 2025-11-29 08:10:17.568161209 +0000 UTC m=+0.113843276 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:10:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 397 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.9 MiB/s wr, 279 op/s
Nov 29 03:10:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:18.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:18 np0005539563 nova_compute[252253]: 2025-11-29 08:10:18.652 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403803.650548, 68205641-041c-4c36-8811-7c3107533161 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:18 np0005539563 nova_compute[252253]: 2025-11-29 08:10:18.653 252257 INFO nova.compute.manager [-] [instance: 68205641-041c-4c36-8811-7c3107533161] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:10:18 np0005539563 nova_compute[252253]: 2025-11-29 08:10:18.673 252257 DEBUG nova.compute.manager [None req-fa7be772-bd46-4521-b688-48ac296e6ea9 - - - - - -] [instance: 68205641-041c-4c36-8811-7c3107533161] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:18 np0005539563 nova_compute[252253]: 2025-11-29 08:10:18.729 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.274 252257 DEBUG nova.network.neutron [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Updating instance_info_cache with network_info: [{"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.295 252257 DEBUG oslo_concurrency.lockutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Releasing lock "refresh_cache-3f3e4ffb-a9ea-48f0-b9b8-54e436335953" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.332 252257 INFO nova.virt.libvirt.driver [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance destroyed successfully.#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.333 252257 DEBUG nova.objects.instance [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.349 252257 DEBUG nova.objects.instance [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'resources' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.362 252257 DEBUG nova.virt.libvirt.vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-934074715',display_name='tempest-ListServerFiltersTestJSON-instance-934074715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-934074715',id=98,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-7qudlxck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:14Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=3f3e4ffb-a9ea-48f0-b9b8-54e436335953,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.363 252257 DEBUG nova.network.os_vif_util [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.364 252257 DEBUG nova.network.os_vif_util [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.364 252257 DEBUG os_vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.367 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.368 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58e652e8-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.371 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.373 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.376 252257 INFO os_vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3')#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.387 252257 DEBUG nova.virt.libvirt.driver [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Start _get_guest_xml network_info=[{"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.393 252257 WARNING nova.virt.libvirt.driver [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.404 252257 DEBUG nova.virt.libvirt.host [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.405 252257 DEBUG nova.virt.libvirt.host [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.407 252257 DEBUG nova.virt.libvirt.host [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.408 252257 DEBUG nova.virt.libvirt.host [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.409 252257 DEBUG nova.virt.libvirt.driver [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.409 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.409 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.410 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.410 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.410 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.410 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.410 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.411 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.411 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.411 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.411 252257 DEBUG nova.virt.hardware [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.411 252257 DEBUG nova.objects.instance [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.428 252257 DEBUG oslo_concurrency.processutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569949997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.923 252257 DEBUG oslo_concurrency.processutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:19 np0005539563 nova_compute[252253]: 2025-11-29 08:10:19.971 252257 DEBUG oslo_concurrency.processutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 397 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.6 MiB/s wr, 251 op/s
Nov 29 03:10:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:20.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762040387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.400 252257 DEBUG oslo_concurrency.processutils [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.404 252257 DEBUG nova.virt.libvirt.vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-934074715',display_name='tempest-ListServerFiltersTestJSON-instance-934074715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-934074715',id=98,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-7qudlxck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:14Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=3f3e4ffb-a9ea-48f0-b9b8-54e436335953,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.405 252257 DEBUG nova.network.os_vif_util [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.407 252257 DEBUG nova.network.os_vif_util [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.409 252257 DEBUG nova.objects.instance [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.429 252257 DEBUG nova.virt.libvirt.driver [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <uuid>3f3e4ffb-a9ea-48f0-b9b8-54e436335953</uuid>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <name>instance-00000062</name>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-934074715</nova:name>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:10:19</nova:creationTime>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:user uuid="7c90fe1780904a6098015abc66b38d9d">tempest-ListServerFiltersTestJSON-207904478-project-member</nova:user>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:project uuid="baca94adaa5145a6b9cef930bff28fa4">tempest-ListServerFiltersTestJSON-207904478</nova:project>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <nova:port uuid="58e652e8-a3e6-48fb-af53-e7057ad02f02">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <entry name="serial">3f3e4ffb-a9ea-48f0-b9b8-54e436335953</entry>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <entry name="uuid">3f3e4ffb-a9ea-48f0-b9b8-54e436335953</entry>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/3f3e4ffb-a9ea-48f0-b9b8-54e436335953_disk.config">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7c:d0:29"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <target dev="tap58e652e8-a3"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953/console.log" append="off"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:10:20 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:10:20 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:10:20 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:10:20 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.431 252257 DEBUG nova.virt.libvirt.driver [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.431 252257 DEBUG nova.virt.libvirt.driver [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.432 252257 DEBUG nova.virt.libvirt.vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-934074715',display_name='tempest-ListServerFiltersTestJSON-instance-934074715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-934074715',id=98,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-7qudlxck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:14Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=3f3e4ffb-a9ea-48f0-b9b8-54e436335953,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.432 252257 DEBUG nova.network.os_vif_util [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.432 252257 DEBUG nova.network.os_vif_util [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.433 252257 DEBUG os_vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.434 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.434 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.436 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58e652e8-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.437 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58e652e8-a3, col_values=(('external_ids', {'iface-id': '58e652e8-a3e6-48fb-af53-e7057ad02f02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:d0:29', 'vm-uuid': '3f3e4ffb-a9ea-48f0-b9b8-54e436335953'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.439 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.4402] manager: (tap58e652e8-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.448 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.450 252257 INFO os_vif [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3')#033[00m
Nov 29 03:10:20 np0005539563 kernel: tap58e652e8-a3: entered promiscuous mode
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.5498] manager: (tap58e652e8-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 29 03:10:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:20Z|00384|binding|INFO|Claiming lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 for this chassis.
Nov 29 03:10:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:20Z|00385|binding|INFO|58e652e8-a3e6-48fb-af53-e7057ad02f02: Claiming fa:16:3e:7c:d0:29 10.100.0.4
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.550 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.556 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d0:29 10.100.0.4'], port_security=['fa:16:3e:7c:d0:29 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f3e4ffb-a9ea-48f0-b9b8-54e436335953', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=58e652e8-a3e6-48fb-af53-e7057ad02f02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.559 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 58e652e8-a3e6-48fb-af53-e7057ad02f02 in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 bound to our chassis#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.562 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.577 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[321764ce-7cfd-46b7-b256-43b3224ef78c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.578 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a0b70e3-11 in ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.580 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a0b70e3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.580 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5656ff5f-74ec-4989-8225-788e432aa102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.581 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b82d3551-e9fe-4a5b-8666-923fc06d5dbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 systemd-udevd[313814]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:20 np0005539563 systemd-machined[213024]: New machine qemu-44-instance-00000062.
Nov 29 03:10:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:20Z|00386|binding|INFO|Setting lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 up in Southbound
Nov 29 03:10:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:20Z|00387|binding|INFO|Setting lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 ovn-installed in OVS
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.595 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.599 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[8707691e-a0dd-4136-9c01-bab734830569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.602 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.6072] device (tap58e652e8-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:20 np0005539563 systemd[1]: Started Virtual Machine qemu-44-instance-00000062.
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.6090] device (tap58e652e8-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.625 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c18bfe3c-6ca9-4b8b-ad0d-c0f94e920e5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.661 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b45f8a1a-f108-4308-8f69-04d1261d1a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.668 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc10479-e193-408c-b286-29bba61659d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.6703] manager: (tap9a0b70e3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.702 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb2dcb0-d4ff-4e92-9611-8e3205e4e109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.704 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7d482c73-e3e6-4d33-9607-4d41ca809d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.7255] device (tap9a0b70e3-10): carrier: link connected
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.729 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0a298776-9279-486f-86a2-04f18aa2ad32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.749 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7881bfc6-8bc2-4e51-a5d6-76c9e91a98de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678849, 'reachable_time': 16028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313846, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.767 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4959b01f-e123-45e4-9ee5-ed9c9a1b856b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:e973'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678849, 'tstamp': 678849}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313847, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.789 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e6107b-5daf-44aa-ab0b-5e578e33acf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a0b70e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678849, 'reachable_time': 16028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313848, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.820 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4f902979-e185-4cfe-91f6-ff20f8ee1e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.872 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8532b9-b6f4-48a6-8d4c-983f6ef9ca49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.874 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.874 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.875 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a0b70e3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 kernel: tap9a0b70e3-10: entered promiscuous mode
Nov 29 03:10:20 np0005539563 NetworkManager[48981]: <info>  [1764403820.8780] manager: (tap9a0b70e3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.884 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.885 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a0b70e3-10, col_values=(('external_ids', {'iface-id': '564ded89-d5cd-4ed0-aa20-e32de45b6125'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:20Z|00388|binding|INFO|Releasing lport 564ded89-d5cd-4ed0-aa20-e32de45b6125 from this chassis (sb_readonly=0)
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.888 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.888 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.889 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7de37006-91f9-49b9-8bcd-885d6109315b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.890 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.pid.haproxy
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:20.891 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'env', 'PROCESS_TAG=haproxy-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a0b70e3-1894-47e1-bc43-1721fdb1c9d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:10:20 np0005539563 nova_compute[252253]: 2025-11-29 08:10:20.902 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:20.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.177 252257 DEBUG nova.compute.manager [req-dfe86472-e268-401a-b8e8-8d216715aca5 req-80bb08e0-3257-46c9-add3-6f72b066f669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.180 252257 DEBUG oslo_concurrency.lockutils [req-dfe86472-e268-401a-b8e8-8d216715aca5 req-80bb08e0-3257-46c9-add3-6f72b066f669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.181 252257 DEBUG oslo_concurrency.lockutils [req-dfe86472-e268-401a-b8e8-8d216715aca5 req-80bb08e0-3257-46c9-add3-6f72b066f669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.182 252257 DEBUG oslo_concurrency.lockutils [req-dfe86472-e268-401a-b8e8-8d216715aca5 req-80bb08e0-3257-46c9-add3-6f72b066f669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.182 252257 DEBUG nova.compute.manager [req-dfe86472-e268-401a-b8e8-8d216715aca5 req-80bb08e0-3257-46c9-add3-6f72b066f669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.183 252257 WARNING nova.compute.manager [req-dfe86472-e268-401a-b8e8-8d216715aca5 req-80bb08e0-3257-46c9-add3-6f72b066f669 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received unexpected event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:10:21 np0005539563 podman[313881]: 2025-11-29 08:10:21.277941899 +0000 UTC m=+0.064668523 container create c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:21 np0005539563 systemd[1]: Started libpod-conmon-c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9.scope.
Nov 29 03:10:21 np0005539563 podman[313881]: 2025-11-29 08:10:21.247656389 +0000 UTC m=+0.034383023 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:10:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abd36c5c1583ee3cb427d12c963e911a998e6a89e4ebdad403728ba4384a308/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:21 np0005539563 podman[313881]: 2025-11-29 08:10:21.372026578 +0000 UTC m=+0.158753222 container init c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:21 np0005539563 podman[313881]: 2025-11-29 08:10:21.377612289 +0000 UTC m=+0.164338903 container start c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:10:21 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [NOTICE]   (313901) : New worker (313903) forked
Nov 29 03:10:21 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [NOTICE]   (313901) : Loading success.
Nov 29 03:10:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.792 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.793 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403821.7918642, 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.793 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.795 252257 DEBUG nova.compute.manager [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.798 252257 INFO nova.virt.libvirt.driver [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance rebooted successfully.#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.799 252257 DEBUG nova.compute.manager [None req-3b6126de-60ed-4494-ac37-ab2f7c443844 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.841 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.845 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.879 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.880 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403821.7928617, 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.880 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] VM Started (Lifecycle Event)#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.902 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.907 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:21 np0005539563 nova_compute[252253]: 2025-11-29 08:10:21.943 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 301 op/s
Nov 29 03:10:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:22.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:22.372 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:23 np0005539563 nova_compute[252253]: 2025-11-29 08:10:23.299 252257 DEBUG nova.compute.manager [req-14f1b2dd-597d-4e2a-b072-b01fa5172bb8 req-6c6a33e4-3f8e-41f2-9c87-6b9575c1235e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:23 np0005539563 nova_compute[252253]: 2025-11-29 08:10:23.300 252257 DEBUG oslo_concurrency.lockutils [req-14f1b2dd-597d-4e2a-b072-b01fa5172bb8 req-6c6a33e4-3f8e-41f2-9c87-6b9575c1235e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:23 np0005539563 nova_compute[252253]: 2025-11-29 08:10:23.300 252257 DEBUG oslo_concurrency.lockutils [req-14f1b2dd-597d-4e2a-b072-b01fa5172bb8 req-6c6a33e4-3f8e-41f2-9c87-6b9575c1235e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:23 np0005539563 nova_compute[252253]: 2025-11-29 08:10:23.301 252257 DEBUG oslo_concurrency.lockutils [req-14f1b2dd-597d-4e2a-b072-b01fa5172bb8 req-6c6a33e4-3f8e-41f2-9c87-6b9575c1235e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:23 np0005539563 nova_compute[252253]: 2025-11-29 08:10:23.301 252257 DEBUG nova.compute.manager [req-14f1b2dd-597d-4e2a-b072-b01fa5172bb8 req-6c6a33e4-3f8e-41f2-9c87-6b9575c1235e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:23 np0005539563 nova_compute[252253]: 2025-11-29 08:10:23.301 252257 WARNING nova.compute.manager [req-14f1b2dd-597d-4e2a-b072-b01fa5172bb8 req-6c6a33e4-3f8e-41f2-9c87-6b9575c1235e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received unexpected event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009664388202587009 of space, bias 1.0, pg target 2.899316460776103 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:10:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 234 op/s
Nov 29 03:10:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:24.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:25 np0005539563 nova_compute[252253]: 2025-11-29 08:10:25.440 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 405 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 239 op/s
Nov 29 03:10:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:26.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:26.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:26 np0005539563 nova_compute[252253]: 2025-11-29 08:10:26.947 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 412 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 177 op/s
Nov 29 03:10:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:28.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 412 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 642 KiB/s wr, 133 op/s
Nov 29 03:10:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:30.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:30 np0005539563 nova_compute[252253]: 2025-11-29 08:10:30.444 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:30.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:31 np0005539563 nova_compute[252253]: 2025-11-29 08:10:31.948 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 436 MiB data, 971 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Nov 29 03:10:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:32.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:32.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 414 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 03:10:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:34.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:35 np0005539563 nova_compute[252253]: 2025-11-29 08:10:35.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:35Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:d0:29 10.100.0.4
Nov 29 03:10:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 292 MiB data, 895 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Nov 29 03:10:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:36.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:36 np0005539563 nova_compute[252253]: 2025-11-29 08:10:36.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:37 np0005539563 nova_compute[252253]: 2025-11-29 08:10:37.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 279 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 895 KiB/s rd, 2.2 MiB/s wr, 164 op/s
Nov 29 03:10:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.323 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.324 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.324 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.324 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.324 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.325 252257 INFO nova.compute.manager [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Terminating instance#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.326 252257 DEBUG nova.compute.manager [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:10:38 np0005539563 kernel: tap58e652e8-a3 (unregistering): left promiscuous mode
Nov 29 03:10:38 np0005539563 NetworkManager[48981]: <info>  [1764403838.4025] device (tap58e652e8-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.417 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:38Z|00389|binding|INFO|Releasing lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 from this chassis (sb_readonly=0)
Nov 29 03:10:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:38Z|00390|binding|INFO|Setting lport 58e652e8-a3e6-48fb-af53-e7057ad02f02 down in Southbound
Nov 29 03:10:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:10:38Z|00391|binding|INFO|Removing iface tap58e652e8-a3 ovn-installed in OVS
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.420 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.425 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d0:29 10.100.0.4'], port_security=['fa:16:3e:7c:d0:29 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f3e4ffb-a9ea-48f0-b9b8-54e436335953', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'baca94adaa5145a6b9cef930bff28fa4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3c333182-abc9-4e1c-9562-d9522d2eaaba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b69ef350-fb24-4945-9405-01b7ba3f6aca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=58e652e8-a3e6-48fb-af53-e7057ad02f02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.426 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 58e652e8-a3e6-48fb-af53-e7057ad02f02 in datapath 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 unbound from our chassis#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.428 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.429 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[63188c48-84b0-43d0-a1f4-33305d378774]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.429 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 namespace which is not needed anymore#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.462 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 29 03:10:38 np0005539563 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000062.scope: Consumed 14.563s CPU time.
Nov 29 03:10:38 np0005539563 systemd-machined[213024]: Machine qemu-44-instance-00000062 terminated.
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.578 252257 INFO nova.virt.libvirt.driver [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Instance destroyed successfully.#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.579 252257 DEBUG nova.objects.instance [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lazy-loading 'resources' on Instance uuid 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:38 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [NOTICE]   (313901) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:38 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [NOTICE]   (313901) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:38 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [WARNING]  (313901) : Exiting Master process...
Nov 29 03:10:38 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [ALERT]    (313901) : Current worker (313903) exited with code 143 (Terminated)
Nov 29 03:10:38 np0005539563 neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6[313897]: [WARNING]  (313901) : All workers exited. Exiting... (0)
Nov 29 03:10:38 np0005539563 systemd[1]: libpod-c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9.scope: Deactivated successfully.
Nov 29 03:10:38 np0005539563 podman[314037]: 2025-11-29 08:10:38.607403825 +0000 UTC m=+0.059225525 container died c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.608 252257 DEBUG nova.virt.libvirt.vif [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-934074715',display_name='tempest-ListServerFiltersTestJSON-instance-934074715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-934074715',id=98,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='baca94adaa5145a6b9cef930bff28fa4',ramdisk_id='',reservation_id='r-7qudlxck',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-207904478',owner_user_name='tempest-ListServerFiltersTestJSON-207904478-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:21Z,user_data=None,user_id='7c90fe1780904a6098015abc66b38d9d',uuid=3f3e4ffb-a9ea-48f0-b9b8-54e436335953,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.609 252257 DEBUG nova.network.os_vif_util [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converting VIF {"id": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "address": "fa:16:3e:7c:d0:29", "network": {"id": "9a0b70e3-1894-47e1-bc43-1721fdb1c9d6", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-45799944-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "baca94adaa5145a6b9cef930bff28fa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58e652e8-a3", "ovs_interfaceid": "58e652e8-a3e6-48fb-af53-e7057ad02f02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.610 252257 DEBUG nova.network.os_vif_util [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.610 252257 DEBUG os_vif [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.612 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.612 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58e652e8-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.621 252257 INFO os_vif [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d0:29,bridge_name='br-int',has_traffic_filtering=True,id=58e652e8-a3e6-48fb-af53-e7057ad02f02,network=Network(9a0b70e3-1894-47e1-bc43-1721fdb1c9d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58e652e8-a3')#033[00m
Nov 29 03:10:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1abd36c5c1583ee3cb427d12c963e911a998e6a89e4ebdad403728ba4384a308-merged.mount: Deactivated successfully.
Nov 29 03:10:38 np0005539563 podman[314037]: 2025-11-29 08:10:38.65594661 +0000 UTC m=+0.107768270 container cleanup c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:38 np0005539563 systemd[1]: libpod-conmon-c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9.scope: Deactivated successfully.
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.724 252257 DEBUG nova.compute.manager [req-9c7bc0ba-d029-4df3-a8b0-233d2be75dcf req-35320636-7720-4cdd-adc5-06a02c43c252 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-unplugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.725 252257 DEBUG oslo_concurrency.lockutils [req-9c7bc0ba-d029-4df3-a8b0-233d2be75dcf req-35320636-7720-4cdd-adc5-06a02c43c252 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.725 252257 DEBUG oslo_concurrency.lockutils [req-9c7bc0ba-d029-4df3-a8b0-233d2be75dcf req-35320636-7720-4cdd-adc5-06a02c43c252 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.726 252257 DEBUG oslo_concurrency.lockutils [req-9c7bc0ba-d029-4df3-a8b0-233d2be75dcf req-35320636-7720-4cdd-adc5-06a02c43c252 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.726 252257 DEBUG nova.compute.manager [req-9c7bc0ba-d029-4df3-a8b0-233d2be75dcf req-35320636-7720-4cdd-adc5-06a02c43c252 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-unplugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.727 252257 DEBUG nova.compute.manager [req-9c7bc0ba-d029-4df3-a8b0-233d2be75dcf req-35320636-7720-4cdd-adc5-06a02c43c252 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-unplugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:10:38 np0005539563 podman[314093]: 2025-11-29 08:10:38.729534394 +0000 UTC m=+0.044599689 container remove c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.738 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[697750d2-f6a3-4ed1-90c9-993c6113128e]: (4, ('Sat Nov 29 08:10:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9)\nc44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9\nSat Nov 29 08:10:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 (c44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9)\nc44ca75d065641a15de9fe34e325443384526f0a09fb0bce5fe020929db384d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.740 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccf6acd-cc58-4ece-8b9b-3888265b0e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.741 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a0b70e3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 kernel: tap9a0b70e3-10: left promiscuous mode
Nov 29 03:10:38 np0005539563 nova_compute[252253]: 2025-11-29 08:10:38.759 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.763 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5788784d-c7b1-49a6-aa03-e4d1faaeab0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.779 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d623fc38-55b1-4345-a4d3-95a4d419da23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.780 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fe81a804-ac7e-4b6e-8a74-6a9daed39bd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.795 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1c24da6e-59e2-42d7-8a63-91c8c6e338e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678842, 'reachable_time': 17801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314110, 'error': None, 'target': 'ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 systemd[1]: run-netns-ovnmeta\x2d9a0b70e3\x2d1894\x2d47e1\x2dbc43\x2d1721fdb1c9d6.mount: Deactivated successfully.
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.800 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a0b70e3-1894-47e1-bc43-1721fdb1c9d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:10:38.800 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[96436bbe-157b-4cb5-8017-7fd3cf271803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:38.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:39 np0005539563 nova_compute[252253]: 2025-11-29 08:10:39.078 252257 INFO nova.virt.libvirt.driver [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Deleting instance files /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953_del#033[00m
Nov 29 03:10:39 np0005539563 nova_compute[252253]: 2025-11-29 08:10:39.079 252257 INFO nova.virt.libvirt.driver [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Deletion of /var/lib/nova/instances/3f3e4ffb-a9ea-48f0-b9b8-54e436335953_del complete#033[00m
Nov 29 03:10:39 np0005539563 nova_compute[252253]: 2025-11-29 08:10:39.147 252257 INFO nova.compute.manager [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:10:39 np0005539563 nova_compute[252253]: 2025-11-29 08:10:39.149 252257 DEBUG oslo.service.loopingcall [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:10:39 np0005539563 nova_compute[252253]: 2025-11-29 08:10:39.150 252257 DEBUG nova.compute.manager [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:10:39 np0005539563 nova_compute[252253]: 2025-11-29 08:10:39.151 252257 DEBUG nova.network.neutron [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:10:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 279 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 1.6 MiB/s wr, 153 op/s
Nov 29 03:10:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.199 252257 DEBUG nova.network.neutron [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.244 252257 INFO nova.compute.manager [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Took 1.09 seconds to deallocate network for instance.#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.311 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.311 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.387 252257 DEBUG oslo_concurrency.processutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.824 252257 DEBUG nova.compute.manager [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.825 252257 DEBUG oslo_concurrency.lockutils [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.825 252257 DEBUG oslo_concurrency.lockutils [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.825 252257 DEBUG oslo_concurrency.lockutils [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.826 252257 DEBUG nova.compute.manager [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] No waiting events found dispatching network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.826 252257 WARNING nova.compute.manager [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received unexpected event network-vif-plugged-58e652e8-a3e6-48fb-af53-e7057ad02f02 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.826 252257 DEBUG nova.compute.manager [req-12bd6837-3bb4-4440-81b8-dffbbf04fed1 req-5c2a67e4-a95f-403a-a35a-15514e820001 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Received event network-vif-deleted-58e652e8-a3e6-48fb-af53-e7057ad02f02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812345530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.875 252257 DEBUG oslo_concurrency.processutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.883 252257 DEBUG nova.compute.provider_tree [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.915 252257 DEBUG nova.scheduler.client.report [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.938 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:40 np0005539563 nova_compute[252253]: 2025-11-29 08:10:40.970 252257 INFO nova.scheduler.client.report [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Deleted allocations for instance 3f3e4ffb-a9ea-48f0-b9b8-54e436335953#033[00m
Nov 29 03:10:41 np0005539563 nova_compute[252253]: 2025-11-29 08:10:41.153 252257 DEBUG oslo_concurrency.lockutils [None req-0144b815-daa9-45e3-a077-987ea0291229 7c90fe1780904a6098015abc66b38d9d baca94adaa5145a6b9cef930bff28fa4 - - default default] Lock "3f3e4ffb-a9ea-48f0-b9b8-54e436335953" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:41 np0005539563 nova_compute[252253]: 2025-11-29 08:10:41.955 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 227 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 1.6 MiB/s wr, 175 op/s
Nov 29 03:10:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:42 np0005539563 nova_compute[252253]: 2025-11-29 08:10:42.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:42.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:43 np0005539563 nova_compute[252253]: 2025-11-29 08:10:43.306 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:43 np0005539563 nova_compute[252253]: 2025-11-29 08:10:43.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 219 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 883 KiB/s wr, 153 op/s
Nov 29 03:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 29K writes, 113K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 9809 syncs, 3.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7121 writes, 26K keys, 7121 commit groups, 1.0 writes per commit group, ingest: 28.08 MB, 0.05 MB/s#012Interval WAL: 7122 writes, 2864 syncs, 2.49 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:10:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:44 np0005539563 nova_compute[252253]: 2025-11-29 08:10:44.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:44 np0005539563 nova_compute[252253]: 2025-11-29 08:10:44.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:10:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 246 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Nov 29 03:10:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:46.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.986 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.987 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.987 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.987 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:10:46 np0005539563 nova_compute[252253]: 2025-11-29 08:10:46.988 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956711814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.420 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.592 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.593 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4504MB free_disk=20.876358032226562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.593 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.593 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.714 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.715 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:10:47 np0005539563 nova_compute[252253]: 2025-11-29 08:10:47.729 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 29 03:10:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390464953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:48 np0005539563 nova_compute[252253]: 2025-11-29 08:10:48.153 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:48 np0005539563 nova_compute[252253]: 2025-11-29 08:10:48.160 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:10:48 np0005539563 nova_compute[252253]: 2025-11-29 08:10:48.178 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:10:48 np0005539563 nova_compute[252253]: 2025-11-29 08:10:48.210 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:10:48 np0005539563 nova_compute[252253]: 2025-11-29 08:10:48.211 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:10:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:48 np0005539563 podman[314185]: 2025-11-29 08:10:48.499498046 +0000 UTC m=+0.057147599 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:10:48 np0005539563 podman[314186]: 2025-11-29 08:10:48.508963062 +0000 UTC m=+0.061918269 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:10:48 np0005539563 podman[314187]: 2025-11-29 08:10:48.546607212 +0000 UTC m=+0.091799148 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:10:48 np0005539563 nova_compute[252253]: 2025-11-29 08:10:48.617 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:48.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:49 np0005539563 nova_compute[252253]: 2025-11-29 08:10:49.211 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:49 np0005539563 nova_compute[252253]: 2025-11-29 08:10:49.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:49 np0005539563 nova_compute[252253]: 2025-11-29 08:10:49.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:10:49 np0005539563 nova_compute[252253]: 2025-11-29 08:10:49.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:10:49 np0005539563 nova_compute[252253]: 2025-11-29 08:10:49.698 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:10:49 np0005539563 nova_compute[252253]: 2025-11-29 08:10:49.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 29 03:10:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:50.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:50.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:51 np0005539563 nova_compute[252253]: 2025-11-29 08:10:51.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 29 03:10:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:52.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:53 np0005539563 nova_compute[252253]: 2025-11-29 08:10:53.574 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403838.5732002, 3f3e4ffb-a9ea-48f0-b9b8-54e436335953 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:10:53 np0005539563 nova_compute[252253]: 2025-11-29 08:10:53.574 252257 INFO nova.compute.manager [-] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] VM Stopped (Lifecycle Event)
Nov 29 03:10:53 np0005539563 nova_compute[252253]: 2025-11-29 08:10:53.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:53 np0005539563 nova_compute[252253]: 2025-11-29 08:10:53.628 252257 DEBUG nova.compute.manager [None req-478945d6-9f61-4b7c-8fac-a65692b1adf7 - - - - - -] [instance: 3f3e4ffb-a9ea-48f0-b9b8-54e436335953] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:10:53 np0005539563 nova_compute[252253]: 2025-11-29 08:10:53.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:53 np0005539563 nova_compute[252253]: 2025-11-29 08:10:53.719 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 247 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Nov 29 03:10:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 29 03:10:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 03:10:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 29 03:10:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:54.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 03:10:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 29 03:10:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 03:10:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 29 03:10:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 261 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.1 MiB/s wr, 197 op/s
Nov 29 03:10:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:10:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:10:56 np0005539563 nova_compute[252253]: 2025-11-29 08:10:56.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 29 03:10:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 03:10:56 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 29 03:10:56 np0005539563 nova_compute[252253]: 2025-11-29 08:10:56.960 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:56.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 288 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 4.0 MiB/s wr, 267 op/s
Nov 29 03:10:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:58.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:58 np0005539563 nova_compute[252253]: 2025-11-29 08:10:58.621 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:10:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:58.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 326 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 7.8 MiB/s wr, 272 op/s
Nov 29 03:11:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:00.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:00.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.301 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.301 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.319 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.402 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.403 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.409 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.409 252257 INFO nova.compute.claims [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.512 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 29 03:11:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 03:11:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 29 03:11:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3485771019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.943 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.950 252257 DEBUG nova.compute.provider_tree [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.962 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539563 nova_compute[252253]: 2025-11-29 08:11:01.975 252257 DEBUG nova.scheduler.client.report [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.010 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.012 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.076 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.078 252257 DEBUG nova.network.neutron [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.103 252257 INFO nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:11:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 326 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.9 MiB/s wr, 138 op/s
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.123 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.258 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.259 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.260 252257 INFO nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Creating image(s)#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.311 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.348 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.383 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.388 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:02.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.487 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.488 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.489 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.490 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.516 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.521 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 307f5ed4-bb56-465b-a107-d4b965230c9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 29 03:11:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 03:11:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.875 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 307f5ed4-bb56-465b-a107-d4b965230c9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.946 252257 DEBUG nova.policy [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a776b2def7b4458d8c8ed719c2ea771b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c88bdebf444849adaebaf037c057e96a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:11:02 np0005539563 nova_compute[252253]: 2025-11-29 08:11:02.952 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] resizing rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:11:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:02.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.060 252257 DEBUG nova.objects.instance [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lazy-loading 'migration_context' on Instance uuid 307f5ed4-bb56-465b-a107-d4b965230c9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.078 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.079 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Ensure instance console log exists: /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.080 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.080 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.081 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:03 np0005539563 nova_compute[252253]: 2025-11-29 08:11:03.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 29 03:11:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 03:11:03 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 29 03:11:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 367 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 7.6 MiB/s wr, 135 op/s
Nov 29 03:11:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:04.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:04.916 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:04.917 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:04.917 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:04.996 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:04 np0005539563 nova_compute[252253]: 2025-11-29 08:11:04.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:04.997 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:11:05 np0005539563 nova_compute[252253]: 2025-11-29 08:11:05.123 252257 DEBUG nova.network.neutron [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Successfully created port: da666052-bace-4a8f-af1c-3e3947f88062 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:11:05 np0005539563 nova_compute[252253]: 2025-11-29 08:11:05.923 252257 DEBUG nova.network.neutron [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Successfully updated port: da666052-bace-4a8f-af1c-3e3947f88062 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:11:05 np0005539563 nova_compute[252253]: 2025-11-29 08:11:05.954 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "refresh_cache-307f5ed4-bb56-465b-a107-d4b965230c9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:05 np0005539563 nova_compute[252253]: 2025-11-29 08:11:05.955 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquired lock "refresh_cache-307f5ed4-bb56-465b-a107-d4b965230c9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:05 np0005539563 nova_compute[252253]: 2025-11-29 08:11:05.955 252257 DEBUG nova.network.neutron [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 480 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 337 op/s
Nov 29 03:11:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:06.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 29 03:11:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 03:11:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 29 03:11:06 np0005539563 nova_compute[252253]: 2025-11-29 08:11:06.864 252257 DEBUG nova.network.neutron [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:11:06 np0005539563 nova_compute[252253]: 2025-11-29 08:11:06.964 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:06.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.292 252257 DEBUG nova.compute.manager [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-changed-da666052-bace-4a8f-af1c-3e3947f88062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.293 252257 DEBUG nova.compute.manager [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Refreshing instance network info cache due to event network-changed-da666052-bace-4a8f-af1c-3e3947f88062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.293 252257 DEBUG oslo_concurrency.lockutils [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-307f5ed4-bb56-465b-a107-d4b965230c9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.828 252257 DEBUG nova.network.neutron [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Updating instance_info_cache with network_info: [{"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.850 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Releasing lock "refresh_cache-307f5ed4-bb56-465b-a107-d4b965230c9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.850 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Instance network_info: |[{"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.851 252257 DEBUG oslo_concurrency.lockutils [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-307f5ed4-bb56-465b-a107-d4b965230c9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.851 252257 DEBUG nova.network.neutron [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Refreshing network info cache for port da666052-bace-4a8f-af1c-3e3947f88062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.854 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Start _get_guest_xml network_info=[{"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.857 252257 WARNING nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.862 252257 DEBUG nova.virt.libvirt.host [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.862 252257 DEBUG nova.virt.libvirt.host [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.865 252257 DEBUG nova.virt.libvirt.host [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.865 252257 DEBUG nova.virt.libvirt.host [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.866 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.866 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.867 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.867 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.867 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.867 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.868 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.868 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.868 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.868 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.869 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.869 252257 DEBUG nova.virt.hardware [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:11:07 np0005539563 nova_compute[252253]: 2025-11-29 08:11:07.871 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:07.999 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 483 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 16 MiB/s wr, 348 op/s
Nov 29 03:11:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581233916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.284 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.309 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.313 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.628 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287252584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.757 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.758 252257 DEBUG nova.virt.libvirt.vif [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1295608404',display_name='tempest-NoVNCConsoleTestJSON-server-1295608404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1295608404',id=105,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c88bdebf444849adaebaf037c057e96a',ramdisk_id='',reservation_id='r-8ewi79y7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-1957249485',owner_user_name='tempest-NoVNCConsoleTestJSON-1957249485-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:02Z,user_data=None,user_id='a776b2def7b4458d8c8ed719c2ea771b',uuid=307f5ed4-bb56-465b-a107-d4b965230c9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.759 252257 DEBUG nova.network.os_vif_util [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Converting VIF {"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.760 252257 DEBUG nova.network.os_vif_util [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.761 252257 DEBUG nova.objects.instance [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lazy-loading 'pci_devices' on Instance uuid 307f5ed4-bb56-465b-a107-d4b965230c9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.782 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <uuid>307f5ed4-bb56-465b-a107-d4b965230c9b</uuid>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <name>instance-00000069</name>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:name>tempest-NoVNCConsoleTestJSON-server-1295608404</nova:name>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:11:07</nova:creationTime>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:user uuid="a776b2def7b4458d8c8ed719c2ea771b">tempest-NoVNCConsoleTestJSON-1957249485-project-member</nova:user>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:project uuid="c88bdebf444849adaebaf037c057e96a">tempest-NoVNCConsoleTestJSON-1957249485</nova:project>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <nova:port uuid="da666052-bace-4a8f-af1c-3e3947f88062">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <entry name="serial">307f5ed4-bb56-465b-a107-d4b965230c9b</entry>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <entry name="uuid">307f5ed4-bb56-465b-a107-d4b965230c9b</entry>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/307f5ed4-bb56-465b-a107-d4b965230c9b_disk">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/307f5ed4-bb56-465b-a107-d4b965230c9b_disk.config">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:2f:f2:5d"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <target dev="tapda666052-ba"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/console.log" append="off"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:11:08 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:11:08 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:11:08 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:11:08 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.783 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Preparing to wait for external event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.784 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.784 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.785 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.786 252257 DEBUG nova.virt.libvirt.vif [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1295608404',display_name='tempest-NoVNCConsoleTestJSON-server-1295608404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1295608404',id=105,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c88bdebf444849adaebaf037c057e96a',ramdisk_id='',reservation_id='r-8ewi79y7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-1957249485',owner_user_name='tempest-NoVNCConsoleTestJSON-1957249485-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:02Z,user_data=None,user_id='a776b2def7b4458d8c8ed719c2ea771b',uuid=307f5ed4-bb56-465b-a107-d4b965230c9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.787 252257 DEBUG nova.network.os_vif_util [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Converting VIF {"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.788 252257 DEBUG nova.network.os_vif_util [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.789 252257 DEBUG os_vif [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.790 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.791 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.791 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.797 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.797 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda666052-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.798 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda666052-ba, col_values=(('external_ids', {'iface-id': 'da666052-bace-4a8f-af1c-3e3947f88062', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:f2:5d', 'vm-uuid': '307f5ed4-bb56-465b-a107-d4b965230c9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539563 NetworkManager[48981]: <info>  [1764403868.8012] manager: (tapda666052-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.809 252257 INFO os_vif [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba')#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.895 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.896 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.896 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] No VIF found with MAC fa:16:3e:2f:f2:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.897 252257 INFO nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Using config drive#033[00m
Nov 29 03:11:08 np0005539563 nova_compute[252253]: 2025-11-29 08:11:08.930 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:08.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 29 03:11:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 03:11:10 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 29 03:11:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 29 03:11:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:10.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:10 np0005539563 nova_compute[252253]: 2025-11-29 08:11:10.894 252257 INFO nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Creating config drive at /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/disk.config#033[00m
Nov 29 03:11:10 np0005539563 nova_compute[252253]: 2025-11-29 08:11:10.905 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15jjvvqn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:10.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.057 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15jjvvqn" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.105 252257 DEBUG nova.storage.rbd_utils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] rbd image 307f5ed4-bb56-465b-a107-d4b965230c9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.109 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/disk.config 307f5ed4-bb56-465b-a107-d4b965230c9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.265 252257 DEBUG oslo_concurrency.processutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/disk.config 307f5ed4-bb56-465b-a107-d4b965230c9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.266 252257 INFO nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Deleting local config drive /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:11:11 np0005539563 kernel: tapda666052-ba: entered promiscuous mode
Nov 29 03:11:11 np0005539563 NetworkManager[48981]: <info>  [1764403871.3195] manager: (tapda666052-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/183)
Nov 29 03:11:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:11Z|00392|binding|INFO|Claiming lport da666052-bace-4a8f-af1c-3e3947f88062 for this chassis.
Nov 29 03:11:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:11Z|00393|binding|INFO|da666052-bace-4a8f-af1c-3e3947f88062: Claiming fa:16:3e:2f:f2:5d 10.100.0.8
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.323 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.337 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:f2:5d 10.100.0.8'], port_security=['fa:16:3e:2f:f2:5d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '307f5ed4-bb56-465b-a107-d4b965230c9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c88bdebf444849adaebaf037c057e96a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bba19793-b08c-4bff-b3d0-32c99d85ff60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fa62d15-32fc-4f7f-ad6d-777c3e122a15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=da666052-bace-4a8f-af1c-3e3947f88062) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.339 158990 INFO neutron.agent.ovn.metadata.agent [-] Port da666052-bace-4a8f-af1c-3e3947f88062 in datapath 5649a6b2-d5aa-4bf1-8057-9af3d105e9ba bound to our chassis#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.340 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5649a6b2-d5aa-4bf1-8057-9af3d105e9ba#033[00m
Nov 29 03:11:11 np0005539563 systemd-udevd[314685]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:11 np0005539563 systemd-machined[213024]: New machine qemu-45-instance-00000069.
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.355 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b3406b1d-8f19-400e-8c09-8014346b08da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.356 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5649a6b2-d1 in ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.358 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5649a6b2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.359 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7051cff6-a6f0-439d-a0ab-7a36cde94046]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.360 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[465e369f-9d51-405d-82ed-e57ff42c2549]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 NetworkManager[48981]: <info>  [1764403871.3608] device (tapda666052-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:11 np0005539563 NetworkManager[48981]: <info>  [1764403871.3616] device (tapda666052-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:11 np0005539563 systemd[1]: Started Virtual Machine qemu-45-instance-00000069.
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.371 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[bed10077-1ecb-4112-9315-2c34c2a2d3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.397 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[28419801-0909-496a-9514-fc4254c0048a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.403 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:11Z|00394|binding|INFO|Setting lport da666052-bace-4a8f-af1c-3e3947f88062 ovn-installed in OVS
Nov 29 03:11:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:11Z|00395|binding|INFO|Setting lport da666052-bace-4a8f-af1c-3e3947f88062 up in Southbound
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.407 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.427 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[08fddfa0-a4ff-459e-a17c-b49353b0f6b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 NetworkManager[48981]: <info>  [1764403871.4344] manager: (tap5649a6b2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.433 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[257f1a5b-943d-4069-9211-f03b12dd07ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.462 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2d93c72c-519e-40ce-ac85-e9ee1f427c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.465 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bb19e2e1-52f2-447d-8114-b232c03f7995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 NetworkManager[48981]: <info>  [1764403871.4899] device (tap5649a6b2-d0): carrier: link connected
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.498 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[969d2611-16f7-4972-92cb-1e6b3379d4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.520 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[48cd3f90-7f91-4ef3-bcce-d9ec9bafd431]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5649a6b2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:5b:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683926, 'reachable_time': 31760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314718, 'error': None, 'target': 'ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.533 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e56395b7-e0c7-47b5-9d20-ad855fa3d2c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:5b8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683926, 'tstamp': 683926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314719, 'error': None, 'target': 'ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.556 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f8a850-e697-4a17-b914-4f93d8dca92e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5649a6b2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:5b:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683926, 'reachable_time': 31760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314720, 'error': None, 'target': 'ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.582 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3878ad64-be8e-4ad7-b8f2-955e9f36f3b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.655 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3e164a-22a4-4978-afaf-b4749623a9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.657 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5649a6b2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.657 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.657 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5649a6b2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.659 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 NetworkManager[48981]: <info>  [1764403871.6599] manager: (tap5649a6b2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Nov 29 03:11:11 np0005539563 kernel: tap5649a6b2-d0: entered promiscuous mode
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.663 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5649a6b2-d0, col_values=(('external_ids', {'iface-id': 'ce543b8f-df53-4f2e-88f6-91873a77e81d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.664 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:11Z|00396|binding|INFO|Releasing lport ce543b8f-df53-4f2e-88f6-91873a77e81d from this chassis (sb_readonly=0)
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.683 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5649a6b2-d5aa-4bf1-8057-9af3d105e9ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5649a6b2-d5aa-4bf1-8057-9af3d105e9ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.684 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dcd27d-d669-40f5-879d-0d61438fd69c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.685 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/5649a6b2-d5aa-4bf1-8057-9af3d105e9ba.pid.haproxy
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 5649a6b2-d5aa-4bf1-8057-9af3d105e9ba
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:11:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:11.686 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'env', 'PROCESS_TAG=haproxy-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5649a6b2-d5aa-4bf1-8057-9af3d105e9ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.744 252257 DEBUG nova.network.neutron [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Updated VIF entry in instance network info cache for port da666052-bace-4a8f-af1c-3e3947f88062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.745 252257 DEBUG nova.network.neutron [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Updating instance_info_cache with network_info: [{"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 03:11:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.767 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403871.7670019, 307f5ed4-bb56-465b-a107-d4b965230c9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.768 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.848 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.848 252257 DEBUG oslo_concurrency.lockutils [req-60ed829e-2e94-4778-9f1d-bcb94fb65fcb req-8eac3332-45cf-4c90-9a8d-28fe64241c0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-307f5ed4-bb56-465b-a107-d4b965230c9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.859 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403871.7673643, 307f5ed4-bb56-465b-a107-d4b965230c9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.859 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.966 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.988 252257 DEBUG nova.compute.manager [req-074b692a-2447-41f9-b2dd-71ea0da8b586 req-84742940-d78c-46bd-800e-d204a90f913e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.988 252257 DEBUG oslo_concurrency.lockutils [req-074b692a-2447-41f9-b2dd-71ea0da8b586 req-84742940-d78c-46bd-800e-d204a90f913e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.988 252257 DEBUG oslo_concurrency.lockutils [req-074b692a-2447-41f9-b2dd-71ea0da8b586 req-84742940-d78c-46bd-800e-d204a90f913e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.989 252257 DEBUG oslo_concurrency.lockutils [req-074b692a-2447-41f9-b2dd-71ea0da8b586 req-84742940-d78c-46bd-800e-d204a90f913e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.989 252257 DEBUG nova.compute.manager [req-074b692a-2447-41f9-b2dd-71ea0da8b586 req-84742940-d78c-46bd-800e-d204a90f913e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Processing event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.990 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.996 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:11:11 np0005539563 nova_compute[252253]: 2025-11-29 08:11:11.997 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.003 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403871.9954858, 307f5ed4-bb56-465b-a107-d4b965230c9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.004 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.007 252257 INFO nova.virt.libvirt.driver [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Instance spawned successfully.#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.008 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:11:12 np0005539563 podman[314794]: 2025-11-29 08:11:12.049944331 +0000 UTC m=+0.062447934 container create 4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:11:12 np0005539563 systemd[1]: Started libpod-conmon-4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7.scope.
Nov 29 03:11:12 np0005539563 podman[314794]: 2025-11-29 08:11:12.015772835 +0000 UTC m=+0.028276458 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:11:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 173 KiB/s wr, 84 op/s
Nov 29 03:11:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.129 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/027642f337b19a18b86194315ea1507a85f646bfe39d2657fe06c54b6d887fae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.137 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.138 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.139 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.140 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.141 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.142 252257 DEBUG nova.virt.libvirt.driver [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:12 np0005539563 podman[314794]: 2025-11-29 08:11:12.148638015 +0000 UTC m=+0.161141638 container init 4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.152 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:12 np0005539563 podman[314794]: 2025-11-29 08:11:12.155157091 +0000 UTC m=+0.167660694 container start 4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.194 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:11:12 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [NOTICE]   (314813) : New worker (314815) forked
Nov 29 03:11:12 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [NOTICE]   (314813) : Loading success.
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.236 252257 INFO nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Took 9.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.237 252257 DEBUG nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.289 252257 INFO nova.compute.manager [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Took 10.92 seconds to build instance.#033[00m
Nov 29 03:11:12 np0005539563 nova_compute[252253]: 2025-11-29 08:11:12.303 252257 DEBUG oslo_concurrency.lockutils [None req-07e0fd3e-be0e-420c-b502-c93635583ccb a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:11:12
Nov 29 03:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'images', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 29 03:11:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:11:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:12.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:11:13 np0005539563 nova_compute[252253]: 2025-11-29 08:11:13.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:11:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:11:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 29 03:11:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 03:11:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Nov 29 03:11:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.214 252257 DEBUG nova.compute.manager [None req-2f32ad10-ef9f-400a-822b-6fa738dc6ff1 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.233 252257 DEBUG nova.compute.manager [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.234 252257 DEBUG oslo_concurrency.lockutils [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.234 252257 DEBUG oslo_concurrency.lockutils [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.235 252257 DEBUG oslo_concurrency.lockutils [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.235 252257 DEBUG nova.compute.manager [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] No waiting events found dispatching network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.236 252257 WARNING nova.compute.manager [req-15d64983-4aa5-4cb5-a2fc-430825de02cb req-f266f273-e93f-4f83-87a7-c8ca91a2edc6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received unexpected event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:14.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:14 np0005539563 nova_compute[252253]: 2025-11-29 08:11:14.888 252257 DEBUG nova.compute.manager [None req-600ce1f5-7cf9-462e-8c90-0b8b07eaf128 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Nov 29 03:11:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:14.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.260 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.260 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.260 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.261 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.261 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.262 252257 INFO nova.compute.manager [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Terminating instance#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.263 252257 DEBUG nova.compute.manager [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:11:15 np0005539563 kernel: tapda666052-ba (unregistering): left promiscuous mode
Nov 29 03:11:15 np0005539563 NetworkManager[48981]: <info>  [1764403875.3155] device (tapda666052-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:15Z|00397|binding|INFO|Releasing lport da666052-bace-4a8f-af1c-3e3947f88062 from this chassis (sb_readonly=0)
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.323 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:15Z|00398|binding|INFO|Setting lport da666052-bace-4a8f-af1c-3e3947f88062 down in Southbound
Nov 29 03:11:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:15Z|00399|binding|INFO|Removing iface tapda666052-ba ovn-installed in OVS
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.325 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.330 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:f2:5d 10.100.0.8'], port_security=['fa:16:3e:2f:f2:5d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '307f5ed4-bb56-465b-a107-d4b965230c9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c88bdebf444849adaebaf037c057e96a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bba19793-b08c-4bff-b3d0-32c99d85ff60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fa62d15-32fc-4f7f-ad6d-777c3e122a15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=da666052-bace-4a8f-af1c-3e3947f88062) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.332 158990 INFO neutron.agent.ovn.metadata.agent [-] Port da666052-bace-4a8f-af1c-3e3947f88062 in datapath 5649a6b2-d5aa-4bf1-8057-9af3d105e9ba unbound from our chassis#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.333 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.334 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4143075b-ce7e-4238-8c53-2c83e835c52b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.334 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba namespace which is not needed anymore#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.347 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 29 03:11:15 np0005539563 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000069.scope: Consumed 3.741s CPU time.
Nov 29 03:11:15 np0005539563 systemd-machined[213024]: Machine qemu-45-instance-00000069 terminated.
Nov 29 03:11:15 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [NOTICE]   (314813) : haproxy version is 2.8.14-c23fe91
Nov 29 03:11:15 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [NOTICE]   (314813) : path to executable is /usr/sbin/haproxy
Nov 29 03:11:15 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [WARNING]  (314813) : Exiting Master process...
Nov 29 03:11:15 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [ALERT]    (314813) : Current worker (314815) exited with code 143 (Terminated)
Nov 29 03:11:15 np0005539563 neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba[314809]: [WARNING]  (314813) : All workers exited. Exiting... (0)
Nov 29 03:11:15 np0005539563 systemd[1]: libpod-4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7.scope: Deactivated successfully.
Nov 29 03:11:15 np0005539563 podman[314851]: 2025-11-29 08:11:15.464671148 +0000 UTC m=+0.043891260 container died 4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:11:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7-userdata-shm.mount: Deactivated successfully.
Nov 29 03:11:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-027642f337b19a18b86194315ea1507a85f646bfe39d2657fe06c54b6d887fae-merged.mount: Deactivated successfully.
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.496 252257 INFO nova.virt.libvirt.driver [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Instance destroyed successfully.#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.497 252257 DEBUG nova.objects.instance [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lazy-loading 'resources' on Instance uuid 307f5ed4-bb56-465b-a107-d4b965230c9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:15 np0005539563 podman[314851]: 2025-11-29 08:11:15.503335546 +0000 UTC m=+0.082555668 container cleanup 4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.510 252257 DEBUG nova.virt.libvirt.vif [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1295608404',display_name='tempest-NoVNCConsoleTestJSON-server-1295608404',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1295608404',id=105,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c88bdebf444849adaebaf037c057e96a',ramdisk_id='',reservation_id='r-8ewi79y7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-NoVNCConsoleTestJSON-1957249485',owner_user_name='tempest-NoVNCConsoleTestJSON-1957249485-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:12Z,user_data=None,user_id='a776b2def7b4458d8c8ed719c2ea771b',uuid=307f5ed4-bb56-465b-a107-d4b965230c9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.511 252257 DEBUG nova.network.os_vif_util [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Converting VIF {"id": "da666052-bace-4a8f-af1c-3e3947f88062", "address": "fa:16:3e:2f:f2:5d", "network": {"id": "5649a6b2-d5aa-4bf1-8057-9af3d105e9ba", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-2005899826-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c88bdebf444849adaebaf037c057e96a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda666052-ba", "ovs_interfaceid": "da666052-bace-4a8f-af1c-3e3947f88062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.512 252257 DEBUG nova.network.os_vif_util [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.512 252257 DEBUG os_vif [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.514 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda666052-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.515 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.518 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.520 252257 INFO os_vif [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:f2:5d,bridge_name='br-int',has_traffic_filtering=True,id=da666052-bace-4a8f-af1c-3e3947f88062,network=Network(5649a6b2-d5aa-4bf1-8057-9af3d105e9ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda666052-ba')#033[00m
Nov 29 03:11:15 np0005539563 systemd[1]: libpod-conmon-4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7.scope: Deactivated successfully.
Nov 29 03:11:15 np0005539563 podman[314890]: 2025-11-29 08:11:15.567433962 +0000 UTC m=+0.043037807 container remove 4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.573 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[53b3e830-f57d-48c0-84a6-e96fd7e7d25b]: (4, ('Sat Nov 29 08:11:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba (4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7)\n4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7\nSat Nov 29 08:11:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba (4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7)\n4e6563ef9e5532c4a9ec48dc30b93e7b5eae4043dc1aef43d23c31ca65c46ab7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.574 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8074f0-b8b6-47e6-9c98-a98aa08c28e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.575 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5649a6b2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.577 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 kernel: tap5649a6b2-d0: left promiscuous mode
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.580 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.585 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e40105ec-4067-4dc4-88ce-9fb3b1a4a0c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.593 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.606 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9d32b0cc-c9fc-4780-8bd4-b9a1afe5b9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.608 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[893f5e7b-c718-48ca-acd0-ab52fa443df4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.626 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc282b4-79dc-4731-a39f-67c6800dff23]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683919, 'reachable_time': 40528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314923, 'error': None, 'target': 'ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.628 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5649a6b2-d5aa-4bf1-8057-9af3d105e9ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:11:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:15.628 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cd427a-2b9f-4c5e-8747-fe7862a03f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:15 np0005539563 systemd[1]: run-netns-ovnmeta\x2d5649a6b2\x2dd5aa\x2d4bf1\x2d8057\x2d9af3d105e9ba.mount: Deactivated successfully.
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.978 252257 INFO nova.virt.libvirt.driver [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Deleting instance files /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b_del#033[00m
Nov 29 03:11:15 np0005539563 nova_compute[252253]: 2025-11-29 08:11:15.978 252257 INFO nova.virt.libvirt.driver [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Deletion of /var/lib/nova/instances/307f5ed4-bb56-465b-a107-d4b965230c9b_del complete#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.060 252257 INFO nova.compute.manager [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.061 252257 DEBUG oslo.service.loopingcall [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.062 252257 DEBUG nova.compute.manager [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.062 252257 DEBUG nova.network.neutron [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:11:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 551 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 449 op/s
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.337 252257 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-vif-unplugged-da666052-bace-4a8f-af1c-3e3947f88062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.337 252257 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.337 252257 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.338 252257 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.338 252257 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] No waiting events found dispatching network-vif-unplugged-da666052-bace-4a8f-af1c-3e3947f88062 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.338 252257 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-vif-unplugged-da666052-bace-4a8f-af1c-3e3947f88062 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.339 252257 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.339 252257 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.339 252257 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.339 252257 DEBUG oslo_concurrency.lockutils [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.340 252257 DEBUG nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] No waiting events found dispatching network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.340 252257 WARNING nova.compute.manager [req-8cff83b5-8e19-4b14-a9cd-76cd0db95b4b req-1177f3f4-5148-4198-9082-2214bb7363d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received unexpected event network-vif-plugged-da666052-bace-4a8f-af1c-3e3947f88062 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:11:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 29 03:11:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 03:11:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 29 03:11:16 np0005539563 nova_compute[252253]: 2025-11-29 08:11:16.968 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:16.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.177 252257 DEBUG nova.network.neutron [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.209 252257 INFO nova.compute.manager [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Took 1.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.276 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.278 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.292 252257 DEBUG nova.compute.manager [req-1c226547-2b33-4f7f-9a53-090e399ddac0 req-a284fbdc-2b49-45cc-a7d0-d6aab230a94e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Received event network-vif-deleted-da666052-bace-4a8f-af1c-3e3947f88062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.330 252257 DEBUG oslo_concurrency.processutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776797890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.780 252257 DEBUG oslo_concurrency.processutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.787 252257 DEBUG nova.compute.provider_tree [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.803 252257 DEBUG nova.scheduler.client.report [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.826 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.880 252257 INFO nova.scheduler.client.report [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Deleted allocations for instance 307f5ed4-bb56-465b-a107-d4b965230c9b#033[00m
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:11:17 np0005539563 nova_compute[252253]: 2025-11-29 08:11:17.962 252257 DEBUG oslo_concurrency.lockutils [None req-0a3a6890-bba9-4a72-b1f0-41969a0e7191 a776b2def7b4458d8c8ed719c2ea771b c88bdebf444849adaebaf037c057e96a - - default default] Lock "307f5ed4-bb56-465b-a107-d4b965230c9b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:11:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 513 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 7.8 MiB/s wr, 498 op/s
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:18.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:18.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 29 03:11:19 np0005539563 podman[315197]: 2025-11-29 08:11:19.560128928 +0000 UTC m=+0.094204344 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:19 np0005539563 podman[315198]: 2025-11-29 08:11:19.572678038 +0000 UTC m=+0.106272631 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:11:19 np0005539563 podman[315199]: 2025-11-29 08:11:19.646710564 +0000 UTC m=+0.167604122 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b6a1bc8c-d787-4ba9-8ed8-895f3c46493d does not exist
Nov 29 03:11:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b78bf919-8a37-4044-bc9e-268035fb95a7 does not exist
Nov 29 03:11:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5f41919e-ba28-4c93-8630-df392a14d43e does not exist
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:11:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:11:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 426 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.9 MiB/s wr, 524 op/s
Nov 29 03:11:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:20 np0005539563 nova_compute[252253]: 2025-11-29 08:11:20.516 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.631548996 +0000 UTC m=+0.048926496 container create ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:11:20 np0005539563 systemd[1]: Started libpod-conmon-ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5.scope.
Nov 29 03:11:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.707901985 +0000 UTC m=+0.125279485 container init ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_burnell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.613717203 +0000 UTC m=+0.031094723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.714432262 +0000 UTC m=+0.131809752 container start ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:11:20 np0005539563 laughing_burnell[315419]: 167 167
Nov 29 03:11:20 np0005539563 systemd[1]: libpod-ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5.scope: Deactivated successfully.
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.72319904 +0000 UTC m=+0.140576540 container attach ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.723506468 +0000 UTC m=+0.140883968 container died ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:11:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-154ff71075fbca642235d9ff2144ae553874d48607bb5cfae214cba0185d49b8-merged.mount: Deactivated successfully.
Nov 29 03:11:20 np0005539563 podman[315403]: 2025-11-29 08:11:20.759726639 +0000 UTC m=+0.177104149 container remove ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:11:20 np0005539563 systemd[1]: libpod-conmon-ccfae8090474ce64514f1e7c150cfa7e5b8745a380f6a2df1a8dad3acb29abe5.scope: Deactivated successfully.
Nov 29 03:11:20 np0005539563 podman[315444]: 2025-11-29 08:11:20.929767297 +0000 UTC m=+0.039906923 container create bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:20 np0005539563 systemd[1]: Started libpod-conmon-bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6.scope.
Nov 29 03:11:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:21 np0005539563 podman[315444]: 2025-11-29 08:11:20.912724944 +0000 UTC m=+0.022864530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29ec0706070d735aabdc5223313a82d7889d762a5a34d5ac5c079472862e2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29ec0706070d735aabdc5223313a82d7889d762a5a34d5ac5c079472862e2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29ec0706070d735aabdc5223313a82d7889d762a5a34d5ac5c079472862e2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29ec0706070d735aabdc5223313a82d7889d762a5a34d5ac5c079472862e2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29ec0706070d735aabdc5223313a82d7889d762a5a34d5ac5c079472862e2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:21 np0005539563 podman[315444]: 2025-11-29 08:11:21.027214176 +0000 UTC m=+0.137353792 container init bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:21 np0005539563 podman[315444]: 2025-11-29 08:11:21.039086528 +0000 UTC m=+0.149226114 container start bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kilby, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:11:21 np0005539563 podman[315444]: 2025-11-29 08:11:21.042496131 +0000 UTC m=+0.152635717 container attach bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:11:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 29 03:11:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 03:11:21 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 29 03:11:21 np0005539563 silly_kilby[315461]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:11:21 np0005539563 silly_kilby[315461]: --> relative data size: 1.0
Nov 29 03:11:21 np0005539563 silly_kilby[315461]: --> All data devices are unavailable
Nov 29 03:11:21 np0005539563 systemd[1]: libpod-bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6.scope: Deactivated successfully.
Nov 29 03:11:21 np0005539563 podman[315476]: 2025-11-29 08:11:21.936213824 +0000 UTC m=+0.027309701 container died bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:11:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ef29ec0706070d735aabdc5223313a82d7889d762a5a34d5ac5c079472862e2d-merged.mount: Deactivated successfully.
Nov 29 03:11:21 np0005539563 nova_compute[252253]: 2025-11-29 08:11:21.970 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:21 np0005539563 podman[315476]: 2025-11-29 08:11:21.998964054 +0000 UTC m=+0.090059891 container remove bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kilby, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:11:22 np0005539563 systemd[1]: libpod-conmon-bf6b484add6be51bdde04bb6571227946718e8e5a0811cf6433462de07424ae6.scope: Deactivated successfully.
Nov 29 03:11:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 364 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 26 KiB/s wr, 176 op/s
Nov 29 03:11:22 np0005539563 nova_compute[252253]: 2025-11-29 08:11:22.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:22.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:22 np0005539563 podman[315633]: 2025-11-29 08:11:22.675387661 +0000 UTC m=+0.030616930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:23 np0005539563 podman[315633]: 2025-11-29 08:11:23.055841639 +0000 UTC m=+0.411070838 container create ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:11:23 np0005539563 systemd[1]: Started libpod-conmon-ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093.scope.
Nov 29 03:11:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:23 np0005539563 podman[315633]: 2025-11-29 08:11:23.16918643 +0000 UTC m=+0.524415649 container init ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:11:23 np0005539563 podman[315633]: 2025-11-29 08:11:23.178617535 +0000 UTC m=+0.533846754 container start ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:23 np0005539563 podman[315633]: 2025-11-29 08:11:23.183387375 +0000 UTC m=+0.538616574 container attach ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 03:11:23 np0005539563 suspicious_lewin[315650]: 167 167
Nov 29 03:11:23 np0005539563 systemd[1]: libpod-ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093.scope: Deactivated successfully.
Nov 29 03:11:23 np0005539563 podman[315633]: 2025-11-29 08:11:23.191630048 +0000 UTC m=+0.546859267 container died ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:11:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9f8ae43870656416a1d9b7dda9ff315c13bf698992b445781d1053211f32b79d-merged.mount: Deactivated successfully.
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006518777335752839 of space, bias 1.0, pg target 1.9556332007258517 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004340641285129351 of space, bias 1.0, pg target 1.2978517442536759 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:11:23 np0005539563 podman[315633]: 2025-11-29 08:11:23.492392047 +0000 UTC m=+0.847621286 container remove ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:23 np0005539563 systemd[1]: libpod-conmon-ffdc558b765e574829266eb36650f3a2f98427e99cef389909bb39f9f7da8093.scope: Deactivated successfully.
Nov 29 03:11:23 np0005539563 podman[315674]: 2025-11-29 08:11:23.816222391 +0000 UTC m=+0.119674404 container create 424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:23 np0005539563 podman[315674]: 2025-11-29 08:11:23.73793737 +0000 UTC m=+0.041389403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:23 np0005539563 systemd[1]: Started libpod-conmon-424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88.scope.
Nov 29 03:11:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1fe608e5cab183acbaabf23c1217e46fe007f8523007e2fe3677024725b78e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1fe608e5cab183acbaabf23c1217e46fe007f8523007e2fe3677024725b78e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1fe608e5cab183acbaabf23c1217e46fe007f8523007e2fe3677024725b78e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd1fe608e5cab183acbaabf23c1217e46fe007f8523007e2fe3677024725b78e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:23 np0005539563 podman[315674]: 2025-11-29 08:11:23.971075346 +0000 UTC m=+0.274527379 container init 424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:11:23 np0005539563 podman[315674]: 2025-11-29 08:11:23.979023402 +0000 UTC m=+0.282475415 container start 424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:11:24 np0005539563 podman[315674]: 2025-11-29 08:11:24.021384109 +0000 UTC m=+0.324836122 container attach 424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:11:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 314 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 915 KiB/s rd, 24 KiB/s wr, 165 op/s
Nov 29 03:11:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:24.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]: {
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:    "0": [
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:        {
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "devices": [
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "/dev/loop3"
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            ],
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "lv_name": "ceph_lv0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "lv_size": "7511998464",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "name": "ceph_lv0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "tags": {
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.cluster_name": "ceph",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.crush_device_class": "",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.encrypted": "0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.osd_id": "0",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.type": "block",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:                "ceph.vdo": "0"
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            },
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "type": "block",
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:            "vg_name": "ceph_vg0"
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:        }
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]:    ]
Nov 29 03:11:24 np0005539563 thirsty_bhabha[315691]: }
Nov 29 03:11:24 np0005539563 systemd[1]: libpod-424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88.scope: Deactivated successfully.
Nov 29 03:11:24 np0005539563 podman[315674]: 2025-11-29 08:11:24.780865047 +0000 UTC m=+1.084317080 container died 424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:11:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fd1fe608e5cab183acbaabf23c1217e46fe007f8523007e2fe3677024725b78e-merged.mount: Deactivated successfully.
Nov 29 03:11:24 np0005539563 podman[315674]: 2025-11-29 08:11:24.837638636 +0000 UTC m=+1.141090649 container remove 424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:11:24 np0005539563 systemd[1]: libpod-conmon-424fd0a6aa07e041b0fcfdc99ecef49e5453c00f9e1e3feb270eb812db500f88.scope: Deactivated successfully.
Nov 29 03:11:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:25 np0005539563 nova_compute[252253]: 2025-11-29 08:11:25.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.604108671 +0000 UTC m=+0.040262101 container create 7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:11:25 np0005539563 systemd[1]: Started libpod-conmon-7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92.scope.
Nov 29 03:11:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.587303946 +0000 UTC m=+0.023457406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.692366863 +0000 UTC m=+0.128520333 container init 7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.706297591 +0000 UTC m=+0.142451021 container start 7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.712466088 +0000 UTC m=+0.148619558 container attach 7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lumiere, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:11:25 np0005539563 great_lumiere[315873]: 167 167
Nov 29 03:11:25 np0005539563 systemd[1]: libpod-7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92.scope: Deactivated successfully.
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.718662265 +0000 UTC m=+0.154815715 container died 7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cd20021588663059c6f3a77844b80ff49a19fbcdb78aa2629a5b081684b0a90f-merged.mount: Deactivated successfully.
Nov 29 03:11:25 np0005539563 podman[315856]: 2025-11-29 08:11:25.765174556 +0000 UTC m=+0.201327996 container remove 7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:11:25 np0005539563 systemd[1]: libpod-conmon-7ae1120f755adc0cd8d1780e173d2fde10dcfe1f9fb946f0bc8ed27deba39c92.scope: Deactivated successfully.
Nov 29 03:11:25 np0005539563 podman[315897]: 2025-11-29 08:11:25.976030248 +0000 UTC m=+0.055885285 container create 8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:11:26 np0005539563 systemd[1]: Started libpod-conmon-8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc.scope.
Nov 29 03:11:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:26 np0005539563 podman[315897]: 2025-11-29 08:11:25.954096914 +0000 UTC m=+0.033951931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b49e5156282bbc334a43bab27505d42944fbf81c5c0c97a986dee26e67821/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b49e5156282bbc334a43bab27505d42944fbf81c5c0c97a986dee26e67821/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b49e5156282bbc334a43bab27505d42944fbf81c5c0c97a986dee26e67821/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b49e5156282bbc334a43bab27505d42944fbf81c5c0c97a986dee26e67821/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:26 np0005539563 podman[315897]: 2025-11-29 08:11:26.067155157 +0000 UTC m=+0.147010244 container init 8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:26 np0005539563 podman[315897]: 2025-11-29 08:11:26.075190365 +0000 UTC m=+0.155045362 container start 8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:11:26 np0005539563 podman[315897]: 2025-11-29 08:11:26.07942066 +0000 UTC m=+0.159275657 container attach 8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:11:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 281 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 617 KiB/s rd, 18 KiB/s wr, 113 op/s
Nov 29 03:11:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:26.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 29 03:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 03:11:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]: {
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:        "osd_id": 0,
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:        "type": "bluestore"
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]:    }
Nov 29 03:11:26 np0005539563 charming_ganguly[315913]: }
Nov 29 03:11:26 np0005539563 systemd[1]: libpod-8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc.scope: Deactivated successfully.
Nov 29 03:11:26 np0005539563 podman[315897]: 2025-11-29 08:11:26.90769858 +0000 UTC m=+0.987553577 container died 8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:11:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-939b49e5156282bbc334a43bab27505d42944fbf81c5c0c97a986dee26e67821-merged.mount: Deactivated successfully.
Nov 29 03:11:26 np0005539563 podman[315897]: 2025-11-29 08:11:26.965497926 +0000 UTC m=+1.045352923 container remove 8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:27 np0005539563 systemd[1]: libpod-conmon-8b4e95dd02a7924040b921446384c0cc98934a90931036d293bcdcf5e46586bc.scope: Deactivated successfully.
Nov 29 03:11:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:26.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:27 np0005539563 nova_compute[252253]: 2025-11-29 08:11:27.021 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 34e6eaba-d9ba-424b-8bd0-f418ae858f8b does not exist
Nov 29 03:11:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 591a93ac-1c16-4ce1-bac3-09033f52f29a does not exist
Nov 29 03:11:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e89f116f-9a47-44a2-9c9a-9e1190e38701 does not exist
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831097261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:11:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831097261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:11:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 281 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Nov 29 03:11:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:28.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:28.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.826 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.826 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.849 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.941 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.942 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.955 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:11:29 np0005539563 nova_compute[252253]: 2025-11-29 08:11:29.956 252257 INFO nova.compute.claims [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.081 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 281 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 2.7 KiB/s wr, 36 op/s
Nov 29 03:11:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.496 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403875.4946404, 307f5ed4-bb56-465b-a107-d4b965230c9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.497 252257 INFO nova.compute.manager [-] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:11:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968321550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.527 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.530 252257 DEBUG nova.compute.manager [None req-60bb77bd-6e93-4460-9ace-34f4a9c1f1a6 - - - - - -] [instance: 307f5ed4-bb56-465b-a107-d4b965230c9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.564 252257 DEBUG nova.compute.provider_tree [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.566 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.637 252257 DEBUG nova.scheduler.client.report [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.903 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.904 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.950 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.950 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.987 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.996 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:11:30 np0005539563 nova_compute[252253]: 2025-11-29 08:11:30.997 252257 DEBUG nova.network.neutron [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:11:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.034 252257 INFO nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.080 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.110 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.110 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.117 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.118 252257 INFO nova.compute.claims [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.192 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.194 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.195 252257 INFO nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Creating image(s)
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.231 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.270 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.303 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.307 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.370 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.371 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.372 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.372 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.401 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.405 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 98453ec7-fbda-42ae-8624-8aa5921fd634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.468 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.723 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 98453ec7-fbda-42ae-8624-8aa5921fd634_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:11:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.770 252257 DEBUG nova.policy [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca93c8e3eac142c0aa6b61807727dea2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.822 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] resizing rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:11:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588585032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.949 252257 DEBUG nova.objects.instance [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 98453ec7-fbda-42ae-8624-8aa5921fd634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.952 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.956 252257 DEBUG nova.compute.provider_tree [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.978 252257 DEBUG nova.scheduler.client.report [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.981 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.981 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Ensure instance console log exists: /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.982 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.982 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.983 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.996 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:31 np0005539563 nova_compute[252253]: 2025-11-29 08:11:31.997 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.021 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.046 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.047 252257 DEBUG nova.network.neutron [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.067 252257 INFO nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.084 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:11:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 281 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 15 KiB/s wr, 50 op/s
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.174 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.175 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.176 252257 INFO nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Creating image(s)
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.198 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.221 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.244 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.247 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.305 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.306 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.306 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.307 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.333 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.336 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9d9d2058-c79d-456b-b647-e73537cb9223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:32.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.671 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9d9d2058-c79d-456b-b647-e73537cb9223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.781 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] resizing rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.922 252257 DEBUG nova.objects.instance [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.942 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.943 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Ensure instance console log exists: /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.943 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.944 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.944 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:32 np0005539563 nova_compute[252253]: 2025-11-29 08:11:32.965 252257 DEBUG nova.policy [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58625e4c2b5d43a1abbab05b98853a65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '250671461f27498d9f6b4476c7b69533', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:11:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:33.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.009 252257 DEBUG nova.network.neutron [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Successfully created port: 5a778f1e-9dbc-422a-b415-d2ea4fecdaac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.819 252257 DEBUG nova.network.neutron [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Successfully updated port: 5a778f1e-9dbc-422a-b415-d2ea4fecdaac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.876 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.877 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.877 252257 DEBUG nova.network.neutron [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.889 252257 DEBUG nova.network.neutron [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Successfully created port: a5c93ffe-8186-4e03-86aa-e1b1efc225cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.990 252257 DEBUG nova.compute.manager [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-changed-5a778f1e-9dbc-422a-b415-d2ea4fecdaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.990 252257 DEBUG nova.compute.manager [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Refreshing instance network info cache due to event network-changed-5a778f1e-9dbc-422a-b415-d2ea4fecdaac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:33 np0005539563 nova_compute[252253]: 2025-11-29 08:11:33.991 252257 DEBUG oslo_concurrency.lockutils [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:34 np0005539563 nova_compute[252253]: 2025-11-29 08:11:34.103 252257 DEBUG nova.network.neutron [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:11:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 308 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 962 KiB/s wr, 63 op/s
Nov 29 03:11:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:35.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.015 252257 DEBUG nova.network.neutron [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updating instance_info_cache with network_info: [{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.051 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.052 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Instance network_info: |[{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.052 252257 DEBUG oslo_concurrency.lockutils [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.052 252257 DEBUG nova.network.neutron [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Refreshing network info cache for port 5a778f1e-9dbc-422a-b415-d2ea4fecdaac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.056 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Start _get_guest_xml network_info=[{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.059 252257 WARNING nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.063 252257 DEBUG nova.virt.libvirt.host [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.064 252257 DEBUG nova.virt.libvirt.host [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.066 252257 DEBUG nova.virt.libvirt.host [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.067 252257 DEBUG nova.virt.libvirt.host [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.068 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.068 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.069 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.069 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.070 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.070 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.070 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.070 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.071 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.071 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.071 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.072 252257 DEBUG nova.virt.hardware [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.075 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.479 252257 DEBUG nova.network.neutron [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Successfully updated port: a5c93ffe-8186-4e03-86aa-e1b1efc225cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.498 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-9d9d2058-c79d-456b-b647-e73537cb9223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.498 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-9d9d2058-c79d-456b-b647-e73537cb9223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.498 252257 DEBUG nova.network.neutron [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476214331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.557 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.585 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.589 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.620 252257 DEBUG nova.compute.manager [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-changed-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.621 252257 DEBUG nova.compute.manager [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Refreshing instance network info cache due to event network-changed-a5c93ffe-8186-4e03-86aa-e1b1efc225cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.621 252257 DEBUG oslo_concurrency.lockutils [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9d9d2058-c79d-456b-b647-e73537cb9223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:35 np0005539563 nova_compute[252253]: 2025-11-29 08:11:35.760 252257 DEBUG nova.network.neutron [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:11:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152178789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.011 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.012 252257 DEBUG nova.virt.libvirt.vif [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-865719826',display_name='tempest-ServerActionsTestOtherB-server-865719826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-865719826',id=106,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-ndatvz0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsT
estOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:31Z,user_data=None,user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=98453ec7-fbda-42ae-8624-8aa5921fd634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.013 252257 DEBUG nova.network.os_vif_util [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.014 252257 DEBUG nova.network.os_vif_util [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.015 252257 DEBUG nova.objects.instance [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 98453ec7-fbda-42ae-8624-8aa5921fd634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.033 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <uuid>98453ec7-fbda-42ae-8624-8aa5921fd634</uuid>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <name>instance-0000006a</name>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerActionsTestOtherB-server-865719826</nova:name>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:11:35</nova:creationTime>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <nova:port uuid="5a778f1e-9dbc-422a-b415-d2ea4fecdaac">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <entry name="serial">98453ec7-fbda-42ae-8624-8aa5921fd634</entry>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <entry name="uuid">98453ec7-fbda-42ae-8624-8aa5921fd634</entry>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/98453ec7-fbda-42ae-8624-8aa5921fd634_disk">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/98453ec7-fbda-42ae-8624-8aa5921fd634_disk.config">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:72:b2:e8"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <target dev="tap5a778f1e-9d"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/console.log" append="off"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:11:36 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:11:36 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:11:36 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:11:36 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.035 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Preparing to wait for external event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.035 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.036 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.036 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.038 252257 DEBUG nova.virt.libvirt.vif [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-865719826',display_name='tempest-ServerActionsTestOtherB-server-865719826',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-865719826',id=106,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-ndatvz0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:31Z,user_data=None,user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=98453ec7-fbda-42ae-8624-8aa5921fd634,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.038 252257 DEBUG nova.network.os_vif_util [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.039 252257 DEBUG nova.network.os_vif_util [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.040 252257 DEBUG os_vif [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.041 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.042 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.043 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.047 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.048 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a778f1e-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.048 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a778f1e-9d, col_values=(('external_ids', {'iface-id': '5a778f1e-9dbc-422a-b415-d2ea4fecdaac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:b2:e8', 'vm-uuid': '98453ec7-fbda-42ae-8624-8aa5921fd634'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:36 np0005539563 NetworkManager[48981]: <info>  [1764403896.0514] manager: (tap5a778f1e-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.057 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.058 252257 INFO os_vif [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d')#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.118 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.118 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.119 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:72:b2:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.119 252257 INFO nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Using config drive#033[00m
Nov 29 03:11:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 374 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 4.3 MiB/s wr, 117 op/s
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.147 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.911 252257 DEBUG nova.network.neutron [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updated VIF entry in instance network info cache for port 5a778f1e-9dbc-422a-b415-d2ea4fecdaac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:36 np0005539563 nova_compute[252253]: 2025-11-29 08:11:36.911 252257 DEBUG nova.network.neutron [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updating instance_info_cache with network_info: [{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:37.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.024 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.064 252257 INFO nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Creating config drive at /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/disk.config#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.069 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpub0h4ozf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.108 252257 DEBUG oslo_concurrency.lockutils [req-0181223d-62f0-4967-bfa0-b3a3100a2816 req-0be1614e-e6da-4433-8aed-81dce17e7220 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.221 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpub0h4ozf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.267 252257 DEBUG nova.storage.rbd_utils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 98453ec7-fbda-42ae-8624-8aa5921fd634_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.273 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/disk.config 98453ec7-fbda-42ae-8624-8aa5921fd634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.471 252257 DEBUG oslo_concurrency.processutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/disk.config 98453ec7-fbda-42ae-8624-8aa5921fd634_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.473 252257 INFO nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Deleting local config drive /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634/disk.config because it was imported into RBD.#033[00m
Nov 29 03:11:37 np0005539563 kernel: tap5a778f1e-9d: entered promiscuous mode
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.5561] manager: (tap5a778f1e-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.557 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:37Z|00400|binding|INFO|Claiming lport 5a778f1e-9dbc-422a-b415-d2ea4fecdaac for this chassis.
Nov 29 03:11:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:37Z|00401|binding|INFO|5a778f1e-9dbc-422a-b415-d2ea4fecdaac: Claiming fa:16:3e:72:b2:e8 10.100.0.5
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.564 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.566 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.572 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.5862] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.5872] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.585 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.596 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:b2:e8 10.100.0.5'], port_security=['fa:16:3e:72:b2:e8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '98453ec7-fbda-42ae-8624-8aa5921fd634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35298f43-8419-4a47-81fd-585bfb137a9a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=5a778f1e-9dbc-422a-b415-d2ea4fecdaac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.598 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 5a778f1e-9dbc-422a-b415-d2ea4fecdaac in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:11:37 np0005539563 systemd-machined[213024]: New machine qemu-46-instance-0000006a.
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.600 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.620 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35a57857-2e8f-4a98-bdb4-45237985dac9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.621 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d5b8c11-b1 in ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.623 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d5b8c11-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.624 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a917b760-d5f3-4130-b6e0-e5e0cb0f4c9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.624 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[228f8b4e-5121-45c7-98e8-1467c4526e09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.642 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[284c8765-0cbd-470c-9536-095edf37b928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 systemd[1]: Started Virtual Machine qemu-46-instance-0000006a.
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.670 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d49d5f89-1c03-444f-b9f8-f78d41ff5604]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.679 252257 DEBUG nova.network.neutron [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Updating instance_info_cache with network_info: [{"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.682 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.683 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:11:37 np0005539563 systemd-udevd[316569]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.7027] device (tap5a778f1e-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.7037] device (tap5a778f1e-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.704 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[135d040c-a95a-4e8e-b4c7-2a15612ce27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.714 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:11:37 np0005539563 systemd-udevd[316575]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.718 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e18812-8bed-4480-9a69-62f7b5a69fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.722 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-9d9d2058-c79d-456b-b647-e73537cb9223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.722 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance network_info: |[{"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.723 252257 DEBUG oslo_concurrency.lockutils [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9d9d2058-c79d-456b-b647-e73537cb9223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.723 252257 DEBUG nova.network.neutron [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Refreshing network info cache for port a5c93ffe-8186-4e03-86aa-e1b1efc225cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.7303] manager: (tap4d5b8c11-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/190)
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.728 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Start _get_guest_xml network_info=[{"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.740 252257 WARNING nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.750 252257 DEBUG nova.virt.libvirt.host [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.751 252257 DEBUG nova.virt.libvirt.host [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.750 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b2756850-9eec-4245-b9e7-e27961963fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.754 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd80a74-c96f-4148-b6d5-430cd5078e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.756 252257 DEBUG nova.virt.libvirt.host [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.757 252257 DEBUG nova.virt.libvirt.host [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.759 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.759 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.760 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.760 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.760 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.761 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.761 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.761 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.762 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.762 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.762 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.762 252257 DEBUG nova.virt.hardware [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.767 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.7761] device (tap4d5b8c11-b0): carrier: link connected
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.780 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1a87bc04-0ffd-42d7-9fdc-06f360f59e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.804 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d120f8-3eda-4cc8-9f93-25747b39d6da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686554, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316599, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.832 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[58088900-9dc2-4386-ba1c-7731ba884e48]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686554, 'tstamp': 686554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316600, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.834 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.865 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.873 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[94e36d98-ffac-4b7a-a1bd-2d3574f6b2be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686554, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316601, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:37Z|00402|binding|INFO|Setting lport 5a778f1e-9dbc-422a-b415-d2ea4fecdaac ovn-installed in OVS
Nov 29 03:11:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:37Z|00403|binding|INFO|Setting lport 5a778f1e-9dbc-422a-b415-d2ea4fecdaac up in Southbound
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.905 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[22c9471c-aecd-4ff1-b076-cabcfb5b4141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.984 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8857206a-c84d-4529-8fe4-9b0c7870b9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.985 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.986 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.986 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.988 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 NetworkManager[48981]: <info>  [1764403897.9889] manager: (tap4d5b8c11-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Nov 29 03:11:37 np0005539563 kernel: tap4d5b8c11-b0: entered promiscuous mode
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.990 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:37.992 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:37 np0005539563 nova_compute[252253]: 2025-11-29 08:11:37.992 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:37Z|00404|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:38.009 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:38.009 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3417b7-303d-44d6-a462-4c1407a3b4cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:38.010 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/4d5b8c11-b69e-4a74-846b-03943fb29a81.pid.haproxy
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 4d5b8c11-b69e-4a74-846b-03943fb29a81
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:11:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:38.012 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'env', 'PROCESS_TAG=haproxy-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d5b8c11-b69e-4a74-846b-03943fb29a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.065 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403898.06461, 98453ec7-fbda-42ae-8624-8aa5921fd634 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.065 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] VM Started (Lifecycle Event)#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.095 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.101 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403898.0680027, 98453ec7-fbda-42ae-8624-8aa5921fd634 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.102 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.127 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.132 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 3.8 MiB/s wr, 105 op/s
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.165 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2072111826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.193 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.225 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.230 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.259 252257 DEBUG nova.compute.manager [req-41840667-ceda-454c-b119-64176078f238 req-ba8a9931-e981-4484-9842-08aa02d7a8b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.260 252257 DEBUG oslo_concurrency.lockutils [req-41840667-ceda-454c-b119-64176078f238 req-ba8a9931-e981-4484-9842-08aa02d7a8b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.260 252257 DEBUG oslo_concurrency.lockutils [req-41840667-ceda-454c-b119-64176078f238 req-ba8a9931-e981-4484-9842-08aa02d7a8b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.261 252257 DEBUG oslo_concurrency.lockutils [req-41840667-ceda-454c-b119-64176078f238 req-ba8a9931-e981-4484-9842-08aa02d7a8b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.261 252257 DEBUG nova.compute.manager [req-41840667-ceda-454c-b119-64176078f238 req-ba8a9931-e981-4484-9842-08aa02d7a8b0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Processing event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.262 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.265 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403898.2656543, 98453ec7-fbda-42ae-8624-8aa5921fd634 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.266 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.268 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.272 252257 INFO nova.virt.libvirt.driver [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Instance spawned successfully.#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.272 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.294 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.299 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.302 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.303 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.303 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.304 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.304 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.305 252257 DEBUG nova.virt.libvirt.driver [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.329 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:11:38 np0005539563 podman[316714]: 2025-11-29 08:11:38.360757815 +0000 UTC m=+0.053519721 container create a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.368 252257 INFO nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Took 7.18 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.369 252257 DEBUG nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:11:38 np0005539563 systemd[1]: Started libpod-conmon-a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc.scope.
Nov 29 03:11:38 np0005539563 podman[316714]: 2025-11-29 08:11:38.330023772 +0000 UTC m=+0.022785728 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:11:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.439 252257 INFO nova.compute.manager [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Took 8.53 seconds to build instance.#033[00m
Nov 29 03:11:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c288e7ab1ffde6b7cba9327cdcaa21ffd4e55d598eaae42a7b01c08f031a711/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:38 np0005539563 podman[316714]: 2025-11-29 08:11:38.452588823 +0000 UTC m=+0.145350749 container init a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.457 252257 DEBUG oslo_concurrency.lockutils [None req-8c4897ae-2d3c-467e-8801-02104fe2af8e ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:38 np0005539563 podman[316714]: 2025-11-29 08:11:38.459980263 +0000 UTC m=+0.152742179 container start a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:38 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [NOTICE]   (316752) : New worker (316754) forked
Nov 29 03:11:38 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [NOTICE]   (316752) : Loading success.
Nov 29 03:11:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:38.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:11:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277657958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.670 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.672 252257 DEBUG nova.virt.libvirt.vif [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-32984963',display_name='tempest-tempest.common.compute-instance-32984963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-32984963',id=107,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-701yf0f3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsT
estOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:32Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=9d9d2058-c79d-456b-b647-e73537cb9223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.673 252257 DEBUG nova.network.os_vif_util [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.675 252257 DEBUG nova.network.os_vif_util [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.677 252257 DEBUG nova.objects.instance [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.695 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <uuid>9d9d2058-c79d-456b-b647-e73537cb9223</uuid>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <name>instance-0000006b</name>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:name>tempest-tempest.common.compute-instance-32984963</nova:name>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:11:37</nova:creationTime>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <nova:port uuid="a5c93ffe-8186-4e03-86aa-e1b1efc225cc">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <entry name="serial">9d9d2058-c79d-456b-b647-e73537cb9223</entry>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <entry name="uuid">9d9d2058-c79d-456b-b647-e73537cb9223</entry>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9d9d2058-c79d-456b-b647-e73537cb9223_disk">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9d9d2058-c79d-456b-b647-e73537cb9223_disk.config">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:8e:86:ed"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <target dev="tapa5c93ffe-81"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/console.log" append="off"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:11:38 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:11:38 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:11:38 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:11:38 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.706 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Preparing to wait for external event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.706 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.707 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.707 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.708 252257 DEBUG nova.virt.libvirt.vif [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-32984963',display_name='tempest-tempest.common.compute-instance-32984963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-32984963',id=107,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-701yf0f3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:32Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=9d9d2058-c79d-456b-b647-e73537cb9223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.708 252257 DEBUG nova.network.os_vif_util [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.709 252257 DEBUG nova.network.os_vif_util [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.710 252257 DEBUG os_vif [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.713 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.713 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.715 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.718 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.719 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5c93ffe-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.720 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5c93ffe-81, col_values=(('external_ids', {'iface-id': 'a5c93ffe-8186-4e03-86aa-e1b1efc225cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:86:ed', 'vm-uuid': '9d9d2058-c79d-456b-b647-e73537cb9223'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:38 np0005539563 NetworkManager[48981]: <info>  [1764403898.7226] manager: (tapa5c93ffe-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.726 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.728 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.730 252257 INFO os_vif [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81')#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.793 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.794 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.794 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:8e:86:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.795 252257 INFO nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Using config drive#033[00m
Nov 29 03:11:38 np0005539563 nova_compute[252253]: 2025-11-29 08:11:38.840 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:39.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.000 252257 INFO nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Creating config drive at /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.006 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk7opjdnv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 942 KiB/s rd, 3.6 MiB/s wr, 113 op/s
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.150 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk7opjdnv" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.197 252257 DEBUG nova.storage.rbd_utils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.202 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.374 252257 DEBUG nova.compute.manager [req-f8ca2a64-a512-43ec-8270-77e9e8eb9eb9 req-f78120e8-03c1-4134-be44-dc45e70d0f74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.375 252257 DEBUG oslo_concurrency.lockutils [req-f8ca2a64-a512-43ec-8270-77e9e8eb9eb9 req-f78120e8-03c1-4134-be44-dc45e70d0f74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.376 252257 DEBUG oslo_concurrency.lockutils [req-f8ca2a64-a512-43ec-8270-77e9e8eb9eb9 req-f78120e8-03c1-4134-be44-dc45e70d0f74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.376 252257 DEBUG oslo_concurrency.lockutils [req-f8ca2a64-a512-43ec-8270-77e9e8eb9eb9 req-f78120e8-03c1-4134-be44-dc45e70d0f74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.377 252257 DEBUG nova.compute.manager [req-f8ca2a64-a512-43ec-8270-77e9e8eb9eb9 req-f78120e8-03c1-4134-be44-dc45e70d0f74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] No waiting events found dispatching network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.377 252257 WARNING nova.compute.manager [req-f8ca2a64-a512-43ec-8270-77e9e8eb9eb9 req-f78120e8-03c1-4134-be44-dc45e70d0f74 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received unexpected event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.385 252257 DEBUG oslo_concurrency.processutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.386 252257 INFO nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deleting local config drive /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config because it was imported into RBD.#033[00m
Nov 29 03:11:40 np0005539563 kernel: tapa5c93ffe-81: entered promiscuous mode
Nov 29 03:11:40 np0005539563 NetworkManager[48981]: <info>  [1764403900.4351] manager: (tapa5c93ffe-81): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Nov 29 03:11:40 np0005539563 systemd-udevd[316581]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:40Z|00405|binding|INFO|Claiming lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc for this chassis.
Nov 29 03:11:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:40Z|00406|binding|INFO|a5c93ffe-8186-4e03-86aa-e1b1efc225cc: Claiming fa:16:3e:8e:86:ed 10.100.0.6
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.444 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:86:ed 10.100.0.6'], port_security=['fa:16:3e:8e:86:ed 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9d9d2058-c79d-456b-b647-e73537cb9223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b7c8a30-f080-4336-87a1-164f41eed0f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=a5c93ffe-8186-4e03-86aa-e1b1efc225cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.444 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.449 158990 INFO neutron.agent.ovn.metadata.agent [-] Port a5c93ffe-8186-4e03-86aa-e1b1efc225cc in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.451 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:11:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:40Z|00407|binding|INFO|Setting lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc ovn-installed in OVS
Nov 29 03:11:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:40Z|00408|binding|INFO|Setting lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc up in Southbound
Nov 29 03:11:40 np0005539563 NetworkManager[48981]: <info>  [1764403900.4572] device (tapa5c93ffe-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:40 np0005539563 NetworkManager[48981]: <info>  [1764403900.4594] device (tapa5c93ffe-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.469 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[69a5276c-0f8b-47a8-9eb4-68d68aeff05f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.470 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.472 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.472 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4ecb61-79fd-4f81-b8f3-790758e3ee08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.473 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4d79224a-0c6f-47a6-b7e3-a9b6d0a770a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 systemd-machined[213024]: New machine qemu-47-instance-0000006b.
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.485 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6a4dee-93d3-4e76-9e63-1d90bbacc488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:40 np0005539563 systemd[1]: Started Virtual Machine qemu-47-instance-0000006b.
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.502 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eaeb4a63-a9db-4b42-ada4-e788b028669d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.537 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9746d2-510b-41a4-b963-a0c8a8932d31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 NetworkManager[48981]: <info>  [1764403900.5453] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.547 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a50a09-af90-4efd-93b2-15e5083e162b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.596 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[caf476aa-b7e2-4f27-a1be-153d9e0f4998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.601 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[972a2876-1085-43d1-b8cb-a0911f145e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 NetworkManager[48981]: <info>  [1764403900.6244] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.630 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5011d8d4-22da-450a-bc43-5ed114e70d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.645 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[066634a1-aac9-45cb-a21e-003d04e94a0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686839, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316872, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.660 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8de9c1b3-d5ab-4eb6-81be-cb9e8d0baee0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686839, 'tstamp': 686839}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316873, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.675 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2e970bac-0bcb-4880-b1b6-fff3380fc554]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686839, 'reachable_time': 18777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316874, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.713 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[20be0352-3b11-4f26-aa45-d806a6793968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.795 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[07b3ab72-6999-4e66-8c3f-29df94a9b407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.799 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.800 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.801 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539563 NetworkManager[48981]: <info>  [1764403900.8041] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Nov 29 03:11:40 np0005539563 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.806 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.807 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:40Z|00409|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.824 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.825 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.826 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[12ff8645-a678-4099-ae07-d801740c5098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.826 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:11:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:40.828 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.949 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403900.949148, 9d9d2058-c79d-456b-b647-e73537cb9223 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.950 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Started (Lifecycle Event)#033[00m
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.967 252257 DEBUG nova.network.neutron [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Updated VIF entry in instance network info cache for port a5c93ffe-8186-4e03-86aa-e1b1efc225cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.967 252257 DEBUG nova.network.neutron [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Updating instance_info_cache with network_info: [{"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.975 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.981 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403900.9492488, 9d9d2058-c79d-456b-b647-e73537cb9223 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.981 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Paused (Lifecycle Event)
Nov 29 03:11:40 np0005539563 nova_compute[252253]: 2025-11-29 08:11:40.988 252257 DEBUG oslo_concurrency.lockutils [req-a8a9588b-380c-4e72-b8e4-f371428b9f40 req-94fc0d8c-b360-4eb7-8fba-dc27e83d482d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9d9d2058-c79d-456b-b647-e73537cb9223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:11:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:41.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:41 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.011 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:41 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.014 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:11:41 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.035 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:11:41 np0005539563 podman[316949]: 2025-11-29 08:11:41.202359533 +0000 UTC m=+0.053810239 container create 35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:11:41 np0005539563 systemd[1]: Started libpod-conmon-35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be.scope.
Nov 29 03:11:41 np0005539563 podman[316949]: 2025-11-29 08:11:41.17345929 +0000 UTC m=+0.024909976 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:11:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:11:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4475b098890054b05df06ea2c71ef971726375c698f581f38e9c8ab418dccb4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:41 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.296 252257 INFO nova.compute.manager [None req-92f1557e-ffb7-46cf-82c0-49979493ec5f ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Get console output
Nov 29 03:11:41 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.303 252257 INFO oslo.privsep.daemon [None req-92f1557e-ffb7-46cf-82c0-49979493ec5f ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpws683owr/privsep.sock']
Nov 29 03:11:41 np0005539563 podman[316949]: 2025-11-29 08:11:41.311098369 +0000 UTC m=+0.162549075 container init 35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:11:41 np0005539563 podman[316949]: 2025-11-29 08:11:41.316478595 +0000 UTC m=+0.167929301 container start 35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:41 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [NOTICE]   (316970) : New worker (316972) forked
Nov 29 03:11:41 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [NOTICE]   (316970) : Loading success.
Nov 29 03:11:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.025 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.031 252257 INFO oslo.privsep.daemon [None req-92f1557e-ffb7-46cf-82c0-49979493ec5f ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Spawned new privsep daemon via rootwrap
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.906 316984 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.913 316984 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.918 316984 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:41.918 316984 INFO oslo.privsep.daemon [-] privsep daemon running as pid 316984
Nov 29 03:11:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Nov 29 03:11:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:42.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.608 252257 DEBUG nova.compute.manager [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.609 252257 DEBUG oslo_concurrency.lockutils [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.610 252257 DEBUG oslo_concurrency.lockutils [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.611 252257 DEBUG oslo_concurrency.lockutils [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.611 252257 DEBUG nova.compute.manager [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Processing event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.612 252257 DEBUG nova.compute.manager [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.613 252257 DEBUG oslo_concurrency.lockutils [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.613 252257 DEBUG oslo_concurrency.lockutils [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.614 252257 DEBUG oslo_concurrency.lockutils [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.615 252257 DEBUG nova.compute.manager [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] No waiting events found dispatching network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.615 252257 WARNING nova.compute.manager [req-a59981e7-3513-4faa-907c-03de61914574 req-410ee1a1-5da2-4ba7-918b-c4388dd37b2a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received unexpected event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc for instance with vm_state building and task_state spawning.
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.617 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.629 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.631 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403902.631342, 9d9d2058-c79d-456b-b647-e73537cb9223 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.632 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Resumed (Lifecycle Event)
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.635 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance spawned successfully.
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.636 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.913 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.918 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.919 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.920 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.920 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.921 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.922 252257 DEBUG nova.virt.libvirt.driver [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:11:42 np0005539563 nova_compute[252253]: 2025-11-29 08:11:42.929 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:11:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:43.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.198 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.723 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.747 252257 INFO nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Took 11.57 seconds to spawn the instance on the hypervisor.
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.748 252257 DEBUG nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.861 252257 INFO nova.compute.manager [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Took 12.78 seconds to build instance.
Nov 29 03:11:43 np0005539563 nova_compute[252253]: 2025-11-29 08:11:43.892 252257 DEBUG oslo_concurrency.lockutils [None req-922c2e82-8c02-446a-b7b8-d84fe6cce667 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 29 03:11:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:44.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:45 np0005539563 nova_compute[252253]: 2025-11-29 08:11:45.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:11:45 np0005539563 nova_compute[252253]: 2025-11-29 08:11:45.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:11:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 200 op/s
Nov 29 03:11:46 np0005539563 nova_compute[252253]: 2025-11-29 08:11:46.368 252257 DEBUG oslo_concurrency.lockutils [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:46 np0005539563 nova_compute[252253]: 2025-11-29 08:11:46.369 252257 DEBUG oslo_concurrency.lockutils [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:46 np0005539563 nova_compute[252253]: 2025-11-29 08:11:46.369 252257 DEBUG nova.compute.manager [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:46 np0005539563 nova_compute[252253]: 2025-11-29 08:11:46.372 252257 DEBUG nova.compute.manager [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 03:11:46 np0005539563 nova_compute[252253]: 2025-11-29 08:11:46.373 252257 DEBUG nova.objects.instance [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'flavor' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:46 np0005539563 nova_compute[252253]: 2025-11-29 08:11:46.397 252257 DEBUG nova.virt.libvirt.driver [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:11:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:46.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:47 np0005539563 nova_compute[252253]: 2025-11-29 08:11:47.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:47 np0005539563 nova_compute[252253]: 2025-11-29 08:11:47.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:11:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 40 KiB/s wr, 161 op/s
Nov 29 03:11:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:48.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.748 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.749 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.749 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.750 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:11:48 np0005539563 nova_compute[252253]: 2025-11-29 08:11:48.750 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:49.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.197 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.303 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.304 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.308 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.308 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.493 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.494 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4150MB free_disk=20.809612274169922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.495 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.495 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.672 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98453ec7-fbda-42ae-8624-8aa5921fd634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.673 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 9d9d2058-c79d-456b-b647-e73537cb9223 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.673 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:11:49 np0005539563 nova_compute[252253]: 2025-11-29 08:11:49.674 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:11:49 np0005539563 podman[317062]: 2025-11-29 08:11:49.718033463 +0000 UTC m=+0.055482505 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:11:49 np0005539563 podman[317063]: 2025-11-29 08:11:49.732562026 +0000 UTC m=+0.064288623 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:11:49 np0005539563 podman[317064]: 2025-11-29 08:11:49.770224677 +0000 UTC m=+0.094805120 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:11:50 np0005539563 nova_compute[252253]: 2025-11-29 08:11:50.084 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 36 KiB/s wr, 174 op/s
Nov 29 03:11:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:50.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38568835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:50 np0005539563 nova_compute[252253]: 2025-11-29 08:11:50.621 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:50 np0005539563 nova_compute[252253]: 2025-11-29 08:11:50.628 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:50 np0005539563 nova_compute[252253]: 2025-11-29 08:11:50.651 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:50 np0005539563 nova_compute[252253]: 2025-11-29 08:11:50.685 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:11:50 np0005539563 nova_compute[252253]: 2025-11-29 08:11:50.686 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:50 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 29 03:11:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:51.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:51 np0005539563 nova_compute[252253]: 2025-11-29 08:11:51.687 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:51 np0005539563 nova_compute[252253]: 2025-11-29 08:11:51.687 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:11:51 np0005539563 nova_compute[252253]: 2025-11-29 08:11:51.687 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:11:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:52 np0005539563 nova_compute[252253]: 2025-11-29 08:11:52.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 374 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 36 KiB/s wr, 214 op/s
Nov 29 03:11:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:52.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:52Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:b2:e8 10.100.0.5
Nov 29 03:11:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:52Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:b2:e8 10.100.0.5
Nov 29 03:11:52 np0005539563 nova_compute[252253]: 2025-11-29 08:11:52.822 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:52 np0005539563 nova_compute[252253]: 2025-11-29 08:11:52.823 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:52 np0005539563 nova_compute[252253]: 2025-11-29 08:11:52.823 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:11:52 np0005539563 nova_compute[252253]: 2025-11-29 08:11:52.823 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98453ec7-fbda-42ae-8624-8aa5921fd634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:53.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:53.262 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:53.264 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:11:53 np0005539563 nova_compute[252253]: 2025-11-29 08:11:53.264 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:53 np0005539563 nova_compute[252253]: 2025-11-29 08:11:53.728 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 397 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.3 MiB/s wr, 226 op/s
Nov 29 03:11:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:54.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:11:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:11:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 453 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Nov 29 03:11:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:11:56.266 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:56 np0005539563 nova_compute[252253]: 2025-11-29 08:11:56.439 252257 DEBUG nova.virt.libvirt.driver [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:11:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:56.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:56 np0005539563 nova_compute[252253]: 2025-11-29 08:11:56.923 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updating instance_info_cache with network_info: [{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:57 np0005539563 nova_compute[252253]: 2025-11-29 08:11:57.062 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 459 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.4 MiB/s wr, 199 op/s
Nov 29 03:11:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:58.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:58 np0005539563 nova_compute[252253]: 2025-11-29 08:11:58.732 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:11:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:59.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:59Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:86:ed 10.100.0.6
Nov 29 03:11:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:11:59Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:86:ed 10.100.0.6
Nov 29 03:12:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 470 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 199 op/s
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.318 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.318 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.318 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.319 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.319 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.319 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.319 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:12:00 np0005539563 nova_compute[252253]: 2025-11-29 08:12:00.375 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:12:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:00.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:12:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:02 np0005539563 nova_compute[252253]: 2025-11-29 08:12:02.066 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 483 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 217 op/s
Nov 29 03:12:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:03.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:03 np0005539563 nova_compute[252253]: 2025-11-29 08:12:03.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 483 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 859 KiB/s rd, 6.1 MiB/s wr, 186 op/s
Nov 29 03:12:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:04.917 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:04.918 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:04.919 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:05.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 984 KiB/s rd, 4.8 MiB/s wr, 162 op/s
Nov 29 03:12:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:07.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:07 np0005539563 nova_compute[252253]: 2025-11-29 08:12:07.068 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:07 np0005539563 nova_compute[252253]: 2025-11-29 08:12:07.488 252257 DEBUG nova.virt.libvirt.driver [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:12:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 29 03:12:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:08 np0005539563 nova_compute[252253]: 2025-11-29 08:12:08.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:09.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 757 KiB/s rd, 1.7 MiB/s wr, 91 op/s
Nov 29 03:12:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:10.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:10 np0005539563 kernel: tapa5c93ffe-81 (unregistering): left promiscuous mode
Nov 29 03:12:10 np0005539563 NetworkManager[48981]: <info>  [1764403930.8092] device (tapa5c93ffe-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:10Z|00410|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 29 03:12:10 np0005539563 nova_compute[252253]: 2025-11-29 08:12:10.826 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:10Z|00411|binding|INFO|Releasing lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc from this chassis (sb_readonly=0)
Nov 29 03:12:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:10Z|00412|binding|INFO|Setting lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc down in Southbound
Nov 29 03:12:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:10Z|00413|binding|INFO|Removing iface tapa5c93ffe-81 ovn-installed in OVS
Nov 29 03:12:10 np0005539563 nova_compute[252253]: 2025-11-29 08:12:10.829 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:10 np0005539563 nova_compute[252253]: 2025-11-29 08:12:10.844 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:10 np0005539563 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 29 03:12:10 np0005539563 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006b.scope: Consumed 14.590s CPU time.
Nov 29 03:12:10 np0005539563 systemd-machined[213024]: Machine qemu-47-instance-0000006b terminated.
Nov 29 03:12:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:11.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:11 np0005539563 nova_compute[252253]: 2025-11-29 08:12:11.041 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:11 np0005539563 nova_compute[252253]: 2025-11-29 08:12:11.050 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:11 np0005539563 nova_compute[252253]: 2025-11-29 08:12:11.508 252257 INFO nova.virt.libvirt.driver [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance shutdown successfully after 25 seconds.
Nov 29 03:12:11 np0005539563 nova_compute[252253]: 2025-11-29 08:12:11.515 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance destroyed successfully.
Nov 29 03:12:11 np0005539563 nova_compute[252253]: 2025-11-29 08:12:11.515 252257 DEBUG nova.objects.instance [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:11.862 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:86:ed 10.100.0.6'], port_security=['fa:16:3e:8e:86:ed 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9d9d2058-c79d-456b-b647-e73537cb9223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b7c8a30-f080-4336-87a1-164f41eed0f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=a5c93ffe-8186-4e03-86aa-e1b1efc225cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:12:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:11.864 158990 INFO neutron.agent.ovn.metadata.agent [-] Port a5c93ffe-8186-4e03-86aa-e1b1efc225cc in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis
Nov 29 03:12:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:11.865 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:12:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:11.867 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f12e05-173d-4902-b6eb-af3bcbe252c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:11.868 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore
Nov 29 03:12:11 np0005539563 nova_compute[252253]: 2025-11-29 08:12:11.906 252257 DEBUG nova.compute.manager [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.003 252257 DEBUG oslo_concurrency.lockutils [None req-48c1f45d-f282-4112-9c5a-e84f92039e34 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 25.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:12 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [NOTICE]   (316970) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:12 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [NOTICE]   (316970) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:12 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [WARNING]  (316970) : Exiting Master process...
Nov 29 03:12:12 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [ALERT]    (316970) : Current worker (316972) exited with code 143 (Terminated)
Nov 29 03:12:12 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[316965]: [WARNING]  (316970) : All workers exited. Exiting... (0)
Nov 29 03:12:12 np0005539563 systemd[1]: libpod-35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be.scope: Deactivated successfully.
Nov 29 03:12:12 np0005539563 podman[317242]: 2025-11-29 08:12:12.042826572 +0000 UTC m=+0.055199607 container died 35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4475b098890054b05df06ea2c71ef971726375c698f581f38e9c8ab418dccb4f-merged.mount: Deactivated successfully.
Nov 29 03:12:12 np0005539563 podman[317242]: 2025-11-29 08:12:12.083995968 +0000 UTC m=+0.096368973 container cleanup 35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:12:12 np0005539563 systemd[1]: libpod-conmon-35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be.scope: Deactivated successfully.
Nov 29 03:12:12 np0005539563 podman[317272]: 2025-11-29 08:12:12.154412886 +0000 UTC m=+0.045299799 container remove 35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:12:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 845 KiB/s wr, 73 op/s
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.161 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[284c7011-4ae7-4134-acae-005c124064e3]: (4, ('Sat Nov 29 08:12:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be)\n35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be\nSat Nov 29 08:12:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be)\n35d75d0b550816ed9570747ebc32e585357de95401a35429d27732077ef5f2be\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.164 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f74fc1c5-9bf0-40f6-b183-54b61d40c816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.166 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:12 np0005539563 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.191 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5074e1-cb01-4a16-bd20-cf2e312341a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.213 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b18cd5-918b-489b-b77e-b77877ef802f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.214 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e037b9b7-1558-4db2-ac5b-c3ca441f27fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.229 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[48e3d8bb-25d9-4ce9-9fa0-69680ca8b54e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686830, 'reachable_time': 22661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317292, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.232 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:12:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:12.232 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3d346471-4825-49a9-a856-91713141f327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:12:12 np0005539563 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.505 252257 DEBUG nova.compute.manager [req-059029b1-fe26-495d-9471-3eb2cd749009 req-8e4432dd-eeb7-4456-8f80-7795f5e231c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-unplugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.506 252257 DEBUG oslo_concurrency.lockutils [req-059029b1-fe26-495d-9471-3eb2cd749009 req-8e4432dd-eeb7-4456-8f80-7795f5e231c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.506 252257 DEBUG oslo_concurrency.lockutils [req-059029b1-fe26-495d-9471-3eb2cd749009 req-8e4432dd-eeb7-4456-8f80-7795f5e231c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.506 252257 DEBUG oslo_concurrency.lockutils [req-059029b1-fe26-495d-9471-3eb2cd749009 req-8e4432dd-eeb7-4456-8f80-7795f5e231c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.507 252257 DEBUG nova.compute.manager [req-059029b1-fe26-495d-9471-3eb2cd749009 req-8e4432dd-eeb7-4456-8f80-7795f5e231c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] No waiting events found dispatching network-vif-unplugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:12:12 np0005539563 nova_compute[252253]: 2025-11-29 08:12:12.507 252257 WARNING nova.compute.manager [req-059029b1-fe26-495d-9471-3eb2cd749009 req-8e4432dd-eeb7-4456-8f80-7795f5e231c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received unexpected event network-vif-unplugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc for instance with vm_state stopped and task_state None.
Nov 29 03:12:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:12:12
Nov 29 03:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'default.rgw.control', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 03:12:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:12:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:12:13 np0005539563 nova_compute[252253]: 2025-11-29 08:12:13.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:12:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:12:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 434 KiB/s rd, 116 KiB/s wr, 43 op/s
Nov 29 03:12:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:15 np0005539563 nova_compute[252253]: 2025-11-29 08:12:15.078 252257 DEBUG nova.compute.manager [req-61130e9a-2d0e-4679-aa86-ae224b8c56a8 req-4afa472c-0d71-4a0b-a8e9-cd6bfe1a252f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:15 np0005539563 nova_compute[252253]: 2025-11-29 08:12:15.078 252257 DEBUG oslo_concurrency.lockutils [req-61130e9a-2d0e-4679-aa86-ae224b8c56a8 req-4afa472c-0d71-4a0b-a8e9-cd6bfe1a252f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:15 np0005539563 nova_compute[252253]: 2025-11-29 08:12:15.078 252257 DEBUG oslo_concurrency.lockutils [req-61130e9a-2d0e-4679-aa86-ae224b8c56a8 req-4afa472c-0d71-4a0b-a8e9-cd6bfe1a252f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:15 np0005539563 nova_compute[252253]: 2025-11-29 08:12:15.078 252257 DEBUG oslo_concurrency.lockutils [req-61130e9a-2d0e-4679-aa86-ae224b8c56a8 req-4afa472c-0d71-4a0b-a8e9-cd6bfe1a252f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:15 np0005539563 nova_compute[252253]: 2025-11-29 08:12:15.079 252257 DEBUG nova.compute.manager [req-61130e9a-2d0e-4679-aa86-ae224b8c56a8 req-4afa472c-0d71-4a0b-a8e9-cd6bfe1a252f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] No waiting events found dispatching network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:15 np0005539563 nova_compute[252253]: 2025-11-29 08:12:15.079 252257 WARNING nova.compute.manager [req-61130e9a-2d0e-4679-aa86-ae224b8c56a8 req-4afa472c-0d71-4a0b-a8e9-cd6bfe1a252f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received unexpected event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:12:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 46 KiB/s wr, 21 op/s
Nov 29 03:12:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:17.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:17 np0005539563 nova_compute[252253]: 2025-11-29 08:12:17.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 43 KiB/s wr, 5 op/s
Nov 29 03:12:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:18.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:18 np0005539563 nova_compute[252253]: 2025-11-29 08:12:18.748 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:19 np0005539563 nova_compute[252253]: 2025-11-29 08:12:19.403 252257 INFO nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Rebuilding instance#033[00m
Nov 29 03:12:19 np0005539563 nova_compute[252253]: 2025-11-29 08:12:19.816 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Nov 29 03:12:20 np0005539563 podman[317297]: 2025-11-29 08:12:20.515534229 +0000 UTC m=+0.062709781 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:12:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:20.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:20 np0005539563 podman[317304]: 2025-11-29 08:12:20.571718471 +0000 UTC m=+0.102572130 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:12:20 np0005539563 podman[317298]: 2025-11-29 08:12:20.579650296 +0000 UTC m=+0.108496290 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.772 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.843 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.857 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.879 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.897 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.917 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.922 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance already shutdown.#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.929 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance destroyed successfully.#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.936 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance destroyed successfully.#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.938 252257 DEBUG nova.virt.libvirt.vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-32984963',display_name='tempest-tempest.common.compute-instance-32984963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-32984963',id=107,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-701yf0f3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:16Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=9d9d2058-c79d-456b-b647-e73537cb9223,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.938 252257 DEBUG nova.network.os_vif_util [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.939 252257 DEBUG nova.network.os_vif_util [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.940 252257 DEBUG os_vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.943 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.943 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5c93ffe-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.974 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.976 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:20 np0005539563 nova_compute[252253]: 2025-11-29 08:12:20.978 252257 INFO os_vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81')#033[00m
Nov 29 03:12:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.436 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deleting instance files /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223_del#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.437 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deletion of /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223_del complete#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.596 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.597 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Creating image(s)#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.634 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.681 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.718 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.722 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.803 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.804 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.805 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.805 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.835 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:21 np0005539563 nova_compute[252253]: 2025-11-29 08:12:21.839 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 9d9d2058-c79d-456b-b647-e73537cb9223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.126 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 30 KiB/s wr, 4 op/s
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.164 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 9d9d2058-c79d-456b-b647-e73537cb9223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.274 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] resizing rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.383 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.384 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Ensure instance console log exists: /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.385 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.385 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.386 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.388 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Start _get_guest_xml network_info=[{"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.393 252257 WARNING nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.408 252257 DEBUG nova.virt.libvirt.host [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.409 252257 DEBUG nova.virt.libvirt.host [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.428 252257 DEBUG nova.virt.libvirt.host [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.429 252257 DEBUG nova.virt.libvirt.host [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.430 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.431 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.431 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.431 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.432 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.432 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.432 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.432 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.433 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.433 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.433 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.433 252257 DEBUG nova.virt.hardware [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.434 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.464 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3492893778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.946 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.984 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:22 np0005539563 nova_compute[252253]: 2025-11-29 08:12:22.991 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010862508433370556 of space, bias 1.0, pg target 3.258752530011167 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.2937097597361809 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Nov 29 03:12:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484016015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.442 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.445 252257 DEBUG nova.virt.libvirt.vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-32984963',display_name='tempest-tempest.common.compute-instance-32984963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-32984963',id=107,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-701yf0f3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:21Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=9d9d2058-c79d-456b-b647-e73537cb9223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.445 252257 DEBUG nova.network.os_vif_util [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.447 252257 DEBUG nova.network.os_vif_util [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.451 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <uuid>9d9d2058-c79d-456b-b647-e73537cb9223</uuid>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <name>instance-0000006b</name>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:name>tempest-tempest.common.compute-instance-32984963</nova:name>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:12:22</nova:creationTime>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <nova:port uuid="a5c93ffe-8186-4e03-86aa-e1b1efc225cc">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <entry name="serial">9d9d2058-c79d-456b-b647-e73537cb9223</entry>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <entry name="uuid">9d9d2058-c79d-456b-b647-e73537cb9223</entry>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9d9d2058-c79d-456b-b647-e73537cb9223_disk">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9d9d2058-c79d-456b-b647-e73537cb9223_disk.config">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:8e:86:ed"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <target dev="tapa5c93ffe-81"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/console.log" append="off"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:12:23 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:12:23 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:12:23 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:12:23 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.453 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Preparing to wait for external event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.454 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.454 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.454 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.455 252257 DEBUG nova.virt.libvirt.vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-32984963',display_name='tempest-tempest.common.compute-instance-32984963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-32984963',id=107,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-701yf0f3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:21Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=9d9d2058-c79d-456b-b647-e73537cb9223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.456 252257 DEBUG nova.network.os_vif_util [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.456 252257 DEBUG nova.network.os_vif_util [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.457 252257 DEBUG os_vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.457 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.458 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.458 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.461 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5c93ffe-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.462 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5c93ffe-81, col_values=(('external_ids', {'iface-id': 'a5c93ffe-8186-4e03-86aa-e1b1efc225cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:86:ed', 'vm-uuid': '9d9d2058-c79d-456b-b647-e73537cb9223'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.463 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539563 NetworkManager[48981]: <info>  [1764403943.4654] manager: (tapa5c93ffe-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.468 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.472 252257 INFO os_vif [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81')#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.588 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.589 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.590 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:8e:86:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.592 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Using config drive#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.638 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.726 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:23 np0005539563 nova_compute[252253]: 2025-11-29 08:12:23.791 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'keypairs' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 478 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 524 KiB/s wr, 16 op/s
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.408 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Creating config drive at /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config#033[00m
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.415 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ep9cjgc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.546 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ep9cjgc" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.583 252257 DEBUG nova.storage.rbd_utils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.588 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.789 252257 DEBUG oslo_concurrency.processutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config 9d9d2058-c79d-456b-b647-e73537cb9223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.790 252257 INFO nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deleting local config drive /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223/disk.config because it was imported into RBD.#033[00m
Nov 29 03:12:24 np0005539563 kernel: tapa5c93ffe-81: entered promiscuous mode
Nov 29 03:12:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:24Z|00414|binding|INFO|Claiming lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc for this chassis.
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.844 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:24Z|00415|binding|INFO|a5c93ffe-8186-4e03-86aa-e1b1efc225cc: Claiming fa:16:3e:8e:86:ed 10.100.0.6
Nov 29 03:12:24 np0005539563 NetworkManager[48981]: <info>  [1764403944.8468] manager: (tapa5c93ffe-81): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.854 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:86:ed 10.100.0.6'], port_security=['fa:16:3e:8e:86:ed 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9d9d2058-c79d-456b-b647-e73537cb9223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8b7c8a30-f080-4336-87a1-164f41eed0f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=a5c93ffe-8186-4e03-86aa-e1b1efc225cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.856 158990 INFO neutron.agent.ovn.metadata.agent [-] Port a5c93ffe-8186-4e03-86aa-e1b1efc225cc in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.858 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:12:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:24Z|00416|binding|INFO|Setting lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc ovn-installed in OVS
Nov 29 03:12:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:24Z|00417|binding|INFO|Setting lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc up in Southbound
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.863 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539563 nova_compute[252253]: 2025-11-29 08:12:24.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.875 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5da427f2-2924-4b18-9b41-13f668f5532f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.876 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:24 np0005539563 systemd-udevd[317686]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.878 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.879 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35526b6e-2bde-4b16-b6b9-bd3d4593e70a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.879 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a2fa06-3ad2-4981-9fea-c4e91507385f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.893 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[14371a4a-7aea-43c6-8b90-8e2bb2c90ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 NetworkManager[48981]: <info>  [1764403944.8944] device (tapa5c93ffe-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:24 np0005539563 systemd-machined[213024]: New machine qemu-48-instance-0000006b.
Nov 29 03:12:24 np0005539563 NetworkManager[48981]: <info>  [1764403944.8962] device (tapa5c93ffe-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:24 np0005539563 systemd[1]: Started Virtual Machine qemu-48-instance-0000006b.
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.917 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2b6513-fdf0-48bf-b256-9137381e1a95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.943 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6442e9-d4be-411d-b773-9eee5f0053af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.949 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f7470b8f-3484-47bd-b00a-947d77a1c571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 NetworkManager[48981]: <info>  [1764403944.9500] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Nov 29 03:12:24 np0005539563 systemd-udevd[317690]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.987 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f891ff53-8c89-4b00-9164-9503b5e19f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:24.991 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb10576-d2d3-4d48-867e-ba539db2a672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 NetworkManager[48981]: <info>  [1764403945.0190] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.026 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5f951e55-c58f-4ad2-aa0b-7c0873922824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.043 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[98a31d73-a7ad-4a4b-b48c-1badd3232e7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691279, 'reachable_time': 23359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317720, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:25.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.059 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7d23dc00-1da8-4303-bd40-e617d91dedea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691279, 'tstamp': 691279}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317721, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.078 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[33f4d5cd-3361-4f40-8e2c-27ee3d4ffae1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691279, 'reachable_time': 23359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317722, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.108 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[048328ef-a00b-4e61-a1c3-62aa20cff479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.164 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6314c081-1b3b-4147-80b3-f9638326f064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.165 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.165 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.166 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.167 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:25 np0005539563 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:12:25 np0005539563 NetworkManager[48981]: <info>  [1764403945.1704] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.174 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.175 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:25Z|00418|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.178 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.179 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[46db8c6b-2404-4e84-b931-5a81d76e6b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.180 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:25.180 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:25 np0005539563 podman[317754]: 2025-11-29 08:12:25.610210541 +0000 UTC m=+0.058844155 container create 9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:12:25 np0005539563 systemd[1]: Started libpod-conmon-9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e.scope.
Nov 29 03:12:25 np0005539563 podman[317754]: 2025-11-29 08:12:25.583640721 +0000 UTC m=+0.032274345 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e42060f3f5c85ae2f6457c423134286807c619865e18e2165a8ed9b0ce24b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:25 np0005539563 podman[317754]: 2025-11-29 08:12:25.71313616 +0000 UTC m=+0.161769794 container init 9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:12:25 np0005539563 podman[317754]: 2025-11-29 08:12:25.72200772 +0000 UTC m=+0.170641324 container start 9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.738 252257 DEBUG nova.compute.manager [req-f69461b8-b79d-4f42-a3a8-6c07a0a33af9 req-7c8d661a-17a8-4ab3-a132-603a61ac1498 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.738 252257 DEBUG oslo_concurrency.lockutils [req-f69461b8-b79d-4f42-a3a8-6c07a0a33af9 req-7c8d661a-17a8-4ab3-a132-603a61ac1498 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.739 252257 DEBUG oslo_concurrency.lockutils [req-f69461b8-b79d-4f42-a3a8-6c07a0a33af9 req-7c8d661a-17a8-4ab3-a132-603a61ac1498 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.739 252257 DEBUG oslo_concurrency.lockutils [req-f69461b8-b79d-4f42-a3a8-6c07a0a33af9 req-7c8d661a-17a8-4ab3-a132-603a61ac1498 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.739 252257 DEBUG nova.compute.manager [req-f69461b8-b79d-4f42-a3a8-6c07a0a33af9 req-7c8d661a-17a8-4ab3-a132-603a61ac1498 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Processing event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:25 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [NOTICE]   (317806) : New worker (317812) forked
Nov 29 03:12:25 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [NOTICE]   (317806) : Loading success.
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.857 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.858 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 9d9d2058-c79d-456b-b647-e73537cb9223 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.859 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403945.857186, 9d9d2058-c79d-456b-b647-e73537cb9223 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.859 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.862 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.866 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance spawned successfully.#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.866 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.879 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.885 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.888 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.889 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.889 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.890 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.890 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.891 252257 DEBUG nova.virt.libvirt.driver [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.911 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.911 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403945.8572729, 9d9d2058-c79d-456b-b647-e73537cb9223 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.912 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.948 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.951 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403945.862112, 9d9d2058-c79d-456b-b647-e73537cb9223 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.951 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.962 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.989 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:25 np0005539563 nova_compute[252253]: 2025-11-29 08:12:25.991 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 29 03:12:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 03:12:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.019 252257 INFO nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] bringing vm to original state: 'stopped'#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.022 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.116 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.117 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.118 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.121 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:12:26 np0005539563 kernel: tapa5c93ffe-81 (unregistering): left promiscuous mode
Nov 29 03:12:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 29 03:12:26 np0005539563 NetworkManager[48981]: <info>  [1764403946.1620] device (tapa5c93ffe-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:26Z|00419|binding|INFO|Releasing lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc from this chassis (sb_readonly=0)
Nov 29 03:12:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:26Z|00420|binding|INFO|Setting lport a5c93ffe-8186-4e03-86aa-e1b1efc225cc down in Southbound
Nov 29 03:12:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:12:26Z|00421|binding|INFO|Removing iface tapa5c93ffe-81 ovn-installed in OVS
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.166 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.175 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:86:ed 10.100.0.6'], port_security=['fa:16:3e:8e:86:ed 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9d9d2058-c79d-456b-b647-e73537cb9223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8b7c8a30-f080-4336-87a1-164f41eed0f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=a5c93ffe-8186-4e03-86aa-e1b1efc225cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.177 158990 INFO neutron.agent.ovn.metadata.agent [-] Port a5c93ffe-8186-4e03-86aa-e1b1efc225cc in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.179 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.179 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b1be4c23-3ea9-4e93-bdaa-050b1d13323d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.180 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539563 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 29 03:12:26 np0005539563 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006b.scope: Consumed 1.094s CPU time.
Nov 29 03:12:26 np0005539563 systemd-machined[213024]: Machine qemu-48-instance-0000006b terminated.
Nov 29 03:12:26 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [NOTICE]   (317806) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:26 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [NOTICE]   (317806) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:26 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [WARNING]  (317806) : Exiting Master process...
Nov 29 03:12:26 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [ALERT]    (317806) : Current worker (317812) exited with code 143 (Terminated)
Nov 29 03:12:26 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[317776]: [WARNING]  (317806) : All workers exited. Exiting... (0)
Nov 29 03:12:26 np0005539563 systemd[1]: libpod-9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e.scope: Deactivated successfully.
Nov 29 03:12:26 np0005539563 podman[317847]: 2025-11-29 08:12:26.310912725 +0000 UTC m=+0.049026839 container died 9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:26 np0005539563 NetworkManager[48981]: <info>  [1764403946.3380] manager: (tapa5c93ffe-81): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Nov 29 03:12:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-13e42060f3f5c85ae2f6457c423134286807c619865e18e2165a8ed9b0ce24b6-merged.mount: Deactivated successfully.
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.351 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance destroyed successfully.#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.352 252257 DEBUG nova.compute.manager [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:26 np0005539563 podman[317847]: 2025-11-29 08:12:26.359317197 +0000 UTC m=+0.097431321 container cleanup 9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:12:26 np0005539563 systemd[1]: libpod-conmon-9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e.scope: Deactivated successfully.
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.416 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:26 np0005539563 podman[317887]: 2025-11-29 08:12:26.43508775 +0000 UTC m=+0.051246490 container remove 9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.440 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b5ed64-d49b-44d8-b525-a0a3095961c3]: (4, ('Sat Nov 29 08:12:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e)\n9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e\nSat Nov 29 08:12:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e)\n9752a75f6e722f672ffe525ab229112a1225e4cbd945f34a2bf40443409c187e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.441 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b83f93c8-76e2-4c94-8524-941c016734f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.442 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:26 np0005539563 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.487 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.493 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.493 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.493 252257 DEBUG nova.objects.instance [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.500 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.504 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d505d582-db98-4927-864c-eec7f84fa28e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.519 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c7931ecd-87ff-4a48-a8dd-6ac3739422ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.520 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb89937-edb4-48e2-904a-9380704d8b11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.542 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2761b0-31af-481f-a9f1-e363724d999c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691271, 'reachable_time': 17035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317906, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.544 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:26.544 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7fb9b8-ed34-45fe-8271-1d59afee25c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:26 np0005539563 nova_compute[252253]: 2025-11-29 08:12:26.590 252257 DEBUG oslo_concurrency.lockutils [None req-f5c2e4a6-75a3-466a-aac0-ff33e6d4ec28 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:27.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.946 252257 DEBUG nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.947 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.947 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.948 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.948 252257 DEBUG nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] No waiting events found dispatching network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.948 252257 WARNING nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received unexpected event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc for instance with vm_state stopped and task_state None.
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.948 252257 DEBUG nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-unplugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.948 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.949 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.949 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.949 252257 DEBUG nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] No waiting events found dispatching network-vif-unplugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.949 252257 WARNING nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received unexpected event network-vif-unplugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc for instance with vm_state stopped and task_state None.
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.949 252257 DEBUG nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.949 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.950 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.950 252257 DEBUG oslo_concurrency.lockutils [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.950 252257 DEBUG nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] No waiting events found dispatching network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:12:27 np0005539563 nova_compute[252253]: 2025-11-29 08:12:27.950 252257 WARNING nova.compute.manager [req-edab8eaa-d68d-4b21-93b7-cd5179ebf4ff req-34b4dca7-0ba9-4bd3-b0ec-22561df19039 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received unexpected event network-vif-plugged-a5c93ffe-8186-4e03-86aa-e1b1efc225cc for instance with vm_state stopped and task_state None.
Nov 29 03:12:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Nov 29 03:12:28 np0005539563 nova_compute[252253]: 2025-11-29 08:12:28.465 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:29.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b6780766-3be3-4c54-9f52-84db203aa4a2 does not exist
Nov 29 03:12:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 16565808-b94a-40ca-8588-a8d4bc8966c4 does not exist
Nov 29 03:12:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 80960434-0b75-409e-be7f-cde3c1b4aa05 does not exist
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:12:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.720179754 +0000 UTC m=+0.044807495 container create b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:12:29 np0005539563 systemd[1]: Started libpod-conmon-b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0.scope.
Nov 29 03:12:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.700429649 +0000 UTC m=+0.025057410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.807679995 +0000 UTC m=+0.132307736 container init b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.820284196 +0000 UTC m=+0.144911917 container start b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.824786079 +0000 UTC m=+0.149413810 container attach b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:12:29 np0005539563 systemd[1]: libpod-b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0.scope: Deactivated successfully.
Nov 29 03:12:29 np0005539563 objective_keller[318196]: 167 167
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.827917163 +0000 UTC m=+0.152544884 container died b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:12:29 np0005539563 conmon[318196]: conmon b3d226185d7b527f5bca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0.scope/container/memory.events
Nov 29 03:12:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3de636f28b91c596f34de9b9de895313fc65201f4d806d4f0b06fb4e39bc7d8d-merged.mount: Deactivated successfully.
Nov 29 03:12:29 np0005539563 podman[318180]: 2025-11-29 08:12:29.869163391 +0000 UTC m=+0.193791112 container remove b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keller, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:12:29 np0005539563 systemd[1]: libpod-conmon-b3d226185d7b527f5bca8b9faa99ae7360b960aef911782c5623843e70b91ad0.scope: Deactivated successfully.
Nov 29 03:12:30 np0005539563 podman[318267]: 2025-11-29 08:12:30.041181631 +0000 UTC m=+0.043041556 container create acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:12:30 np0005539563 systemd[1]: Started libpod-conmon-acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd.scope.
Nov 29 03:12:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0febcbbf3f417835ce53d20d737418e0da2031ba086a5cba3b24ec0f83960764/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0febcbbf3f417835ce53d20d737418e0da2031ba086a5cba3b24ec0f83960764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0febcbbf3f417835ce53d20d737418e0da2031ba086a5cba3b24ec0f83960764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0febcbbf3f417835ce53d20d737418e0da2031ba086a5cba3b24ec0f83960764/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0febcbbf3f417835ce53d20d737418e0da2031ba086a5cba3b24ec0f83960764/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:30 np0005539563 podman[318267]: 2025-11-29 08:12:30.113376428 +0000 UTC m=+0.115236363 container init acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:12:30 np0005539563 podman[318267]: 2025-11-29 08:12:30.022702 +0000 UTC m=+0.024561935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:30 np0005539563 podman[318267]: 2025-11-29 08:12:30.122822083 +0000 UTC m=+0.124681988 container start acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gould, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:12:30 np0005539563 podman[318267]: 2025-11-29 08:12:30.126591715 +0000 UTC m=+0.128451650 container attach acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:12:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.369 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.370 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.370 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.371 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.371 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.372 252257 INFO nova.compute.manager [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Terminating instance
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.373 252257 DEBUG nova.compute.manager [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.378 252257 INFO nova.virt.libvirt.driver [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Instance destroyed successfully.
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.379 252257 DEBUG nova.objects.instance [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid 9d9d2058-c79d-456b-b647-e73537cb9223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.400 252257 DEBUG nova.virt.libvirt.vif [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-32984963',display_name='tempest-tempest.common.compute-instance-32984963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-32984963',id=107,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-701yf0f3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:26Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=9d9d2058-c79d-456b-b647-e73537cb9223,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.401 252257 DEBUG nova.network.os_vif_util [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "address": "fa:16:3e:8e:86:ed", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c93ffe-81", "ovs_interfaceid": "a5c93ffe-8186-4e03-86aa-e1b1efc225cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.402 252257 DEBUG nova.network.os_vif_util [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.402 252257 DEBUG os_vif [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.404 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5c93ffe-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.407 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.409 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.411 252257 INFO os_vif [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:86:ed,bridge_name='br-int',has_traffic_filtering=True,id=a5c93ffe-8186-4e03-86aa-e1b1efc225cc,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c93ffe-81')
Nov 29 03:12:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.853 252257 INFO nova.virt.libvirt.driver [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deleting instance files /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223_del
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.854 252257 INFO nova.virt.libvirt.driver [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deletion of /var/lib/nova/instances/9d9d2058-c79d-456b-b647-e73537cb9223_del complete
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.922 252257 INFO nova.compute.manager [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Took 0.55 seconds to destroy the instance on the hypervisor.
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.923 252257 DEBUG oslo.service.loopingcall [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.924 252257 DEBUG nova.compute.manager [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:12:30 np0005539563 nova_compute[252253]: 2025-11-29 08:12:30.924 252257 DEBUG nova.network.neutron [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:12:30 np0005539563 affectionate_gould[318285]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:12:30 np0005539563 affectionate_gould[318285]: --> relative data size: 1.0
Nov 29 03:12:30 np0005539563 affectionate_gould[318285]: --> All data devices are unavailable
Nov 29 03:12:31 np0005539563 systemd[1]: libpod-acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd.scope: Deactivated successfully.
Nov 29 03:12:31 np0005539563 podman[318267]: 2025-11-29 08:12:31.00739921 +0000 UTC m=+1.009259155 container died acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:12:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0febcbbf3f417835ce53d20d737418e0da2031ba086a5cba3b24ec0f83960764-merged.mount: Deactivated successfully.
Nov 29 03:12:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:31 np0005539563 podman[318267]: 2025-11-29 08:12:31.070403226 +0000 UTC m=+1.072263141 container remove acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:12:31 np0005539563 systemd[1]: libpod-conmon-acf757a5fb78def3105466b29f0e355ebea83f66abcc474c3022527fdff364fd.scope: Deactivated successfully.
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.674876244 +0000 UTC m=+0.035639407 container create c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:12:31 np0005539563 systemd[1]: Started libpod-conmon-c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06.scope.
Nov 29 03:12:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.74595351 +0000 UTC m=+0.106716673 container init c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goodall, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.753965076 +0000 UTC m=+0.114728239 container start c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.659147958 +0000 UTC m=+0.019911141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.756845634 +0000 UTC m=+0.117608797 container attach c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goodall, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:12:31 np0005539563 laughing_goodall[318487]: 167 167
Nov 29 03:12:31 np0005539563 systemd[1]: libpod-c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06.scope: Deactivated successfully.
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.760380161 +0000 UTC m=+0.121143314 container died c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goodall, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:12:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3bda8c49768c7d4e2969061642462ae4770b7b0760207b1fb1e49f0994062af3-merged.mount: Deactivated successfully.
Nov 29 03:12:31 np0005539563 podman[318471]: 2025-11-29 08:12:31.797045164 +0000 UTC m=+0.157808327 container remove c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:31 np0005539563 systemd[1]: libpod-conmon-c0fbed1ba62c8464eb1ebcb6e8874010317894f07c942c24b27396c7e0d65d06.scope: Deactivated successfully.
Nov 29 03:12:31 np0005539563 podman[318510]: 2025-11-29 08:12:31.955224699 +0000 UTC m=+0.040067526 container create c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 29 03:12:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:31.960 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:12:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:31.962 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:12:31 np0005539563 nova_compute[252253]: 2025-11-29 08:12:31.961 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:31 np0005539563 nova_compute[252253]: 2025-11-29 08:12:31.993 252257 DEBUG nova.network.neutron [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:12:32 np0005539563 systemd[1]: Started libpod-conmon-c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4.scope.
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.013 252257 INFO nova.compute.manager [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Took 1.09 seconds to deallocate network for instance.
Nov 29 03:12:32 np0005539563 podman[318510]: 2025-11-29 08:12:31.936369349 +0000 UTC m=+0.021212206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8240a77b2b58c1fb9ddbbdb0d576bf18985108cb5cff615ecfa4c25e8c843af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8240a77b2b58c1fb9ddbbdb0d576bf18985108cb5cff615ecfa4c25e8c843af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8240a77b2b58c1fb9ddbbdb0d576bf18985108cb5cff615ecfa4c25e8c843af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8240a77b2b58c1fb9ddbbdb0d576bf18985108cb5cff615ecfa4c25e8c843af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:32 np0005539563 podman[318510]: 2025-11-29 08:12:32.063057351 +0000 UTC m=+0.147900198 container init c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:32 np0005539563 podman[318510]: 2025-11-29 08:12:32.071555682 +0000 UTC m=+0.156398539 container start c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:12:32 np0005539563 podman[318510]: 2025-11-29 08:12:32.076195727 +0000 UTC m=+0.161038584 container attach c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.078 252257 DEBUG nova.compute.manager [req-93219932-bd6e-427e-ba46-ec0e07a2ea67 req-398898b3-30c8-41ec-83a6-f1e6b6933a43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Received event network-vif-deleted-a5c93ffe-8186-4e03-86aa-e1b1efc225cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.092 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.092 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 479 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 740 KiB/s rd, 3.3 MiB/s wr, 137 op/s
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.242 252257 DEBUG oslo_concurrency.processutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216913731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.671 252257 DEBUG oslo_concurrency.processutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.677 252257 DEBUG nova.compute.provider_tree [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.701 252257 DEBUG nova.scheduler.client.report [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.728 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.801 252257 INFO nova.scheduler.client.report [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Deleted allocations for instance 9d9d2058-c79d-456b-b647-e73537cb9223
Nov 29 03:12:32 np0005539563 happy_shtern[318526]: {
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:    "0": [
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:        {
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "devices": [
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "/dev/loop3"
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            ],
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "lv_name": "ceph_lv0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "lv_size": "7511998464",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "name": "ceph_lv0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "tags": {
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.cluster_name": "ceph",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.crush_device_class": "",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.encrypted": "0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.osd_id": "0",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.type": "block",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:                "ceph.vdo": "0"
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            },
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "type": "block",
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:            "vg_name": "ceph_vg0"
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:        }
Nov 29 03:12:32 np0005539563 happy_shtern[318526]:    ]
Nov 29 03:12:32 np0005539563 happy_shtern[318526]: }
Nov 29 03:12:32 np0005539563 systemd[1]: libpod-c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4.scope: Deactivated successfully.
Nov 29 03:12:32 np0005539563 conmon[318526]: conmon c999e1d7372fb411e851 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4.scope/container/memory.events
Nov 29 03:12:32 np0005539563 podman[318510]: 2025-11-29 08:12:32.874324761 +0000 UTC m=+0.959167608 container died c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:12:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a8240a77b2b58c1fb9ddbbdb0d576bf18985108cb5cff615ecfa4c25e8c843af-merged.mount: Deactivated successfully.
Nov 29 03:12:32 np0005539563 podman[318510]: 2025-11-29 08:12:32.93078063 +0000 UTC m=+1.015623447 container remove c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:32 np0005539563 nova_compute[252253]: 2025-11-29 08:12:32.937 252257 DEBUG oslo_concurrency.lockutils [None req-e8c70150-5dbf-4410-90d1-c92c1eff45ec 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9d9d2058-c79d-456b-b647-e73537cb9223" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:32 np0005539563 systemd[1]: libpod-conmon-c999e1d7372fb411e851dbe7397fb8ae3495708d3fae7fe18212c9fc0183f6b4.scope: Deactivated successfully.
Nov 29 03:12:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 29 03:12:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 03:12:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 29 03:12:33 np0005539563 podman[318711]: 2025-11-29 08:12:33.599005415 +0000 UTC m=+0.058159797 container create 01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:12:33 np0005539563 systemd[1]: Started libpod-conmon-01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192.scope.
Nov 29 03:12:33 np0005539563 podman[318711]: 2025-11-29 08:12:33.566495064 +0000 UTC m=+0.025649426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:33 np0005539563 podman[318711]: 2025-11-29 08:12:33.700409562 +0000 UTC m=+0.159563944 container init 01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ardinghelli, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:12:33 np0005539563 podman[318711]: 2025-11-29 08:12:33.71214775 +0000 UTC m=+0.171302092 container start 01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:12:33 np0005539563 podman[318711]: 2025-11-29 08:12:33.715952424 +0000 UTC m=+0.175106776 container attach 01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:12:33 np0005539563 thirsty_ardinghelli[318728]: 167 167
Nov 29 03:12:33 np0005539563 systemd[1]: libpod-01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192.scope: Deactivated successfully.
Nov 29 03:12:33 np0005539563 podman[318733]: 2025-11-29 08:12:33.781394827 +0000 UTC m=+0.030401835 container died 01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:12:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e70cdf0e6858941184640124133adc1c925a03248ede515862d1e78482668239-merged.mount: Deactivated successfully.
Nov 29 03:12:33 np0005539563 podman[318733]: 2025-11-29 08:12:33.8202594 +0000 UTC m=+0.069266408 container remove 01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:12:33 np0005539563 systemd[1]: libpod-conmon-01e15ffbb7ee965dc629b345cfe687ef5984d26eba623579b9a9c3d8050e4192.scope: Deactivated successfully.
Nov 29 03:12:34 np0005539563 podman[318755]: 2025-11-29 08:12:34.008860409 +0000 UTC m=+0.038135893 container create eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:12:34 np0005539563 systemd[1]: Started libpod-conmon-eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7.scope.
Nov 29 03:12:34 np0005539563 podman[318755]: 2025-11-29 08:12:33.991323484 +0000 UTC m=+0.020599018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bb7d4023fe2a21f26e91cde3afa666fafac1576a16dd608855df0e5574b05d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bb7d4023fe2a21f26e91cde3afa666fafac1576a16dd608855df0e5574b05d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bb7d4023fe2a21f26e91cde3afa666fafac1576a16dd608855df0e5574b05d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bb7d4023fe2a21f26e91cde3afa666fafac1576a16dd608855df0e5574b05d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:34 np0005539563 podman[318755]: 2025-11-29 08:12:34.11405497 +0000 UTC m=+0.143330474 container init eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_almeida, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:12:34 np0005539563 podman[318755]: 2025-11-29 08:12:34.122091938 +0000 UTC m=+0.151367412 container start eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_almeida, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:12:34 np0005539563 podman[318755]: 2025-11-29 08:12:34.126283801 +0000 UTC m=+0.155559335 container attach eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_almeida, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:12:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 477 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Nov 29 03:12:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]: {
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:        "osd_id": 0,
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:        "type": "bluestore"
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]:    }
Nov 29 03:12:34 np0005539563 stupefied_almeida[318772]: }
Nov 29 03:12:35 np0005539563 systemd[1]: libpod-eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7.scope: Deactivated successfully.
Nov 29 03:12:35 np0005539563 podman[318755]: 2025-11-29 08:12:35.027481168 +0000 UTC m=+1.056756682 container died eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_almeida, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:12:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f4bb7d4023fe2a21f26e91cde3afa666fafac1576a16dd608855df0e5574b05d-merged.mount: Deactivated successfully.
Nov 29 03:12:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:35 np0005539563 podman[318755]: 2025-11-29 08:12:35.090190656 +0000 UTC m=+1.119466150 container remove eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:35 np0005539563 systemd[1]: libpod-conmon-eac86029d69e7ca7f44d1b0a1c1cd1a5b7bebc3d574ea209a3886100c9f575a7.scope: Deactivated successfully.
Nov 29 03:12:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:12:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:12:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5ca9a135-8ba7-45cc-8e3f-704489f34cea does not exist
Nov 29 03:12:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6fd27feb-7883-48e4-a854-f0c6b3a5f383 does not exist
Nov 29 03:12:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f7733475-df72-4fde-a8c9-67c2b48a2872 does not exist
Nov 29 03:12:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:12:35 np0005539563 nova_compute[252253]: 2025-11-29 08:12:35.407 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 453 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 29 03:12:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.791793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956791936, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2031, "num_deletes": 262, "total_data_size": 3388871, "memory_usage": 3435640, "flush_reason": "Manual Compaction"}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956814060, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3293722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40207, "largest_seqno": 42237, "table_properties": {"data_size": 3284429, "index_size": 5787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19770, "raw_average_key_size": 20, "raw_value_size": 3265629, "raw_average_value_size": 3415, "num_data_blocks": 250, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403795, "oldest_key_time": 1764403795, "file_creation_time": 1764403956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 22284 microseconds, and 10108 cpu microseconds.
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.814140) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3293722 bytes OK
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.814174) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.834488) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.834556) EVENT_LOG_v1 {"time_micros": 1764403956834544, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.834584) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3380368, prev total WAL file size 3380368, number of live WAL files 2.
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.835965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323633' seq:72057594037927935, type:22 .. '6C6F676D0031353134' seq:0, type:0; will stop at (end)
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(3216KB)], [86(8575KB)]
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956836098, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12074646, "oldest_snapshot_seqno": -1}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7484 keys, 11910050 bytes, temperature: kUnknown
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956910070, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11910050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11859362, "index_size": 30864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 193554, "raw_average_key_size": 25, "raw_value_size": 11725027, "raw_average_value_size": 1566, "num_data_blocks": 1223, "num_entries": 7484, "num_filter_entries": 7484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.910644) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11910050 bytes
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.912503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.4 rd, 160.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.3) write-amplify(3.6) OK, records in: 8024, records dropped: 540 output_compression: NoCompression
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.912519) EVENT_LOG_v1 {"time_micros": 1764403956912512, "job": 50, "event": "compaction_finished", "compaction_time_micros": 74355, "compaction_time_cpu_micros": 38370, "output_level": 6, "num_output_files": 1, "total_output_size": 11910050, "num_input_records": 8024, "num_output_records": 7484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956913431, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403956915115, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.835710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.915171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.915178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.915180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.915183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:36.915185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:12:36.965 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:37 np0005539563 nova_compute[252253]: 2025-11-29 08:12:37.133 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Nov 29 03:12:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:39.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 454 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Nov 29 03:12:40 np0005539563 nova_compute[252253]: 2025-11-29 08:12:40.427 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:41.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:41 np0005539563 nova_compute[252253]: 2025-11-29 08:12:41.349 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403946.348571, 9d9d2058-c79d-456b-b647-e73537cb9223 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:41 np0005539563 nova_compute[252253]: 2025-11-29 08:12:41.350 252257 INFO nova.compute.manager [-] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:12:41 np0005539563 nova_compute[252253]: 2025-11-29 08:12:41.409 252257 DEBUG nova.compute.manager [None req-d7d1bc42-928d-464a-985e-cea7c9ad5652 - - - - - -] [instance: 9d9d2058-c79d-456b-b647-e73537cb9223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 29 03:12:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 03:12:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 29 03:12:42 np0005539563 nova_compute[252253]: 2025-11-29 08:12:42.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 454 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 660 KiB/s wr, 106 op/s
Nov 29 03:12:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:42.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:43.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Nov 29 03:12:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:44.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:45 np0005539563 nova_compute[252253]: 2025-11-29 08:12:45.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:45 np0005539563 nova_compute[252253]: 2025-11-29 08:12:45.748 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:45 np0005539563 nova_compute[252253]: 2025-11-29 08:12:45.749 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 483 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Nov 29 03:12:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:46 np0005539563 nova_compute[252253]: 2025-11-29 08:12:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:46 np0005539563 nova_compute[252253]: 2025-11-29 08:12:46.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:47 np0005539563 nova_compute[252253]: 2025-11-29 08:12:47.138 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 487 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Nov 29 03:12:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:48.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:48 np0005539563 nova_compute[252253]: 2025-11-29 08:12:48.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.725 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.726 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.772 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.773 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.773 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.773 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:12:49 np0005539563 nova_compute[252253]: 2025-11-29 08:12:49.774 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 487 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 171 op/s
Nov 29 03:12:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034312422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.402 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.432 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.541 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.541 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:12:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.749 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.750 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4274MB free_disk=20.784767150878906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.750 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:50 np0005539563 nova_compute[252253]: 2025-11-29 08:12:50.750 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.011 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98453ec7-fbda-42ae-8624-8aa5921fd634 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.012 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.012 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:12:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:12:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:51.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.098 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.133 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.133 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.182 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.217 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.291 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:51 np0005539563 podman[318959]: 2025-11-29 08:12:51.532358306 +0000 UTC m=+0.077273515 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:12:51 np0005539563 podman[318956]: 2025-11-29 08:12:51.532710926 +0000 UTC m=+0.078199910 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:12:51 np0005539563 podman[318960]: 2025-11-29 08:12:51.573146771 +0000 UTC m=+0.108687826 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:12:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774113000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.791 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.799 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.849 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.873 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:12:51 np0005539563 nova_compute[252253]: 2025-11-29 08:12:51.874 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:52 np0005539563 nova_compute[252253]: 2025-11-29 08:12:52.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 487 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 165 op/s
Nov 29 03:12:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:52 np0005539563 nova_compute[252253]: 2025-11-29 08:12:52.825 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 462 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Nov 29 03:12:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.722 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.723 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.792 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.992 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.993 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.999 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:12:54 np0005539563 nova_compute[252253]: 2025-11-29 08:12:54.999 252257 INFO nova.compute.claims [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:12:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.178 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3540463941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.434 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573923313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.659 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.677 252257 DEBUG nova.compute.provider_tree [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.711 252257 DEBUG nova.scheduler.client.report [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.741 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.742 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.805 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.805 252257 DEBUG nova.network.neutron [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.841 252257 INFO nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:12:55 np0005539563 nova_compute[252253]: 2025-11-29 08:12:55.891 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.053 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.055 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.056 252257 INFO nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Creating image(s)
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.097 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.131 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.162 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.166 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 407 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.2 MiB/s wr, 351 op/s
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.245 252257 DEBUG nova.policy [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58625e4c2b5d43a1abbab05b98853a65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '250671461f27498d9f6b4476c7b69533', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.263 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.264 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.264 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.265 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.288 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.292 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:12:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.622841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976622885, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 456, "num_deletes": 252, "total_data_size": 370779, "memory_usage": 379048, "flush_reason": "Manual Compaction"}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976628258, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 366532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42238, "largest_seqno": 42693, "table_properties": {"data_size": 363959, "index_size": 609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6564, "raw_average_key_size": 19, "raw_value_size": 358675, "raw_average_value_size": 1048, "num_data_blocks": 27, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403957, "oldest_key_time": 1764403957, "file_creation_time": 1764403976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 5454 microseconds, and 2237 cpu microseconds.
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.628296) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 366532 bytes OK
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.628313) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.630637) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.630658) EVENT_LOG_v1 {"time_micros": 1764403976630651, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.630674) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 368043, prev total WAL file size 368043, number of live WAL files 2.
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.631198) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(357KB)], [89(11MB)]
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976631263, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12276582, "oldest_snapshot_seqno": -1}
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.674 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7309 keys, 10415110 bytes, temperature: kUnknown
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976706625, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10415110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10366911, "index_size": 28823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 190615, "raw_average_key_size": 26, "raw_value_size": 10236943, "raw_average_value_size": 1400, "num_data_blocks": 1130, "num_entries": 7309, "num_filter_entries": 7309, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764403976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.706912) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10415110 bytes
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.714307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.7 rd, 138.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(61.9) write-amplify(28.4) OK, records in: 7826, records dropped: 517 output_compression: NoCompression
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.714360) EVENT_LOG_v1 {"time_micros": 1764403976714340, "job": 52, "event": "compaction_finished", "compaction_time_micros": 75437, "compaction_time_cpu_micros": 23002, "output_level": 6, "num_output_files": 1, "total_output_size": 10415110, "num_input_records": 7826, "num_output_records": 7309, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976714661, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403976716403, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.631088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.716461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.716465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.716467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.716468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:12:56.716470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.759 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] resizing rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:12:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.863 252257 DEBUG nova.objects.instance [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.886 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.887 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Ensure instance console log exists: /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.887 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.888 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:56 np0005539563 nova_compute[252253]: 2025-11-29 08:12:56.888 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:57 np0005539563 nova_compute[252253]: 2025-11-29 08:12:57.143 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:58 np0005539563 nova_compute[252253]: 2025-11-29 08:12:58.031 252257 DEBUG nova.network.neutron [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Successfully created port: ec8e7efa-3c86-430e-b26a-e5d8d611a64a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 417 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 769 KiB/s wr, 294 op/s
Nov 29 03:12:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:12:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:12:58 np0005539563 nova_compute[252253]: 2025-11-29 08:12:58.675 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:58 np0005539563 nova_compute[252253]: 2025-11-29 08:12:58.705 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:12:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:59.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.466 252257 DEBUG nova.network.neutron [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Successfully updated port: ec8e7efa-3c86-430e-b26a-e5d8d611a64a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.493 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.494 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.495 252257 DEBUG nova.network.neutron [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.613 252257 DEBUG nova.compute.manager [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-changed-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.614 252257 DEBUG nova.compute.manager [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Refreshing instance network info cache due to event network-changed-ec8e7efa-3c86-430e-b26a-e5d8d611a64a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.614 252257 DEBUG oslo_concurrency.lockutils [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:59 np0005539563 nova_compute[252253]: 2025-11-29 08:12:59.793 252257 DEBUG nova.network.neutron [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 365 op/s
Nov 29 03:13:00 np0005539563 nova_compute[252253]: 2025-11-29 08:13:00.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:00.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:01.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.262 252257 DEBUG nova.network.neutron [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating instance_info_cache with network_info: [{"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.280 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.281 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance network_info: |[{"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.282 252257 DEBUG oslo_concurrency.lockutils [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.282 252257 DEBUG nova.network.neutron [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Refreshing network info cache for port ec8e7efa-3c86-430e-b26a-e5d8d611a64a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.288 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Start _get_guest_xml network_info=[{"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.293 252257 WARNING nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.300 252257 DEBUG nova.virt.libvirt.host [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.301 252257 DEBUG nova.virt.libvirt.host [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.321 252257 DEBUG nova.virt.libvirt.host [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.322 252257 DEBUG nova.virt.libvirt.host [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.323 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.324 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.325 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.325 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.326 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.326 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.327 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.327 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.328 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.328 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.328 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.329 252257 DEBUG nova.virt.hardware [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.334 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652912322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.800 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.826 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:01 np0005539563 nova_compute[252253]: 2025-11-29 08:13:01.830 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.145 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 353 op/s
Nov 29 03:13:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958067740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.275 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.277 252257 DEBUG nova.virt.libvirt.vif [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044766677',display_name='tempest-tempest.common.compute-instance-2044766677',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044766677',id=110,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFrqk4rXvCBZyesNqy6ygZ6Gp5u2dJYASwFcyFUrpnmzFLmX4dmLstV85/UcVOuy/g8aGelmtEAngSltNMIVz+nyyj//ozYBJzauh3XWFgxF3C3yhw63J9BBc9qclV2mQ==',key_name='tempest-keypair-1415150174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-2sixhnkw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=d83d7773-fe1e-4ac9-b90c-74a74180acbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.278 252257 DEBUG nova.network.os_vif_util [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.279 252257 DEBUG nova.network.os_vif_util [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.280 252257 DEBUG nova.objects.instance [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.296 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <uuid>d83d7773-fe1e-4ac9-b90c-74a74180acbe</uuid>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <name>instance-0000006e</name>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:name>tempest-tempest.common.compute-instance-2044766677</nova:name>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:13:01</nova:creationTime>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <nova:port uuid="ec8e7efa-3c86-430e-b26a-e5d8d611a64a">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <entry name="serial">d83d7773-fe1e-4ac9-b90c-74a74180acbe</entry>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <entry name="uuid">d83d7773-fe1e-4ac9-b90c-74a74180acbe</entry>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:69:0d:b5"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <target dev="tapec8e7efa-3c"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/console.log" append="off"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:13:02 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:13:02 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:13:02 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:13:02 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.298 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Preparing to wait for external event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.298 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.299 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.299 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.299 252257 DEBUG nova.virt.libvirt.vif [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044766677',display_name='tempest-tempest.common.compute-instance-2044766677',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044766677',id=110,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFrqk4rXvCBZyesNqy6ygZ6Gp5u2dJYASwFcyFUrpnmzFLmX4dmLstV85/UcVOuy/g8aGelmtEAngSltNMIVz+nyyj//ozYBJzauh3XWFgxF3C3yhw63J9BBc9qclV2mQ==',key_name='tempest-keypair-1415150174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-2sixhnkw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=d83d7773-fe1e-4ac9-b90c-74a74180acbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.300 252257 DEBUG nova.network.os_vif_util [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.300 252257 DEBUG nova.network.os_vif_util [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.300 252257 DEBUG os_vif [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.301 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.301 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.302 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.305 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.305 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8e7efa-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.306 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec8e7efa-3c, col_values=(('external_ids', {'iface-id': 'ec8e7efa-3c86-430e-b26a-e5d8d611a64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:0d:b5', 'vm-uuid': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.307 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539563 NetworkManager[48981]: <info>  [1764403982.3085] manager: (tapec8e7efa-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.310 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.314 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.315 252257 INFO os_vif [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c')#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.391 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.391 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.392 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:69:0d:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.393 252257 INFO nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Using config drive#033[00m
Nov 29 03:13:02 np0005539563 nova_compute[252253]: 2025-11-29 08:13:02.426 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 359 op/s
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.371 252257 INFO nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Creating config drive at /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config#033[00m
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.378 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_sm5vb8z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.515 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_sm5vb8z" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.542 252257 DEBUG nova.storage.rbd_utils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.545 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.690 252257 DEBUG oslo_concurrency.processutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.691 252257 INFO nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deleting local config drive /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config because it was imported into RBD.#033[00m
Nov 29 03:13:04 np0005539563 kernel: tapec8e7efa-3c: entered promiscuous mode
Nov 29 03:13:04 np0005539563 NetworkManager[48981]: <info>  [1764403984.7465] manager: (tapec8e7efa-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Nov 29 03:13:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:04Z|00422|binding|INFO|Claiming lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a for this chassis.
Nov 29 03:13:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:04Z|00423|binding|INFO|ec8e7efa-3c86-430e-b26a-e5d8d611a64a: Claiming fa:16:3e:69:0d:b5 10.100.0.12
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.749 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:04Z|00424|binding|INFO|Setting lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a ovn-installed in OVS
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.769 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:04Z|00425|binding|INFO|Setting lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a up in Southbound
Nov 29 03:13:04 np0005539563 nova_compute[252253]: 2025-11-29 08:13:04.773 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.771 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:0d:b5 10.100.0.12'], port_security=['fa:16:3e:69:0d:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5850abe-4884-46af-b00d-61910ac4ba3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=ec8e7efa-3c86-430e-b26a-e5d8d611a64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.772 158990 INFO neutron.agent.ovn.metadata.agent [-] Port ec8e7efa-3c86-430e-b26a-e5d8d611a64a in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.774 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.786 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a9343e-fce8-4cd0-a75a-bd1230d94833]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.787 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:04 np0005539563 systemd-udevd[319348]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.789 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.789 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1476559c-b4a3-44a7-b534-15812f7c0062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.790 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7540e082-e71a-4fba-86e6-e7fdef78ea7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 systemd-machined[213024]: New machine qemu-49-instance-0000006e.
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.799 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c746d9aa-33af-4fab-8396-f8c70e7e85d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 NetworkManager[48981]: <info>  [1764403984.8051] device (tapec8e7efa-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:04 np0005539563 NetworkManager[48981]: <info>  [1764403984.8061] device (tapec8e7efa-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:04 np0005539563 systemd[1]: Started Virtual Machine qemu-49-instance-0000006e.
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.826 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0970ba-dc27-4aef-b3c4-075767d32f27]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.919 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.919 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.919 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.923 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ace2e22a-743d-461d-8ce2-cb4f82bd9874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 NetworkManager[48981]: <info>  [1764403984.9288] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.928 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bf09fbed-784f-43e6-a637-9b725ff36b52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.961 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[177d2b0f-28f7-49f8-92eb-c19668f324dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:04.966 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f8aa69b3-6457-4b5f-aa29-8508fe16b73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:04 np0005539563 NetworkManager[48981]: <info>  [1764403984.9931] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.001 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5ce471-b0cf-4dc2-abac-65327a6eabc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.018 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd09b8f-0ce5-43f5-af6a-9664169c3ab2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695276, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319382, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.035 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f86df901-039c-45b4-abaa-f0eb864a9d9f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 695276, 'tstamp': 695276}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319383, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.057 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1f574145-1564-4d22-9a14-6347c0be8912]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695276, 'reachable_time': 27327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319384, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.102 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aee05a39-66ec-400f-a96a-880a227f14d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.186 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[93677a31-01c8-4d76-bf65-21e10350192d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.188 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.188 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.189 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539563 NetworkManager[48981]: <info>  [1764403985.1920] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 29 03:13:05 np0005539563 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.194 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.196 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:05Z|00426|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.221 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.222 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.223 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5c70eb29-7510-4ec6-b2ff-a15465f53693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.224 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:05.225 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.513 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403985.5124843, d83d7773-fe1e-4ac9-b90c-74a74180acbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.515 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:05 np0005539563 podman[319456]: 2025-11-29 08:13:05.599193199 +0000 UTC m=+0.053914202 container create bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:13:05 np0005539563 systemd[1]: Started libpod-conmon-bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185.scope.
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.639 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.646 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403985.5134115, d83d7773-fe1e-4ac9-b90c-74a74180acbe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.646 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:05 np0005539563 podman[319456]: 2025-11-29 08:13:05.568382304 +0000 UTC m=+0.023103317 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.682 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.685 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8a1875f09e15e7786bc2f41f8c7cd2ae5222a37adaf1c4acde2c92e5e3e805/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:05 np0005539563 podman[319456]: 2025-11-29 08:13:05.710815043 +0000 UTC m=+0.165536046 container init bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.713 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:05 np0005539563 podman[319456]: 2025-11-29 08:13:05.716583839 +0000 UTC m=+0.171304832 container start bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.717 252257 DEBUG nova.compute.manager [req-fbd14daa-b2c9-40b3-827a-09ac08d3d8e7 req-9caf4e14-1d64-4d6d-bbe9-681be8780ca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.717 252257 DEBUG oslo_concurrency.lockutils [req-fbd14daa-b2c9-40b3-827a-09ac08d3d8e7 req-9caf4e14-1d64-4d6d-bbe9-681be8780ca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.718 252257 DEBUG oslo_concurrency.lockutils [req-fbd14daa-b2c9-40b3-827a-09ac08d3d8e7 req-9caf4e14-1d64-4d6d-bbe9-681be8780ca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.718 252257 DEBUG oslo_concurrency.lockutils [req-fbd14daa-b2c9-40b3-827a-09ac08d3d8e7 req-9caf4e14-1d64-4d6d-bbe9-681be8780ca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.718 252257 DEBUG nova.compute.manager [req-fbd14daa-b2c9-40b3-827a-09ac08d3d8e7 req-9caf4e14-1d64-4d6d-bbe9-681be8780ca5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Processing event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.719 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.725 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764403985.724763, d83d7773-fe1e-4ac9-b90c-74a74180acbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.726 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.729 252257 DEBUG nova.network.neutron [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updated VIF entry in instance network info cache for port ec8e7efa-3c86-430e-b26a-e5d8d611a64a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.729 252257 DEBUG nova.network.neutron [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating instance_info_cache with network_info: [{"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.730 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.734 252257 INFO nova.virt.libvirt.driver [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance spawned successfully.#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.735 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:05 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [NOTICE]   (319476) : New worker (319478) forked
Nov 29 03:13:05 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [NOTICE]   (319476) : Loading success.
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.769 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.773 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.773 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.774 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.774 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.774 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.775 252257 DEBUG nova.virt.libvirt.driver [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.779 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.832 252257 DEBUG oslo_concurrency.lockutils [req-270924e4-a9ed-45ac-bc47-747bb3e2f443 req-7f804491-fdb6-4f25-af1e-a38be409859d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.928 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.963 252257 INFO nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Took 9.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:13:05 np0005539563 nova_compute[252253]: 2025-11-29 08:13:05.964 252257 DEBUG nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 475 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 351 op/s
Nov 29 03:13:06 np0005539563 nova_compute[252253]: 2025-11-29 08:13:06.519 252257 INFO nova.compute.manager [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Took 11.57 seconds to build instance.#033[00m
Nov 29 03:13:06 np0005539563 nova_compute[252253]: 2025-11-29 08:13:06.549 252257 DEBUG oslo_concurrency.lockutils [None req-a7ecb19c-c6b3-4177-ba73-181011c1c83e 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:06.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:07.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:07 np0005539563 nova_compute[252253]: 2025-11-29 08:13:07.147 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:07 np0005539563 nova_compute[252253]: 2025-11-29 08:13:07.308 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 472 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.5 MiB/s wr, 203 op/s
Nov 29 03:13:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:08.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.0 MiB/s wr, 252 op/s
Nov 29 03:13:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:10.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:11 np0005539563 nova_compute[252253]: 2025-11-29 08:13:11.903 252257 DEBUG nova.compute.manager [req-f6562923-f4ce-4e24-8685-e01bc09dc43c req-4c759fcf-8a94-475c-8504-13c70c340122 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:11 np0005539563 nova_compute[252253]: 2025-11-29 08:13:11.903 252257 DEBUG oslo_concurrency.lockutils [req-f6562923-f4ce-4e24-8685-e01bc09dc43c req-4c759fcf-8a94-475c-8504-13c70c340122 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:11 np0005539563 nova_compute[252253]: 2025-11-29 08:13:11.903 252257 DEBUG oslo_concurrency.lockutils [req-f6562923-f4ce-4e24-8685-e01bc09dc43c req-4c759fcf-8a94-475c-8504-13c70c340122 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:11 np0005539563 nova_compute[252253]: 2025-11-29 08:13:11.904 252257 DEBUG oslo_concurrency.lockutils [req-f6562923-f4ce-4e24-8685-e01bc09dc43c req-4c759fcf-8a94-475c-8504-13c70c340122 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:11 np0005539563 nova_compute[252253]: 2025-11-29 08:13:11.904 252257 DEBUG nova.compute.manager [req-f6562923-f4ce-4e24-8685-e01bc09dc43c req-4c759fcf-8a94-475c-8504-13c70c340122 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:11 np0005539563 nova_compute[252253]: 2025-11-29 08:13:11.904 252257 WARNING nova.compute.manager [req-f6562923-f4ce-4e24-8685-e01bc09dc43c req-4c759fcf-8a94-475c-8504-13c70c340122 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received unexpected event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:12 np0005539563 nova_compute[252253]: 2025-11-29 08:13:12.150 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.8 MiB/s wr, 163 op/s
Nov 29 03:13:12 np0005539563 nova_compute[252253]: 2025-11-29 08:13:12.310 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:12.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:13:12
Nov 29 03:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Nov 29 03:13:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:13:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:13.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:13 np0005539563 nova_compute[252253]: 2025-11-29 08:13:13.173 252257 DEBUG nova.compute.manager [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-changed-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:13 np0005539563 nova_compute[252253]: 2025-11-29 08:13:13.174 252257 DEBUG nova.compute.manager [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Refreshing instance network info cache due to event network-changed-ec8e7efa-3c86-430e-b26a-e5d8d611a64a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:13 np0005539563 nova_compute[252253]: 2025-11-29 08:13:13.174 252257 DEBUG oslo_concurrency.lockutils [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:13 np0005539563 nova_compute[252253]: 2025-11-29 08:13:13.174 252257 DEBUG oslo_concurrency.lockutils [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:13 np0005539563 nova_compute[252253]: 2025-11-29 08:13:13.174 252257 DEBUG nova.network.neutron [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Refreshing network info cache for port ec8e7efa-3c86-430e-b26a-e5d8d611a64a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:13:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:13:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 461 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.3 MiB/s wr, 181 op/s
Nov 29 03:13:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:14.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:15.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 208 op/s
Nov 29 03:13:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:16.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:17 np0005539563 nova_compute[252253]: 2025-11-29 08:13:17.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:17 np0005539563 nova_compute[252253]: 2025-11-29 08:13:17.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:17 np0005539563 nova_compute[252253]: 2025-11-29 08:13:17.814 252257 DEBUG nova.network.neutron [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updated VIF entry in instance network info cache for port ec8e7efa-3c86-430e-b26a-e5d8d611a64a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:13:17 np0005539563 nova_compute[252253]: 2025-11-29 08:13:17.815 252257 DEBUG nova.network.neutron [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating instance_info_cache with network_info: [{"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:17 np0005539563 nova_compute[252253]: 2025-11-29 08:13:17.913 252257 DEBUG oslo_concurrency.lockutils [req-9a1e335e-253d-4382-923c-f31a30f58fe7 req-505dc943-1bf6-4096-b2e6-6d2bc3de3f0b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d83d7773-fe1e-4ac9-b90c-74a74180acbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.4 MiB/s wr, 171 op/s
Nov 29 03:13:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837911653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.4 MiB/s wr, 236 op/s
Nov 29 03:13:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:20Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:0d:b5 10.100.0.12
Nov 29 03:13:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:20Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:0d:b5 10.100.0.12
Nov 29 03:13:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:22 np0005539563 nova_compute[252253]: 2025-11-29 08:13:22.155 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 MiB/s wr, 145 op/s
Nov 29 03:13:22 np0005539563 nova_compute[252253]: 2025-11-29 08:13:22.313 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:22 np0005539563 podman[319545]: 2025-11-29 08:13:22.511608267 +0000 UTC m=+0.062172355 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:13:22 np0005539563 podman[319546]: 2025-11-29 08:13:22.522933084 +0000 UTC m=+0.061642711 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:13:22 np0005539563 podman[319547]: 2025-11-29 08:13:22.582784126 +0000 UTC m=+0.119424627 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:13:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:22.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010394492718220733 of space, bias 1.0, pg target 3.11834781546622 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.293817721419598 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Nov 29 03:13:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 186 op/s
Nov 29 03:13:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.2 MiB/s wr, 286 op/s
Nov 29 03:13:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:26.462 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:26 np0005539563 nova_compute[252253]: 2025-11-29 08:13:26.462 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:26.464 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:13:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:26.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:27 np0005539563 nova_compute[252253]: 2025-11-29 08:13:27.157 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:27 np0005539563 nova_compute[252253]: 2025-11-29 08:13:27.314 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 309 op/s
Nov 29 03:13:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:28.466 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:28.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:29.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 307 op/s
Nov 29 03:13:30 np0005539563 nova_compute[252253]: 2025-11-29 08:13:30.622 252257 DEBUG oslo_concurrency.lockutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:30 np0005539563 nova_compute[252253]: 2025-11-29 08:13:30.623 252257 DEBUG oslo_concurrency.lockutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:30.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:30 np0005539563 nova_compute[252253]: 2025-11-29 08:13:30.730 252257 DEBUG nova.objects.instance [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'flavor' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:30 np0005539563 nova_compute[252253]: 2025-11-29 08:13:30.797 252257 DEBUG oslo_concurrency.lockutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.334 252257 DEBUG oslo_concurrency.lockutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.335 252257 DEBUG oslo_concurrency.lockutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.335 252257 INFO nova.compute.manager [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Attaching volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab to /dev/vdb#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.606 252257 DEBUG os_brick.utils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.609 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.627 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.628 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[31afd727-3247-4e00-9f6c-3885c67c6a2a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.630 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.638 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.639 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[5d85bcf6-45dd-4149-92b1-620db35810ed]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.641 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.650 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.650 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[d93cdddd-28c7-41db-9015-53361881102b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.652 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[97c561ba-b0da-4606-9c60-103a738466c7]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.653 252257 DEBUG oslo_concurrency.processutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.685 252257 DEBUG oslo_concurrency.processutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.689 252257 DEBUG os_brick.initiator.connectors.lightos [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.690 252257 DEBUG os_brick.initiator.connectors.lightos [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.690 252257 DEBUG os_brick.initiator.connectors.lightos [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.692 252257 DEBUG os_brick.utils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:13:31 np0005539563 nova_compute[252253]: 2025-11-29 08:13:31.692 252257 DEBUG nova.virt.block_device [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating existing volume attachment record: 6efb8ee1-9436-4e6f-9f72-23f6b5832f83 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:13:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:32 np0005539563 nova_compute[252253]: 2025-11-29 08:13:32.160 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 544 KiB/s wr, 216 op/s
Nov 29 03:13:32 np0005539563 nova_compute[252253]: 2025-11-29 08:13:32.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:32.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1555366208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:32 np0005539563 nova_compute[252253]: 2025-11-29 08:13:32.886 252257 DEBUG nova.objects.instance [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'flavor' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:32 np0005539563 nova_compute[252253]: 2025-11-29 08:13:32.925 252257 DEBUG nova.virt.libvirt.driver [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Attempting to attach volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:13:32 np0005539563 nova_compute[252253]: 2025-11-29 08:13:32.928 252257 DEBUG nova.virt.libvirt.guest [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab">
Nov 29 03:13:32 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:13:32 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:13:32 np0005539563 nova_compute[252253]:  <serial>57626c0a-e0bf-45ee-90b0-ca7f160cc5ab</serial>
Nov 29 03:13:32 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:13:32 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:13:33 np0005539563 nova_compute[252253]: 2025-11-29 08:13:33.084 252257 DEBUG nova.virt.libvirt.driver [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:13:33 np0005539563 nova_compute[252253]: 2025-11-29 08:13:33.084 252257 DEBUG nova.virt.libvirt.driver [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:13:33 np0005539563 nova_compute[252253]: 2025-11-29 08:13:33.084 252257 DEBUG nova.virt.libvirt.driver [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:13:33 np0005539563 nova_compute[252253]: 2025-11-29 08:13:33.085 252257 DEBUG nova.virt.libvirt.driver [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:69:0d:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:13:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:33.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:33 np0005539563 nova_compute[252253]: 2025-11-29 08:13:33.438 252257 DEBUG oslo_concurrency.lockutils [None req-aff79873-6d8d-44ac-a50c-82f650680f44 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 544 KiB/s wr, 216 op/s
Nov 29 03:13:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:34.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:35.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 454 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 154 KiB/s wr, 178 op/s
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:36.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:37.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.163 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.317 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.357 252257 INFO nova.compute.manager [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Rebuilding instance
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.658074761 +0000 UTC m=+0.037014104 container create 7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:13:37 np0005539563 systemd[1]: Started libpod-conmon-7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e.scope.
Nov 29 03:13:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.642452717 +0000 UTC m=+0.021392090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.739079516 +0000 UTC m=+0.118018889 container init 7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.747752841 +0000 UTC m=+0.126692184 container start 7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nightingale, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.750906726 +0000 UTC m=+0.129846079 container attach 7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:13:37 np0005539563 jolly_nightingale[320108]: 167 167
Nov 29 03:13:37 np0005539563 systemd[1]: libpod-7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e.scope: Deactivated successfully.
Nov 29 03:13:37 np0005539563 conmon[320108]: conmon 7824489267dd2011c852 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e.scope/container/memory.events
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.755226664 +0000 UTC m=+0.134166007 container died 7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d23d8754f5397f8ebedb5f8c68485dc7464dd5a5060720bb0d284e3393b6ea27-merged.mount: Deactivated successfully.
Nov 29 03:13:37 np0005539563 podman[320091]: 2025-11-29 08:13:37.795943256 +0000 UTC m=+0.174882599 container remove 7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:13:37 np0005539563 systemd[1]: libpod-conmon-7824489267dd2011c85269e22767a7e72a6e04539243f545f7e5ad023f71734e.scope: Deactivated successfully.
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.818 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.852 252257 DEBUG nova.compute.manager [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:13:37 np0005539563 podman[320134]: 2025-11-29 08:13:37.959196419 +0000 UTC m=+0.046218653 container create 6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.966 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_requests' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:37 np0005539563 nova_compute[252253]: 2025-11-29 08:13:37.987 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:38 np0005539563 systemd[1]: Started libpod-conmon-6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369.scope.
Nov 29 03:13:38 np0005539563 nova_compute[252253]: 2025-11-29 08:13:38.006 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:38 np0005539563 nova_compute[252253]: 2025-11-29 08:13:38.019 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015a0ded692508c430e0ebc0efb61709ecddb418e0e2d38f9c8cc5c50d9d1b5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015a0ded692508c430e0ebc0efb61709ecddb418e0e2d38f9c8cc5c50d9d1b5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015a0ded692508c430e0ebc0efb61709ecddb418e0e2d38f9c8cc5c50d9d1b5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015a0ded692508c430e0ebc0efb61709ecddb418e0e2d38f9c8cc5c50d9d1b5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:38 np0005539563 podman[320134]: 2025-11-29 08:13:37.938613122 +0000 UTC m=+0.025635386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:38 np0005539563 nova_compute[252253]: 2025-11-29 08:13:38.041 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:13:38 np0005539563 nova_compute[252253]: 2025-11-29 08:13:38.045 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:13:38 np0005539563 podman[320134]: 2025-11-29 08:13:38.046392862 +0000 UTC m=+0.133415116 container init 6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khayyam, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:13:38 np0005539563 podman[320134]: 2025-11-29 08:13:38.054187703 +0000 UTC m=+0.141209937 container start 6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:13:38 np0005539563 podman[320134]: 2025-11-29 08:13:38.058447279 +0000 UTC m=+0.145469513 container attach 6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khayyam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:13:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 477 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 113 op/s
Nov 29 03:13:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:13:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:13:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:38.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:39.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]: [
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:    {
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "available": false,
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "ceph_device": false,
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "lsm_data": {},
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "lvs": [],
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "path": "/dev/sr0",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "rejected_reasons": [
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "Insufficient space (<5GB)",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "Has a FileSystem"
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        ],
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        "sys_api": {
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "actuators": null,
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "device_nodes": "sr0",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "devname": "sr0",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "human_readable_size": "482.00 KB",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "id_bus": "ata",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "model": "QEMU DVD-ROM",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "nr_requests": "2",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "parent": "/dev/sr0",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "partitions": {},
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "path": "/dev/sr0",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "removable": "1",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "rev": "2.5+",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "ro": "0",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "rotational": "1",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "sas_address": "",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "sas_device_handle": "",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "scheduler_mode": "mq-deadline",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "sectors": 0,
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "sectorsize": "2048",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "size": 493568.0,
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "support_discard": "2048",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "type": "disk",
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:            "vendor": "QEMU"
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:        }
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]:    }
Nov 29 03:13:39 np0005539563 practical_khayyam[320151]: ]
Nov 29 03:13:39 np0005539563 systemd[1]: libpod-6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369.scope: Deactivated successfully.
Nov 29 03:13:39 np0005539563 systemd[1]: libpod-6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369.scope: Consumed 1.192s CPU time.
Nov 29 03:13:39 np0005539563 conmon[320151]: conmon 6e16e7cd8b7900d8ad68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369.scope/container/memory.events
Nov 29 03:13:39 np0005539563 podman[320134]: 2025-11-29 08:13:39.261854703 +0000 UTC m=+1.348876947 container died 6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:13:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-015a0ded692508c430e0ebc0efb61709ecddb418e0e2d38f9c8cc5c50d9d1b5c-merged.mount: Deactivated successfully.
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539563 podman[320134]: 2025-11-29 08:13:39.324505211 +0000 UTC m=+1.411527445 container remove 6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:13:39 np0005539563 systemd[1]: libpod-conmon-6e16e7cd8b7900d8ad68fdb76cc700bb1088c342a6ba6ccf8deb62ac89b33369.scope: Deactivated successfully.
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:13:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 691 KiB/s rd, 4.3 MiB/s wr, 136 op/s
Nov 29 03:13:40 np0005539563 kernel: tapec8e7efa-3c (unregistering): left promiscuous mode
Nov 29 03:13:40 np0005539563 NetworkManager[48981]: <info>  [1764404020.3589] device (tapec8e7efa-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:13:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:40Z|00427|binding|INFO|Releasing lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a from this chassis (sb_readonly=0)
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.369 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:40Z|00428|binding|INFO|Setting lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a down in Southbound
Nov 29 03:13:40 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:40Z|00429|binding|INFO|Removing iface tapec8e7efa-3c ovn-installed in OVS
Nov 29 03:13:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.381 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:0d:b5 10.100.0.12'], port_security=['fa:16:3e:69:0d:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5850abe-4884-46af-b00d-61910ac4ba3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=ec8e7efa-3c86-430e-b26a-e5d8d611a64a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.385 158990 INFO neutron.agent.ovn.metadata.agent [-] Port ec8e7efa-3c86-430e-b26a-e5d8d611a64a in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.386 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.387 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f00599-007a-4cc4-b128-d4171898f671]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.388 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore#033[00m
Nov 29 03:13:40 np0005539563 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 29 03:13:40 np0005539563 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006e.scope: Consumed 15.405s CPU time.
Nov 29 03:13:40 np0005539563 systemd-machined[213024]: Machine qemu-49-instance-0000006e terminated.
Nov 29 03:13:40 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [NOTICE]   (319476) : haproxy version is 2.8.14-c23fe91
Nov 29 03:13:40 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [NOTICE]   (319476) : path to executable is /usr/sbin/haproxy
Nov 29 03:13:40 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [WARNING]  (319476) : Exiting Master process...
Nov 29 03:13:40 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [ALERT]    (319476) : Current worker (319478) exited with code 143 (Terminated)
Nov 29 03:13:40 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[319472]: [WARNING]  (319476) : All workers exited. Exiting... (0)
Nov 29 03:13:40 np0005539563 systemd[1]: libpod-bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185.scope: Deactivated successfully.
Nov 29 03:13:40 np0005539563 podman[321592]: 2025-11-29 08:13:40.521246004 +0000 UTC m=+0.043013086 container died bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:13:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185-userdata-shm.mount: Deactivated successfully.
Nov 29 03:13:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3c8a1875f09e15e7786bc2f41f8c7cd2ae5222a37adaf1c4acde2c92e5e3e805-merged.mount: Deactivated successfully.
Nov 29 03:13:40 np0005539563 podman[321592]: 2025-11-29 08:13:40.558083002 +0000 UTC m=+0.079850034 container cleanup bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:13:40 np0005539563 systemd[1]: libpod-conmon-bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185.scope: Deactivated successfully.
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.587 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.591 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539563 podman[321623]: 2025-11-29 08:13:40.63997784 +0000 UTC m=+0.051711191 container remove bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.645 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[40d8cbbd-d3d9-4e08-acce-18bb1b3ae220]: (4, ('Sat Nov 29 08:13:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185)\nbfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185\nSat Nov 29 08:13:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (bfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185)\nbfdb20d1e6b33d014774ccddeff841a40feb452cc0088c98505ee9872df66185\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.648 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b53202d7-6bae-4056-b8f4-5822cb4d3cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.649 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.650 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539563 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.670 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c4de9d49-292a-4900-92bd-f47388f023a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:40.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.693 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[67fba908-fe98-431b-b337-139df3edcf27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.694 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a3be55b6-a974-4304-a3c9-7d2df381ea4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.702 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.709 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb995bb-a91a-4b46-af69-e782169af708]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 695268, 'reachable_time': 30712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321651, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.712 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:13:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:40.712 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[774cb48f-c8c5-4e11-87c5-60d1f48c6ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:40 np0005539563 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.832 252257 DEBUG nova.compute.manager [req-4d1f4ff2-8db9-43c9-87f5-ec080dad6366 req-ce113bdd-b523-41be-8222-ebf5b39e8d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-unplugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.832 252257 DEBUG oslo_concurrency.lockutils [req-4d1f4ff2-8db9-43c9-87f5-ec080dad6366 req-ce113bdd-b523-41be-8222-ebf5b39e8d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.833 252257 DEBUG oslo_concurrency.lockutils [req-4d1f4ff2-8db9-43c9-87f5-ec080dad6366 req-ce113bdd-b523-41be-8222-ebf5b39e8d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.833 252257 DEBUG oslo_concurrency.lockutils [req-4d1f4ff2-8db9-43c9-87f5-ec080dad6366 req-ce113bdd-b523-41be-8222-ebf5b39e8d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.833 252257 DEBUG nova.compute.manager [req-4d1f4ff2-8db9-43c9-87f5-ec080dad6366 req-ce113bdd-b523-41be-8222-ebf5b39e8d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-unplugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:40 np0005539563 nova_compute[252253]: 2025-11-29 08:13:40.833 252257 WARNING nova.compute.manager [req-4d1f4ff2-8db9-43c9-87f5-ec080dad6366 req-ce113bdd-b523-41be-8222-ebf5b39e8d63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received unexpected event network-vif-unplugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.064 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.073 252257 INFO nova.virt.libvirt.driver [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance destroyed successfully.#033[00m
Nov 29 03:13:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.500 252257 INFO nova.compute.manager [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Detaching volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab#033[00m
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.748 252257 INFO nova.virt.block_device [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Attempting to driver detach volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab from mountpoint /dev/vdb#033[00m
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.755 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Attempting to detach device vdb from instance d83d7773-fe1e-4ac9-b90c-74a74180acbe from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.756 252257 DEBUG nova.virt.libvirt.guest [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:13:41 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab">
Nov 29 03:13:41 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:  <serial>57626c0a-e0bf-45ee-90b0-ca7f160cc5ab</serial>
Nov 29 03:13:41 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:13:41 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:13:41 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:13:41 np0005539563 nova_compute[252253]: 2025-11-29 08:13:41.769 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully detached device vdb from instance d83d7773-fe1e-4ac9-b90c-74a74180acbe from the persistent domain config.#033[00m
Nov 29 03:13:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev acc15f71-5771-4b89-b31d-bc9145f44170 does not exist
Nov 29 03:13:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f2e84e84-16f8-4e2a-9f08-8725b24f49b8 does not exist
Nov 29 03:13:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8bcbda5b-ed94-48b7-875b-7cbc8b38d154 does not exist
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:13:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.123 252257 INFO nova.virt.libvirt.driver [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance destroyed successfully.#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.123 252257 DEBUG nova.virt.libvirt.vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044766677',display_name='tempest-ServerActionsTestOtherA-server-413447025',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044766677',id=110,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFrqk4rXvCBZyesNqy6ygZ6Gp5u2dJYASwFcyFUrpnmzFLmX4dmLstV85/UcVOuy/g8aGelmtEAngSltNMIVz+nyyj//ozYBJzauh3XWFgxF3C3yhw63J9BBc9qclV2mQ==',key_name='tempest-keypair-1415150174',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-2sixhnkw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=d83d7773-fe1e-4ac9-b90c-74a74180acbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.124 252257 DEBUG nova.network.os_vif_util [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.124 252257 DEBUG nova.network.os_vif_util [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.125 252257 DEBUG os_vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.128 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8e7efa-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.129 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.133 252257 INFO os_vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c')#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 662 KiB/s rd, 4.3 MiB/s wr, 134 op/s
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.560 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deleting instance files /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe_del#033[00m
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.560 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deletion of /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe_del complete#033[00m
Nov 29 03:13:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:42.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.69362444 +0000 UTC m=+0.044780474 container create 8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heyrovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:13:42 np0005539563 systemd[1]: Started libpod-conmon-8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99.scope.
Nov 29 03:13:42 np0005539563 nova_compute[252253]: 2025-11-29 08:13:42.743 252257 INFO nova.virt.block_device [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Booting with volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab at /dev/vdb#033[00m
Nov 29 03:13:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.677797572 +0000 UTC m=+0.028953636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.785659854 +0000 UTC m=+0.136815908 container init 8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heyrovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.796871728 +0000 UTC m=+0.148027762 container start 8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heyrovsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.799570321 +0000 UTC m=+0.150726395 container attach 8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:42 np0005539563 nifty_heyrovsky[321829]: 167 167
Nov 29 03:13:42 np0005539563 systemd[1]: libpod-8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99.scope: Deactivated successfully.
Nov 29 03:13:42 np0005539563 conmon[321829]: conmon 8b23ec8450a4fbefa622 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99.scope/container/memory.events
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.804292879 +0000 UTC m=+0.155448953 container died 8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heyrovsky, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:13:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2a5e4cd28471202f4e226727d20f31bed8a55fef4267196310f56d55020d3166-merged.mount: Deactivated successfully.
Nov 29 03:13:42 np0005539563 podman[321813]: 2025-11-29 08:13:42.845819725 +0000 UTC m=+0.196975769 container remove 8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:13:42 np0005539563 systemd[1]: libpod-conmon-8b23ec8450a4fbefa62273f3b2e83a742687a649dce67a8d309e4dadc28e6c99.scope: Deactivated successfully.
Nov 29 03:13:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.046001108 +0000 UTC m=+0.055443434 container create c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:13:43 np0005539563 systemd[1]: Started libpod-conmon-c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009.scope.
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.086 252257 DEBUG nova.compute.manager [req-938e9c66-1cd4-4c9c-b298-e7daefe5b5d4 req-b5f8bcbe-6c87-4c23-92c0-2aeb0389e6b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.087 252257 DEBUG oslo_concurrency.lockutils [req-938e9c66-1cd4-4c9c-b298-e7daefe5b5d4 req-b5f8bcbe-6c87-4c23-92c0-2aeb0389e6b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.087 252257 DEBUG oslo_concurrency.lockutils [req-938e9c66-1cd4-4c9c-b298-e7daefe5b5d4 req-b5f8bcbe-6c87-4c23-92c0-2aeb0389e6b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.087 252257 DEBUG oslo_concurrency.lockutils [req-938e9c66-1cd4-4c9c-b298-e7daefe5b5d4 req-b5f8bcbe-6c87-4c23-92c0-2aeb0389e6b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.087 252257 DEBUG nova.compute.manager [req-938e9c66-1cd4-4c9c-b298-e7daefe5b5d4 req-b5f8bcbe-6c87-4c23-92c0-2aeb0389e6b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.088 252257 WARNING nova.compute.manager [req-938e9c66-1cd4-4c9c-b298-e7daefe5b5d4 req-b5f8bcbe-6c87-4c23-92c0-2aeb0389e6b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received unexpected event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with vm_state active and task_state rebuild_block_device_mapping.#033[00m
Nov 29 03:13:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.019518191 +0000 UTC m=+0.028960587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b707bc1f8588af31a6c35dc29141181ce6d1b64b754ed5e89d9f0e62816024d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b707bc1f8588af31a6c35dc29141181ce6d1b64b754ed5e89d9f0e62816024d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b707bc1f8588af31a6c35dc29141181ce6d1b64b754ed5e89d9f0e62816024d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b707bc1f8588af31a6c35dc29141181ce6d1b64b754ed5e89d9f0e62816024d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b707bc1f8588af31a6c35dc29141181ce6d1b64b754ed5e89d9f0e62816024d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.127 252257 DEBUG os_brick.utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.129 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.134615589 +0000 UTC m=+0.144057915 container init c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.140454917 +0000 UTC m=+0.149897223 container start c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.140 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.140 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[b54df396-47b9-477d-a86b-032f9e8fcd50]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.141 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.144545417 +0000 UTC m=+0.153987723 container attach c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.149 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.150 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4d27e1-3871-49f4-8231-043ef4f20e95]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.151 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.159 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.159 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[049287b2-475e-484e-84f3-064d7611b99d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.160 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[d870e679-ea4d-4deb-9f5b-be4d58e68af0]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.161 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.188 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.190 252257 DEBUG os_brick.initiator.connectors.lightos [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.191 252257 DEBUG os_brick.initiator.connectors.lightos [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.191 252257 DEBUG os_brick.initiator.connectors.lightos [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.191 252257 DEBUG os_brick.utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:13:43 np0005539563 nova_compute[252253]: 2025-11-29 08:13:43.192 252257 DEBUG nova.virt.block_device [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating existing volume attachment record: 0cdc8dee-f076-425a-a988-c08db480dfc0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:13:43 np0005539563 sweet_herschel[321870]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:13:43 np0005539563 sweet_herschel[321870]: --> relative data size: 1.0
Nov 29 03:13:43 np0005539563 sweet_herschel[321870]: --> All data devices are unavailable
Nov 29 03:13:43 np0005539563 systemd[1]: libpod-c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009.scope: Deactivated successfully.
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.916863053 +0000 UTC m=+0.926305359 container died c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:13:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b707bc1f8588af31a6c35dc29141181ce6d1b64b754ed5e89d9f0e62816024d5-merged.mount: Deactivated successfully.
Nov 29 03:13:43 np0005539563 podman[321853]: 2025-11-29 08:13:43.966986581 +0000 UTC m=+0.976428887 container remove c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:13:43 np0005539563 systemd[1]: libpod-conmon-c1950e15d9467d12cfc4f5a5e153af653f2aeb9d80ebec4ee55e401b50a69009.scope: Deactivated successfully.
Nov 29 03:13:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 505 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 662 KiB/s rd, 4.3 MiB/s wr, 135 op/s
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.54902387 +0000 UTC m=+0.048825964 container create a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:13:44 np0005539563 systemd[1]: Started libpod-conmon-a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc.scope.
Nov 29 03:13:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.521893895 +0000 UTC m=+0.021696009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.626453737 +0000 UTC m=+0.126255851 container init a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.631846564 +0000 UTC m=+0.131648658 container start a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.634774613 +0000 UTC m=+0.134576707 container attach a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:13:44 np0005539563 trusting_jang[322063]: 167 167
Nov 29 03:13:44 np0005539563 systemd[1]: libpod-a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc.scope: Deactivated successfully.
Nov 29 03:13:44 np0005539563 conmon[322063]: conmon a8ace4d08f411020cc09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc.scope/container/memory.events
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.640635342 +0000 UTC m=+0.140437466 container died a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-17ebbe74ddf9544ae0a4f9e878482cf7344e15b5ad7395737ed091c844669a85-merged.mount: Deactivated successfully.
Nov 29 03:13:44 np0005539563 podman[322047]: 2025-11-29 08:13:44.676043181 +0000 UTC m=+0.175845275 container remove a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:13:44 np0005539563 systemd[1]: libpod-conmon-a8ace4d08f411020cc09d284b6aa0a25c982dc31dc257b1d06a550dbaa2ae9cc.scope: Deactivated successfully.
Nov 29 03:13:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:44.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.825 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.825 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Creating image(s)#033[00m
Nov 29 03:13:44 np0005539563 podman[322085]: 2025-11-29 08:13:44.832691896 +0000 UTC m=+0.037475346 container create b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.859 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539563 systemd[1]: Started libpod-conmon-b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d.scope.
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.886 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef60bb44d291f4f3d34384f5cb410bfa9ddbb843a3380619a678f5c2bd0ca61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef60bb44d291f4f3d34384f5cb410bfa9ddbb843a3380619a678f5c2bd0ca61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef60bb44d291f4f3d34384f5cb410bfa9ddbb843a3380619a678f5c2bd0ca61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef60bb44d291f4f3d34384f5cb410bfa9ddbb843a3380619a678f5c2bd0ca61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:44 np0005539563 podman[322085]: 2025-11-29 08:13:44.902477076 +0000 UTC m=+0.107260556 container init b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:13:44 np0005539563 podman[322085]: 2025-11-29 08:13:44.815966452 +0000 UTC m=+0.020749912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:44 np0005539563 podman[322085]: 2025-11-29 08:13:44.911528321 +0000 UTC m=+0.116311781 container start b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:13:44 np0005539563 podman[322085]: 2025-11-29 08:13:44.915793177 +0000 UTC m=+0.120576637 container attach b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.921 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.926 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.989 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.990 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.990 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:44 np0005539563 nova_compute[252253]: 2025-11-29 08:13:44.991 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.022 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.025 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:45.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.324 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.432 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] resizing rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.557 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.557 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Ensure instance console log exists: /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.558 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.558 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.558 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.562 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Start _get_guest_xml network_info=[{"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '57626c0a-e0bf-45ee-90b0-ca7f160cc5ab', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe', 'attached_at': '', 'detached_at': '', 'volume_id': '57626c0a-e0bf-45ee-90b0-ca7f160cc5ab', 'serial': '57626c0a-e0bf-45ee-90b0-ca7f160cc5ab'}, 'attachment_id': '0cdc8dee-f076-425a-a988-c08db480dfc0', 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.568 252257 WARNING nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.575 252257 DEBUG nova.virt.libvirt.host [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.576 252257 DEBUG nova.virt.libvirt.host [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.581 252257 DEBUG nova.virt.libvirt.host [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.582 252257 DEBUG nova.virt.libvirt.host [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.583 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.583 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.584 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.584 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.584 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.584 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.584 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.585 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.585 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.585 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.585 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.586 252257 DEBUG nova.virt.hardware [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.586 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]: {
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:    "0": [
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:        {
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "devices": [
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "/dev/loop3"
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            ],
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "lv_name": "ceph_lv0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "lv_size": "7511998464",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "name": "ceph_lv0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "tags": {
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.cluster_name": "ceph",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.crush_device_class": "",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.encrypted": "0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.osd_id": "0",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.type": "block",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:                "ceph.vdo": "0"
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            },
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "type": "block",
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:            "vg_name": "ceph_vg0"
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:        }
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]:    ]
Nov 29 03:13:45 np0005539563 compassionate_hofstadter[322125]: }
Nov 29 03:13:45 np0005539563 systemd[1]: libpod-b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d.scope: Deactivated successfully.
Nov 29 03:13:45 np0005539563 podman[322085]: 2025-11-29 08:13:45.739578747 +0000 UTC m=+0.944362247 container died b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:13:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cef60bb44d291f4f3d34384f5cb410bfa9ddbb843a3380619a678f5c2bd0ca61-merged.mount: Deactivated successfully.
Nov 29 03:13:45 np0005539563 podman[322085]: 2025-11-29 08:13:45.795859521 +0000 UTC m=+1.000642991 container remove b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:13:45 np0005539563 systemd[1]: libpod-conmon-b34c7ebeb1098c06014eb11b90780d020ca9b82c72e89d99740c71e62c8dbf3d.scope: Deactivated successfully.
Nov 29 03:13:45 np0005539563 nova_compute[252253]: 2025-11-29 08:13:45.803 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 440 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 4.3 MiB/s wr, 164 op/s
Nov 29 03:13:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796266973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.267 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.296 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.301 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.536792415 +0000 UTC m=+0.044502067 container create a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:46 np0005539563 systemd[1]: Started libpod-conmon-a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4.scope.
Nov 29 03:13:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.516191237 +0000 UTC m=+0.023900909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.615469147 +0000 UTC m=+0.123178789 container init a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.62810342 +0000 UTC m=+0.135813112 container start a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.633124486 +0000 UTC m=+0.140834258 container attach a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:13:46 np0005539563 quirky_fermat[322505]: 167 167
Nov 29 03:13:46 np0005539563 systemd[1]: libpod-a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4.scope: Deactivated successfully.
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.635264834 +0000 UTC m=+0.142974496 container died a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:13:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7b4d2ab84b419b04956dc64ef8234d4ee1885f6bca78ab492ff3167212c2ff6e-merged.mount: Deactivated successfully.
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:13:46 np0005539563 podman[322489]: 2025-11-29 08:13:46.685295629 +0000 UTC m=+0.193005311 container remove a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 03:13:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:46.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243463392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:46 np0005539563 systemd[1]: libpod-conmon-a75c3c7f197601982dc8533e1ef91f32d477ec0846c7ba61521cea043cd64dd4.scope: Deactivated successfully.
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.738 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.778 252257 DEBUG nova.virt.libvirt.vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044766677',display_name='tempest-ServerActionsTestOtherA-server-413447025',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044766677',id=110,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFrqk4rXvCBZyesNqy6ygZ6Gp5u2dJYASwFcyFUrpnmzFLmX4dmLstV85/UcVOuy/g8aGelmtEAngSltNMIVz+nyyj//ozYBJzauh3XWFgxF3C3yhw63J9BBc9qclV2mQ==',key_name='tempest-keypair-1415150174',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-2sixhnkw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=d83d7773-fe1e-4ac9-b90c-74a74180acbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.779 252257 DEBUG nova.network.os_vif_util [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.780 252257 DEBUG nova.network.os_vif_util [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.783 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <uuid>d83d7773-fe1e-4ac9-b90c-74a74180acbe</uuid>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <name>instance-0000006e</name>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerActionsTestOtherA-server-413447025</nova:name>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:13:45</nova:creationTime>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <nova:port uuid="ec8e7efa-3c86-430e-b26a-e5d8d611a64a">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <entry name="serial">d83d7773-fe1e-4ac9-b90c-74a74180acbe</entry>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <entry name="uuid">d83d7773-fe1e-4ac9-b90c-74a74180acbe</entry>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <serial>57626c0a-e0bf-45ee-90b0-ca7f160cc5ab</serial>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:69:0d:b5"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <target dev="tapec8e7efa-3c"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/console.log" append="off"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:13:46 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:13:46 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:13:46 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:13:46 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
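The domain XML that `_get_guest_xml` dumps above can be inspected programmatically when debugging disk wiring. A minimal sketch, assuming only the stdlib `xml.etree.ElementTree` and using a trimmed copy of the `<devices>` section from the log (the `rbd_disks` helper name is mine, not nova's):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <devices> section from the domain XML logged above.
DEVICES_XML = """
<devices>
  <disk type="network" device="disk">
    <driver type="raw" cache="none"/>
    <source protocol="rbd" name="vms/d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk"/>
    <target dev="vda" bus="virtio"/>
  </disk>
  <disk type="network" device="disk">
    <driver name="qemu" type="raw" cache="none" discard="unmap"/>
    <source protocol="rbd" name="volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab"/>
    <target dev="vdb" bus="virtio"/>
  </disk>
</devices>
"""

def rbd_disks(devices_xml: str) -> dict:
    """Map each guest target dev (e.g. 'vda') to its RBD pool/image name."""
    root = ET.fromstring(devices_xml)
    out = {}
    for disk in root.iter("disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            out[tgt.get("dev")] = src.get("name")
    return out

print(rbd_disks(DEVICES_XML))
# {'vda': 'vms/d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk',
#  'vdb': 'volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab'}
```

Against the full XML above this recovers the same mapping nova configured: the root disk on `vda` in the `vms` pool and the Cinder volume on `vdb` in the `volumes` pool.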
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.783 252257 DEBUG nova.virt.libvirt.vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044766677',display_name='tempest-ServerActionsTestOtherA-server-413447025',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044766677',id=110,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFrqk4rXvCBZyesNqy6ygZ6Gp5u2dJYASwFcyFUrpnmzFLmX4dmLstV85/UcVOuy/g8aGelmtEAngSltNMIVz+nyyj//ozYBJzauh3XWFgxF3C3yhw63J9BBc9qclV2mQ==',key_name='tempest-keypair-1415150174',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-2sixhnkw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=d83d7773-fe1e-4ac9-b90c-74a74180acbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.783 252257 DEBUG nova.network.os_vif_util [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.784 252257 DEBUG nova.network.os_vif_util [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.784 252257 DEBUG os_vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.785 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.785 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.785 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.788 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.788 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8e7efa-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.789 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec8e7efa-3c, col_values=(('external_ids', {'iface-id': 'ec8e7efa-3c86-430e-b26a-e5d8d611a64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:0d:b5', 'vm-uuid': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:46 np0005539563 NetworkManager[48981]: <info>  [1764404026.7914] manager: (tapec8e7efa-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.797 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.799 252257 INFO os_vif [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c')#033[00m
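The ovsdbapp transaction above (`AddPortCommand` plus `DbSetCommand` on the `Interface` table) has a direct `ovs-vsctl` equivalent, which is handy when reproducing the plug by hand. A sketch that builds that argv; the values are copied from the log, and the `plug_cmd` helper itself is illustrative, not part of os-vif:

```python
import shlex

def plug_cmd(bridge: str, tap: str, iface_id: str, mac: str, vm_uuid: str) -> list:
    """ovs-vsctl equivalent of the AddPortCommand + DbSetCommand transaction."""
    ext = {
        "iface-id": iface_id,        # Neutron port UUID
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,          # Nova instance UUID
    }
    argv = ["ovs-vsctl", "--may-exist", "add-port", bridge, tap,
            "--", "set", "Interface", tap]
    argv += [f"external_ids:{k}={shlex.quote(v)}" for k, v in ext.items()]
    return argv

print(" ".join(plug_cmd(
    "br-int", "tapec8e7efa-3c",
    "ec8e7efa-3c86-430e-b26a-e5d8d611a64a",
    "fa:16:3e:69:0d:b5",
    "d83d7773-fe1e-4ac9-b90c-74a74180acbe",
)))
```

The `iface-id` external_id is what ovn-controller matches against the logical port, which is why the `Claiming lport` records appear shortly after this transaction commits.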
Nov 29 03:13:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.850 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.850 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.850 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.851 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:69:0d:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.851 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Using config drive#033[00m
Nov 29 03:13:46 np0005539563 podman[322534]: 2025-11-29 08:13:46.852610992 +0000 UTC m=+0.041393693 container create 40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.884 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:46 np0005539563 systemd[1]: Started libpod-conmon-40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e.scope.
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.916 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:46 np0005539563 podman[322534]: 2025-11-29 08:13:46.834236525 +0000 UTC m=+0.023019246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e42ecb7dd9a126a674cf69de150b471168ceef35a57f8506d356f1efa1c86fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e42ecb7dd9a126a674cf69de150b471168ceef35a57f8506d356f1efa1c86fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e42ecb7dd9a126a674cf69de150b471168ceef35a57f8506d356f1efa1c86fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e42ecb7dd9a126a674cf69de150b471168ceef35a57f8506d356f1efa1c86fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539563 podman[322534]: 2025-11-29 08:13:46.960245099 +0000 UTC m=+0.149027890 container init 40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ardinghelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:13:46 np0005539563 podman[322534]: 2025-11-29 08:13:46.967672369 +0000 UTC m=+0.156455060 container start 40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ardinghelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:13:46 np0005539563 podman[322534]: 2025-11-29 08:13:46.970758823 +0000 UTC m=+0.159541544 container attach 40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:13:46 np0005539563 nova_compute[252253]: 2025-11-29 08:13:46.994 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'keypairs' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.167 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.494 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Creating config drive at /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.502 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuiu6gjwu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.656 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuiu6gjwu" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.688 252257 DEBUG nova.storage.rbd_utils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.691 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]: {
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:        "osd_id": 0,
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:        "type": "bluestore"
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]:    }
Nov 29 03:13:47 np0005539563 reverent_ardinghelli[322570]: }
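The JSON the short-lived cephadm container prints above is a ceph-volume-style OSD listing keyed by OSD UUID. When scripting against such output it is often easier to re-key it by `osd_id`; a sketch, with the structure copied verbatim from the log (the `osds_by_id` helper is mine):

```python
import json

# Structure copied from the container output above (keyed by OSD UUID).
LISTING = json.loads("""
{
  "975ae8ee-a376-4084-87fd-232acabcaa54": {
    "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
    "type": "bluestore"
  }
}
""")

def osds_by_id(listing: dict) -> dict:
    """Re-key the per-UUID OSD listing by integer osd_id for easier lookup."""
    return {osd["osd_id"]: osd for osd in listing.values()}

print(osds_by_id(LISTING)[0]["device"])  # /dev/mapper/ceph_vg0-ceph_lv0
```

Note that `ceph_fsid` here matches the `uuid` in the libvirt `<secret type="ceph" …>` elements above: the compute node's RBD disks and this OSD belong to the same cluster.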
Nov 29 03:13:47 np0005539563 systemd[1]: libpod-40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e.scope: Deactivated successfully.
Nov 29 03:13:47 np0005539563 podman[322534]: 2025-11-29 08:13:47.852966855 +0000 UTC m=+1.041749586 container died 40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4e42ecb7dd9a126a674cf69de150b471168ceef35a57f8506d356f1efa1c86fc-merged.mount: Deactivated successfully.
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.891 252257 DEBUG oslo_concurrency.processutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config d83d7773-fe1e-4ac9-b90c-74a74180acbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.892 252257 INFO nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deleting local config drive /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe/disk.config because it was imported into RBD.#033[00m
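The config-drive sequence in the preceding records — build an ISO9660 image with `mkisofs`, `rbd import` it into the `vms` pool, then delete the local copy — can be sketched as a helper that assembles the two argv lists. Paths, flags, and the image name are taken from the log; the wrapper itself (and the `/tmp/metadata-dir` staging path, for which nova actually uses a throwaway tempdir) is illustrative:

```python
def config_drive_cmds(instance_uuid: str, pool: str = "vms",
                      base: str = "/var/lib/nova/instances") -> tuple:
    """Return the two argv lists the log shows nova running for the
    config drive: mkisofs to build it, then rbd import into Ceph."""
    iso = f"{base}/{instance_uuid}/disk.config"
    mkisofs = ["/usr/bin/mkisofs", "-o", iso,
               "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
               "-quiet", "-J", "-r", "-V", "config-2",
               "/tmp/metadata-dir"]  # hypothetical staging dir
    rbd_import = ["rbd", "import", "--pool", pool, iso,
                  f"{instance_uuid}_disk.config", "--image-format=2",
                  "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    return mkisofs, rbd_import

mk, imp = config_drive_cmds("d83d7773-fe1e-4ac9-b90c-74a74180acbe")
print(" ".join(mk))
print(" ".join(imp))
```

The `-V config-2` volume label is what cloud-init (or cirros's init scripts) looks for inside the guest, and the imported image is the `…_disk.config` RBD object attached as the SATA cdrom in the domain XML above.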
Nov 29 03:13:47 np0005539563 podman[322534]: 2025-11-29 08:13:47.909660411 +0000 UTC m=+1.098443122 container remove 40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:13:47 np0005539563 systemd[1]: libpod-conmon-40f0e51aa0d19fe455819b23d46baadde3fbc7175d6a7243eb05909de0c9612e.scope: Deactivated successfully.
Nov 29 03:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:13:47 np0005539563 kernel: tapec8e7efa-3c: entered promiscuous mode
Nov 29 03:13:47 np0005539563 NetworkManager[48981]: <info>  [1764404027.9611] manager: (tapec8e7efa-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.962 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:47Z|00430|binding|INFO|Claiming lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a for this chassis.
Nov 29 03:13:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:47Z|00431|binding|INFO|ec8e7efa-3c86-430e-b26a-e5d8d611a64a: Claiming fa:16:3e:69:0d:b5 10.100.0.12
Nov 29 03:13:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:13:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:47.974 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:0d:b5 10.100.0.12'], port_security=['fa:16:3e:69:0d:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f5850abe-4884-46af-b00d-61910ac4ba3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=ec8e7efa-3c86-430e-b26a-e5d8d611a64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:47.977 158990 INFO neutron.agent.ovn.metadata.agent [-] Port ec8e7efa-3c86-430e-b26a-e5d8d611a64a in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:13:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:47.979 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:47Z|00432|binding|INFO|Setting lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a ovn-installed in OVS
Nov 29 03:13:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:47Z|00433|binding|INFO|Setting lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a up in Southbound
Nov 29 03:13:47 np0005539563 nova_compute[252253]: 2025-11-29 08:13:47.982 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0b13ea4e-8c6d-44b7-b98f-0cc272a92fbc does not exist
Nov 29 03:13:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 23bff362-38c8-4f6b-a18e-9174e7888c38 does not exist
Nov 29 03:13:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 077a750d-98a3-488d-8d63-65c19926ff87 does not exist
Nov 29 03:13:47 np0005539563 systemd-udevd[322657]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:48 np0005539563 systemd-machined[213024]: New machine qemu-50-instance-0000006e.
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.000 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1adbfeae-bbf0-47ee-9bc5-08fa83a27cdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.001 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.004 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.004 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fa091328-18dd-460c-8044-d30d52f41f67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.005 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eb25d73a-4191-4c38-b29b-b240f4bbf2de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 systemd[1]: Started Virtual Machine qemu-50-instance-0000006e.
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.016 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[499c01b2-938a-425c-aa0f-197eb80e8780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 NetworkManager[48981]: <info>  [1764404028.0250] device (tapec8e7efa-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:48 np0005539563 NetworkManager[48981]: <info>  [1764404028.0260] device (tapec8e7efa-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.033 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eabb490d-ccad-4d83-820d-6b514fb04769]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.064 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[483696a6-df18-4c19-8b94-502935bc69e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 systemd-udevd[322674]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.069 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d262cd58-ec87-434d-acb8-5dfdbe106a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 NetworkManager[48981]: <info>  [1764404028.0715] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.104 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9048b8-9b5c-43ee-84fb-ed3c27064a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.108 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4d265550-7415-4572-8ee4-50c3249726bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:13:48 np0005539563 NetworkManager[48981]: <info>  [1764404028.1380] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.146 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4057257b-edb4-4e02-9736-6dc1d1f7c060]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.172 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d1ec19-174c-4966-a596-6f2ccd9e5df1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699591, 'reachable_time': 19773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322739, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.190 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6608a4db-7ed0-44c6-b7dc-afc06ca4afd6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699591, 'tstamp': 699591}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322740, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 456 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 676 KiB/s rd, 5.0 MiB/s wr, 163 op/s
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.215 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[faaf58e8-88c9-4e7d-9ce5-775cee14387c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699591, 'reachable_time': 19773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322741, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.248 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8be5e0c8-1156-4c7d-aa73-5d291fe05893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.322 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[55b6dc0b-7ad4-4515-9725-7bef07e5d989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.324 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.324 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.324 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.327 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:48 np0005539563 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.331 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:48 np0005539563 ovn_controller[148841]: 2025-11-29T08:13:48Z|00434|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.333 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:48 np0005539563 NetworkManager[48981]: <info>  [1764404028.3342] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.334 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.336 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.345 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f2357353-2abf-42e7-bbdc-abd87100d31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.345 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:13:48.346 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.355 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.684 252257 DEBUG nova.compute.manager [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.685 252257 DEBUG oslo_concurrency.lockutils [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.685 252257 DEBUG oslo_concurrency.lockutils [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.685 252257 DEBUG oslo_concurrency.lockutils [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.686 252257 DEBUG nova.compute.manager [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.686 252257 WARNING nova.compute.manager [req-5b257262-0d2b-4c91-80a2-546b0bbb6a74 req-f4e59f07-d931-40c9-9ef4-67f38a7b9511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received unexpected event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:13:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:48 np0005539563 podman[322829]: 2025-11-29 08:13:48.745796345 +0000 UTC m=+0.055729461 container create eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:13:48 np0005539563 systemd[1]: Started libpod-conmon-eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984.scope.
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.780 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for d83d7773-fe1e-4ac9-b90c-74a74180acbe due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.782 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404028.7791712, d83d7773-fe1e-4ac9-b90c-74a74180acbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.782 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.786 252257 DEBUG nova.compute.manager [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.786 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.792 252257 INFO nova.virt.libvirt.driver [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance spawned successfully.#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.792 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:13:48 np0005539563 podman[322829]: 2025-11-29 08:13:48.713942743 +0000 UTC m=+0.023875899 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.808 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.811 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74bfb4925027ca9540b09683d1b18d0d6b29bce852e7eb2744fb0c0e85761465/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:48 np0005539563 podman[322829]: 2025-11-29 08:13:48.840123471 +0000 UTC m=+0.150056607 container init eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:13:48 np0005539563 podman[322829]: 2025-11-29 08:13:48.846211296 +0000 UTC m=+0.156144402 container start eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:13:48 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [NOTICE]   (322852) : New worker (322854) forked
Nov 29 03:13:48 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [NOTICE]   (322852) : Loading success.
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.885 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.886 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.886 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.886 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.887 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:48 np0005539563 nova_compute[252253]: 2025-11-29 08:13:48.887 252257 DEBUG nova.virt.libvirt.driver [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.021 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.021 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404028.7793665, d83d7773-fe1e-4ac9-b90c-74a74180acbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.022 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.068 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.071 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.100 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.124 252257 DEBUG nova.compute.manager [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:49.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.182 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.182 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.182 252257 DEBUG nova.objects.instance [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.248 252257 DEBUG oslo_concurrency.lockutils [None req-daa934d2-3efd-4046-9824-53d8dc0cb069 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.736 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.737 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.737 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.737 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:13:49 np0005539563 nova_compute[252253]: 2025-11-29 08:13:49.737 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131043082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.179 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 888 KiB/s rd, 4.4 MiB/s wr, 164 op/s
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.299 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.299 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.304 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.304 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.304 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.476 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.477 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4075MB free_disk=20.796966552734375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.478 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.478 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.566 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98453ec7-fbda-42ae-8624-8aa5921fd634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.567 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance d83d7773-fe1e-4ac9-b90c-74a74180acbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.567 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.567 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:13:50 np0005539563 nova_compute[252253]: 2025-11-29 08:13:50.662 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:50.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587617492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.089 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.094 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:51.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.245 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.317 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.318 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.957 252257 DEBUG nova.compute.manager [req-3f520148-1c25-4c44-94d0-7116d3337227 req-873584e0-8e47-46b7-b3a8-99c25607a62a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.958 252257 DEBUG oslo_concurrency.lockutils [req-3f520148-1c25-4c44-94d0-7116d3337227 req-873584e0-8e47-46b7-b3a8-99c25607a62a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.958 252257 DEBUG oslo_concurrency.lockutils [req-3f520148-1c25-4c44-94d0-7116d3337227 req-873584e0-8e47-46b7-b3a8-99c25607a62a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.959 252257 DEBUG oslo_concurrency.lockutils [req-3f520148-1c25-4c44-94d0-7116d3337227 req-873584e0-8e47-46b7-b3a8-99c25607a62a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.959 252257 DEBUG nova.compute.manager [req-3f520148-1c25-4c44-94d0-7116d3337227 req-873584e0-8e47-46b7-b3a8-99c25607a62a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:13:51 np0005539563 nova_compute[252253]: 2025-11-29 08:13:51.960 252257 WARNING nova.compute.manager [req-3f520148-1c25-4c44-94d0-7116d3337227 req-873584e0-8e47-46b7-b3a8-99c25607a62a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received unexpected event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:13:52 np0005539563 nova_compute[252253]: 2025-11-29 08:13:52.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Nov 29 03:13:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 29 03:13:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 03:13:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 29 03:13:52 np0005539563 nova_compute[252253]: 2025-11-29 08:13:52.318 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:13:52 np0005539563 nova_compute[252253]: 2025-11-29 08:13:52.319 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:13:52 np0005539563 nova_compute[252253]: 2025-11-29 08:13:52.319 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:13:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:53 np0005539563 nova_compute[252253]: 2025-11-29 08:13:53.336 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:53 np0005539563 nova_compute[252253]: 2025-11-29 08:13:53.336 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:13:53 np0005539563 nova_compute[252253]: 2025-11-29 08:13:53.338 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:13:53 np0005539563 nova_compute[252253]: 2025-11-29 08:13:53.339 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98453ec7-fbda-42ae-8624-8aa5921fd634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:53 np0005539563 podman[322962]: 2025-11-29 08:13:53.524067407 +0000 UTC m=+0.066904604 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 03:13:53 np0005539563 podman[322961]: 2025-11-29 08:13:53.531886508 +0000 UTC m=+0.076613597 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:13:53 np0005539563 podman[322963]: 2025-11-29 08:13:53.601457383 +0000 UTC m=+0.135512272 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:13:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 29 03:13:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:54.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.432 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updating instance_info_cache with network_info: [{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.463 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.463 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.464 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.464 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.464 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:13:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:13:56 np0005539563 nova_compute[252253]: 2025-11-29 08:13:56.797 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:57.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.178 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.727 252257 DEBUG oslo_concurrency.lockutils [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.728 252257 DEBUG oslo_concurrency.lockutils [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.751 252257 INFO nova.compute.manager [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Detaching volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.973 252257 INFO nova.virt.block_device [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Attempting to driver detach volume 57626c0a-e0bf-45ee-90b0-ca7f160cc5ab from mountpoint /dev/vdb
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.982 252257 DEBUG nova.virt.libvirt.driver [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Attempting to detach device vdb from instance d83d7773-fe1e-4ac9-b90c-74a74180acbe from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.983 252257 DEBUG nova.virt.libvirt.guest [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab">
Nov 29 03:13:57 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <serial>57626c0a-e0bf-45ee-90b0-ca7f160cc5ab</serial>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:13:57 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.992 252257 INFO nova.virt.libvirt.driver [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully detached device vdb from instance d83d7773-fe1e-4ac9-b90c-74a74180acbe from the persistent domain config.
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.993 252257 DEBUG nova.virt.libvirt.driver [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d83d7773-fe1e-4ac9-b90c-74a74180acbe from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:13:57 np0005539563 nova_compute[252253]: 2025-11-29 08:13:57.993 252257 DEBUG nova.virt.libvirt.guest [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-57626c0a-e0bf-45ee-90b0-ca7f160cc5ab">
Nov 29 03:13:57 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <serial>57626c0a-e0bf-45ee-90b0-ca7f160cc5ab</serial>
Nov 29 03:13:57 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:13:57 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:13:57 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:13:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 162 op/s
Nov 29 03:13:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:58 np0005539563 nova_compute[252253]: 2025-11-29 08:13:58.845 252257 DEBUG nova.compute.manager [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:13:58 np0005539563 nova_compute[252253]: 2025-11-29 08:13:58.942 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:58 np0005539563 nova_compute[252253]: 2025-11-29 08:13:58.942 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:58 np0005539563 nova_compute[252253]: 2025-11-29 08:13:58.976 252257 DEBUG nova.objects.instance [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.003 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.004 252257 INFO nova.compute.claims [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.005 252257 DEBUG nova.objects.instance [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.031 252257 DEBUG nova.objects.instance [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.106 252257 INFO nova.compute.resource_tracker [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating resource usage from migration 2909cc6b-3d8b-4f0b-bc0b-f0caf4f98d5f
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.106 252257 DEBUG nova.compute.resource_tracker [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Starting to track incoming migration 2909cc6b-3d8b-4f0b-bc0b-f0caf4f98d5f with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 03:13:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:13:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.267 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.321 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404039.3199847, d83d7773-fe1e-4ac9-b90c-74a74180acbe => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.322 252257 DEBUG nova.virt.libvirt.driver [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d83d7773-fe1e-4ac9-b90c-74a74180acbe _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.326 252257 INFO nova.virt.libvirt.driver [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully detached device vdb from instance d83d7773-fe1e-4ac9-b90c-74a74180acbe from the live domain config.
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.606 252257 DEBUG nova.objects.instance [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'flavor' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.675 252257 DEBUG oslo_concurrency.lockutils [None req-71861b97-aefa-438f-b543-3e8ca1c6a943 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395582412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.740 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.748 252257 DEBUG nova.compute.provider_tree [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.775 252257 DEBUG nova.scheduler.client.report [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.818 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:59 np0005539563 nova_compute[252253]: 2025-11-29 08:13:59.819 252257 INFO nova.compute.manager [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Migrating
Nov 29 03:14:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 KiB/s wr, 166 op/s
Nov 29 03:14:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 29 03:14:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 03:14:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 29 03:14:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:00Z|00435|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.715 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.715 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.716 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.716 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.716 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.718 252257 INFO nova.compute.manager [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Terminating instance
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.719 252257 DEBUG nova.compute.manager [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:14:00 np0005539563 kernel: tapec8e7efa-3c (unregistering): left promiscuous mode
Nov 29 03:14:00 np0005539563 NetworkManager[48981]: <info>  [1764404040.7596] device (tapec8e7efa-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.767 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:00Z|00436|binding|INFO|Releasing lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a from this chassis (sb_readonly=0)
Nov 29 03:14:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:00Z|00437|binding|INFO|Setting lport ec8e7efa-3c86-430e-b26a-e5d8d611a64a down in Southbound
Nov 29 03:14:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:00Z|00438|binding|INFO|Removing iface tapec8e7efa-3c ovn-installed in OVS
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.773 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:00.785 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:0d:b5 10.100.0.12'], port_security=['fa:16:3e:69:0d:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd83d7773-fe1e-4ac9-b90c-74a74180acbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f5850abe-4884-46af-b00d-61910ac4ba3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=ec8e7efa-3c86-430e-b26a-e5d8d611a64a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:00.787 158990 INFO neutron.agent.ovn.metadata.agent [-] Port ec8e7efa-3c86-430e-b26a-e5d8d611a64a in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:14:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:00.788 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:00.790 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb1d5a8-810f-4751-b5e3-a9e8c3b5fbef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:00.790 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.791 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 29 03:14:00 np0005539563 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006e.scope: Consumed 12.723s CPU time.
Nov 29 03:14:00 np0005539563 systemd-machined[213024]: Machine qemu-50-instance-0000006e terminated.
Nov 29 03:14:00 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [NOTICE]   (322852) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:00 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [NOTICE]   (322852) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:00 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [WARNING]  (322852) : Exiting Master process...
Nov 29 03:14:00 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [WARNING]  (322852) : Exiting Master process...
Nov 29 03:14:00 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [ALERT]    (322852) : Current worker (322854) exited with code 143 (Terminated)
Nov 29 03:14:00 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[322848]: [WARNING]  (322852) : All workers exited. Exiting... (0)
Nov 29 03:14:00 np0005539563 systemd[1]: libpod-eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984.scope: Deactivated successfully.
Nov 29 03:14:00 np0005539563 podman[323071]: 2025-11-29 08:14:00.907558322 +0000 UTC m=+0.039464661 container died eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:14:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-74bfb4925027ca9540b09683d1b18d0d6b29bce852e7eb2744fb0c0e85761465-merged.mount: Deactivated successfully.
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.943 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.952 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 podman[323071]: 2025-11-29 08:14:00.955081229 +0000 UTC m=+0.086987568 container cleanup eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.955 252257 INFO nova.virt.libvirt.driver [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Instance destroyed successfully.#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.955 252257 DEBUG nova.objects.instance [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid d83d7773-fe1e-4ac9-b90c-74a74180acbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:00 np0005539563 systemd[1]: libpod-conmon-eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984.scope: Deactivated successfully.
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.980 252257 DEBUG nova.virt.libvirt.vif [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:12:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044766677',display_name='tempest-ServerActionsTestOtherA-server-413447025',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044766677',id=110,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFrqk4rXvCBZyesNqy6ygZ6Gp5u2dJYASwFcyFUrpnmzFLmX4dmLstV85/UcVOuy/g8aGelmtEAngSltNMIVz+nyyj//ozYBJzauh3XWFgxF3C3yhw63J9BBc9qclV2mQ==',key_name='tempest-keypair-1415150174',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-2sixhnkw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:13:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=d83d7773-fe1e-4ac9-b90c-74a74180acbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.980 252257 DEBUG nova.network.os_vif_util [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "address": "fa:16:3e:69:0d:b5", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec8e7efa-3c", "ovs_interfaceid": "ec8e7efa-3c86-430e-b26a-e5d8d611a64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.981 252257 DEBUG nova.network.os_vif_util [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.981 252257 DEBUG os_vif [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.983 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.983 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8e7efa-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.984 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.986 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:00 np0005539563 nova_compute[252253]: 2025-11-29 08:14:00.989 252257 INFO os_vif [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:0d:b5,bridge_name='br-int',has_traffic_filtering=True,id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec8e7efa-3c')#033[00m
Nov 29 03:14:01 np0005539563 podman[323110]: 2025-11-29 08:14:01.015132577 +0000 UTC m=+0.042261596 container remove eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.021 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dfce0a14-b83c-446c-b2e3-8bdeedcf828d]: (4, ('Sat Nov 29 08:14:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984)\neded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984\nSat Nov 29 08:14:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (eded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984)\neded5bf5f62252e55f1834edc5ad494d2683f42aae307cded262cdc8a6785984\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.023 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebf22c0-da4a-4c36-8ad1-44c1c5827cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.024 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.025 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539563 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.039 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.042 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd2996f-4400-46b5-9e90-46914d05e8ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.063 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ed03ae-0636-45c9-9ff2-5c4e9893daf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.064 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[24e3722b-4384-405c-a51b-8cfe2bd67a1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.079 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[992f4dcd-1f72-4bb2-a5ab-b8577f20d240]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699583, 'reachable_time': 32866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323143, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.081 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:01.082 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[aae29677-8106-4303-b717-e5566d2072cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:01.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.154 252257 DEBUG nova.compute.manager [req-283b2726-d5b0-4a15-8a96-30b12acb4e37 req-d34c7298-1776-4da5-8a2c-ef1e241d96c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-unplugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.154 252257 DEBUG oslo_concurrency.lockutils [req-283b2726-d5b0-4a15-8a96-30b12acb4e37 req-d34c7298-1776-4da5-8a2c-ef1e241d96c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.155 252257 DEBUG oslo_concurrency.lockutils [req-283b2726-d5b0-4a15-8a96-30b12acb4e37 req-d34c7298-1776-4da5-8a2c-ef1e241d96c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.155 252257 DEBUG oslo_concurrency.lockutils [req-283b2726-d5b0-4a15-8a96-30b12acb4e37 req-d34c7298-1776-4da5-8a2c-ef1e241d96c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.156 252257 DEBUG nova.compute.manager [req-283b2726-d5b0-4a15-8a96-30b12acb4e37 req-d34c7298-1776-4da5-8a2c-ef1e241d96c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-unplugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:01 np0005539563 nova_compute[252253]: 2025-11-29 08:14:01.156 252257 DEBUG nova.compute.manager [req-283b2726-d5b0-4a15-8a96-30b12acb4e37 req-d34c7298-1776-4da5-8a2c-ef1e241d96c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-unplugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:14:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.182 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 487 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 KiB/s wr, 168 op/s
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.355 252257 INFO nova.virt.libvirt.driver [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deleting instance files /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe_del#033[00m
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.356 252257 INFO nova.virt.libvirt.driver [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deletion of /var/lib/nova/instances/d83d7773-fe1e-4ac9-b90c-74a74180acbe_del complete#033[00m
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.494 252257 INFO nova.compute.manager [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Took 1.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.495 252257 DEBUG oslo.service.loopingcall [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.495 252257 DEBUG nova.compute.manager [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:02 np0005539563 nova_compute[252253]: 2025-11-29 08:14:02.496 252257 DEBUG nova.network.neutron [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:02 np0005539563 systemd-logind[785]: New session 55 of user nova.
Nov 29 03:14:02 np0005539563 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:14:02 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:14:02 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:14:02 np0005539563 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:14:02 np0005539563 systemd[323149]: Queued start job for default target Main User Target.
Nov 29 03:14:02 np0005539563 systemd[323149]: Created slice User Application Slice.
Nov 29 03:14:02 np0005539563 systemd[323149]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:14:02 np0005539563 systemd[323149]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:14:02 np0005539563 systemd[323149]: Reached target Paths.
Nov 29 03:14:02 np0005539563 systemd[323149]: Reached target Timers.
Nov 29 03:14:02 np0005539563 systemd[323149]: Starting D-Bus User Message Bus Socket...
Nov 29 03:14:02 np0005539563 systemd[323149]: Starting Create User's Volatile Files and Directories...
Nov 29 03:14:02 np0005539563 systemd[323149]: Finished Create User's Volatile Files and Directories.
Nov 29 03:14:02 np0005539563 systemd[323149]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:14:02 np0005539563 systemd[323149]: Reached target Sockets.
Nov 29 03:14:02 np0005539563 systemd[323149]: Reached target Basic System.
Nov 29 03:14:02 np0005539563 systemd[323149]: Reached target Main User Target.
Nov 29 03:14:02 np0005539563 systemd[323149]: Startup finished in 131ms.
Nov 29 03:14:02 np0005539563 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:14:02 np0005539563 systemd[1]: Started Session 55 of User nova.
Nov 29 03:14:03 np0005539563 systemd[1]: session-55.scope: Deactivated successfully.
Nov 29 03:14:03 np0005539563 systemd-logind[785]: Session 55 logged out. Waiting for processes to exit.
Nov 29 03:14:03 np0005539563 systemd-logind[785]: Removed session 55.
Nov 29 03:14:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:03.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:03 np0005539563 systemd-logind[785]: New session 57 of user nova.
Nov 29 03:14:03 np0005539563 systemd[1]: Started Session 57 of User nova.
Nov 29 03:14:03 np0005539563 nova_compute[252253]: 2025-11-29 08:14:03.273 252257 DEBUG nova.compute.manager [req-a35995a7-e9bf-4602-9b2a-28dd109b1135 req-d08999a6-6b75-42b0-b846-d9be3819a23e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:03 np0005539563 nova_compute[252253]: 2025-11-29 08:14:03.275 252257 DEBUG oslo_concurrency.lockutils [req-a35995a7-e9bf-4602-9b2a-28dd109b1135 req-d08999a6-6b75-42b0-b846-d9be3819a23e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:03 np0005539563 nova_compute[252253]: 2025-11-29 08:14:03.275 252257 DEBUG oslo_concurrency.lockutils [req-a35995a7-e9bf-4602-9b2a-28dd109b1135 req-d08999a6-6b75-42b0-b846-d9be3819a23e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:03 np0005539563 nova_compute[252253]: 2025-11-29 08:14:03.276 252257 DEBUG oslo_concurrency.lockutils [req-a35995a7-e9bf-4602-9b2a-28dd109b1135 req-d08999a6-6b75-42b0-b846-d9be3819a23e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:03 np0005539563 nova_compute[252253]: 2025-11-29 08:14:03.276 252257 DEBUG nova.compute.manager [req-a35995a7-e9bf-4602-9b2a-28dd109b1135 req-d08999a6-6b75-42b0-b846-d9be3819a23e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] No waiting events found dispatching network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:03 np0005539563 nova_compute[252253]: 2025-11-29 08:14:03.276 252257 WARNING nova.compute.manager [req-a35995a7-e9bf-4602-9b2a-28dd109b1135 req-d08999a6-6b75-42b0-b846-d9be3819a23e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received unexpected event network-vif-plugged-ec8e7efa-3c86-430e-b26a-e5d8d611a64a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:03 np0005539563 systemd[1]: session-57.scope: Deactivated successfully.
Nov 29 03:14:03 np0005539563 systemd-logind[785]: Session 57 logged out. Waiting for processes to exit.
Nov 29 03:14:03 np0005539563 systemd-logind[785]: Removed session 57.
Nov 29 03:14:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 KiB/s wr, 134 op/s
Nov 29 03:14:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:04.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:04.920 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:04.920 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:04.921 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:05.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:05 np0005539563 nova_compute[252253]: 2025-11-29 08:14:05.986 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.134 252257 DEBUG nova.network.neutron [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 398 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 13 KiB/s wr, 153 op/s
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.213 252257 DEBUG nova.compute.manager [req-a26f156c-ada4-40e7-928b-bf619bd4c841 req-859bb550-72ff-4c01-86f6-c9825815b0f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Received event network-vif-deleted-ec8e7efa-3c86-430e-b26a-e5d8d611a64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.213 252257 INFO nova.compute.manager [req-a26f156c-ada4-40e7-928b-bf619bd4c841 req-859bb550-72ff-4c01-86f6-c9825815b0f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Neutron deleted interface ec8e7efa-3c86-430e-b26a-e5d8d611a64a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.214 252257 DEBUG nova.network.neutron [req-a26f156c-ada4-40e7-928b-bf619bd4c841 req-859bb550-72ff-4c01-86f6-c9825815b0f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.342 252257 INFO nova.compute.manager [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Took 3.85 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.382 252257 DEBUG nova.compute.manager [req-a26f156c-ada4-40e7-928b-bf619bd4c841 req-859bb550-72ff-4c01-86f6-c9825815b0f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Detach interface failed, port_id=ec8e7efa-3c86-430e-b26a-e5d8d611a64a, reason: Instance d83d7773-fe1e-4ac9-b90c-74a74180acbe could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:14:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.778 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.778 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:06 np0005539563 nova_compute[252253]: 2025-11-29 08:14:06.922 252257 DEBUG oslo_concurrency.processutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825660439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.522 252257 DEBUG oslo_concurrency.processutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.529 252257 DEBUG nova.compute.provider_tree [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.571 252257 DEBUG nova.scheduler.client.report [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:07.574 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:07.576 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.577 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.659 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.728 252257 INFO nova.scheduler.client.report [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Deleted allocations for instance d83d7773-fe1e-4ac9-b90c-74a74180acbe#033[00m
Nov 29 03:14:07 np0005539563 nova_compute[252253]: 2025-11-29 08:14:07.857 252257 DEBUG oslo_concurrency.lockutils [None req-b32c6439-f5af-4ee0-9de8-fa5fe134db83 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "d83d7773-fe1e-4ac9-b90c-74a74180acbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 26 KiB/s wr, 156 op/s
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.402 252257 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.403 252257 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.403 252257 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.403 252257 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.404 252257 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.404 252257 WARNING nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.404 252257 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.404 252257 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.404 252257 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.405 252257 DEBUG oslo_concurrency.lockutils [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.405 252257 DEBUG nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:08 np0005539563 nova_compute[252253]: 2025-11-29 08:14:08.405 252257 WARNING nova.compute.manager [req-b388760e-fd46-443a-a76d-92400f12a92c req-bd1cc2f2-d0bd-4fc6-8b05-f078520dabe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:14:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:08.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:09.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:09 np0005539563 nova_compute[252253]: 2025-11-29 08:14:09.629 252257 INFO nova.network.neutron [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating port 18df9eaa-1422-4e4b-ac00-67cdb84e329f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:14:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 299 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 171 KiB/s rd, 35 KiB/s wr, 120 op/s
Nov 29 03:14:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:10 np0005539563 nova_compute[252253]: 2025-11-29 08:14:10.803 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:10 np0005539563 nova_compute[252253]: 2025-11-29 08:14:10.804 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:10 np0005539563 nova_compute[252253]: 2025-11-29 08:14:10.804 252257 DEBUG nova.network.neutron [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:10 np0005539563 nova_compute[252253]: 2025-11-29 08:14:10.988 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:11.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:11Z|00439|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:14:11 np0005539563 nova_compute[252253]: 2025-11-29 08:14:11.831 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:12 np0005539563 nova_compute[252253]: 2025-11-29 08:14:12.186 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 29 03:14:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 299 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 147 KiB/s rd, 30 KiB/s wr, 103 op/s
Nov 29 03:14:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 03:14:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 29 03:14:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:14:12
Nov 29 03:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 29 03:14:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:14:12 np0005539563 nova_compute[252253]: 2025-11-29 08:14:12.867 252257 DEBUG nova.network.neutron [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:12 np0005539563 nova_compute[252253]: 2025-11-29 08:14:12.998 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.005 252257 DEBUG nova.compute.manager [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.005 252257 DEBUG nova.compute.manager [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing instance network info cache due to event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.006 252257 DEBUG oslo_concurrency.lockutils [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.006 252257 DEBUG oslo_concurrency.lockutils [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.006 252257 DEBUG nova.network.neutron [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:13.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.169 252257 DEBUG os_brick.utils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.170 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.181 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.181 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[6e87bb23-d7fd-4143-9fdb-a4dc55585877]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.182 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.190 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.190 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[e968431e-be53-4eb6-8d68-6eedd3dbde8c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.191 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.198 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.199 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1caf90-be36-415b-a8e4-ef9c79fa1e32]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.200 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[bada8218-1ad6-4111-b5b7-c00976f7b23e]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.200 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.224 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.226 252257 DEBUG os_brick.initiator.connectors.lightos [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.226 252257 DEBUG os_brick.initiator.connectors.lightos [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.227 252257 DEBUG os_brick.initiator.connectors.lightos [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:14:13 np0005539563 nova_compute[252253]: 2025-11-29 08:14:13.227 252257 DEBUG os_brick.utils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:14:13 np0005539563 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:14:13 np0005539563 systemd[323149]: Activating special unit Exit the Session...
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped target Main User Target.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped target Basic System.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped target Paths.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped target Sockets.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped target Timers.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:14:13 np0005539563 systemd[323149]: Closed D-Bus User Message Bus Socket.
Nov 29 03:14:13 np0005539563 systemd[323149]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:14:13 np0005539563 systemd[323149]: Removed slice User Application Slice.
Nov 29 03:14:13 np0005539563 systemd[323149]: Reached target Shutdown.
Nov 29 03:14:13 np0005539563 systemd[323149]: Finished Exit the Session.
Nov 29 03:14:13 np0005539563 systemd[323149]: Reached target Exit the Session.
Nov 29 03:14:13 np0005539563 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:14:13 np0005539563 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:14:13 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:14:13 np0005539563 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:14:13 np0005539563 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:14:13 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:14:13 np0005539563 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:14:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:14:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 279 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 168 KiB/s rd, 33 KiB/s wr, 116 op/s
Nov 29 03:14:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:14.578 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:14 np0005539563 nova_compute[252253]: 2025-11-29 08:14:14.993 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:14:14 np0005539563 nova_compute[252253]: 2025-11-29 08:14:14.995 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:14:14 np0005539563 nova_compute[252253]: 2025-11-29 08:14:14.995 252257 INFO nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Creating image(s)#033[00m
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.037 252257 DEBUG nova.storage.rbd_utils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(nova-resize) on rbd image(48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:14:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:15.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 29 03:14:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.890 252257 DEBUG nova.network.neutron [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updated VIF entry in instance network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.891 252257 DEBUG nova.network.neutron [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.911 252257 DEBUG oslo_concurrency.lockutils [req-b312d114-2a4a-433b-b866-35b321c4b6e6 req-963b0db2-96a3-49cf-97b2-b5544ff6cab9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.954 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404040.9532924, d83d7773-fe1e-4ac9-b90c-74a74180acbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.955 252257 INFO nova.compute.manager [-] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.989 252257 DEBUG nova.compute.manager [None req-8be7ffbf-600c-403b-b1ca-2d420a61eb5f - - - - - -] [instance: d83d7773-fe1e-4ac9-b90c-74a74180acbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:15 np0005539563 nova_compute[252253]: 2025-11-29 08:14:15.991 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 279 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 13 KiB/s wr, 54 op/s
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.329 252257 DEBUG nova.objects.instance [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.449 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.450 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Ensure instance console log exists: /var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.451 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.452 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.453 252257 DEBUG oslo_concurrency.lockutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.459 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Start _get_guest_xml network_info=[{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e19fd9ae-371b-4152-b2b2-910bd950e653', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e19fd9ae-371b-4152-b2b2-910bd950e653', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'attached_at': '2025-11-29T08:14:14.000000', 'detached_at': '', 'volume_id': 'e19fd9ae-371b-4152-b2b2-910bd950e653', 'serial': 'e19fd9ae-371b-4152-b2b2-910bd950e653'}, 'attachment_id': '85e84f2c-4d27-447a-9dfd-b50a30a9ab25', 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.464 252257 WARNING nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.471 252257 DEBUG nova.virt.libvirt.host [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.472 252257 DEBUG nova.virt.libvirt.host [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.477 252257 DEBUG nova.virt.libvirt.host [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.477 252257 DEBUG nova.virt.libvirt.host [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.479 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.479 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.480 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.480 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.480 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.481 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.481 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.481 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.482 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.482 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.482 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.482 252257 DEBUG nova.virt.hardware [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.483 252257 DEBUG nova.objects.instance [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.498 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.576 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.577 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.639 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.755 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.755 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:16.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.762 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.762 252257 INFO nova.compute.claims [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:14:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226767345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.957 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:16 np0005539563 nova_compute[252253]: 2025-11-29 08:14:16.983 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.136 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518392122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.415 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.423 252257 DEBUG nova.compute.provider_tree [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.439 252257 DEBUG nova.scheduler.client.report [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.477 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.478 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.522 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.523 252257 DEBUG nova.network.neutron [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.542 252257 INFO nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.562 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:14:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264506246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.650 252257 DEBUG oslo_concurrency.processutils [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.684 252257 DEBUG nova.virt.libvirt.vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.684 252257 DEBUG nova.network.os_vif_util [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.685 252257 DEBUG nova.network.os_vif_util [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.688 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <uuid>48a6ffaa-4f03-4048-bd19-c50aea2863cc</uuid>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <name>instance-0000006f</name>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <memory>196608</memory>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerActionsTestOtherB-server-127920487</nova:name>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:14:16</nova:creationTime>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.micro">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:memory>192</nova:memory>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <nova:port uuid="18df9eaa-1422-4e4b-ac00-67cdb84e329f">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <entry name="serial">48a6ffaa-4f03-4048-bd19-c50aea2863cc</entry>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <entry name="uuid">48a6ffaa-4f03-4048-bd19-c50aea2863cc</entry>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/48a6ffaa-4f03-4048-bd19-c50aea2863cc_disk.config">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-e19fd9ae-371b-4152-b2b2-910bd950e653">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <serial>e19fd9ae-371b-4152-b2b2-910bd950e653</serial>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:16:57:d0"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <target dev="tap18df9eaa-14"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/48a6ffaa-4f03-4048-bd19-c50aea2863cc/console.log" append="off"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:14:17 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:14:17 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:14:17 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:14:17 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.690 252257 DEBUG nova.virt.libvirt.vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:13:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.691 252257 DEBUG nova.network.os_vif_util [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-667031396-network", "vif_mac": "fa:16:3e:16:57:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.691 252257 DEBUG nova.network.os_vif_util [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.692 252257 DEBUG os_vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.693 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.694 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.694 252257 INFO nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Creating image(s)#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.726 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.759 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.792 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.796 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.830 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.832 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.832 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.837 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18df9eaa-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.838 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18df9eaa-14, col_values=(('external_ids', {'iface-id': '18df9eaa-1422-4e4b-ac00-67cdb84e329f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:57:d0', 'vm-uuid': '48a6ffaa-4f03-4048-bd19-c50aea2863cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:17 np0005539563 NetworkManager[48981]: <info>  [1764404057.8839] manager: (tap18df9eaa-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.891 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.892 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.893 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.893 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.950 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.955 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.981 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:17 np0005539563 nova_compute[252253]: 2025-11-29 08:14:17.984 252257 INFO os_vif [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14')#033[00m
Nov 29 03:14:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 279 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 KiB/s wr, 13 op/s
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.256 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.257 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.257 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.258 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:16:57:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.259 252257 INFO nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Using config drive#033[00m
Nov 29 03:14:18 np0005539563 kernel: tap18df9eaa-14: entered promiscuous mode
Nov 29 03:14:18 np0005539563 NetworkManager[48981]: <info>  [1764404058.3894] manager: (tap18df9eaa-14): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Nov 29 03:14:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:18Z|00440|binding|INFO|Claiming lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f for this chassis.
Nov 29 03:14:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:18Z|00441|binding|INFO|18df9eaa-1422-4e4b-ac00-67cdb84e329f: Claiming fa:16:3e:16:57:d0 10.100.0.13
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.391 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.407 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:57:d0 10.100.0.13'], port_security=['fa:16:3e:16:57:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=18df9eaa-1422-4e4b-ac00-67cdb84e329f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.409 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.412 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:14:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:18Z|00442|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f ovn-installed in OVS
Nov 29 03:14:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:18Z|00443|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f up in Southbound
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.420 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.425 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539563 systemd-udevd[323544]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.436 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1c63b67c-eabc-45e7-993c-e56a65d24fde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539563 systemd-machined[213024]: New machine qemu-51-instance-0000006f.
Nov 29 03:14:18 np0005539563 NetworkManager[48981]: <info>  [1764404058.4466] device (tap18df9eaa-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:18 np0005539563 NetworkManager[48981]: <info>  [1764404058.4475] device (tap18df9eaa-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:18 np0005539563 systemd[1]: Started Virtual Machine qemu-51-instance-0000006f.
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.476 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e04ae465-b35d-4bf6-8ef1-d39701e2eb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.481 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f3927633-8392-4a6d-a596-553c21510218]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.512 252257 DEBUG nova.policy [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '58625e4c2b5d43a1abbab05b98853a65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '250671461f27498d9f6b4476c7b69533', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.513 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[247bc0d2-a7df-43b3-9ae8-542866d22187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.537 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd77bab-cb2c-4d75-b315-5465327a10a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686554, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323557, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.556 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[812e0ae6-01bb-4b3f-b613-287c40aa55dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686571, 'tstamp': 686571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323559, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686575, 'tstamp': 686575}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323559, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.558 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.559 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.561 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.562 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.562 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.563 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:18.563 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.998 252257 DEBUG nova.compute.manager [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.999 252257 DEBUG oslo_concurrency.lockutils [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.999 252257 DEBUG oslo_concurrency.lockutils [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:18 np0005539563 nova_compute[252253]: 2025-11-29 08:14:18.999 252257 DEBUG oslo_concurrency.lockutils [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.000 252257 DEBUG nova.compute.manager [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.000 252257 WARNING nova.compute.manager [req-60dc4ad2-f2e6-47f1-be83-f0c37ab80318 req-99d7feb3-c4ec-49c7-987f-eb881f98ddfa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:14:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:19.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.328 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.392 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] resizing rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.568 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404059.4080102, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.568 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.571 252257 DEBUG nova.compute.manager [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.575 252257 INFO nova.virt.libvirt.driver [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance running successfully.#033[00m
Nov 29 03:14:19 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.577 252257 DEBUG nova.virt.libvirt.guest [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.577 252257 DEBUG nova.virt.libvirt.driver [None req-cfc400ae-986d-470b-b840-4fae5f967211 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.608 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.611 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.640 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.641 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404059.409025, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.641 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.666 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.670 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.719 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:14:19 np0005539563 nova_compute[252253]: 2025-11-29 08:14:19.729 252257 DEBUG nova.network.neutron [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Successfully created port: d83f3010-ca91-4737-a053-26c71234f9d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:14:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 304 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 1.7 MiB/s wr, 72 op/s
Nov 29 03:14:20 np0005539563 nova_compute[252253]: 2025-11-29 08:14:20.432 252257 DEBUG nova.objects.instance [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'migration_context' on Instance uuid 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:20 np0005539563 nova_compute[252253]: 2025-11-29 08:14:20.476 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:14:20 np0005539563 nova_compute[252253]: 2025-11-29 08:14:20.477 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Ensure instance console log exists: /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:14:20 np0005539563 nova_compute[252253]: 2025-11-29 08:14:20.478 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:20 np0005539563 nova_compute[252253]: 2025-11-29 08:14:20.478 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:20 np0005539563 nova_compute[252253]: 2025-11-29 08:14:20.478 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:20.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:21.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.225 252257 DEBUG nova.compute.manager [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.226 252257 DEBUG oslo_concurrency.lockutils [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.226 252257 DEBUG oslo_concurrency.lockutils [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.226 252257 DEBUG oslo_concurrency.lockutils [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.226 252257 DEBUG nova.compute.manager [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.227 252257 WARNING nova.compute.manager [req-7ce4fded-7106-4f3d-9702-4ceecf3c42a6 req-cf659121-52d9-4539-a2f8-5c8268560caf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.888 252257 DEBUG nova.network.neutron [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Successfully updated port: d83f3010-ca91-4737-a053-26c71234f9d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.910 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.911 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquired lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.912 252257 DEBUG nova.network.neutron [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.968 252257 DEBUG nova.network.neutron [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.969 252257 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.969 252257 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:21 np0005539563 nova_compute[252253]: 2025-11-29 08:14:21.970 252257 DEBUG nova.network.neutron [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:22 np0005539563 nova_compute[252253]: 2025-11-29 08:14:22.101 252257 DEBUG nova.network.neutron [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:14:22 np0005539563 nova_compute[252253]: 2025-11-29 08:14:22.188 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 304 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.4 MiB/s wr, 58 op/s
Nov 29 03:14:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:22 np0005539563 nova_compute[252253]: 2025-11-29 08:14:22.942 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:23.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.359 252257 DEBUG nova.compute.manager [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-changed-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.360 252257 DEBUG nova.compute.manager [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Refreshing instance network info cache due to event network-changed-d83f3010-ca91-4737-a053-26c71234f9d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.360 252257 DEBUG oslo_concurrency.lockutils [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007156914491438675 of space, bias 1.0, pg target 2.1470743474316025 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.682 252257 DEBUG nova.network.neutron [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Updating instance_info_cache with network_info: [{"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.718 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Releasing lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.720 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Instance network_info: |[{"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.721 252257 DEBUG oslo_concurrency.lockutils [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.721 252257 DEBUG nova.network.neutron [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Refreshing network info cache for port d83f3010-ca91-4737-a053-26c71234f9d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.728 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Start _get_guest_xml network_info=[{"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.737 252257 WARNING nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.749 252257 DEBUG nova.virt.libvirt.host [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.751 252257 DEBUG nova.virt.libvirt.host [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.758 252257 DEBUG nova.virt.libvirt.host [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.759 252257 DEBUG nova.virt.libvirt.host [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.760 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.760 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.761 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.761 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.761 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.761 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.761 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.762 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.762 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.762 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.762 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.762 252257 DEBUG nova.virt.hardware [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:23 np0005539563 nova_compute[252253]: 2025-11-29 08:14:23.765 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 349 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 3.5 MiB/s wr, 88 op/s
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.260 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.297 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.303 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:24 np0005539563 podman[323736]: 2025-11-29 08:14:24.513884429 +0000 UTC m=+0.067947071 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:14:24 np0005539563 podman[323738]: 2025-11-29 08:14:24.533561802 +0000 UTC m=+0.087497921 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 29 03:14:24 np0005539563 podman[323740]: 2025-11-29 08:14:24.56152241 +0000 UTC m=+0.103424093 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3662821542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.771 252257 DEBUG nova.network.neutron [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:24.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.791 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.794 252257 DEBUG nova.virt.libvirt.vif [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2075577673',display_name='tempest-ServerActionsTestOtherA-server-2075577673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2075577673',id=112,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-z80enzsc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActio
nsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:17Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=33cff286-3b50-41f5-9cb9-d4d98a1d3f88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.794 252257 DEBUG nova.network.os_vif_util [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.796 252257 DEBUG nova.network.os_vif_util [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.799 252257 DEBUG nova.objects.instance [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.804 252257 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.827 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <uuid>33cff286-3b50-41f5-9cb9-d4d98a1d3f88</uuid>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <name>instance-00000070</name>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerActionsTestOtherA-server-2075577673</nova:name>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:14:23</nova:creationTime>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:user uuid="58625e4c2b5d43a1abbab05b98853a65">tempest-ServerActionsTestOtherA-552273978-project-member</nova:user>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:project uuid="250671461f27498d9f6b4476c7b69533">tempest-ServerActionsTestOtherA-552273978</nova:project>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <nova:port uuid="d83f3010-ca91-4737-a053-26c71234f9d9">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <entry name="serial">33cff286-3b50-41f5-9cb9-d4d98a1d3f88</entry>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <entry name="uuid">33cff286-3b50-41f5-9cb9-d4d98a1d3f88</entry>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk.config">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:0e:24:bf"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <target dev="tapd83f3010-ca"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/console.log" append="off"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:14:24 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:14:24 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:14:24 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:14:24 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.830 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Preparing to wait for external event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.831 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.832 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.832 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.834 252257 DEBUG nova.virt.libvirt.vif [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2075577673',display_name='tempest-ServerActionsTestOtherA-server-2075577673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2075577673',id=112,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-z80enzsc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:17Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=33cff286-3b50-41f5-9cb9-d4d98a1d3f88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.835 252257 DEBUG nova.network.os_vif_util [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.837 252257 DEBUG nova.network.os_vif_util [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.838 252257 DEBUG os_vif [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.840 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.841 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.842 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.851 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd83f3010-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.853 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd83f3010-ca, col_values=(('external_ids', {'iface-id': 'd83f3010-ca91-4737-a053-26c71234f9d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:24:bf', 'vm-uuid': '33cff286-3b50-41f5-9cb9-d4d98a1d3f88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.855 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 NetworkManager[48981]: <info>  [1764404064.8586] manager: (tapd83f3010-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.859 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.866 252257 INFO os_vif [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca')#033[00m
Nov 29 03:14:24 np0005539563 kernel: tap18df9eaa-14 (unregistering): left promiscuous mode
Nov 29 03:14:24 np0005539563 NetworkManager[48981]: <info>  [1764404064.9225] device (tap18df9eaa-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:24Z|00444|binding|INFO|Releasing lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f from this chassis (sb_readonly=0)
Nov 29 03:14:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:24Z|00445|binding|INFO|Setting lport 18df9eaa-1422-4e4b-ac00-67cdb84e329f down in Southbound
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:24Z|00446|binding|INFO|Removing iface tap18df9eaa-14 ovn-installed in OVS
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.935 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.949 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.966 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.967 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.967 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] No VIF found with MAC fa:16:3e:0e:24:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:24 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.967 252257 INFO nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Using config drive#033[00m
Nov 29 03:14:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:24.971 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:57:d0 10.100.0.13'], port_security=['fa:16:3e:16:57:d0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '48a6ffaa-4f03-4048-bd19-c50aea2863cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a54db614-4504-4e8e-a3a5-27d3f60f6cdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=18df9eaa-1422-4e4b-ac00-67cdb84e329f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:24.972 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 18df9eaa-1422-4e4b-ac00-67cdb84e329f in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:14:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:24.974 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:14:24 np0005539563 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 29 03:14:24 np0005539563 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006f.scope: Consumed 6.384s CPU time.
Nov 29 03:14:24 np0005539563 systemd-machined[213024]: Machine qemu-51-instance-0000006f terminated.
Nov 29 03:14:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:24.990 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fba14e5e-0c4f-47ae-aea3-8b85bf421a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:24.999 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.019 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9c726f-84f7-434a-97f2-ed9c5ddabe40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.022 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4496ab-e6e4-421c-8ad3-b6f039d1ecf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.056 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d817be17-be95-4399-ad68-646d637e40a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.078 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe9c4f4-4ece-4af6-b1c0-e0a3b316f1d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686554, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323856, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.087 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.096 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.099 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8537a9-4fa9-405d-aaa0-34c204721019]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686571, 'tstamp': 686571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323862, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686575, 'tstamp': 686575}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323862, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.101 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.105 252257 INFO nova.virt.libvirt.driver [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Instance destroyed successfully.#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.105 252257 DEBUG nova.objects.instance [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.109 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.110 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.111 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.111 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:25.111 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.121 252257 DEBUG nova.virt.libvirt.vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-127920487',display_name='tempest-ServerActionsTestOtherB-server-127920487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-127920487',id=111,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFR2A4rqHty1PxOihJGr6CLeieY2A6hQbQhWuRk7yYUwOYPvlgBFCeYpXPRg+EImok8PXcjU56J6yMvwfigxZeP4BreCe+MzD3uTdqP8PHZ6U4YNDwkQqqigObBB8nAoaw==',key_name='tempest-keypair-98169709',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-9qzmh09j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=48a6ffaa-4f03-4048-bd19-c50aea2863cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.121 252257 DEBUG nova.network.os_vif_util [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.122 252257 DEBUG nova.network.os_vif_util [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.122 252257 DEBUG os_vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.123 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.124 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18df9eaa-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.133 252257 INFO os_vif [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:57:d0,bridge_name='br-int',has_traffic_filtering=True,id=18df9eaa-1422-4e4b-ac00-67cdb84e329f,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18df9eaa-14')#033[00m
Nov 29 03:14:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:25.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.626 252257 DEBUG nova.compute.manager [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.627 252257 DEBUG oslo_concurrency.lockutils [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.627 252257 DEBUG oslo_concurrency.lockutils [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.627 252257 DEBUG oslo_concurrency.lockutils [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.627 252257 DEBUG nova.compute.manager [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.628 252257 WARNING nova.compute.manager [req-b843238a-29cb-48e6-aa9a-3267c079ad26 req-8238db15-9da6-4fc7-91c6-9a6d2899a352 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-unplugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.750 252257 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.751 252257 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.765 252257 INFO nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Creating config drive at /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/disk.config#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.770 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxz5sp5qk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.798 252257 DEBUG nova.objects.instance [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 48a6ffaa-4f03-4048-bd19-c50aea2863cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.900 252257 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.934 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxz5sp5qk" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.967 252257 DEBUG nova.storage.rbd_utils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] rbd image 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:25 np0005539563 nova_compute[252253]: 2025-11-29 08:14:25.972 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/disk.config 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.167 252257 DEBUG oslo_concurrency.processutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/disk.config 33cff286-3b50-41f5-9cb9-d4d98a1d3f88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.168 252257 INFO nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Deleting local config drive /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 166 op/s
Nov 29 03:14:26 np0005539563 kernel: tapd83f3010-ca: entered promiscuous mode
Nov 29 03:14:26 np0005539563 systemd-udevd[323829]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:26 np0005539563 NetworkManager[48981]: <info>  [1764404066.2339] manager: (tapd83f3010-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Nov 29 03:14:26 np0005539563 NetworkManager[48981]: <info>  [1764404066.2522] device (tapd83f3010-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:26 np0005539563 NetworkManager[48981]: <info>  [1764404066.2534] device (tapd83f3010-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:26Z|00447|binding|INFO|Claiming lport d83f3010-ca91-4737-a053-26c71234f9d9 for this chassis.
Nov 29 03:14:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:26Z|00448|binding|INFO|d83f3010-ca91-4737-a053-26c71234f9d9: Claiming fa:16:3e:0e:24:bf 10.100.0.5
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.293 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:24:bf 10.100.0.5'], port_security=['fa:16:3e:0e:24:bf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '33cff286-3b50-41f5-9cb9-d4d98a1d3f88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b7c8a30-f080-4336-87a1-164f41eed0f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d83f3010-ca91-4737-a053-26c71234f9d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.294 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d83f3010-ca91-4737-a053-26c71234f9d9 in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 bound to our chassis#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.295 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 10a9b8d1-2de6-4e47-8e44-16b661da8624#033[00m
Nov 29 03:14:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:26Z|00449|binding|INFO|Setting lport d83f3010-ca91-4737-a053-26c71234f9d9 ovn-installed in OVS
Nov 29 03:14:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:26Z|00450|binding|INFO|Setting lport d83f3010-ca91-4737-a053-26c71234f9d9 up in Southbound
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.306 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.310 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9fad92-c98f-4a75-9a9c-f9a4ca411b18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.312 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap10a9b8d1-21 in ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.314 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap10a9b8d1-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.315 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[885d4144-2bd1-4c00-85a5-eee02db956ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.315 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[76be11a3-f499-422e-8088-7c3d7f4c0a35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 systemd-machined[213024]: New machine qemu-52-instance-00000070.
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.326 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a77b3882-53cc-4884-989e-3d16655f1a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.332 252257 DEBUG nova.network.neutron [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Updated VIF entry in instance network info cache for port d83f3010-ca91-4737-a053-26c71234f9d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.333 252257 DEBUG nova.network.neutron [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Updating instance_info_cache with network_info: [{"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:26 np0005539563 systemd[1]: Started Virtual Machine qemu-52-instance-00000070.
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.351 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[71148fe2-314f-4201-9402-b6449f3c8a61]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.369 252257 DEBUG oslo_concurrency.lockutils [req-026530a9-6464-4e20-b78e-98de0f3f282e req-520387d4-014c-4645-b996-f861bc34db83 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.385 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc09fc9-f815-45c0-934a-4d884be422a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.391 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a574e7-b15a-4cec-a90b-c9a823de0154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 NetworkManager[48981]: <info>  [1764404066.3935] manager: (tap10a9b8d1-20): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Nov 29 03:14:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950357901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.438 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ff584a-5f3f-48a7-89bd-104862088f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.442 252257 DEBUG oslo_concurrency.processutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.441 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[647f964c-884d-45aa-80fe-85eaba271f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.451 252257 DEBUG nova.compute.provider_tree [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:26 np0005539563 NetworkManager[48981]: <info>  [1764404066.4669] device (tap10a9b8d1-20): carrier: link connected
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.472 252257 DEBUG nova.scheduler.client.report [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.475 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1acb1a4f-d10d-4f8c-9e9a-1ccd07d03d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.498 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b911ce29-0aaf-4617-85c2-4f8be1f520c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703424, 'reachable_time': 17092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323979, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.516 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb300c2-71ed-4216-b3ed-5d9e5342602f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:676'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703424, 'tstamp': 703424}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323980, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.536 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b9ae7d-9843-4b9a-a5de-eba81e43bce1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap10a9b8d1-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:06:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703424, 'reachable_time': 17092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323981, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.563 252257 DEBUG oslo_concurrency.lockutils [None req-189180cb-069e-4873-8344-7e6e70de72a3 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.573 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[564b4285-ed52-4edc-bd13-9855016f4510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.651 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[de2c5807-3a09-40b0-93c2-a21ed4eade16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.652 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.653 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.653 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10a9b8d1-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.655 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:26 np0005539563 kernel: tap10a9b8d1-20: entered promiscuous mode
Nov 29 03:14:26 np0005539563 NetworkManager[48981]: <info>  [1764404066.6557] manager: (tap10a9b8d1-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.658 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap10a9b8d1-20, col_values=(('external_ids', {'iface-id': '56facbc8-1a3f-4008-8f77-23eeac832994'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:26Z|00451|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.660 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.676 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.677 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fa00bbc8-e444-4705-b02a-28d43d112cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.678 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/10a9b8d1-2de6-4e47-8e44-16b661da8624.pid.haproxy
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 10a9b8d1-2de6-4e47-8e44-16b661da8624
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:26.679 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'env', 'PROCESS_TAG=haproxy-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/10a9b8d1-2de6-4e47-8e44-16b661da8624.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.748 252257 DEBUG nova.compute.manager [req-4f144b3f-2d5c-4c59-9e3f-7d424ed58a80 req-cdd820c7-8eaa-48af-86ea-3f351c5385d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.750 252257 DEBUG oslo_concurrency.lockutils [req-4f144b3f-2d5c-4c59-9e3f-7d424ed58a80 req-cdd820c7-8eaa-48af-86ea-3f351c5385d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.750 252257 DEBUG oslo_concurrency.lockutils [req-4f144b3f-2d5c-4c59-9e3f-7d424ed58a80 req-cdd820c7-8eaa-48af-86ea-3f351c5385d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.751 252257 DEBUG oslo_concurrency.lockutils [req-4f144b3f-2d5c-4c59-9e3f-7d424ed58a80 req-cdd820c7-8eaa-48af-86ea-3f351c5385d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:26 np0005539563 nova_compute[252253]: 2025-11-29 08:14:26.751 252257 DEBUG nova.compute.manager [req-4f144b3f-2d5c-4c59-9e3f-7d424ed58a80 req-cdd820c7-8eaa-48af-86ea-3f351c5385d0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Processing event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:14:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:26.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:27 np0005539563 podman[324014]: 2025-11-29 08:14:27.123356688 +0000 UTC m=+0.080396528 container create 394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:14:27 np0005539563 systemd[1]: Started libpod-conmon-394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad.scope.
Nov 29 03:14:27 np0005539563 podman[324014]: 2025-11-29 08:14:27.069972772 +0000 UTC m=+0.027012642 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75312e7500a488a0414d685cb85dd5b36ef86edd3004ee28628ca29fe18607db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:27 np0005539563 podman[324014]: 2025-11-29 08:14:27.204164828 +0000 UTC m=+0.161204698 container init 394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.210 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404067.2098753, 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.210 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:27 np0005539563 podman[324014]: 2025-11-29 08:14:27.210833269 +0000 UTC m=+0.167873109 container start 394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.212 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.216 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.219 252257 INFO nova.virt.libvirt.driver [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Instance spawned successfully.#033[00m
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.219 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:14:27 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [NOTICE]   (324073) : New worker (324075) forked
Nov 29 03:14:27 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [NOTICE]   (324073) : Loading success.
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.244 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.246 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.246 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.247 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.247 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.248 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.248 252257 DEBUG nova.virt.libvirt.driver [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.252 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.301 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.301 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404067.2105033, 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.302 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] VM Paused (Lifecycle Event)
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.333 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.339 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404067.2148967, 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.339 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] VM Resumed (Lifecycle Event)
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.373 252257 INFO nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Took 9.68 seconds to spawn the instance on the hypervisor.
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.374 252257 DEBUG nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.549 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.551 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.581 252257 INFO nova.compute.manager [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Took 10.85 seconds to build instance.
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.604 252257 DEBUG oslo_concurrency.lockutils [None req-bb59bbfc-7c66-444a-865b-49ac7c88fa58 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.768 252257 DEBUG nova.compute.manager [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.769 252257 DEBUG oslo_concurrency.lockutils [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.769 252257 DEBUG oslo_concurrency.lockutils [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.770 252257 DEBUG oslo_concurrency.lockutils [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.770 252257 DEBUG nova.compute.manager [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:27 np0005539563 nova_compute[252253]: 2025-11-29 08:14:27.770 252257 WARNING nova.compute.manager [req-af607f22-b0aa-41ac-9370-9fd47fa91d0c req-e4c39253-8e40-460d-8a9f-01585002b661 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state resize_reverting.
Nov 29 03:14:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:14:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635728905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:14:28 np0005539563 nova_compute[252253]: 2025-11-29 08:14:28.006 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:14:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635728905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:14:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 146 op/s
Nov 29 03:14:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:28.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.045 252257 DEBUG nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.045 252257 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.045 252257 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.046 252257 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.046 252257 DEBUG nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.046 252257 WARNING nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received unexpected event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with vm_state active and task_state None.
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.046 252257 DEBUG nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.046 252257 DEBUG nova.compute.manager [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing instance network info cache due to event network-changed-18df9eaa-1422-4e4b-ac00-67cdb84e329f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.046 252257 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.047 252257 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:14:29 np0005539563 nova_compute[252253]: 2025-11-29 08:14:29.047 252257 DEBUG nova.network.neutron [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Refreshing network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:14:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:30 np0005539563 nova_compute[252253]: 2025-11-29 08:14:30.126 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Nov 29 03:14:30 np0005539563 nova_compute[252253]: 2025-11-29 08:14:30.559 252257 DEBUG nova.network.neutron [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updated VIF entry in instance network info cache for port 18df9eaa-1422-4e4b-ac00-67cdb84e329f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:14:30 np0005539563 nova_compute[252253]: 2025-11-29 08:14:30.560 252257 DEBUG nova.network.neutron [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Updating instance_info_cache with network_info: [{"id": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "address": "fa:16:3e:16:57:d0", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18df9eaa-14", "ovs_interfaceid": "18df9eaa-1422-4e4b-ac00-67cdb84e329f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:30 np0005539563 nova_compute[252253]: 2025-11-29 08:14:30.586 252257 DEBUG oslo_concurrency.lockutils [req-197122ea-755e-4234-9e91-7dd72db722fa req-1db99104-e2e3-4e29-9d81-dfda502b7bf0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-48a6ffaa-4f03-4048-bd19-c50aea2863cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:14:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:31 np0005539563 nova_compute[252253]: 2025-11-29 08:14:31.214 252257 DEBUG nova.compute.manager [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-changed-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:31 np0005539563 nova_compute[252253]: 2025-11-29 08:14:31.215 252257 DEBUG nova.compute.manager [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Refreshing instance network info cache due to event network-changed-d83f3010-ca91-4737-a053-26c71234f9d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:14:31 np0005539563 nova_compute[252253]: 2025-11-29 08:14:31.216 252257 DEBUG oslo_concurrency.lockutils [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:31 np0005539563 nova_compute[252253]: 2025-11-29 08:14:31.216 252257 DEBUG oslo_concurrency.lockutils [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:14:31 np0005539563 nova_compute[252253]: 2025-11-29 08:14:31.216 252257 DEBUG nova.network.neutron [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Refreshing network info cache for port d83f3010-ca91-4737-a053-26c71234f9d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:14:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 29 03:14:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 03:14:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 29 03:14:32 np0005539563 nova_compute[252253]: 2025-11-29 08:14:32.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Nov 29 03:14:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:32 np0005539563 nova_compute[252253]: 2025-11-29 08:14:32.636 252257 DEBUG nova.network.neutron [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Updated VIF entry in instance network info cache for port d83f3010-ca91-4737-a053-26c71234f9d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:14:32 np0005539563 nova_compute[252253]: 2025-11-29 08:14:32.637 252257 DEBUG nova.network.neutron [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Updating instance_info_cache with network_info: [{"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:14:32 np0005539563 nova_compute[252253]: 2025-11-29 08:14:32.670 252257 DEBUG oslo_concurrency.lockutils [req-32d3981a-9971-4f03-952e-6b2571f14cb9 req-1bd2ea53-3c6c-4423-8694-8e09d54470fd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-33cff286-3b50-41f5-9cb9-d4d98a1d3f88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:14:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:32.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:33.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 820 KiB/s wr, 267 op/s
Nov 29 03:14:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:34.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:34 np0005539563 nova_compute[252253]: 2025-11-29 08:14:34.854 252257 DEBUG nova.compute.manager [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:34 np0005539563 nova_compute[252253]: 2025-11-29 08:14:34.855 252257 DEBUG oslo_concurrency.lockutils [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:34 np0005539563 nova_compute[252253]: 2025-11-29 08:14:34.855 252257 DEBUG oslo_concurrency.lockutils [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:34 np0005539563 nova_compute[252253]: 2025-11-29 08:14:34.855 252257 DEBUG oslo_concurrency.lockutils [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "48a6ffaa-4f03-4048-bd19-c50aea2863cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:34 np0005539563 nova_compute[252253]: 2025-11-29 08:14:34.855 252257 DEBUG nova.compute.manager [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] No waiting events found dispatching network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:14:34 np0005539563 nova_compute[252253]: 2025-11-29 08:14:34.855 252257 WARNING nova.compute.manager [req-18eec59b-e727-4cf4-b8ee-7e4b02871d76 req-186d6fc2-12f6-4090-a051-68385924d30c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Received unexpected event network-vif-plugged-18df9eaa-1422-4e4b-ac00-67cdb84e329f for instance with vm_state resized and task_state resize_reverting.
Nov 29 03:14:35 np0005539563 nova_compute[252253]: 2025-11-29 08:14:35.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 413 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.7 MiB/s wr, 220 op/s
Nov 29 03:14:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:37 np0005539563 nova_compute[252253]: 2025-11-29 08:14:37.195 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 29 03:14:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 03:14:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 29 03:14:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Nov 29 03:14:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:38.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:40 np0005539563 nova_compute[252253]: 2025-11-29 08:14:40.103 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404065.102058, 48a6ffaa-4f03-4048-bd19-c50aea2863cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:14:40 np0005539563 nova_compute[252253]: 2025-11-29 08:14:40.104 252257 INFO nova.compute.manager [-] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] VM Stopped (Lifecycle Event)
Nov 29 03:14:40 np0005539563 nova_compute[252253]: 2025-11-29 08:14:40.125 252257 DEBUG nova.compute.manager [None req-5fbe822f-c97a-4237-ab25-08ae2ea84056 - - - - - -] [instance: 48a6ffaa-4f03-4048-bd19-c50aea2863cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:14:40 np0005539563 nova_compute[252253]: 2025-11-29 08:14:40.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 366 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Nov 29 03:14:40 np0005539563 nova_compute[252253]: 2025-11-29 08:14:40.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:40.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:42 np0005539563 nova_compute[252253]: 2025-11-29 08:14:42.198 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 366 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.8 MiB/s wr, 261 op/s
Nov 29 03:14:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:42.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:43.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 392 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.1 MiB/s wr, 269 op/s
Nov 29 03:14:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:45 np0005539563 nova_compute[252253]: 2025-11-29 08:14:45.132 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:45.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.7 MiB/s wr, 380 op/s
Nov 29 03:14:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:46.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:47.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:47 np0005539563 nova_compute[252253]: 2025-11-29 08:14:47.198 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.0 MiB/s wr, 373 op/s
Nov 29 03:14:48 np0005539563 nova_compute[252253]: 2025-11-29 08:14:48.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:48 np0005539563 nova_compute[252253]: 2025-11-29 08:14:48.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:48 np0005539563 nova_compute[252253]: 2025-11-29 08:14:48.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:14:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 30806a42-8139-437a-af6f-161a937ab4a4 does not exist
Nov 29 03:14:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8a066124-a09f-41b2-a187-37ef70741273 does not exist
Nov 29 03:14:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7cbe1645-11d2-44ba-a944-e1f14d6e8b77 does not exist
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544943348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.133 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.0 MiB/s wr, 347 op/s
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.403948685 +0000 UTC m=+0.038442422 container create 8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:14:50 np0005539563 systemd[1]: Started libpod-conmon-8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa.scope.
Nov 29 03:14:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.386360548 +0000 UTC m=+0.020854305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.48863247 +0000 UTC m=+0.123126297 container init 8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.500901722 +0000 UTC m=+0.135395499 container start 8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.505566688 +0000 UTC m=+0.140060465 container attach 8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:50 np0005539563 systemd[1]: libpod-8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa.scope: Deactivated successfully.
Nov 29 03:14:50 np0005539563 jovial_buck[324431]: 167 167
Nov 29 03:14:50 np0005539563 conmon[324431]: conmon 8f2215e7d85317715803 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa.scope/container/memory.events
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.510085001 +0000 UTC m=+0.144578768 container died 8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ca40fffe1d064fb4fcd2faa0c95d8819ddd4852210d7fa9db94eda3f85899a47-merged.mount: Deactivated successfully.
Nov 29 03:14:50 np0005539563 podman[324415]: 2025-11-29 08:14:50.56840834 +0000 UTC m=+0.202902087 container remove 8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:14:50 np0005539563 systemd[1]: libpod-conmon-8f2215e7d85317715803b8a432bb9962af6c6d3d3f7fb6a8ead1cb004a2d13fa.scope: Deactivated successfully.
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.709 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.709 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:14:50 np0005539563 nova_compute[252253]: 2025-11-29 08:14:50.709 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:50 np0005539563 podman[324456]: 2025-11-29 08:14:50.764545275 +0000 UTC m=+0.040127028 container create 69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:14:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:50 np0005539563 systemd[1]: Started libpod-conmon-69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd.scope.
Nov 29 03:14:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:50.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:50 np0005539563 podman[324456]: 2025-11-29 08:14:50.748632344 +0000 UTC m=+0.024214097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1ac41a5d93da8df8d616ca0715e3613c89afbde91766ead770c78c0cd1b92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1ac41a5d93da8df8d616ca0715e3613c89afbde91766ead770c78c0cd1b92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1ac41a5d93da8df8d616ca0715e3613c89afbde91766ead770c78c0cd1b92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1ac41a5d93da8df8d616ca0715e3613c89afbde91766ead770c78c0cd1b92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab1ac41a5d93da8df8d616ca0715e3613c89afbde91766ead770c78c0cd1b92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:50 np0005539563 podman[324456]: 2025-11-29 08:14:50.87988948 +0000 UTC m=+0.155471243 container init 69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:14:50 np0005539563 podman[324456]: 2025-11-29 08:14:50.888562315 +0000 UTC m=+0.164144058 container start 69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:14:50 np0005539563 podman[324456]: 2025-11-29 08:14:50.89208249 +0000 UTC m=+0.167664263 container attach 69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_turing, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:14:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882917752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.158 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.238 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.238 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.241 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.241 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.402 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.403 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3951MB free_disk=20.79470443725586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.404 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.404 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.547 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98453ec7-fbda-42ae-8624-8aa5921fd634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.547 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.548 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.548 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.608 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:51 np0005539563 confident_turing[324473]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:14:51 np0005539563 confident_turing[324473]: --> relative data size: 1.0
Nov 29 03:14:51 np0005539563 confident_turing[324473]: --> All data devices are unavailable
Nov 29 03:14:51 np0005539563 systemd[1]: libpod-69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd.scope: Deactivated successfully.
Nov 29 03:14:51 np0005539563 conmon[324473]: conmon 69ebb4d5fed72fd6dadb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd.scope/container/memory.events
Nov 29 03:14:51 np0005539563 podman[324581]: 2025-11-29 08:14:51.845327807 +0000 UTC m=+0.027075614 container died 69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_turing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:14:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3ab1ac41a5d93da8df8d616ca0715e3613c89afbde91766ead770c78c0cd1b92-merged.mount: Deactivated successfully.
Nov 29 03:14:51 np0005539563 podman[324581]: 2025-11-29 08:14:51.903091302 +0000 UTC m=+0.084839079 container remove 69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:14:51 np0005539563 systemd[1]: libpod-conmon-69ebb4d5fed72fd6dadb2f8222bb4a7e4cba944774ffa4e9a3eceeff649c96fd.scope: Deactivated successfully.
Nov 29 03:14:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:51.971 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:51.972 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:14:51 np0005539563 nova_compute[252253]: 2025-11-29 08:14:51.972 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868820674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:52 np0005539563 nova_compute[252253]: 2025-11-29 08:14:52.105 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:52 np0005539563 nova_compute[252253]: 2025-11-29 08:14:52.113 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:52 np0005539563 nova_compute[252253]: 2025-11-29 08:14:52.133 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:52 np0005539563 nova_compute[252253]: 2025-11-29 08:14:52.163 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:14:52 np0005539563 nova_compute[252253]: 2025-11-29 08:14:52.163 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:52 np0005539563 nova_compute[252253]: 2025-11-29 08:14:52.201 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.5 MiB/s wr, 269 op/s
Nov 29 03:14:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.486194031 +0000 UTC m=+0.034873776 container create 2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ramanujan, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:14:52 np0005539563 systemd[1]: Started libpod-conmon-2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3.scope.
Nov 29 03:14:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.55222366 +0000 UTC m=+0.100903415 container init 2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.558376506 +0000 UTC m=+0.107056261 container start 2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:14:52 np0005539563 compassionate_ramanujan[324755]: 167 167
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.56183037 +0000 UTC m=+0.110510125 container attach 2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:14:52 np0005539563 systemd[1]: libpod-2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3.scope: Deactivated successfully.
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.563554667 +0000 UTC m=+0.112234422 container died 2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.470539596 +0000 UTC m=+0.019219381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e17fe5df0cc52c582d705b4ca39e30b9e70b5a560642c1ad2c86d6cea6d460c6-merged.mount: Deactivated successfully.
Nov 29 03:14:52 np0005539563 podman[324738]: 2025-11-29 08:14:52.605351739 +0000 UTC m=+0.154031494 container remove 2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:14:52 np0005539563 systemd[1]: libpod-conmon-2ae8032b8c6046e7f1556aabe06fd1d0af1eacabb48255058c96482662dc5cf3.scope: Deactivated successfully.
Nov 29 03:14:52 np0005539563 podman[324778]: 2025-11-29 08:14:52.796482007 +0000 UTC m=+0.066600125 container create 92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:14:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:52 np0005539563 podman[324778]: 2025-11-29 08:14:52.755699242 +0000 UTC m=+0.025817450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:52 np0005539563 systemd[1]: Started libpod-conmon-92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce.scope.
Nov 29 03:14:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052abc003e9be7beda5afff912cfdf29734be67db3b3ed0f016d75de603c408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052abc003e9be7beda5afff912cfdf29734be67db3b3ed0f016d75de603c408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052abc003e9be7beda5afff912cfdf29734be67db3b3ed0f016d75de603c408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052abc003e9be7beda5afff912cfdf29734be67db3b3ed0f016d75de603c408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:52 np0005539563 podman[324778]: 2025-11-29 08:14:52.936514671 +0000 UTC m=+0.206632789 container init 92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:14:52 np0005539563 podman[324778]: 2025-11-29 08:14:52.945985598 +0000 UTC m=+0.216103686 container start 92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:14:52 np0005539563 podman[324778]: 2025-11-29 08:14:52.949705509 +0000 UTC m=+0.219823627 container attach 92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:14:53 np0005539563 nova_compute[252253]: 2025-11-29 08:14:53.165 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:53 np0005539563 nova_compute[252253]: 2025-11-29 08:14:53.168 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:14:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:53 np0005539563 nova_compute[252253]: 2025-11-29 08:14:53.213 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:14:53 np0005539563 nova_compute[252253]: 2025-11-29 08:14:53.214 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:53 np0005539563 nova_compute[252253]: 2025-11-29 08:14:53.214 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]: {
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:    "0": [
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:        {
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "devices": [
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "/dev/loop3"
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            ],
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "lv_name": "ceph_lv0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "lv_size": "7511998464",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "name": "ceph_lv0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "tags": {
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.cluster_name": "ceph",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.crush_device_class": "",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.encrypted": "0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.osd_id": "0",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.type": "block",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:                "ceph.vdo": "0"
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            },
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "type": "block",
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:            "vg_name": "ceph_vg0"
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:        }
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]:    ]
Nov 29 03:14:53 np0005539563 adoring_dhawan[324795]: }
Nov 29 03:14:53 np0005539563 systemd[1]: libpod-92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce.scope: Deactivated successfully.
Nov 29 03:14:53 np0005539563 podman[324778]: 2025-11-29 08:14:53.816276527 +0000 UTC m=+1.086394635 container died 92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:14:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6052abc003e9be7beda5afff912cfdf29734be67db3b3ed0f016d75de603c408-merged.mount: Deactivated successfully.
Nov 29 03:14:53 np0005539563 podman[324778]: 2025-11-29 08:14:53.899184713 +0000 UTC m=+1.169302821 container remove 92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:14:53 np0005539563 systemd[1]: libpod-conmon-92f89f97ba689adffcdb2d6aee8c0b0db2be60fb1f6ebbf1fb81b5df2ff761ce.scope: Deactivated successfully.
Nov 29 03:14:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.3 MiB/s wr, 281 op/s
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.757460937 +0000 UTC m=+0.055536376 container create a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:14:54 np0005539563 systemd[1]: Started libpod-conmon-a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940.scope.
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.727348381 +0000 UTC m=+0.025423820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:54.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:54Z|00452|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:14:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:14:54Z|00453|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:14:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:54 np0005539563 nova_compute[252253]: 2025-11-29 08:14:54.896 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:54 np0005539563 podman[324971]: 2025-11-29 08:14:54.902344493 +0000 UTC m=+0.083378100 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.903095553 +0000 UTC m=+0.201170972 container init a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_banach, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:14:54 np0005539563 podman[324974]: 2025-11-29 08:14:54.905362355 +0000 UTC m=+0.085360065 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.915154329 +0000 UTC m=+0.213229728 container start a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.919152278 +0000 UTC m=+0.217227687 container attach a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:14:54 np0005539563 laughing_banach[324976]: 167 167
Nov 29 03:14:54 np0005539563 systemd[1]: libpod-a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940.scope: Deactivated successfully.
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.923482655 +0000 UTC m=+0.221558064 container died a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_banach, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:54 np0005539563 podman[324975]: 2025-11-29 08:14:54.942425199 +0000 UTC m=+0.124093203 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:14:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-17d3f6e13b68f19abc528d5e2fea7089f08d54fe7f399fc2eee5834c374b9288-merged.mount: Deactivated successfully.
Nov 29 03:14:54 np0005539563 podman[324957]: 2025-11-29 08:14:54.971926947 +0000 UTC m=+0.270002346 container remove a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:14:54.974 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:54 np0005539563 systemd[1]: libpod-conmon-a5d65309667fbac73ea70ea5b16c81c5a572fd5b52dbed4e000c25dbdd483940.scope: Deactivated successfully.
Nov 29 03:14:55 np0005539563 nova_compute[252253]: 2025-11-29 08:14:55.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:55 np0005539563 podman[325058]: 2025-11-29 08:14:55.171419163 +0000 UTC m=+0.046916732 container create 778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:14:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:55 np0005539563 podman[325058]: 2025-11-29 08:14:55.151525423 +0000 UTC m=+0.027022992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:55 np0005539563 systemd[1]: Started libpod-conmon-778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad.scope.
Nov 29 03:14:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:14:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16429b2f3db10c8c775bc4c5068ba44d73fb690302b74cf7af20b36b6d66408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16429b2f3db10c8c775bc4c5068ba44d73fb690302b74cf7af20b36b6d66408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16429b2f3db10c8c775bc4c5068ba44d73fb690302b74cf7af20b36b6d66408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16429b2f3db10c8c775bc4c5068ba44d73fb690302b74cf7af20b36b6d66408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539563 podman[325058]: 2025-11-29 08:14:55.560246867 +0000 UTC m=+0.435744426 container init 778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:14:55 np0005539563 podman[325058]: 2025-11-29 08:14:55.566867647 +0000 UTC m=+0.442365206 container start 778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:14:55 np0005539563 podman[325058]: 2025-11-29 08:14:55.570044183 +0000 UTC m=+0.445541742 container attach 778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:14:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 228 op/s
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]: {
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:        "osd_id": 0,
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:        "type": "bluestore"
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]:    }
Nov 29 03:14:56 np0005539563 elastic_dewdney[325074]: }
Nov 29 03:14:56 np0005539563 systemd[1]: libpod-778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad.scope: Deactivated successfully.
Nov 29 03:14:56 np0005539563 podman[325058]: 2025-11-29 08:14:56.418166562 +0000 UTC m=+1.293664121 container died 778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dewdney, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:14:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e16429b2f3db10c8c775bc4c5068ba44d73fb690302b74cf7af20b36b6d66408-merged.mount: Deactivated successfully.
Nov 29 03:14:56 np0005539563 podman[325058]: 2025-11-29 08:14:56.722188398 +0000 UTC m=+1.597685947 container remove 778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dewdney, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:14:56 np0005539563 systemd[1]: libpod-conmon-778f914427b470f14e740c8dba12de6ee3ffbd2ebb5c3b9196defb1ccf5d0cad.scope: Deactivated successfully.
Nov 29 03:14:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:14:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:14:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:56 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f50c0ab6-1efe-4723-9a4b-71954e96742d does not exist
Nov 29 03:14:56 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 68206492-64b9-4bba-9d7c-9c36030e5d77 does not exist
Nov 29 03:14:56 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9b12d664-ed56-4e15-baa1-759c680c517d does not exist
Nov 29 03:14:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:56.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:14:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:14:57 np0005539563 nova_compute[252253]: 2025-11-29 08:14:57.203 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:14:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:57 np0005539563 nova_compute[252253]: 2025-11-29 08:14:57.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Nov 29 03:14:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:14:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:59.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:00 np0005539563 nova_compute[252253]: 2025-11-29 08:15:00.137 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Nov 29 03:15:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:00.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:02 np0005539563 nova_compute[252253]: 2025-11-29 08:15:02.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 849 KiB/s wr, 164 op/s
Nov 29 03:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:02 np0005539563 nova_compute[252253]: 2025-11-29 08:15:02.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:03 np0005539563 nova_compute[252253]: 2025-11-29 08:15:03.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 849 KiB/s wr, 165 op/s
Nov 29 03:15:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:04.921 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:04.921 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:04.922 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:05 np0005539563 nova_compute[252253]: 2025-11-29 08:15:05.139 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:05 np0005539563 nova_compute[252253]: 2025-11-29 08:15:05.931 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 153 op/s
Nov 29 03:15:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:07 np0005539563 nova_compute[252253]: 2025-11-29 08:15:07.208 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:07.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 962 KiB/s wr, 148 op/s
Nov 29 03:15:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:09.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:10 np0005539563 nova_compute[252253]: 2025-11-29 08:15:10.208 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 174 op/s
Nov 29 03:15:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:12 np0005539563 nova_compute[252253]: 2025-11-29 08:15:12.211 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Nov 29 03:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:15:12
Nov 29 03:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data']
Nov 29 03:15:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:13.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:15:13 np0005539563 nova_compute[252253]: 2025-11-29 08:15:13.811 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:15:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:15:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 513 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Nov 29 03:15:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 29 03:15:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 03:15:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 29 03:15:15 np0005539563 nova_compute[252253]: 2025-11-29 08:15:15.211 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 517 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Nov 29 03:15:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:17 np0005539563 nova_compute[252253]: 2025-11-29 08:15:17.213 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:17.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.076 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.077 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.100 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.202 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.202 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.210 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.210 252257 INFO nova.compute.claims [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:15:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 517 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.0 MiB/s wr, 145 op/s
Nov 29 03:15:18 np0005539563 nova_compute[252253]: 2025-11-29 08:15:18.362 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:18.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565780543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.028 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.036 252257 DEBUG nova.compute.provider_tree [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.063 252257 DEBUG nova.scheduler.client.report [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.097 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.098 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.165 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.166 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.170 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.170 252257 DEBUG nova.network.neutron [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.199 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.204 252257 INFO nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:15:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.251 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.339 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.340 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.341 252257 INFO nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Creating image(s)#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.382 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.415 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.454 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.459 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.503 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.504 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.516 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.516 252257 INFO nova.compute.claims [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.560 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.560 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.561 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.561 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.591 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.596 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.754 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:19 np0005539563 nova_compute[252253]: 2025-11-29 08:15:19.920 252257 DEBUG nova.policy [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca93c8e3eac142c0aa6b61807727dea2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.134 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377378725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.241 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 549 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.271 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] resizing rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.321 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.326 252257 DEBUG nova.compute.provider_tree [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.357 252257 DEBUG nova.scheduler.client.report [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.402 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.408 252257 DEBUG nova.objects.instance [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'migration_context' on Instance uuid 89ab44e3-7209-4a9c-b399-77cf74efb51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.451 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.451 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Ensure instance console log exists: /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.452 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.452 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.453 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.459 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "76d8311b-3355-433a-a713-1ca876b5fabf" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.460 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "76d8311b-3355-433a-a713-1ca876b5fabf" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.474 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "76d8311b-3355-433a-a713-1ca876b5fabf" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.475 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.516 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.517 252257 DEBUG nova.network.neutron [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.536 252257 INFO nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.552 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.689 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.692 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.693 252257 INFO nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Creating image(s)
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.735 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.766 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.797 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.801 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.841 252257 DEBUG nova.policy [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8dfd7d44b219445e87f720287c54e093', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e18e33f6f2ab40279f539fc9988c80d8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.843 252257 DEBUG nova.network.neutron [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Successfully created port: 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:15:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.883 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.884 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.884 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.884 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.912 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:20 np0005539563 nova_compute[252253]: 2025-11-29 08:15:20.917 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 34556945-6717-428b-937e-51175f19d32e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:21.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:21 np0005539563 nova_compute[252253]: 2025-11-29 08:15:21.429 252257 DEBUG nova.network.neutron [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Successfully created port: f958f665-0f7f-4f50-81e9-f9e540450f1d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:15:21 np0005539563 nova_compute[252253]: 2025-11-29 08:15:21.939 252257 DEBUG nova.network.neutron [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Successfully updated port: 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:15:21 np0005539563 nova_compute[252253]: 2025-11-29 08:15:21.961 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:21 np0005539563 nova_compute[252253]: 2025-11-29 08:15:21.962 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:21 np0005539563 nova_compute[252253]: 2025-11-29 08:15:21.962 252257 DEBUG nova.network.neutron [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.049 252257 DEBUG nova.compute.manager [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received event network-changed-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.050 252257 DEBUG nova.compute.manager [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Refreshing instance network info cache due to event network-changed-4d5a29f4-c628-4a0d-b707-82e46e56bbe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.051 252257 DEBUG oslo_concurrency.lockutils [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.117 252257 DEBUG nova.network.neutron [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.188 252257 DEBUG nova.network.neutron [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Successfully updated port: f958f665-0f7f-4f50-81e9-f9e540450f1d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.202 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "refresh_cache-34556945-6717-428b-937e-51175f19d32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.202 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquired lock "refresh_cache-34556945-6717-428b-937e-51175f19d32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.203 252257 DEBUG nova.network.neutron [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 549 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 29 03:15:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.301 252257 DEBUG nova.compute.manager [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-changed-f958f665-0f7f-4f50-81e9-f9e540450f1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.301 252257 DEBUG nova.compute.manager [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Refreshing instance network info cache due to event network-changed-f958f665-0f7f-4f50-81e9-f9e540450f1d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.301 252257 DEBUG oslo_concurrency.lockutils [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-34556945-6717-428b-937e-51175f19d32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.404 252257 DEBUG nova.network.neutron [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.703 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 34556945-6717-428b-937e-51175f19d32e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:22 np0005539563 nova_compute[252253]: 2025-11-29 08:15:22.804 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] resizing rbd image 34556945-6717-428b-937e-51175f19d32e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:15:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:23.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.267 252257 DEBUG nova.objects.instance [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lazy-loading 'migration_context' on Instance uuid 34556945-6717-428b-937e-51175f19d32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.283 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.284 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Ensure instance console log exists: /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.284 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.285 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.285 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0115564439675228 of space, bias 1.0, pg target 3.4669331902568397 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002166322061697376 of space, bias 1.0, pg target 0.6433976523241207 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.558 252257 DEBUG nova.network.neutron [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Updating instance_info_cache with network_info: [{"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.576 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.576 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance network_info: |[{"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.577 252257 DEBUG oslo_concurrency.lockutils [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.577 252257 DEBUG nova.network.neutron [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Refreshing network info cache for port 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.579 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Start _get_guest_xml network_info=[{"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.583 252257 WARNING nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.588 252257 DEBUG nova.virt.libvirt.host [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.589 252257 DEBUG nova.virt.libvirt.host [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.595 252257 DEBUG nova.virt.libvirt.host [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.596 252257 DEBUG nova.virt.libvirt.host [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.597 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.597 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.598 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.598 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.598 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.598 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.599 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.599 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.599 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.600 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.600 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.600 252257 DEBUG nova.virt.hardware [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:23 np0005539563 nova_compute[252253]: 2025-11-29 08:15:23.604 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 568 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Nov 29 03:15:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3343366361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.401 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.798s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.430 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.434 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1107604286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:24.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.871 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.873 252257 DEBUG nova.virt.libvirt.vif [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1616306290',display_name='tempest-ServerActionsTestOtherB-server-1616306290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1616306290',id=117,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-c0rhhm0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActio
nsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:19Z,user_data=None,user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=89ab44e3-7209-4a9c-b399-77cf74efb51c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.874 252257 DEBUG nova.network.os_vif_util [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.875 252257 DEBUG nova.network.os_vif_util [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.876 252257 DEBUG nova.objects.instance [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 89ab44e3-7209-4a9c-b399-77cf74efb51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.901 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <uuid>89ab44e3-7209-4a9c-b399-77cf74efb51c</uuid>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <name>instance-00000075</name>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerActionsTestOtherB-server-1616306290</nova:name>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:15:23</nova:creationTime>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:user uuid="ca93c8e3eac142c0aa6b61807727dea2">tempest-ServerActionsTestOtherB-325732369-project-member</nova:user>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:project uuid="ba867fac17034bb28fe2cdb0fff3af2b">tempest-ServerActionsTestOtherB-325732369</nova:project>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <nova:port uuid="4d5a29f4-c628-4a0d-b707-82e46e56bbe0">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <entry name="serial">89ab44e3-7209-4a9c-b399-77cf74efb51c</entry>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <entry name="uuid">89ab44e3-7209-4a9c-b399-77cf74efb51c</entry>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/89ab44e3-7209-4a9c-b399-77cf74efb51c_disk">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/89ab44e3-7209-4a9c-b399-77cf74efb51c_disk.config">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:5b:d2:45"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <target dev="tap4d5a29f4-c6"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/console.log" append="off"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:15:24 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:15:24 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:15:24 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:15:24 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.902 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Preparing to wait for external event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.902 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.902 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.903 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.903 252257 DEBUG nova.virt.libvirt.vif [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1616306290',display_name='tempest-ServerActionsTestOtherB-server-1616306290',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1616306290',id=117,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-c0rhhm0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:19Z,user_data=None,user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=89ab44e3-7209-4a9c-b399-77cf74efb51c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.903 252257 DEBUG nova.network.os_vif_util [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.904 252257 DEBUG nova.network.os_vif_util [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.904 252257 DEBUG os_vif [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.905 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.905 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.906 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.910 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.910 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5a29f4-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.910 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d5a29f4-c6, col_values=(('external_ids', {'iface-id': '4d5a29f4-c628-4a0d-b707-82e46e56bbe0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:d2:45', 'vm-uuid': '89ab44e3-7209-4a9c-b399-77cf74efb51c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:24 np0005539563 NetworkManager[48981]: <info>  [1764404124.9131] manager: (tap4d5a29f4-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.915 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.919 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539563 nova_compute[252253]: 2025-11-29 08:15:24.920 252257 INFO os_vif [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6')#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.021 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.021 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.030 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No VIF found with MAC fa:16:3e:5b:d2:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.030 252257 INFO nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Using config drive#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.124 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.138 252257 DEBUG nova.network.neutron [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Updating instance_info_cache with network_info: [{"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.163 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Releasing lock "refresh_cache-34556945-6717-428b-937e-51175f19d32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.163 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Instance network_info: |[{"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.163 252257 DEBUG oslo_concurrency.lockutils [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-34556945-6717-428b-937e-51175f19d32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.163 252257 DEBUG nova.network.neutron [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Refreshing network info cache for port f958f665-0f7f-4f50-81e9-f9e540450f1d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.166 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Start _get_guest_xml network_info=[{"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.169 252257 WARNING nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.173 252257 DEBUG nova.virt.libvirt.host [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.174 252257 DEBUG nova.virt.libvirt.host [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.180 252257 DEBUG nova.virt.libvirt.host [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.180 252257 DEBUG nova.virt.libvirt.host [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.181 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.181 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.182 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.182 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.182 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.182 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.183 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.183 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.183 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.183 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.183 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.184 252257 DEBUG nova.virt.hardware [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.186 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:25.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:25 np0005539563 podman[325702]: 2025-11-29 08:15:25.539136107 +0000 UTC m=+0.084717056 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:15:25 np0005539563 podman[325703]: 2025-11-29 08:15:25.544882763 +0000 UTC m=+0.085980191 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:15:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166702727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:25 np0005539563 podman[325704]: 2025-11-29 08:15:25.595805873 +0000 UTC m=+0.128753020 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.617 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.642 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:25 np0005539563 nova_compute[252253]: 2025-11-29 08:15:25.645 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/477985426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.125 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.127 252257 DEBUG nova.virt.libvirt.vif [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-78110649',display_name='tempest-ServerGroupTestJSON-server-78110649',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-78110649',id=118,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e18e33f6f2ab40279f539fc9988c80d8',ramdisk_id='',reservation_id='r-kbaxkwqj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-2078587833',owner_user_name='tempest-ServerGroupTestJSON-2078587833-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:20Z,user_data=None,user_id='8dfd7d44b219445e87f720287c54e093',uuid=34556945-6717-428b-937e-51175f19d32e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.128 252257 DEBUG nova.network.os_vif_util [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Converting VIF {"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.129 252257 DEBUG nova.network.os_vif_util [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.131 252257 DEBUG nova.objects.instance [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 34556945-6717-428b-937e-51175f19d32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.155 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <uuid>34556945-6717-428b-937e-51175f19d32e</uuid>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <name>instance-00000076</name>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerGroupTestJSON-server-78110649</nova:name>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:15:25</nova:creationTime>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:user uuid="8dfd7d44b219445e87f720287c54e093">tempest-ServerGroupTestJSON-2078587833-project-member</nova:user>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:project uuid="e18e33f6f2ab40279f539fc9988c80d8">tempest-ServerGroupTestJSON-2078587833</nova:project>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <nova:port uuid="f958f665-0f7f-4f50-81e9-f9e540450f1d">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <entry name="serial">34556945-6717-428b-937e-51175f19d32e</entry>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <entry name="uuid">34556945-6717-428b-937e-51175f19d32e</entry>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/34556945-6717-428b-937e-51175f19d32e_disk">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/34556945-6717-428b-937e-51175f19d32e_disk.config">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:3a:28:ca"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <target dev="tapf958f665-0f"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/console.log" append="off"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:15:26 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:15:26 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:15:26 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:15:26 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.155 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Preparing to wait for external event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.156 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.156 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.156 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.157 252257 DEBUG nova.virt.libvirt.vif [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-78110649',display_name='tempest-ServerGroupTestJSON-server-78110649',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-78110649',id=118,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e18e33f6f2ab40279f539fc9988c80d8',ramdisk_id='',reservation_id='r-kbaxkwqj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-2078587833',owner_user_name='tempest-ServerGroupTestJSON-2078587833-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:20Z,user_data=None,user_id='8dfd7d44b219445e87f720287c54e093',uuid=34556945-6717-428b-937e-51175f19d32e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.157 252257 DEBUG nova.network.os_vif_util [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Converting VIF {"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.158 252257 DEBUG nova.network.os_vif_util [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.159 252257 DEBUG os_vif [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.160 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.160 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.161 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.165 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf958f665-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.165 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf958f665-0f, col_values=(('external_ids', {'iface-id': 'f958f665-0f7f-4f50-81e9-f9e540450f1d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:28:ca', 'vm-uuid': '34556945-6717-428b-937e-51175f19d32e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539563 NetworkManager[48981]: <info>  [1764404126.1697] manager: (tapf958f665-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.171 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.179 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.180 252257 INFO os_vif [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f')#033[00m
Nov 29 03:15:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.293 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.293 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.293 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] No VIF found with MAC fa:16:3e:3a:28:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.294 252257 INFO nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Using config drive#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.320 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.548 252257 INFO nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Creating config drive at /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/disk.config#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.556 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_7u8_ssu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.714 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_7u8_ssu" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.742 252257 DEBUG nova.storage.rbd_utils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] rbd image 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:26 np0005539563 nova_compute[252253]: 2025-11-29 08:15:26.747 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/disk.config 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.355 252257 INFO nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Creating config drive at /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/disk.config#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.364 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf574frnl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.418 252257 DEBUG oslo_concurrency.processutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/disk.config 89ab44e3-7209-4a9c-b399-77cf74efb51c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.420 252257 INFO nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Deleting local config drive /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:27 np0005539563 kernel: tap4d5a29f4-c6: entered promiscuous mode
Nov 29 03:15:27 np0005539563 NetworkManager[48981]: <info>  [1764404127.4875] manager: (tap4d5a29f4-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/217)
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.496 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00454|binding|INFO|Claiming lport 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 for this chassis.
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00455|binding|INFO|4d5a29f4-c628-4a0d-b707-82e46e56bbe0: Claiming fa:16:3e:5b:d2:45 10.100.0.4
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.507 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:d2:45 10.100.0.4'], port_security=['fa:16:3e:5b:d2:45 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '89ab44e3-7209-4a9c-b399-77cf74efb51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35298f43-8419-4a47-81fd-585bfb137a9a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=4d5a29f4-c628-4a0d-b707-82e46e56bbe0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.510 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 bound to our chassis#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.513 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.514 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf574frnl" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00456|binding|INFO|Setting lport 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 ovn-installed in OVS
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00457|binding|INFO|Setting lport 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 up in Southbound
Nov 29 03:15:27 np0005539563 systemd-udevd[325885]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:27 np0005539563 systemd-machined[213024]: New machine qemu-53-instance-00000075.
Nov 29 03:15:27 np0005539563 NetworkManager[48981]: <info>  [1764404127.5421] device (tap4d5a29f4-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.541 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[803e990e-6052-4e62-a346-f9f8316d265e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 NetworkManager[48981]: <info>  [1764404127.5456] device (tap4d5a29f4-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:27 np0005539563 systemd[1]: Started Virtual Machine qemu-53-instance-00000075.
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.563 252257 DEBUG nova.storage.rbd_utils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] rbd image 34556945-6717-428b-937e-51175f19d32e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.567 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/disk.config 34556945-6717-428b-937e-51175f19d32e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.592 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[441415f9-2854-412a-912b-80565a42b152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.596 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9c3780-12d1-4d3b-9b4d-9f3621338d77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.627 252257 DEBUG nova.network.neutron [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Updated VIF entry in instance network info cache for port 4d5a29f4-c628-4a0d-b707-82e46e56bbe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.628 252257 DEBUG nova.network.neutron [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Updating instance_info_cache with network_info: [{"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.639 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5db4c6-c99c-4852-af3e-eed12aadebf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.652 252257 DEBUG oslo_concurrency.lockutils [req-ffe3423c-f144-4634-8786-6a772ab6a755 req-01a32878-2b13-4d3c-9f04-a314dce6d4e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.659 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[40d499a4-6272-4a74-99d3-7405262b9a93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686554, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325933, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.674 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd8e750-9dfe-44ec-91c3-970c4c94e4d1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686571, 'tstamp': 686571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325934, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686575, 'tstamp': 686575}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325934, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.676 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.733 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.740 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.740 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.741 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.741 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.742 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.833 252257 DEBUG oslo_concurrency.processutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/disk.config 34556945-6717-428b-937e-51175f19d32e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.834 252257 INFO nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Deleting local config drive /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:15:27 np0005539563 kernel: tapf958f665-0f: entered promiscuous mode
Nov 29 03:15:27 np0005539563 systemd-udevd[325902]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:27 np0005539563 NetworkManager[48981]: <info>  [1764404127.8887] manager: (tapf958f665-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00458|binding|INFO|Claiming lport f958f665-0f7f-4f50-81e9-f9e540450f1d for this chassis.
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00459|binding|INFO|f958f665-0f7f-4f50-81e9-f9e540450f1d: Claiming fa:16:3e:3a:28:ca 10.100.0.7
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.899 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:28:ca 10.100.0.7'], port_security=['fa:16:3e:3a:28:ca 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34556945-6717-428b-937e-51175f19d32e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e18e33f6f2ab40279f539fc9988c80d8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd42e2e3d-053f-434d-9e5a-eeacf778703b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa3770b3-85c8-41ed-a081-0fb789d4489b, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f958f665-0f7f-4f50-81e9-f9e540450f1d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.901 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f958f665-0f7f-4f50-81e9-f9e540450f1d in datapath 8c5ef17f-bb7c-4d36-834a-137dcbeed11e bound to our chassis#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.903 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c5ef17f-bb7c-4d36-834a-137dcbeed11e#033[00m
Nov 29 03:15:27 np0005539563 NetworkManager[48981]: <info>  [1764404127.9041] device (tapf958f665-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:27 np0005539563 NetworkManager[48981]: <info>  [1764404127.9053] device (tapf958f665-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00460|binding|INFO|Setting lport f958f665-0f7f-4f50-81e9-f9e540450f1d ovn-installed in OVS
Nov 29 03:15:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:27Z|00461|binding|INFO|Setting lport f958f665-0f7f-4f50-81e9-f9e540450f1d up in Southbound
Nov 29 03:15:27 np0005539563 nova_compute[252253]: 2025-11-29 08:15:27.919 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.919 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8dc09f-e79e-40d3-a527-9b54ea1ea203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.920 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c5ef17f-b1 in ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.923 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c5ef17f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.923 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[830332e4-046c-4990-851e-0d974cb55a2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.926 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4f6e70-6452-4f3b-86d0-29eb5e745efe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 systemd-machined[213024]: New machine qemu-54-instance-00000076.
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.940 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3a754af6-15fa-4da5-9350-9eabacfdf670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 systemd[1]: Started Virtual Machine qemu-54-instance-00000076.
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.958 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfc669b-18c6-482e-9937-d872fb3ecc1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:27.990 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7509ae1a-b63e-41ed-931e-ce367fa34d7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.005 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5b49c689-e976-4eb0-9a9f-5118cbb5d498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 NetworkManager[48981]: <info>  [1764404128.0068] manager: (tap8c5ef17f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.042 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf6a6cc-8fce-4552-b619-338f975294b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.046 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a76de057-59d1-4d60-8b64-17cc9b724c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 NetworkManager[48981]: <info>  [1764404128.0740] device (tap8c5ef17f-b0): carrier: link connected
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.080 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8c07e13c-b145-4f93-a2cb-e9ef5a2c96a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.098 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[457810d4-ec63-4b40-b6e3-e099eec594d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c5ef17f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:4e:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709584, 'reachable_time': 22772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326007, 'error': None, 'target': 'ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.115 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc69b73-7835-425a-962f-28df00fc4744]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:4e25'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709584, 'tstamp': 709584}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326017, 'error': None, 'target': 'ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.138 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff762bc-e0e1-4427-a7bc-61c8e9f30495]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c5ef17f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:4e:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709584, 'reachable_time': 22772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326020, 'error': None, 'target': 'ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951469474' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951469474' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.182 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0381be14-20f9-48ed-ba95-c5594b0aec12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.242 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404128.242138, 89ab44e3-7209-4a9c-b399-77cf74efb51c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.243 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.265 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.262 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81e5d89e-803b-4405-92e0-3d42d169d651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.273 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c5ef17f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.273 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.273 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c5ef17f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:28 np0005539563 NetworkManager[48981]: <info>  [1764404128.2765] manager: (tap8c5ef17f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 29 03:15:28 np0005539563 kernel: tap8c5ef17f-b0: entered promiscuous mode
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.280 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c5ef17f-b0, col_values=(('external_ids', {'iface-id': '561b4110-0431-4c2b-9ebd-3f9577fbc09b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:28 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:28Z|00462|binding|INFO|Releasing lport 561b4110-0431-4c2b-9ebd-3f9577fbc09b from this chassis (sb_readonly=0)
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.294 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404128.2448645, 89ab44e3-7209-4a9c-b399-77cf74efb51c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.295 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.301 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.302 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c5ef17f-bb7c-4d36-834a-137dcbeed11e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c5ef17f-bb7c-4d36-834a-137dcbeed11e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.303 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e90ca0-6d51-471d-9671-d5675fc5ccc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.304 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8c5ef17f-bb7c-4d36-834a-137dcbeed11e
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8c5ef17f-bb7c-4d36-834a-137dcbeed11e.pid.haproxy
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8c5ef17f-bb7c-4d36-834a-137dcbeed11e
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:15:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:28.304 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'env', 'PROCESS_TAG=haproxy-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c5ef17f-bb7c-4d36-834a-137dcbeed11e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.321 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.326 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.375 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.375 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404128.3419046, 34556945-6717-428b-937e-51175f19d32e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.375 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.400 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.405 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404128.3423886, 34556945-6717-428b-937e-51175f19d32e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.406 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.428 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.432 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:15:28 np0005539563 nova_compute[252253]: 2025-11-29 08:15:28.450 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:15:28 np0005539563 podman[326100]: 2025-11-29 08:15:28.717939072 +0000 UTC m=+0.078523318 container create ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:15:28 np0005539563 systemd[1]: Started libpod-conmon-ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2.scope.
Nov 29 03:15:28 np0005539563 podman[326100]: 2025-11-29 08:15:28.684037783 +0000 UTC m=+0.044622119 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 29 03:15:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 03:15:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54425a828ac19df4605c655dffe1f794adc8beb2819b09cdb74faf092226ea66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 29 03:15:28 np0005539563 podman[326100]: 2025-11-29 08:15:28.812626527 +0000 UTC m=+0.173210793 container init ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:15:28 np0005539563 podman[326100]: 2025-11-29 08:15:28.81790262 +0000 UTC m=+0.178486856 container start ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:15:28 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [NOTICE]   (326119) : New worker (326121) forked
Nov 29 03:15:28 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [NOTICE]   (326119) : Loading success.
Nov 29 03:15:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:29.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.624 252257 DEBUG nova.network.neutron [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Updated VIF entry in instance network info cache for port f958f665-0f7f-4f50-81e9-f9e540450f1d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.625 252257 DEBUG nova.network.neutron [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Updating instance_info_cache with network_info: [{"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.653 252257 DEBUG oslo_concurrency.lockutils [req-d1ec0942-883c-400e-8064-a6dfeeb7616d req-bc5fe8b5-167b-4ac0-a5dd-acd60a1cdc9b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-34556945-6717-428b-937e-51175f19d32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.894 252257 DEBUG nova.compute.manager [req-5360297b-1cd0-4ac7-8625-6a8a71764e5e req-f7c31c9f-93fe-4213-a00d-bb9d65f933f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.894 252257 DEBUG oslo_concurrency.lockutils [req-5360297b-1cd0-4ac7-8625-6a8a71764e5e req-f7c31c9f-93fe-4213-a00d-bb9d65f933f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.894 252257 DEBUG oslo_concurrency.lockutils [req-5360297b-1cd0-4ac7-8625-6a8a71764e5e req-f7c31c9f-93fe-4213-a00d-bb9d65f933f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.895 252257 DEBUG oslo_concurrency.lockutils [req-5360297b-1cd0-4ac7-8625-6a8a71764e5e req-f7c31c9f-93fe-4213-a00d-bb9d65f933f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.895 252257 DEBUG nova.compute.manager [req-5360297b-1cd0-4ac7-8625-6a8a71764e5e req-f7c31c9f-93fe-4213-a00d-bb9d65f933f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Processing event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.895 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.901 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404129.9007707, 89ab44e3-7209-4a9c-b399-77cf74efb51c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.901 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] VM Resumed (Lifecycle Event)
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.903 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.906 252257 INFO nova.virt.libvirt.driver [-] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance spawned successfully.
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.907 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.931 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.936 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.937 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.937 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.938 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.938 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.938 252257 DEBUG nova.virt.libvirt.driver [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:29 np0005539563 nova_compute[252253]: 2025-11-29 08:15:29.943 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:30 np0005539563 nova_compute[252253]: 2025-11-29 08:15:30.215 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:15:30 np0005539563 nova_compute[252253]: 2025-11-29 08:15:30.241 252257 INFO nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Took 10.90 seconds to spawn the instance on the hypervisor.
Nov 29 03:15:30 np0005539563 nova_compute[252253]: 2025-11-29 08:15:30.242 252257 DEBUG nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Nov 29 03:15:30 np0005539563 nova_compute[252253]: 2025-11-29 08:15:30.315 252257 INFO nova.compute.manager [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Took 12.14 seconds to build instance.
Nov 29 03:15:30 np0005539563 nova_compute[252253]: 2025-11-29 08:15:30.344 252257 DEBUG oslo_concurrency.lockutils [None req-8baf1c8f-8fbf-4c07-a07c-462993cc6f85 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:30.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:31 np0005539563 nova_compute[252253]: 2025-11-29 08:15:31.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.016 252257 DEBUG nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.016 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.016 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.016 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.016 252257 DEBUG nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] No waiting events found dispatching network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.017 252257 WARNING nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received unexpected event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 for instance with vm_state active and task_state pausing.
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.017 252257 DEBUG nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.017 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.017 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.017 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.017 252257 DEBUG nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Processing event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.018 252257 DEBUG nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.018 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.018 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.018 252257 DEBUG oslo_concurrency.lockutils [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.018 252257 DEBUG nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] No waiting events found dispatching network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.018 252257 WARNING nova.compute.manager [req-2d168757-986f-4a37-84d8-cb0488770da2 req-af1a931e-f8f2-4758-8cea-24276b409198 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received unexpected event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d for instance with vm_state building and task_state spawning.
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.019 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.027 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.031 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404132.028266, 34556945-6717-428b-937e-51175f19d32e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.031 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] VM Resumed (Lifecycle Event)
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.033 252257 INFO nova.compute.manager [None req-5605e46d-e588-461a-b33d-34b21b0e8afc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Pausing
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.034 252257 DEBUG nova.objects.instance [None req-5605e46d-e588-461a-b33d-34b21b0e8afc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'flavor' on Instance uuid 89ab44e3-7209-4a9c-b399-77cf74efb51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.038 252257 INFO nova.virt.libvirt.driver [-] [instance: 34556945-6717-428b-937e-51175f19d32e] Instance spawned successfully.
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.039 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.085 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.091 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.093 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.094 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.094 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.095 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.095 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.095 252257 DEBUG nova.virt.libvirt.driver [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.105 252257 DEBUG nova.compute.manager [None req-5605e46d-e588-461a-b33d-34b21b0e8afc ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.142 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.143 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404132.104514, 89ab44e3-7209-4a9c-b399-77cf74efb51c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.143 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] VM Paused (Lifecycle Event)
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.178 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.182 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.212 252257 INFO nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Took 11.52 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.213 252257 DEBUG nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.221 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.267 252257 INFO nova.compute.manager [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Took 12.96 seconds to build instance.#033[00m
Nov 29 03:15:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Nov 29 03:15:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:32 np0005539563 nova_compute[252253]: 2025-11-29 08:15:32.284 252257 DEBUG oslo_concurrency.lockutils [None req-a67fb75b-2017-4f3f-ab9b-e9db0d6f4e44 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 241 op/s
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.871 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.871 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.871 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.872 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.872 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.873 252257 INFO nova.compute.manager [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Terminating instance#033[00m
Nov 29 03:15:34 np0005539563 nova_compute[252253]: 2025-11-29 08:15:34.873 252257 DEBUG nova.compute.manager [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:34 np0005539563 kernel: tapf958f665-0f (unregistering): left promiscuous mode
Nov 29 03:15:35 np0005539563 NetworkManager[48981]: <info>  [1764404135.0030] device (tapf958f665-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:35Z|00463|binding|INFO|Releasing lport f958f665-0f7f-4f50-81e9-f9e540450f1d from this chassis (sb_readonly=0)
Nov 29 03:15:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:35Z|00464|binding|INFO|Setting lport f958f665-0f7f-4f50-81e9-f9e540450f1d down in Southbound
Nov 29 03:15:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:35Z|00465|binding|INFO|Removing iface tapf958f665-0f ovn-installed in OVS
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.013 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.015 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.021 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:28:ca 10.100.0.7'], port_security=['fa:16:3e:3a:28:ca 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34556945-6717-428b-937e-51175f19d32e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e18e33f6f2ab40279f539fc9988c80d8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd42e2e3d-053f-434d-9e5a-eeacf778703b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa3770b3-85c8-41ed-a081-0fb789d4489b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f958f665-0f7f-4f50-81e9-f9e540450f1d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.023 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f958f665-0f7f-4f50-81e9-f9e540450f1d in datapath 8c5ef17f-bb7c-4d36-834a-137dcbeed11e unbound from our chassis#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.024 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c5ef17f-bb7c-4d36-834a-137dcbeed11e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.025 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7771a1-cf11-4bbd-8d8b-a48a9db6a7fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.025 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e namespace which is not needed anymore#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.039 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 29 03:15:35 np0005539563 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Consumed 3.306s CPU time.
Nov 29 03:15:35 np0005539563 systemd-machined[213024]: Machine qemu-54-instance-00000076 terminated.
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.095 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.115 252257 INFO nova.virt.libvirt.driver [-] [instance: 34556945-6717-428b-937e-51175f19d32e] Instance destroyed successfully.#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.115 252257 DEBUG nova.objects.instance [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lazy-loading 'resources' on Instance uuid 34556945-6717-428b-937e-51175f19d32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.131 252257 DEBUG nova.virt.libvirt.vif [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-78110649',display_name='tempest-ServerGroupTestJSON-server-78110649',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-78110649',id=118,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e18e33f6f2ab40279f539fc9988c80d8',ramdisk_id='',reservation_id='r-kbaxkwqj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-2078587833',owner_user_name='tempest-ServerGroupTestJSON-2078587833-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:32Z,user_data=None,user_id='8dfd7d44b219445e87f720287c54e093',uuid=34556945-6717-428b-937e-51175f19d32e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.131 252257 DEBUG nova.network.os_vif_util [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Converting VIF {"id": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "address": "fa:16:3e:3a:28:ca", "network": {"id": "8c5ef17f-bb7c-4d36-834a-137dcbeed11e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1294590021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e18e33f6f2ab40279f539fc9988c80d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf958f665-0f", "ovs_interfaceid": "f958f665-0f7f-4f50-81e9-f9e540450f1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.132 252257 DEBUG nova.network.os_vif_util [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.132 252257 DEBUG os_vif [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.135 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf958f665-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.142 252257 INFO os_vif [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:28:ca,bridge_name='br-int',has_traffic_filtering=True,id=f958f665-0f7f-4f50-81e9-f9e540450f1d,network=Network(8c5ef17f-bb7c-4d36-834a-137dcbeed11e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf958f665-0f')#033[00m
Nov 29 03:15:35 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [NOTICE]   (326119) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:35 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [NOTICE]   (326119) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:35 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [WARNING]  (326119) : Exiting Master process...
Nov 29 03:15:35 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [ALERT]    (326119) : Current worker (326121) exited with code 143 (Terminated)
Nov 29 03:15:35 np0005539563 neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e[326115]: [WARNING]  (326119) : All workers exited. Exiting... (0)
Nov 29 03:15:35 np0005539563 systemd[1]: libpod-ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2.scope: Deactivated successfully.
Nov 29 03:15:35 np0005539563 podman[326215]: 2025-11-29 08:15:35.182606922 +0000 UTC m=+0.050802937 container died ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:15:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-54425a828ac19df4605c655dffe1f794adc8beb2819b09cdb74faf092226ea66-merged.mount: Deactivated successfully.
Nov 29 03:15:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.670 252257 DEBUG nova.compute.manager [req-ed699352-3c88-4ac2-82f2-d35451ae01a7 req-e977773e-ed66-43c4-a53b-e4158e80dc3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-vif-unplugged-f958f665-0f7f-4f50-81e9-f9e540450f1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.671 252257 DEBUG oslo_concurrency.lockutils [req-ed699352-3c88-4ac2-82f2-d35451ae01a7 req-e977773e-ed66-43c4-a53b-e4158e80dc3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.672 252257 DEBUG oslo_concurrency.lockutils [req-ed699352-3c88-4ac2-82f2-d35451ae01a7 req-e977773e-ed66-43c4-a53b-e4158e80dc3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.673 252257 DEBUG oslo_concurrency.lockutils [req-ed699352-3c88-4ac2-82f2-d35451ae01a7 req-e977773e-ed66-43c4-a53b-e4158e80dc3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.673 252257 DEBUG nova.compute.manager [req-ed699352-3c88-4ac2-82f2-d35451ae01a7 req-e977773e-ed66-43c4-a53b-e4158e80dc3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] No waiting events found dispatching network-vif-unplugged-f958f665-0f7f-4f50-81e9-f9e540450f1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.674 252257 DEBUG nova.compute.manager [req-ed699352-3c88-4ac2-82f2-d35451ae01a7 req-e977773e-ed66-43c4-a53b-e4158e80dc3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-vif-unplugged-f958f665-0f7f-4f50-81e9-f9e540450f1d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:15:35 np0005539563 podman[326215]: 2025-11-29 08:15:35.792171777 +0000 UTC m=+0.660367822 container cleanup ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:15:35 np0005539563 systemd[1]: libpod-conmon-ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2.scope: Deactivated successfully.
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.818 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.819 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.819 252257 INFO nova.compute.manager [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Shelving#033[00m
Nov 29 03:15:35 np0005539563 kernel: tap4d5a29f4-c6 (unregistering): left promiscuous mode
Nov 29 03:15:35 np0005539563 NetworkManager[48981]: <info>  [1764404135.8764] device (tap4d5a29f4-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:35 np0005539563 podman[326262]: 2025-11-29 08:15:35.886462661 +0000 UTC m=+0.064078266 container remove ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.887 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:35Z|00466|binding|INFO|Releasing lport 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 from this chassis (sb_readonly=0)
Nov 29 03:15:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:35Z|00467|binding|INFO|Setting lport 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 down in Southbound
Nov 29 03:15:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:35Z|00468|binding|INFO|Removing iface tap4d5a29f4-c6 ovn-installed in OVS
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.894 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[38edb840-e2e7-48d7-900a-76487a060da0]: (4, ('Sat Nov 29 08:15:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e (ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2)\nad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2\nSat Nov 29 08:15:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e (ad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2)\nad6fbb8935b28c89d938480ee28342f44c983c7ca6f43b2a64dfe23d0f7b76c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.897 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:d2:45 10.100.0.4'], port_security=['fa:16:3e:5b:d2:45 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '89ab44e3-7209-4a9c-b399-77cf74efb51c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35298f43-8419-4a47-81fd-585bfb137a9a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=4d5a29f4-c628-4a0d-b707-82e46e56bbe0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.899 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e592422a-8279-4738-8cc1-bfda50ba7e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.900 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c5ef17f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.903 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.915 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 kernel: tap8c5ef17f-b0: left promiscuous mode
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 nova_compute[252253]: 2025-11-29 08:15:35.934 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:35 np0005539563 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 29 03:15:35 np0005539563 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Consumed 2.925s CPU time.
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.938 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2e8bcc-c2c8-4804-ae66-a145fb1c6955]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 systemd-machined[213024]: Machine qemu-53-instance-00000075 terminated.
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.963 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8b7bd3-fd6c-4d7d-9401-70b6c9fdb943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.964 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8cff2a4b-d6f4-4428-9b13-ef58f1b8fb24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.978 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce948ec-3d9f-40ae-9dbc-89b3d7568949]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709575, 'reachable_time': 25930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326283, 'error': None, 'target': 'ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8c5ef17f\x2dbb7c\x2d4d36\x2d834a\x2d137dcbeed11e.mount: Deactivated successfully.
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.981 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c5ef17f-bb7c-4d36-834a-137dcbeed11e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.981 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cf451e-ad76-48a2-a855-bfa5391d7dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.981 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.983 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d5b8c11-b69e-4a74-846b-03943fb29a81#033[00m
Nov 29 03:15:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:35.996 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a542c45c-6044-40e1-bfb6-bf3411b05b43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.024 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8f21e5-4af9-416e-bc76-68834a144528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.026 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[346f3422-6396-426b-9019-7c474bf73478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.053 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[23b55ad3-01d9-438f-aa22-44bc5277d928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.073 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9109bc-d313-4f1d-a809-53f801bbbc9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d5b8c11-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:06:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686554, 'reachable_time': 18973, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326292, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.085 252257 INFO nova.virt.libvirt.driver [-] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance destroyed successfully.#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.085 252257 DEBUG nova.objects.instance [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'numa_topology' on Instance uuid 89ab44e3-7209-4a9c-b399-77cf74efb51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.097 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f653df84-5103-43d6-a809-ad7a506fd85f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686571, 'tstamp': 686571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326302, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d5b8c11-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 686575, 'tstamp': 686575}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326302, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.099 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.100 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.107 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.107 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d5b8c11-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.107 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.108 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d5b8c11-b0, col_values=(('external_ids', {'iface-id': 'a2e47e7a-aef0-4c09-aeef-4a0d63960d7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:36.108 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 32 KiB/s wr, 352 op/s
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.402 252257 INFO nova.virt.libvirt.driver [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Beginning cold snapshot process#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.563 252257 DEBUG nova.virt.libvirt.imagebackend [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.777 252257 INFO nova.virt.libvirt.driver [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Deleting instance files /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e_del#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.778 252257 INFO nova.virt.libvirt.driver [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Deletion of /var/lib/nova/instances/34556945-6717-428b-937e-51175f19d32e_del complete#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.876 252257 INFO nova.compute.manager [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Took 2.00 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.877 252257 DEBUG oslo.service.loopingcall [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.878 252257 DEBUG nova.compute.manager [-] [instance: 34556945-6717-428b-937e-51175f19d32e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.878 252257 DEBUG nova.network.neutron [-] [instance: 34556945-6717-428b-937e-51175f19d32e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:15:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:36.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:36 np0005539563 nova_compute[252253]: 2025-11-29 08:15:36.938 252257 DEBUG nova.storage.rbd_utils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(379b94d063df4f1d8e2ddd9664fbeeb1) on rbd image(89ab44e3-7209-4a9c-b399-77cf74efb51c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 29 03:15:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:37.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 03:15:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.328 252257 DEBUG nova.storage.rbd_utils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] cloning vms/89ab44e3-7209-4a9c-b399-77cf74efb51c_disk@379b94d063df4f1d8e2ddd9664fbeeb1 to images/a949de33-fe8e-409f-9e15-d50c466534f9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.469 252257 DEBUG nova.storage.rbd_utils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] flattening images/a949de33-fe8e-409f-9e15-d50c466534f9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:15:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:37.694 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:37.695 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.713 252257 DEBUG nova.network.neutron [-] [instance: 34556945-6717-428b-937e-51175f19d32e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.771 252257 INFO nova.compute.manager [-] [instance: 34556945-6717-428b-937e-51175f19d32e] Took 0.89 seconds to deallocate network for instance.#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.784 252257 DEBUG nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.784 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "34556945-6717-428b-937e-51175f19d32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.784 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.785 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "34556945-6717-428b-937e-51175f19d32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.785 252257 DEBUG nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] No waiting events found dispatching network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.785 252257 WARNING nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received unexpected event network-vif-plugged-f958f665-0f7f-4f50-81e9-f9e540450f1d for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.785 252257 DEBUG nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received event network-vif-unplugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.785 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.786 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.786 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.786 252257 DEBUG nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] No waiting events found dispatching network-vif-unplugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.786 252257 WARNING nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received unexpected event network-vif-unplugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.786 252257 DEBUG nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.787 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.787 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.787 252257 DEBUG oslo_concurrency.lockutils [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.787 252257 DEBUG nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] No waiting events found dispatching network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.787 252257 WARNING nova.compute.manager [req-9ad5c959-8bb8-4efb-b690-a8a43c025c6d req-89f62d24-2e1a-43e8-8a1d-294ce044ff0c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received unexpected event network-vif-plugged-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.790 252257 DEBUG nova.storage.rbd_utils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] removing snapshot(379b94d063df4f1d8e2ddd9664fbeeb1) on rbd image(89ab44e3-7209-4a9c-b399-77cf74efb51c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.816 252257 DEBUG nova.compute.manager [req-95c465a2-2f90-483f-baac-f1d7ebad8d40 req-ec45f82a-d271-488b-b97c-9995f483d326 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 34556945-6717-428b-937e-51175f19d32e] Received event network-vif-deleted-f958f665-0f7f-4f50-81e9-f9e540450f1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.834 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.835 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:37 np0005539563 nova_compute[252253]: 2025-11-29 08:15:37.981 252257 DEBUG oslo_concurrency.processutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 589 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 1020 KiB/s wr, 352 op/s
Nov 29 03:15:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 29 03:15:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 03:15:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.328 252257 DEBUG nova.storage.rbd_utils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] creating snapshot(snap) on rbd image(a949de33-fe8e-409f-9e15-d50c466534f9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:15:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621339171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.417 252257 DEBUG oslo_concurrency.processutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.424 252257 DEBUG nova.compute.provider_tree [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.449 252257 DEBUG nova.scheduler.client.report [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.469 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.491 252257 INFO nova.scheduler.client.report [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Deleted allocations for instance 34556945-6717-428b-937e-51175f19d32e#033[00m
Nov 29 03:15:38 np0005539563 nova_compute[252253]: 2025-11-29 08:15:38.543 252257 DEBUG oslo_concurrency.lockutils [None req-8bc7fa2c-d74f-4e5b-ac32-2ce54b2f731a 8dfd7d44b219445e87f720287c54e093 e18e33f6f2ab40279f539fc9988c80d8 - - default default] Lock "34556945-6717-428b-937e-51175f19d32e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:38.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:39.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 29 03:15:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 03:15:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 29 03:15:40 np0005539563 nova_compute[252253]: 2025-11-29 08:15:40.151 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 5.4 MiB/s wr, 468 op/s
Nov 29 03:15:40 np0005539563 nova_compute[252253]: 2025-11-29 08:15:40.746 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.226 252257 INFO nova.virt.libvirt.driver [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Snapshot image upload complete#033[00m
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.226 252257 DEBUG nova.compute.manager [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:41.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.280 252257 INFO nova.compute.manager [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Shelve offloading#033[00m
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.287 252257 INFO nova.virt.libvirt.driver [-] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance destroyed successfully.#033[00m
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.287 252257 DEBUG nova.compute.manager [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.290 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.290 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquired lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:41 np0005539563 nova_compute[252253]: 2025-11-29 08:15:41.290 252257 DEBUG nova.network.neutron [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:42 np0005539563 nova_compute[252253]: 2025-11-29 08:15:42.226 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.3 MiB/s wr, 373 op/s
Nov 29 03:15:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:42.697 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:43.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:43 np0005539563 nova_compute[252253]: 2025-11-29 08:15:43.552 252257 DEBUG nova.network.neutron [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Updating instance_info_cache with network_info: [{"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:43 np0005539563 nova_compute[252253]: 2025-11-29 08:15:43.566 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Releasing lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 272 op/s
Nov 29 03:15:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:44Z|00469|binding|INFO|Releasing lport 56facbc8-1a3f-4008-8f77-23eeac832994 from this chassis (sb_readonly=0)
Nov 29 03:15:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:44Z|00470|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:15:44 np0005539563 nova_compute[252253]: 2025-11-29 08:15:44.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.153 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:45.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.309 252257 INFO nova.virt.libvirt.driver [-] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Instance destroyed successfully.#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.309 252257 DEBUG nova.objects.instance [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 89ab44e3-7209-4a9c-b399-77cf74efb51c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.324 252257 DEBUG nova.virt.libvirt.vif [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1616306290',display_name='tempest-ServerActionsTestOtherB-server-1616306290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1616306290',id=117,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-c0rhhm0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member',shelved_at='2025-11-29T08:15:41.226774',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a949de33-fe8e-409f-9e15-d50c466534f9'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:36Z,user_data=None,user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=89ab44e3-7209-4a9c-b399-77cf74efb51c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.324 252257 DEBUG nova.network.os_vif_util [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.325 252257 DEBUG nova.network.os_vif_util [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.325 252257 DEBUG os_vif [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.327 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.327 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5a29f4-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.329 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.332 252257 INFO os_vif [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:d2:45,bridge_name='br-int',has_traffic_filtering=True,id=4d5a29f4-c628-4a0d-b707-82e46e56bbe0,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d5a29f4-c6')#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.445 252257 DEBUG nova.compute.manager [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Received event network-changed-4d5a29f4-c628-4a0d-b707-82e46e56bbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.446 252257 DEBUG nova.compute.manager [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Refreshing instance network info cache due to event network-changed-4d5a29f4-c628-4a0d-b707-82e46e56bbe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.447 252257 DEBUG oslo_concurrency.lockutils [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.447 252257 DEBUG oslo_concurrency.lockutils [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.447 252257 DEBUG nova.network.neutron [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Refreshing network info cache for port 4d5a29f4-c628-4a0d-b707-82e46e56bbe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.728 252257 INFO nova.virt.libvirt.driver [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Deleting instance files /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c_del#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.729 252257 INFO nova.virt.libvirt.driver [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Deletion of /var/lib/nova/instances/89ab44e3-7209-4a9c-b399-77cf74efb51c_del complete#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.877 252257 INFO nova.scheduler.client.report [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocations for instance 89ab44e3-7209-4a9c-b399-77cf74efb51c#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.928 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:45 np0005539563 nova_compute[252253]: 2025-11-29 08:15:45.929 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:46 np0005539563 nova_compute[252253]: 2025-11-29 08:15:46.012 252257 DEBUG oslo_concurrency.processutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 539 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 218 op/s
Nov 29 03:15:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761208253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:46 np0005539563 nova_compute[252253]: 2025-11-29 08:15:46.455 252257 DEBUG oslo_concurrency.processutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:46 np0005539563 nova_compute[252253]: 2025-11-29 08:15:46.460 252257 DEBUG nova.compute.provider_tree [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:46 np0005539563 nova_compute[252253]: 2025-11-29 08:15:46.480 252257 DEBUG nova.scheduler.client.report [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:46 np0005539563 nova_compute[252253]: 2025-11-29 08:15:46.513 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:46 np0005539563 nova_compute[252253]: 2025-11-29 08:15:46.553 252257 DEBUG oslo_concurrency.lockutils [None req-74e8ca22-d937-4ed4-a865-388a4ac9e5a5 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "89ab44e3-7209-4a9c-b399-77cf74efb51c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 10.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:46.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:47 np0005539563 nova_compute[252253]: 2025-11-29 08:15:47.226 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:47.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 29 03:15:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 03:15:47 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 29 03:15:47 np0005539563 nova_compute[252253]: 2025-11-29 08:15:47.474 252257 DEBUG nova.network.neutron [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Updated VIF entry in instance network info cache for port 4d5a29f4-c628-4a0d-b707-82e46e56bbe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:15:47 np0005539563 nova_compute[252253]: 2025-11-29 08:15:47.475 252257 DEBUG nova.network.neutron [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Updating instance_info_cache with network_info: [{"id": "4d5a29f4-c628-4a0d-b707-82e46e56bbe0", "address": "fa:16:3e:5b:d2:45", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": null, "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap4d5a29f4-c6", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:47 np0005539563 nova_compute[252253]: 2025-11-29 08:15:47.496 252257 DEBUG oslo_concurrency.lockutils [req-0ed91c1a-b9ba-44bb-b5e9-7b807e9b9507 req-dfaeb1d9-2537-4399-acfd-d68496243e06 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-89ab44e3-7209-4a9c-b399-77cf74efb51c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Nov 29 03:15:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:49.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539563 nova_compute[252253]: 2025-11-29 08:15:49.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:49 np0005539563 nova_compute[252253]: 2025-11-29 08:15:49.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.114 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404135.1129172, 34556945-6717-428b-937e-51175f19d32e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.114 252257 INFO nova.compute.manager [-] [instance: 34556945-6717-428b-937e-51175f19d32e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.135 252257 DEBUG nova.compute.manager [None req-dc313e30-c2ba-42fd-baa4-1014abfa6bc1 - - - - - -] [instance: 34556945-6717-428b-937e-51175f19d32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.329 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.695 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.696 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.696 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.696 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:15:50 np0005539563 nova_compute[252253]: 2025-11-29 08:15:50.697 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.084 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404136.0830925, 89ab44e3-7209-4a9c-b399-77cf74efb51c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.085 252257 INFO nova.compute.manager [-] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.123 252257 DEBUG nova.compute.manager [None req-88379fe4-b50b-46eb-bfc1-d46374aaa85e - - - - - -] [instance: 89ab44e3-7209-4a9c-b399-77cf74efb51c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:15:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:51.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458064120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.537 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.616 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.616 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.620 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.621 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.785 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.786 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3963MB free_disk=20.78518295288086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.786 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.787 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.869 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98453ec7-fbda-42ae-8624-8aa5921fd634 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.870 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.870 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.870 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:15:51 np0005539563 nova_compute[252253]: 2025-11-29 08:15:51.935 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:52 np0005539563 nova_compute[252253]: 2025-11-29 08:15:52.227 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Nov 29 03:15:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877820590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:52 np0005539563 nova_compute[252253]: 2025-11-29 08:15:52.389 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:52 np0005539563 nova_compute[252253]: 2025-11-29 08:15:52.395 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:52 np0005539563 nova_compute[252253]: 2025-11-29 08:15:52.423 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:52 np0005539563 nova_compute[252253]: 2025-11-29 08:15:52.457 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:15:52 np0005539563 nova_compute[252253]: 2025-11-29 08:15:52.458 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:52.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:53 np0005539563 nova_compute[252253]: 2025-11-29 08:15:53.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:53 np0005539563 nova_compute[252253]: 2025-11-29 08:15:53.458 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:15:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:54.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.923 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.924 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.925 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:15:54 np0005539563 nova_compute[252253]: 2025-11-29 08:15:54.925 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98453ec7-fbda-42ae-8624-8aa5921fd634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:55.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 29 03:15:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 03:15:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 29 03:15:55 np0005539563 nova_compute[252253]: 2025-11-29 08:15:55.331 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 180 op/s
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.373962) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156374111, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2209, "num_deletes": 255, "total_data_size": 3663943, "memory_usage": 3735648, "flush_reason": "Manual Compaction"}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156424024, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3564009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42694, "largest_seqno": 44902, "table_properties": {"data_size": 3554189, "index_size": 6122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21375, "raw_average_key_size": 20, "raw_value_size": 3534194, "raw_average_value_size": 3441, "num_data_blocks": 264, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403977, "oldest_key_time": 1764403977, "file_creation_time": 1764404156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 50184 microseconds, and 10480 cpu microseconds.
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.424215) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3564009 bytes OK
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.424288) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.426381) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.426401) EVENT_LOG_v1 {"time_micros": 1764404156426397, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.426420) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3654849, prev total WAL file size 3654849, number of live WAL files 2.
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.428125) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3480KB)], [92(10171KB)]
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156428216, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13979119, "oldest_snapshot_seqno": -1}
Nov 29 03:15:56 np0005539563 podman[326613]: 2025-11-29 08:15:56.509876571 +0000 UTC m=+0.061456076 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:15:56 np0005539563 podman[326614]: 2025-11-29 08:15:56.523549482 +0000 UTC m=+0.068849446 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.537 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.538 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.538 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.538 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.539 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.540 252257 INFO nova.compute.manager [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Terminating instance#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.541 252257 DEBUG nova.compute.manager [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:15:56 np0005539563 podman[326620]: 2025-11-29 08:15:56.545063775 +0000 UTC m=+0.085138417 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7808 keys, 12113285 bytes, temperature: kUnknown
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156601556, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 12113285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12060386, "index_size": 32307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 201969, "raw_average_key_size": 25, "raw_value_size": 11920314, "raw_average_value_size": 1526, "num_data_blocks": 1272, "num_entries": 7808, "num_filter_entries": 7808, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.601885) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12113285 bytes
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.603750) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.6 rd, 69.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 8336, records dropped: 528 output_compression: NoCompression
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.603766) EVENT_LOG_v1 {"time_micros": 1764404156603759, "job": 54, "event": "compaction_finished", "compaction_time_micros": 173450, "compaction_time_cpu_micros": 27534, "output_level": 6, "num_output_files": 1, "total_output_size": 12113285, "num_input_records": 8336, "num_output_records": 7808, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156604417, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404156606644, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.427891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.606700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.606704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.606707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.606708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:15:56.606710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:56 np0005539563 kernel: tapd83f3010-ca (unregistering): left promiscuous mode
Nov 29 03:15:56 np0005539563 NetworkManager[48981]: <info>  [1764404156.6445] device (tapd83f3010-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00471|binding|INFO|Releasing lport d83f3010-ca91-4737-a053-26c71234f9d9 from this chassis (sb_readonly=0)
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00472|binding|INFO|Setting lport d83f3010-ca91-4737-a053-26c71234f9d9 down in Southbound
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00473|binding|INFO|Removing iface tapd83f3010-ca ovn-installed in OVS
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.654 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.663 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:24:bf 10.100.0.5'], port_security=['fa:16:3e:0e:24:bf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '33cff286-3b50-41f5-9cb9-d4d98a1d3f88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d83f3010-ca91-4737-a053-26c71234f9d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.664 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d83f3010-ca91-4737-a053-26c71234f9d9 in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.665 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.667 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e2831e26-ff0c-4336-9832-8e9e44b5e875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.667 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 namespace which is not needed anymore#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 29 03:15:56 np0005539563 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000070.scope: Consumed 17.937s CPU time.
Nov 29 03:15:56 np0005539563 systemd-machined[213024]: Machine qemu-52-instance-00000070 terminated.
Nov 29 03:15:56 np0005539563 kernel: tapd83f3010-ca: entered promiscuous mode
Nov 29 03:15:56 np0005539563 kernel: tapd83f3010-ca (unregistering): left promiscuous mode
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00474|binding|INFO|Claiming lport d83f3010-ca91-4737-a053-26c71234f9d9 for this chassis.
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00475|binding|INFO|d83f3010-ca91-4737-a053-26c71234f9d9: Claiming fa:16:3e:0e:24:bf 10.100.0.5
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.763 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.769 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:24:bf 10.100.0.5'], port_security=['fa:16:3e:0e:24:bf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '33cff286-3b50-41f5-9cb9-d4d98a1d3f88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d83f3010-ca91-4737-a053-26c71234f9d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.784 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.785 252257 INFO nova.virt.libvirt.driver [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Instance destroyed successfully.#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.786 252257 DEBUG nova.objects.instance [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lazy-loading 'resources' on Instance uuid 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00476|binding|INFO|Setting lport d83f3010-ca91-4737-a053-26c71234f9d9 ovn-installed in OVS
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00477|binding|INFO|Setting lport d83f3010-ca91-4737-a053-26c71234f9d9 up in Southbound
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00478|binding|INFO|Releasing lport d83f3010-ca91-4737-a053-26c71234f9d9 from this chassis (sb_readonly=1)
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.787 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00479|if_status|INFO|Dropped 2 log messages in last 486 seconds (most recently, 486 seconds ago) due to excessive rate
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00480|if_status|INFO|Not setting lport d83f3010-ca91-4737-a053-26c71234f9d9 down as sb is readonly
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00481|binding|INFO|Removing iface tapd83f3010-ca ovn-installed in OVS
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.788 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00482|binding|INFO|Releasing lport d83f3010-ca91-4737-a053-26c71234f9d9 from this chassis (sb_readonly=0)
Nov 29 03:15:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:15:56Z|00483|binding|INFO|Setting lport d83f3010-ca91-4737-a053-26c71234f9d9 down in Southbound
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.796 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:24:bf 10.100.0.5'], port_security=['fa:16:3e:0e:24:bf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '33cff286-3b50-41f5-9cb9-d4d98a1d3f88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '250671461f27498d9f6b4476c7b69533', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a03133c-20d7-4b83-a65b-3860eafc9833, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d83f3010-ca91-4737-a053-26c71234f9d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.800 252257 DEBUG nova.virt.libvirt.vif [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2075577673',display_name='tempest-ServerActionsTestOtherA-server-2075577673',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2075577673',id=112,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='250671461f27498d9f6b4476c7b69533',ramdisk_id='',reservation_id='r-z80enzsc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-552273978',owner_user_name='tempest-ServerActionsTestOtherA-552273978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:27Z,user_data=None,user_id='58625e4c2b5d43a1abbab05b98853a65',uuid=33cff286-3b50-41f5-9cb9-d4d98a1d3f88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.801 252257 DEBUG nova.network.os_vif_util [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converting VIF {"id": "d83f3010-ca91-4737-a053-26c71234f9d9", "address": "fa:16:3e:0e:24:bf", "network": {"id": "10a9b8d1-2de6-4e47-8e44-16b661da8624", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-656101484-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "250671461f27498d9f6b4476c7b69533", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83f3010-ca", "ovs_interfaceid": "d83f3010-ca91-4737-a053-26c71234f9d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.802 252257 DEBUG nova.network.os_vif_util [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.802 252257 DEBUG os_vif [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.804 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd83f3010-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.807 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.811 252257 INFO os_vif [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:24:bf,bridge_name='br-int',has_traffic_filtering=True,id=d83f3010-ca91-4737-a053-26c71234f9d9,network=Network(10a9b8d1-2de6-4e47-8e44-16b661da8624),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83f3010-ca')#033[00m
Nov 29 03:15:56 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [NOTICE]   (324073) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:56 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [NOTICE]   (324073) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:56 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [WARNING]  (324073) : Exiting Master process...
Nov 29 03:15:56 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [WARNING]  (324073) : Exiting Master process...
Nov 29 03:15:56 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [ALERT]    (324073) : Current worker (324075) exited with code 143 (Terminated)
Nov 29 03:15:56 np0005539563 neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624[324068]: [WARNING]  (324073) : All workers exited. Exiting... (0)
Nov 29 03:15:56 np0005539563 systemd[1]: libpod-394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad.scope: Deactivated successfully.
Nov 29 03:15:56 np0005539563 podman[326696]: 2025-11-29 08:15:56.824214269 +0000 UTC m=+0.061427676 container died 394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:15:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-75312e7500a488a0414d685cb85dd5b36ef86edd3004ee28628ca29fe18607db-merged.mount: Deactivated successfully.
Nov 29 03:15:56 np0005539563 podman[326696]: 2025-11-29 08:15:56.8660053 +0000 UTC m=+0.103218707 container cleanup 394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:15:56 np0005539563 systemd[1]: libpod-conmon-394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad.scope: Deactivated successfully.
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.886 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updating instance_info_cache with network_info: [{"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.907 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-98453ec7-fbda-42ae-8624-8aa5921fd634" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.907 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.908 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:56 np0005539563 podman[326751]: 2025-11-29 08:15:56.927166707 +0000 UTC m=+0.039788959 container remove 394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.932 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c052ada-2118-427a-9f8a-e3f4f2966c3d]: (4, ('Sat Nov 29 08:15:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad)\n394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad\nSat Nov 29 08:15:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 (394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad)\n394b79a7edc51e026a3a5e6fbb7720e38b36f39fb0b3c8a2bb8977b9bdb59dad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.934 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[53ff7baa-db88-4be9-84b4-6cffb2db4a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.934 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10a9b8d1-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 kernel: tap10a9b8d1-20: left promiscuous mode
Nov 29 03:15:56 np0005539563 nova_compute[252253]: 2025-11-29 08:15:56.952 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.956 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ba61caa6-8d15-4cb8-bc6a-c8e8a5099628]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.978 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce14003-d3d4-45a9-bcb4-872b62dc6ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:56.980 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[366f9ad7-44af-4bf0-8cc0-8077b43de778]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.001 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d95aacb3-9079-46fe-b17d-d69f6a12c075]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703415, 'reachable_time': 35688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326766, 'error': None, 'target': 'ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:57 np0005539563 systemd[1]: run-netns-ovnmeta\x2d10a9b8d1\x2d2de6\x2d4e47\x2d8e44\x2d16b661da8624.mount: Deactivated successfully.
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.005 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-10a9b8d1-2de6-4e47-8e44-16b661da8624 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.005 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c373c342-5212-4e31-8e24-5200c1e2e91f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.006 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d83f3010-ca91-4737-a053-26c71234f9d9 in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.007 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.008 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[879d0c17-2fc9-42bb-8df4-98e0a82541f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.009 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d83f3010-ca91-4737-a053-26c71234f9d9 in datapath 10a9b8d1-2de6-4e47-8e44-16b661da8624 unbound from our chassis#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.010 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 10a9b8d1-2de6-4e47-8e44-16b661da8624, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:15:57.010 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c12f6f0d-6db2-471c-8ded-5bdb603e011f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.154 252257 DEBUG nova.compute.manager [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-unplugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.155 252257 DEBUG oslo_concurrency.lockutils [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.155 252257 DEBUG oslo_concurrency.lockutils [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.155 252257 DEBUG oslo_concurrency.lockutils [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.156 252257 DEBUG nova.compute.manager [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-unplugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.156 252257 DEBUG nova.compute.manager [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-unplugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.156 252257 DEBUG nova.compute.manager [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.156 252257 DEBUG oslo_concurrency.lockutils [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.157 252257 DEBUG oslo_concurrency.lockutils [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.157 252257 DEBUG oslo_concurrency.lockutils [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.157 252257 DEBUG nova.compute.manager [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.157 252257 WARNING nova.compute.manager [req-ec5f2d1b-2a21-4e65-b6ad-e1f0f1d4286e req-2524c95a-a630-4953-a955-e071167ed19e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received unexpected event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.227 252257 INFO nova.virt.libvirt.driver [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Deleting instance files /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88_del#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.227 252257 INFO nova.virt.libvirt.driver [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Deletion of /var/lib/nova/instances/33cff286-3b50-41f5-9cb9-d4d98a1d3f88_del complete#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.230 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:15:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:57.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.289 252257 INFO nova.compute.manager [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.290 252257 DEBUG oslo.service.loopingcall [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.290 252257 DEBUG nova.compute.manager [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.290 252257 DEBUG nova.network.neutron [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:15:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 29 03:15:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 03:15:57 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.816 252257 DEBUG nova.network.neutron [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.835 252257 INFO nova.compute.manager [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Took 0.54 seconds to deallocate network for instance.#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.898 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.898 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.908 252257 DEBUG nova.compute.manager [req-ac6c0659-b73e-4b32-a51b-f921bc702386 req-ea7f4a47-6a81-4462-a772-cfcb08e785cd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-deleted-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:57 np0005539563 nova_compute[252253]: 2025-11-29 08:15:57.990 252257 DEBUG oslo_concurrency.processutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:15:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 03:15:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.424 252257 DEBUG oslo_concurrency.processutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.430 252257 DEBUG nova.compute.provider_tree [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.445 252257 DEBUG nova.scheduler.client.report [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.460 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.499 252257 INFO nova.scheduler.client.report [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Deleted allocations for instance 33cff286-3b50-41f5-9cb9-d4d98a1d3f88#033[00m
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.552 252257 DEBUG oslo_concurrency.lockutils [None req-f2ca6847-9b2b-495b-aa8f-0c2d1df66cdb 58625e4c2b5d43a1abbab05b98853a65 250671461f27498d9f6b4476c7b69533 - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:58 np0005539563 nova_compute[252253]: 2025-11-29 08:15:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:58.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:59 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 55d8f2ae-d848-4d8a-9806-01f1391169ba does not exist
Nov 29 03:15:59 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c427d2e7-541e-48f3-9674-6fc89635f1fe does not exist
Nov 29 03:15:59 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d32ae24d-e8cb-453b-ab17-52a7b6e11045 does not exist
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:15:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:15:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:59.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.344 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.345 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.345 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.345 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.345 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.346 252257 WARNING nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received unexpected event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.346 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.346 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.346 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.346 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.347 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.347 252257 WARNING nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received unexpected event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.347 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-unplugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.347 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.348 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.348 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.348 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-unplugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.348 252257 WARNING nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received unexpected event network-vif-unplugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.348 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.349 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.349 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.349 252257 DEBUG oslo_concurrency.lockutils [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "33cff286-3b50-41f5-9cb9-d4d98a1d3f88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.349 252257 DEBUG nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] No waiting events found dispatching network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:59 np0005539563 nova_compute[252253]: 2025-11-29 08:15:59.349 252257 WARNING nova.compute.manager [req-17630acb-1018-4ce7-b884-c5dbb6736847 req-4fac46a2-3559-4c4b-bf2f-013dc3523ce8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Received unexpected event network-vif-plugged-d83f3010-ca91-4737-a053-26c71234f9d9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:15:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:15:59 np0005539563 podman[327063]: 2025-11-29 08:15:59.819931043 +0000 UTC m=+0.045047362 container create 217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:15:59 np0005539563 systemd[1]: Started libpod-conmon-217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f.scope.
Nov 29 03:15:59 np0005539563 podman[327063]: 2025-11-29 08:15:59.796171929 +0000 UTC m=+0.021288268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:15:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:16:00 np0005539563 podman[327063]: 2025-11-29 08:16:00.014153105 +0000 UTC m=+0.239269444 container init 217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:00 np0005539563 podman[327063]: 2025-11-29 08:16:00.02208704 +0000 UTC m=+0.247203359 container start 217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:16:00 np0005539563 podman[327063]: 2025-11-29 08:16:00.02613755 +0000 UTC m=+0.251253869 container attach 217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:16:00 np0005539563 zealous_fermi[327079]: 167 167
Nov 29 03:16:00 np0005539563 systemd[1]: libpod-217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f.scope: Deactivated successfully.
Nov 29 03:16:00 np0005539563 conmon[327079]: conmon 217c502b2ab33da7c13b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f.scope/container/memory.events
Nov 29 03:16:00 np0005539563 podman[327063]: 2025-11-29 08:16:00.029513151 +0000 UTC m=+0.254629470 container died 217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:16:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ce4e088e635b18eaf0aa041c1756a26a63a4279b77afbaaa96afde14f7599faf-merged.mount: Deactivated successfully.
Nov 29 03:16:00 np0005539563 podman[327063]: 2025-11-29 08:16:00.08517636 +0000 UTC m=+0.310292679 container remove 217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:16:00 np0005539563 systemd[1]: libpod-conmon-217c502b2ab33da7c13b848626a64024e023660111a5d443fabcd54306dfb34f.scope: Deactivated successfully.
Nov 29 03:16:00 np0005539563 podman[327105]: 2025-11-29 08:16:00.24134383 +0000 UTC m=+0.038970957 container create 3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:16:00 np0005539563 systemd[1]: Started libpod-conmon-3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a.scope.
Nov 29 03:16:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 7.8 MiB/s wr, 357 op/s
Nov 29 03:16:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:16:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f50181d084c39dfd1b420e0db9252913ffee869342c29394f4fbca9ec089b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f50181d084c39dfd1b420e0db9252913ffee869342c29394f4fbca9ec089b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f50181d084c39dfd1b420e0db9252913ffee869342c29394f4fbca9ec089b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f50181d084c39dfd1b420e0db9252913ffee869342c29394f4fbca9ec089b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f50181d084c39dfd1b420e0db9252913ffee869342c29394f4fbca9ec089b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:00 np0005539563 podman[327105]: 2025-11-29 08:16:00.223846926 +0000 UTC m=+0.021474083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:00 np0005539563 podman[327105]: 2025-11-29 08:16:00.337256769 +0000 UTC m=+0.134883896 container init 3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:16:00 np0005539563 podman[327105]: 2025-11-29 08:16:00.342858551 +0000 UTC m=+0.140485678 container start 3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:16:00 np0005539563 podman[327105]: 2025-11-29 08:16:00.34540387 +0000 UTC m=+0.143031007 container attach 3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.665 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.666 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.681 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.739 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.740 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.751 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.751 252257 INFO nova.compute.claims [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:16:00 np0005539563 nova_compute[252253]: 2025-11-29 08:16:00.920 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:00.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:01 np0005539563 adoring_pasteur[327121]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:16:01 np0005539563 adoring_pasteur[327121]: --> relative data size: 1.0
Nov 29 03:16:01 np0005539563 adoring_pasteur[327121]: --> All data devices are unavailable
Nov 29 03:16:01 np0005539563 systemd[1]: libpod-3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a.scope: Deactivated successfully.
Nov 29 03:16:01 np0005539563 conmon[327121]: conmon 3a98b69b569ce542505b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a.scope/container/memory.events
Nov 29 03:16:01 np0005539563 podman[327157]: 2025-11-29 08:16:01.218452543 +0000 UTC m=+0.029913311 container died 3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3f2f50181d084c39dfd1b420e0db9252913ffee869342c29394f4fbca9ec089b-merged.mount: Deactivated successfully.
Nov 29 03:16:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:01 np0005539563 podman[327157]: 2025-11-29 08:16:01.276318642 +0000 UTC m=+0.087779380 container remove 3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_pasteur, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:16:01 np0005539563 systemd[1]: libpod-conmon-3a98b69b569ce542505b79640c658bc394b4a8c574d208e44c820d171cdba19a.scope: Deactivated successfully.
Nov 29 03:16:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1044889036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.422 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.430 252257 DEBUG nova.compute.provider_tree [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.457 252257 DEBUG nova.scheduler.client.report [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.490 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.491 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.576 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.577 252257 DEBUG nova.network.neutron [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.601 252257 INFO nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.621 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.714 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.715 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.715 252257 INFO nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Creating image(s)#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.741 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.767 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.794 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.797 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.828 252257 DEBUG nova.policy [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bb4a89eea4e4166a7a1c5e3135cb182', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6025758b69854406b221c47d9ef59dea', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.831 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:01 np0005539563 podman[327367]: 2025-11-29 08:16:01.857242391 +0000 UTC m=+0.041191458 container create 6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.867 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.868 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.869 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.870 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:01 np0005539563 systemd[1]: Started libpod-conmon-6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6.scope.
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.896 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:01 np0005539563 nova_compute[252253]: 2025-11-29 08:16:01.898 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:16:01 np0005539563 podman[327367]: 2025-11-29 08:16:01.933724832 +0000 UTC m=+0.117673929 container init 6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:01 np0005539563 podman[327367]: 2025-11-29 08:16:01.841219476 +0000 UTC m=+0.025168563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:01 np0005539563 podman[327367]: 2025-11-29 08:16:01.944798082 +0000 UTC m=+0.128747169 container start 6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:01 np0005539563 podman[327367]: 2025-11-29 08:16:01.948969226 +0000 UTC m=+0.132918323 container attach 6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:16:01 np0005539563 systemd[1]: libpod-6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6.scope: Deactivated successfully.
Nov 29 03:16:01 np0005539563 priceless_napier[327402]: 167 167
Nov 29 03:16:01 np0005539563 conmon[327402]: conmon 6df2d1d947840b45ac9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6.scope/container/memory.events
Nov 29 03:16:01 np0005539563 podman[327410]: 2025-11-29 08:16:01.998241291 +0000 UTC m=+0.030956920 container died 6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-de58ddbf55bdca9ae15145eed7046457911dd34c181a70897097635803ed13e7-merged.mount: Deactivated successfully.
Nov 29 03:16:02 np0005539563 podman[327410]: 2025-11-29 08:16:02.036271631 +0000 UTC m=+0.068987260 container remove 6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_napier, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:02 np0005539563 systemd[1]: libpod-conmon-6df2d1d947840b45ac9d45cc64f5e0e616de2e6ab035b3d6d7e8fb9942f1e7c6.scope: Deactivated successfully.
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.185 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:02 np0005539563 podman[327449]: 2025-11-29 08:16:02.252015116 +0000 UTC m=+0.048419992 container create 57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.276 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] resizing rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:16:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.7 MiB/s wr, 224 op/s
Nov 29 03:16:02 np0005539563 systemd[1]: Started libpod-conmon-57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb.scope.
Nov 29 03:16:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 29 03:16:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 03:16:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:16:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334c030bb28fd6b27201faf5a3b7b40879023f828585b0baf91ff7a538f994c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 29 03:16:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334c030bb28fd6b27201faf5a3b7b40879023f828585b0baf91ff7a538f994c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334c030bb28fd6b27201faf5a3b7b40879023f828585b0baf91ff7a538f994c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334c030bb28fd6b27201faf5a3b7b40879023f828585b0baf91ff7a538f994c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:02 np0005539563 podman[327449]: 2025-11-29 08:16:02.231042279 +0000 UTC m=+0.027447175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:02 np0005539563 podman[327449]: 2025-11-29 08:16:02.338238373 +0000 UTC m=+0.134643259 container init 57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:02 np0005539563 podman[327449]: 2025-11-29 08:16:02.345218391 +0000 UTC m=+0.141623267 container start 57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:02 np0005539563 podman[327449]: 2025-11-29 08:16:02.34884193 +0000 UTC m=+0.145246806 container attach 57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.392 252257 DEBUG nova.objects.instance [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'migration_context' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.418 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.419 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Ensure instance console log exists: /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.420 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.420 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.420 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.668 252257 DEBUG nova.network.neutron [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Successfully created port: 53db9057-1d66-479a-9112-5c451e2dc1c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:02 np0005539563 nova_compute[252253]: 2025-11-29 08:16:02.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:02.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]: {
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:    "0": [
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:        {
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "devices": [
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "/dev/loop3"
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            ],
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "lv_name": "ceph_lv0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "lv_size": "7511998464",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "name": "ceph_lv0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "tags": {
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.cluster_name": "ceph",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.crush_device_class": "",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.encrypted": "0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.osd_id": "0",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.type": "block",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:                "ceph.vdo": "0"
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            },
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "type": "block",
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:            "vg_name": "ceph_vg0"
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:        }
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]:    ]
Nov 29 03:16:03 np0005539563 stupefied_kapitsa[327517]: }
Nov 29 03:16:03 np0005539563 systemd[1]: libpod-57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb.scope: Deactivated successfully.
Nov 29 03:16:03 np0005539563 podman[327449]: 2025-11-29 08:16:03.113179529 +0000 UTC m=+0.909584405 container died 57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:16:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-334c030bb28fd6b27201faf5a3b7b40879023f828585b0baf91ff7a538f994c3-merged.mount: Deactivated successfully.
Nov 29 03:16:03 np0005539563 podman[327449]: 2025-11-29 08:16:03.169399372 +0000 UTC m=+0.965804248 container remove 57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:16:03 np0005539563 systemd[1]: libpod-conmon-57ac345b6dd118d07c0ec90954cd2d1dde74decd8ca68834294bf449616d68eb.scope: Deactivated successfully.
Nov 29 03:16:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:03 np0005539563 podman[327700]: 2025-11-29 08:16:03.82043446 +0000 UTC m=+0.041221338 container create 494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:16:03 np0005539563 nova_compute[252253]: 2025-11-29 08:16:03.865 252257 DEBUG nova.network.neutron [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Successfully updated port: 53db9057-1d66-479a-9112-5c451e2dc1c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:03 np0005539563 systemd[1]: Started libpod-conmon-494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e.scope.
Nov 29 03:16:03 np0005539563 nova_compute[252253]: 2025-11-29 08:16:03.888 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:03 np0005539563 nova_compute[252253]: 2025-11-29 08:16:03.889 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquired lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:03 np0005539563 nova_compute[252253]: 2025-11-29 08:16:03.889 252257 DEBUG nova.network.neutron [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:03 np0005539563 podman[327700]: 2025-11-29 08:16:03.80084602 +0000 UTC m=+0.021632918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:16:03 np0005539563 podman[327700]: 2025-11-29 08:16:03.918393895 +0000 UTC m=+0.139180753 container init 494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:16:03 np0005539563 podman[327700]: 2025-11-29 08:16:03.925558669 +0000 UTC m=+0.146345527 container start 494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:16:03 np0005539563 podman[327700]: 2025-11-29 08:16:03.929080414 +0000 UTC m=+0.149867272 container attach 494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:16:03 np0005539563 sweet_knuth[327717]: 167 167
Nov 29 03:16:03 np0005539563 systemd[1]: libpod-494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e.scope: Deactivated successfully.
Nov 29 03:16:03 np0005539563 conmon[327717]: conmon 494c163f7fc4a255806a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e.scope/container/memory.events
Nov 29 03:16:03 np0005539563 podman[327700]: 2025-11-29 08:16:03.932382464 +0000 UTC m=+0.153169332 container died 494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:16:04 np0005539563 nova_compute[252253]: 2025-11-29 08:16:04.055 252257 DEBUG nova.network.neutron [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2eabab007dd617ea1b2d3aac4f0f89e5024abea1d629c8c9abce9934d2493a9b-merged.mount: Deactivated successfully.
Nov 29 03:16:04 np0005539563 podman[327700]: 2025-11-29 08:16:04.283560168 +0000 UTC m=+0.504347046 container remove 494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:16:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 435 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 7.9 MiB/s wr, 267 op/s
Nov 29 03:16:04 np0005539563 systemd[1]: libpod-conmon-494c163f7fc4a255806a22a7a97be8bb4b7c3b72518a914e8a907ed23680211e.scope: Deactivated successfully.
Nov 29 03:16:04 np0005539563 podman[327741]: 2025-11-29 08:16:04.484942684 +0000 UTC m=+0.043626272 container create 092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:16:04 np0005539563 systemd[1]: Started libpod-conmon-092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4.scope.
Nov 29 03:16:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:16:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d76e908add5d07c7328cc407fcc100884e91f7545d96f9a60d51c4ce8bd679b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539563 podman[327741]: 2025-11-29 08:16:04.466322531 +0000 UTC m=+0.025006159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d76e908add5d07c7328cc407fcc100884e91f7545d96f9a60d51c4ce8bd679b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d76e908add5d07c7328cc407fcc100884e91f7545d96f9a60d51c4ce8bd679b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d76e908add5d07c7328cc407fcc100884e91f7545d96f9a60d51c4ce8bd679b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:04 np0005539563 podman[327741]: 2025-11-29 08:16:04.57631343 +0000 UTC m=+0.134997048 container init 092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:16:04 np0005539563 podman[327741]: 2025-11-29 08:16:04.583660759 +0000 UTC m=+0.142344347 container start 092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:04 np0005539563 podman[327741]: 2025-11-29 08:16:04.587208646 +0000 UTC m=+0.145892254 container attach 092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:16:04 np0005539563 nova_compute[252253]: 2025-11-29 08:16:04.904 252257 DEBUG nova.compute.manager [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-changed-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:04 np0005539563 nova_compute[252253]: 2025-11-29 08:16:04.906 252257 DEBUG nova.compute.manager [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Refreshing instance network info cache due to event network-changed-53db9057-1d66-479a-9112-5c451e2dc1c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:04 np0005539563 nova_compute[252253]: 2025-11-29 08:16:04.906 252257 DEBUG oslo_concurrency.lockutils [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:04.922 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:04.922 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:04.923 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.059 252257 DEBUG nova.network.neutron [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Updating instance_info_cache with network_info: [{"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.078 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Releasing lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.079 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance network_info: |[{"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.079 252257 DEBUG oslo_concurrency.lockutils [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.080 252257 DEBUG nova.network.neutron [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Refreshing network info cache for port 53db9057-1d66-479a-9112-5c451e2dc1c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.085 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Start _get_guest_xml network_info=[{"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.094 252257 WARNING nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.104 252257 DEBUG nova.virt.libvirt.host [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.105 252257 DEBUG nova.virt.libvirt.host [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.111 252257 DEBUG nova.virt.libvirt.host [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.111 252257 DEBUG nova.virt.libvirt.host [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.113 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.114 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.114 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.115 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.115 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.115 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.115 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.116 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.116 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.117 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.117 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.117 252257 DEBUG nova.virt.hardware [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.123 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]: {
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:        "osd_id": 0,
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:        "type": "bluestore"
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]:    }
Nov 29 03:16:05 np0005539563 objective_keldysh[327757]: }
Nov 29 03:16:05 np0005539563 systemd[1]: libpod-092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4.scope: Deactivated successfully.
Nov 29 03:16:05 np0005539563 podman[327741]: 2025-11-29 08:16:05.503915832 +0000 UTC m=+1.062599440 container died 092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:16:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047163776' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.597 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.623 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:05 np0005539563 nova_compute[252253]: 2025-11-29 08:16:05.626 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5d76e908add5d07c7328cc407fcc100884e91f7545d96f9a60d51c4ce8bd679b-merged.mount: Deactivated successfully.
Nov 29 03:16:05 np0005539563 podman[327741]: 2025-11-29 08:16:05.976327662 +0000 UTC m=+1.535011240 container remove 092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:05 np0005539563 systemd[1]: libpod-conmon-092bdf293d0b8f6f98c3d40e5f5c7a008f07c66d022bbb50eee98139a71310c4.scope: Deactivated successfully.
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:16:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ff014eb3-8264-4c10-8152-43521729a36c does not exist
Nov 29 03:16:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 79cec26e-e848-4196-8f46-057760f89aae does not exist
Nov 29 03:16:06 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 72e6ab8c-74c4-4686-81e8-4e80eff3f6b2 does not exist
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827431039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.069 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.071 252257 DEBUG nova.virt.libvirt.vif [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1466602938',display_name='tempest-ServerRescueTestJSON-server-1466602938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1466602938',id=120,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-2lp5obe1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-59
7299129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:01Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=194fa81a-dfa7-4c98-9fd0-7d20d250db7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.071 252257 DEBUG nova.network.os_vif_util [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.072 252257 DEBUG nova.network.os_vif_util [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.074 252257 DEBUG nova.objects.instance [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'pci_devices' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.099 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <uuid>194fa81a-dfa7-4c98-9fd0-7d20d250db7c</uuid>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <name>instance-00000078</name>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerRescueTestJSON-server-1466602938</nova:name>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:16:05</nova:creationTime>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:user uuid="7bb4a89eea4e4166a7a1c5e3135cb182">tempest-ServerRescueTestJSON-597299129-project-member</nova:user>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:project uuid="6025758b69854406b221c47d9ef59dea">tempest-ServerRescueTestJSON-597299129</nova:project>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <nova:port uuid="53db9057-1d66-479a-9112-5c451e2dc1c8">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <entry name="serial">194fa81a-dfa7-4c98-9fd0-7d20d250db7c</entry>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <entry name="uuid">194fa81a-dfa7-4c98-9fd0-7d20d250db7c</entry>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:c4:62:62"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <target dev="tap53db9057-1d"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/console.log" append="off"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:16:06 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:16:06 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:16:06 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:16:06 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.101 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Preparing to wait for external event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.101 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.101 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.102 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.102 252257 DEBUG nova.virt.libvirt.vif [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:15:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1466602938',display_name='tempest-ServerRescueTestJSON-server-1466602938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1466602938',id=120,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-2lp5obe1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueT
estJSON-597299129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:01Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=194fa81a-dfa7-4c98-9fd0-7d20d250db7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.103 252257 DEBUG nova.network.os_vif_util [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.103 252257 DEBUG nova.network.os_vif_util [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.104 252257 DEBUG os_vif [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.104 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.105 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.105 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.109 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.109 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53db9057-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.110 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53db9057-1d, col_values=(('external_ids', {'iface-id': '53db9057-1d66-479a-9112-5c451e2dc1c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:62:62', 'vm-uuid': '194fa81a-dfa7-4c98-9fd0-7d20d250db7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:06 np0005539563 NetworkManager[48981]: <info>  [1764404166.1130] manager: (tap53db9057-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.114 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.118 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.119 252257 INFO os_vif [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d')#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.179 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.179 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.180 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No VIF found with MAC fa:16:3e:c4:62:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.180 252257 INFO nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Using config drive#033[00m
Nov 29 03:16:06 np0005539563 nova_compute[252253]: 2025-11-29 08:16:06.202 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 375 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 8.1 MiB/s wr, 363 op/s
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:16:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:16:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.234 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:07.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.336 252257 INFO nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Creating config drive at /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.342 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkiingo98 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.413 252257 DEBUG nova.network.neutron [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Updated VIF entry in instance network info cache for port 53db9057-1d66-479a-9112-5c451e2dc1c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.414 252257 DEBUG nova.network.neutron [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Updating instance_info_cache with network_info: [{"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.444 252257 DEBUG oslo_concurrency.lockutils [req-3c6c96eb-bd68-44eb-92b7-a3a17acd36f0 req-1966b7ce-077e-4e45-ba3f-86290c2b8904 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.480 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkiingo98" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.508 252257 DEBUG nova.storage.rbd_utils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:07 np0005539563 nova_compute[252253]: 2025-11-29 08:16:07.513 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:08 np0005539563 nova_compute[252253]: 2025-11-29 08:16:08.163 252257 DEBUG oslo_concurrency.processutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:08 np0005539563 nova_compute[252253]: 2025-11-29 08:16:08.164 252257 INFO nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Deleting local config drive /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:16:08 np0005539563 kernel: tap53db9057-1d: entered promiscuous mode
Nov 29 03:16:08 np0005539563 NetworkManager[48981]: <info>  [1764404168.2349] manager: (tap53db9057-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 29 03:16:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:08Z|00484|binding|INFO|Claiming lport 53db9057-1d66-479a-9112-5c451e2dc1c8 for this chassis.
Nov 29 03:16:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:08Z|00485|binding|INFO|53db9057-1d66-479a-9112-5c451e2dc1c8: Claiming fa:16:3e:c4:62:62 10.100.0.8
Nov 29 03:16:08 np0005539563 nova_compute[252253]: 2025-11-29 08:16:08.237 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:08.250 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:62:62 10.100.0.8'], port_security=['fa:16:3e:c4:62:62 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '194fa81a-dfa7-4c98-9fd0-7d20d250db7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=53db9057-1d66-479a-9112-5c451e2dc1c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:08.251 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 53db9057-1d66-479a-9112-5c451e2dc1c8 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 bound to our chassis#033[00m
Nov 29 03:16:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:08.252 158990 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:08.253 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[54f024d4-c9a7-4400-a5c4-ac4f537f99ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:08Z|00486|binding|INFO|Setting lport 53db9057-1d66-479a-9112-5c451e2dc1c8 up in Southbound
Nov 29 03:16:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:08Z|00487|binding|INFO|Setting lport 53db9057-1d66-479a-9112-5c451e2dc1c8 ovn-installed in OVS
Nov 29 03:16:08 np0005539563 nova_compute[252253]: 2025-11-29 08:16:08.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:08 np0005539563 nova_compute[252253]: 2025-11-29 08:16:08.267 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:08 np0005539563 systemd-machined[213024]: New machine qemu-55-instance-00000078.
Nov 29 03:16:08 np0005539563 systemd-udevd[327979]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 362 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.2 MiB/s wr, 310 op/s
Nov 29 03:16:08 np0005539563 NetworkManager[48981]: <info>  [1764404168.2967] device (tap53db9057-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:08 np0005539563 NetworkManager[48981]: <info>  [1764404168.2975] device (tap53db9057-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:08 np0005539563 systemd[1]: Started Virtual Machine qemu-55-instance-00000078.
Nov 29 03:16:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.273 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404169.2722785, 194fa81a-dfa7-4c98-9fd0-7d20d250db7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.273 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.309 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.315 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404169.275616, 194fa81a-dfa7-4c98-9fd0-7d20d250db7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.316 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.335 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.340 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:09 np0005539563 nova_compute[252253]: 2025-11-29 08:16:09.365 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 204 op/s
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.898 252257 DEBUG nova.compute.manager [req-70174a91-c6ea-43b4-b38a-e53a5cadc278 req-5b82469d-eb57-4e78-ab2d-13d0f949ed94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.899 252257 DEBUG oslo_concurrency.lockutils [req-70174a91-c6ea-43b4-b38a-e53a5cadc278 req-5b82469d-eb57-4e78-ab2d-13d0f949ed94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.899 252257 DEBUG oslo_concurrency.lockutils [req-70174a91-c6ea-43b4-b38a-e53a5cadc278 req-5b82469d-eb57-4e78-ab2d-13d0f949ed94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.899 252257 DEBUG oslo_concurrency.lockutils [req-70174a91-c6ea-43b4-b38a-e53a5cadc278 req-5b82469d-eb57-4e78-ab2d-13d0f949ed94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.899 252257 DEBUG nova.compute.manager [req-70174a91-c6ea-43b4-b38a-e53a5cadc278 req-5b82469d-eb57-4e78-ab2d-13d0f949ed94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Processing event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.900 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.904 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404170.9041357, 194fa81a-dfa7-4c98-9fd0-7d20d250db7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.906 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.910 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.915 252257 INFO nova.virt.libvirt.driver [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance spawned successfully.#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.916 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:16:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.947 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.958 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.963 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.964 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.965 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.965 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.966 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:10 np0005539563 nova_compute[252253]: 2025-11-29 08:16:10.967 252257 DEBUG nova.virt.libvirt.driver [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.010 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.116 252257 INFO nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Took 9.40 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.117 252257 DEBUG nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.118 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.190 252257 INFO nova.compute.manager [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Took 10.47 seconds to build instance.#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.211 252257 DEBUG oslo_concurrency.lockutils [None req-a3581211-eec9-4989-b094-e9b48f31c4e5 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:11.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:11Z|00488|binding|INFO|Releasing lport a2e47e7a-aef0-4c09-aeef-4a0d63960d7b from this chassis (sb_readonly=0)
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.485 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.779 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404156.778706, 33cff286-3b50-41f5-9cb9-d4d98a1d3f88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.780 252257 INFO nova.compute.manager [-] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:11 np0005539563 nova_compute[252253]: 2025-11-29 08:16:11.806 252257 DEBUG nova.compute.manager [None req-8f880d3a-c1dc-430c-ba67-43a772785344 - - - - - -] [instance: 33cff286-3b50-41f5-9cb9-d4d98a1d3f88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.237 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 204 op/s
Nov 29 03:16:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:16:12
Nov 29 03:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups']
Nov 29 03:16:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:16:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:12.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.995 252257 DEBUG nova.compute.manager [req-cf8a0f32-32e4-48fc-af85-c83b361fb486 req-29bb5cac-9c84-4f48-97f3-58088e005a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.996 252257 DEBUG oslo_concurrency.lockutils [req-cf8a0f32-32e4-48fc-af85-c83b361fb486 req-29bb5cac-9c84-4f48-97f3-58088e005a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.996 252257 DEBUG oslo_concurrency.lockutils [req-cf8a0f32-32e4-48fc-af85-c83b361fb486 req-29bb5cac-9c84-4f48-97f3-58088e005a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.996 252257 DEBUG oslo_concurrency.lockutils [req-cf8a0f32-32e4-48fc-af85-c83b361fb486 req-29bb5cac-9c84-4f48-97f3-58088e005a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.996 252257 DEBUG nova.compute.manager [req-cf8a0f32-32e4-48fc-af85-c83b361fb486 req-29bb5cac-9c84-4f48-97f3-58088e005a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:12 np0005539563 nova_compute[252253]: 2025-11-29 08:16:12.997 252257 WARNING nova.compute.manager [req-cf8a0f32-32e4-48fc-af85-c83b361fb486 req-29bb5cac-9c84-4f48-97f3-58088e005a39 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received unexpected event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:13.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:16:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:16:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.557 252257 INFO nova.compute.manager [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Rescuing#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.560 252257 DEBUG oslo_concurrency.lockutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.561 252257 DEBUG oslo_concurrency.lockutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquired lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.561 252257 DEBUG nova.network.neutron [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.678 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.679 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.679 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.679 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.680 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.681 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.737 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.738 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Image id 1be11678-cfa4-4dee-b54c-6c7e547e5a6a yields fingerprint 9b6c4a62e987670abc3ce4c57f88bd403b2af8bf _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.738 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] image 1be11678-cfa4-4dee-b54c-6c7e547e5a6a at (/var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf): checking#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.738 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] image 1be11678-cfa4-4dee-b54c-6c7e547e5a6a at (/var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.743 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.744 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] 98453ec7-fbda-42ae-8624-8aa5921fd634 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.744 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] 194fa81a-dfa7-4c98-9fd0-7d20d250db7c is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.744 252257 WARNING nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.745 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Active base files: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.745 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Removable base files: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.747 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.747 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.747 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:16:14 np0005539563 nova_compute[252253]: 2025-11-29 08:16:14.747 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 03:16:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:14.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:16 np0005539563 nova_compute[252253]: 2025-11-29 08:16:16.107 252257 DEBUG nova.network.neutron [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Updating instance_info_cache with network_info: [{"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:16 np0005539563 nova_compute[252253]: 2025-11-29 08:16:16.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:16 np0005539563 nova_compute[252253]: 2025-11-29 08:16:16.136 252257 DEBUG oslo_concurrency.lockutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Releasing lock "refresh_cache-194fa81a-dfa7-4c98-9fd0-7d20d250db7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 210 op/s
Nov 29 03:16:16 np0005539563 nova_compute[252253]: 2025-11-29 08:16:16.533 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:16:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:16.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:17 np0005539563 nova_compute[252253]: 2025-11-29 08:16:17.241 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:17.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 386 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 29 03:16:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:18.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.0 MiB/s wr, 186 op/s
Nov 29 03:16:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:20.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:21 np0005539563 nova_compute[252253]: 2025-11-29 08:16:21.124 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:21.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:22 np0005539563 nova_compute[252253]: 2025-11-29 08:16:22.244 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 150 op/s
Nov 29 03:16:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 29 03:16:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 03:16:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 29 03:16:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:23.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008361759608226316 of space, bias 1.0, pg target 2.5085278824678947 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0050583865508096034 of space, bias 1.0, pg target 1.5073991921412617 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Nov 29 03:16:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.8 MiB/s wr, 189 op/s
Nov 29 03:16:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:24.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:25.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:26 np0005539563 nova_compute[252253]: 2025-11-29 08:16:26.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 459 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 8.4 MiB/s wr, 335 op/s
Nov 29 03:16:26 np0005539563 nova_compute[252253]: 2025-11-29 08:16:26.596 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:16:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 29 03:16:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 03:16:26 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 29 03:16:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:26.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:27 np0005539563 nova_compute[252253]: 2025-11-29 08:16:27.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:27.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:27 np0005539563 podman[328090]: 2025-11-29 08:16:27.514289776 +0000 UTC m=+0.055040162 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 29 03:16:27 np0005539563 podman[328091]: 2025-11-29 08:16:27.530041394 +0000 UTC m=+0.068258871 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:16:27 np0005539563 podman[328092]: 2025-11-29 08:16:27.560858329 +0000 UTC m=+0.092958991 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:16:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 375 op/s
Nov 29 03:16:28 np0005539563 kernel: tap53db9057-1d (unregistering): left promiscuous mode
Nov 29 03:16:28 np0005539563 NetworkManager[48981]: <info>  [1764404188.9024] device (tap53db9057-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:28 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:28Z|00489|binding|INFO|Releasing lport 53db9057-1d66-479a-9112-5c451e2dc1c8 from this chassis (sb_readonly=0)
Nov 29 03:16:28 np0005539563 nova_compute[252253]: 2025-11-29 08:16:28.912 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:28Z|00490|binding|INFO|Setting lport 53db9057-1d66-479a-9112-5c451e2dc1c8 down in Southbound
Nov 29 03:16:28 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:28Z|00491|binding|INFO|Removing iface tap53db9057-1d ovn-installed in OVS
Nov 29 03:16:28 np0005539563 nova_compute[252253]: 2025-11-29 08:16:28.913 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:28.921 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:62:62 10.100.0.8'], port_security=['fa:16:3e:c4:62:62 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '194fa81a-dfa7-4c98-9fd0-7d20d250db7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=53db9057-1d66-479a-9112-5c451e2dc1c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:28.922 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 53db9057-1d66-479a-9112-5c451e2dc1c8 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 unbound from our chassis#033[00m
Nov 29 03:16:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:28.922 158990 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:28.923 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0977f9d6-e39b-4003-9e3d-b8ac044e8c4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:28 np0005539563 nova_compute[252253]: 2025-11-29 08:16:28.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:28.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539563 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 29 03:16:29 np0005539563 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000078.scope: Consumed 15.195s CPU time.
Nov 29 03:16:29 np0005539563 systemd-machined[213024]: Machine qemu-55-instance-00000078 terminated.
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.133 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.139 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:29.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.371 252257 DEBUG nova.compute.manager [req-ea329f35-2fb9-4584-ae21-d880e234adf3 req-d41535f3-c1a7-4def-87b2-84f0c369e44a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-unplugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.372 252257 DEBUG oslo_concurrency.lockutils [req-ea329f35-2fb9-4584-ae21-d880e234adf3 req-d41535f3-c1a7-4def-87b2-84f0c369e44a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.372 252257 DEBUG oslo_concurrency.lockutils [req-ea329f35-2fb9-4584-ae21-d880e234adf3 req-d41535f3-c1a7-4def-87b2-84f0c369e44a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.372 252257 DEBUG oslo_concurrency.lockutils [req-ea329f35-2fb9-4584-ae21-d880e234adf3 req-d41535f3-c1a7-4def-87b2-84f0c369e44a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.372 252257 DEBUG nova.compute.manager [req-ea329f35-2fb9-4584-ae21-d880e234adf3 req-d41535f3-c1a7-4def-87b2-84f0c369e44a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-unplugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.372 252257 WARNING nova.compute.manager [req-ea329f35-2fb9-4584-ae21-d880e234adf3 req-d41535f3-c1a7-4def-87b2-84f0c369e44a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received unexpected event network-vif-unplugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.614 252257 INFO nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.621 252257 INFO nova.virt.libvirt.driver [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance destroyed successfully.#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.622 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'numa_topology' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.643 252257 INFO nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Attempting rescue#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.644 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.650 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.651 252257 INFO nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Creating image(s)#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.685 252257 DEBUG nova.storage.rbd_utils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.691 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.726 252257 DEBUG nova.storage.rbd_utils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.752 252257 DEBUG nova.storage.rbd_utils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.756 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.819 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.820 252257 DEBUG oslo_concurrency.lockutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.821 252257 DEBUG oslo_concurrency.lockutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.821 252257 DEBUG oslo_concurrency.lockutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.844 252257 DEBUG nova.storage.rbd_utils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:29 np0005539563 nova_compute[252253]: 2025-11-29 08:16:29.848 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.112 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.113 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'migration_context' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.129 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.130 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Start _get_guest_xml network_info=[{"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1392053617-network", "vif_mac": "fa:16:3e:c4:62:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.130 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'resources' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.149 252257 WARNING nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.158 252257 DEBUG nova.virt.libvirt.host [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.158 252257 DEBUG nova.virt.libvirt.host [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.161 252257 DEBUG nova.virt.libvirt.host [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.161 252257 DEBUG nova.virt.libvirt.host [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.162 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.163 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.163 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.163 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.163 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.164 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.164 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.164 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.164 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.164 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.165 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.165 252257 DEBUG nova.virt.hardware [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.165 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.185 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.7 MiB/s wr, 471 op/s
Nov 29 03:16:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2360559406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.601 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:30 np0005539563 nova_compute[252253]: 2025-11-29 08:16:30.602 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:30.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221074366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.017 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.018 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.153 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:31.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252430010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.479 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.483 252257 DEBUG nova.virt.libvirt.vif [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1466602938',display_name='tempest-ServerRescueTestJSON-server-1466602938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1466602938',id=120,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-2lp5obe1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-597299129-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:11Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=194fa81a-dfa7-4c98-9fd0-7d20d250db7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1392053617-network", "vif_mac": "fa:16:3e:c4:62:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.484 252257 DEBUG nova.network.os_vif_util [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1392053617-network", "vif_mac": "fa:16:3e:c4:62:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.486 252257 DEBUG nova.network.os_vif_util [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.489 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'pci_devices' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.514 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <uuid>194fa81a-dfa7-4c98-9fd0-7d20d250db7c</uuid>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <name>instance-00000078</name>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerRescueTestJSON-server-1466602938</nova:name>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:16:30</nova:creationTime>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:user uuid="7bb4a89eea4e4166a7a1c5e3135cb182">tempest-ServerRescueTestJSON-597299129-project-member</nova:user>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:project uuid="6025758b69854406b221c47d9ef59dea">tempest-ServerRescueTestJSON-597299129</nova:project>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <nova:port uuid="53db9057-1d66-479a-9112-5c451e2dc1c8">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <entry name="serial">194fa81a-dfa7-4c98-9fd0-7d20d250db7c</entry>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <entry name="uuid">194fa81a-dfa7-4c98-9fd0-7d20d250db7c</entry>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.rescue">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config.rescue">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:c4:62:62"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <target dev="tap53db9057-1d"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/console.log" append="off"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:16:31 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:16:31 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:16:31 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:16:31 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.524 252257 INFO nova.virt.libvirt.driver [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance destroyed successfully.#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.531 252257 DEBUG nova.compute.manager [req-4d8c769d-22b5-4dd8-a3d0-f17b68ea63c2 req-837c7cac-1b51-4487-8789-e3d1b24e3214 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.532 252257 DEBUG oslo_concurrency.lockutils [req-4d8c769d-22b5-4dd8-a3d0-f17b68ea63c2 req-837c7cac-1b51-4487-8789-e3d1b24e3214 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.532 252257 DEBUG oslo_concurrency.lockutils [req-4d8c769d-22b5-4dd8-a3d0-f17b68ea63c2 req-837c7cac-1b51-4487-8789-e3d1b24e3214 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.532 252257 DEBUG oslo_concurrency.lockutils [req-4d8c769d-22b5-4dd8-a3d0-f17b68ea63c2 req-837c7cac-1b51-4487-8789-e3d1b24e3214 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.533 252257 DEBUG nova.compute.manager [req-4d8c769d-22b5-4dd8-a3d0-f17b68ea63c2 req-837c7cac-1b51-4487-8789-e3d1b24e3214 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.533 252257 WARNING nova.compute.manager [req-4d8c769d-22b5-4dd8-a3d0-f17b68ea63c2 req-837c7cac-1b51-4487-8789-e3d1b24e3214 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received unexpected event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.581 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.581 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.581 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.581 252257 DEBUG nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] No VIF found with MAC fa:16:3e:c4:62:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.582 252257 INFO nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Using config drive#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.613 252257 DEBUG nova.storage.rbd_utils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.633 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:31 np0005539563 nova_compute[252253]: 2025-11-29 08:16:31.671 252257 DEBUG nova.objects.instance [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'keypairs' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.197 252257 INFO nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Creating config drive at /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config.rescue#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.203 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgjd715_k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 400 op/s
Nov 29 03:16:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 29 03:16:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 03:16:32 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.361 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgjd715_k" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.400 252257 DEBUG nova.storage.rbd_utils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] rbd image 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.405 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config.rescue 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.593 252257 DEBUG oslo_concurrency.processutils [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config.rescue 194fa81a-dfa7-4c98-9fd0-7d20d250db7c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.593 252257 INFO nova.virt.libvirt.driver [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Deleting local config drive /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:16:32 np0005539563 kernel: tap53db9057-1d: entered promiscuous mode
Nov 29 03:16:32 np0005539563 NetworkManager[48981]: <info>  [1764404192.6687] manager: (tap53db9057-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:32Z|00492|binding|INFO|Claiming lport 53db9057-1d66-479a-9112-5c451e2dc1c8 for this chassis.
Nov 29 03:16:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:32Z|00493|binding|INFO|53db9057-1d66-479a-9112-5c451e2dc1c8: Claiming fa:16:3e:c4:62:62 10.100.0.8
Nov 29 03:16:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:32.677 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:62:62 10.100.0.8'], port_security=['fa:16:3e:c4:62:62 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '194fa81a-dfa7-4c98-9fd0-7d20d250db7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '5', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=53db9057-1d66-479a-9112-5c451e2dc1c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:32.679 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 53db9057-1d66-479a-9112-5c451e2dc1c8 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 bound to our chassis#033[00m
Nov 29 03:16:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:32.679 158990 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:16:32 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:32.681 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e18d54a9-24ce-421e-a103-a063e8561b9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:32 np0005539563 systemd-udevd[328427]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:32 np0005539563 systemd-machined[213024]: New machine qemu-56-instance-00000078.
Nov 29 03:16:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:32Z|00494|binding|INFO|Setting lport 53db9057-1d66-479a-9112-5c451e2dc1c8 up in Southbound
Nov 29 03:16:32 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:32Z|00495|binding|INFO|Setting lport 53db9057-1d66-479a-9112-5c451e2dc1c8 ovn-installed in OVS
Nov 29 03:16:32 np0005539563 nova_compute[252253]: 2025-11-29 08:16:32.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:32 np0005539563 systemd[1]: Started Virtual Machine qemu-56-instance-00000078.
Nov 29 03:16:32 np0005539563 NetworkManager[48981]: <info>  [1764404192.7216] device (tap53db9057-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:32 np0005539563 NetworkManager[48981]: <info>  [1764404192.7229] device (tap53db9057-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:32.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.164 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 194fa81a-dfa7-4c98-9fd0-7d20d250db7c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.165 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404193.1640816, 194fa81a-dfa7-4c98-9fd0-7d20d250db7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.166 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.171 252257 DEBUG nova.compute.manager [None req-ec3aafb4-641b-4874-bf01-c85e58a50698 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.192 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.197 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.223 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.224 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404193.1652105, 194fa81a-dfa7-4c98-9fd0-7d20d250db7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.224 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.241 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.245 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:33.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.680 252257 DEBUG nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.680 252257 DEBUG oslo_concurrency.lockutils [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.681 252257 DEBUG oslo_concurrency.lockutils [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.682 252257 DEBUG oslo_concurrency.lockutils [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.682 252257 DEBUG nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.683 252257 WARNING nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received unexpected event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.683 252257 DEBUG nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.684 252257 DEBUG oslo_concurrency.lockutils [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.684 252257 DEBUG oslo_concurrency.lockutils [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.685 252257 DEBUG oslo_concurrency.lockutils [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.685 252257 DEBUG nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:33 np0005539563 nova_compute[252253]: 2025-11-29 08:16:33.686 252257 WARNING nova.compute.manager [req-7ccaac78-46b2-473f-9591-4c3583060677 req-ce2948a7-91af-4e7a-a22e-90d69791a749 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received unexpected event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:16:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 329 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Nov 29 03:16:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:34.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.139 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.140 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.141 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.141 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.142 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.143 252257 INFO nova.compute.manager [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Terminating instance#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.145 252257 DEBUG nova.compute.manager [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:16:35 np0005539563 kernel: tap5a778f1e-9d (unregistering): left promiscuous mode
Nov 29 03:16:35 np0005539563 NetworkManager[48981]: <info>  [1764404195.2010] device (tap5a778f1e-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:35Z|00496|binding|INFO|Releasing lport 5a778f1e-9dbc-422a-b415-d2ea4fecdaac from this chassis (sb_readonly=0)
Nov 29 03:16:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:35Z|00497|binding|INFO|Setting lport 5a778f1e-9dbc-422a-b415-d2ea4fecdaac down in Southbound
Nov 29 03:16:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:16:35Z|00498|binding|INFO|Removing iface tap5a778f1e-9d ovn-installed in OVS
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.224 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:b2:e8 10.100.0.5'], port_security=['fa:16:3e:72:b2:e8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '98453ec7-fbda-42ae-8624-8aa5921fd634', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba867fac17034bb28fe2cdb0fff3af2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35298f43-8419-4a47-81fd-585bfb137a9a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5e4b2f3-5e6e-48f8-b35a-ab61c62108a6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=5a778f1e-9dbc-422a-b415-d2ea4fecdaac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.227 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 5a778f1e-9dbc-422a-b415-d2ea4fecdaac in datapath 4d5b8c11-b69e-4a74-846b-03943fb29a81 unbound from our chassis#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.228 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d5b8c11-b69e-4a74-846b-03943fb29a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.230 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1ecc59-2ecf-46a9-adfd-d24b0d262d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.230 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 namespace which is not needed anymore#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.234 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 29 03:16:35 np0005539563 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006a.scope: Consumed 26.780s CPU time.
Nov 29 03:16:35 np0005539563 systemd-machined[213024]: Machine qemu-46-instance-0000006a terminated.
Nov 29 03:16:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:35 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [NOTICE]   (316752) : haproxy version is 2.8.14-c23fe91
Nov 29 03:16:35 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [NOTICE]   (316752) : path to executable is /usr/sbin/haproxy
Nov 29 03:16:35 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [WARNING]  (316752) : Exiting Master process...
Nov 29 03:16:35 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [ALERT]    (316752) : Current worker (316754) exited with code 143 (Terminated)
Nov 29 03:16:35 np0005539563 neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81[316748]: [WARNING]  (316752) : All workers exited. Exiting... (0)
Nov 29 03:16:35 np0005539563 systemd[1]: libpod-a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc.scope: Deactivated successfully.
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.362 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 podman[328547]: 2025-11-29 08:16:35.364374062 +0000 UTC m=+0.043126089 container died a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.367 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.379 252257 INFO nova.virt.libvirt.driver [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Instance destroyed successfully.#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.380 252257 DEBUG nova.objects.instance [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lazy-loading 'resources' on Instance uuid 98453ec7-fbda-42ae-8624-8aa5921fd634 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc-userdata-shm.mount: Deactivated successfully.
Nov 29 03:16:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2c288e7ab1ffde6b7cba9327cdcaa21ffd4e55d598eaae42a7b01c08f031a711-merged.mount: Deactivated successfully.
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.401 252257 DEBUG nova.virt.libvirt.vif [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-865719826',display_name='tempest-ServerActionsTestOtherB-server-865719826',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-865719826',id=106,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba867fac17034bb28fe2cdb0fff3af2b',ramdisk_id='',reservation_id='r-ndatvz0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-325732369',owner_user_name='tempest-ServerActionsTestOtherB-325732369-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:38Z,user_data=None,user_id='ca93c8e3eac142c0aa6b61807727dea2',uuid=98453ec7-fbda-42ae-8624-8aa5921fd634,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.403 252257 DEBUG nova.network.os_vif_util [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converting VIF {"id": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "address": "fa:16:3e:72:b2:e8", "network": {"id": "4d5b8c11-b69e-4a74-846b-03943fb29a81", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-667031396-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba867fac17034bb28fe2cdb0fff3af2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a778f1e-9d", "ovs_interfaceid": "5a778f1e-9dbc-422a-b415-d2ea4fecdaac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.404 252257 DEBUG nova.network.os_vif_util [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.405 252257 DEBUG os_vif [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.408 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.409 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a778f1e-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.411 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 podman[328547]: 2025-11-29 08:16:35.413876933 +0000 UTC m=+0.092628980 container cleanup a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.420 252257 INFO os_vif [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:b2:e8,bridge_name='br-int',has_traffic_filtering=True,id=5a778f1e-9dbc-422a-b415-d2ea4fecdaac,network=Network(4d5b8c11-b69e-4a74-846b-03943fb29a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a778f1e-9d')#033[00m
Nov 29 03:16:35 np0005539563 systemd[1]: libpod-conmon-a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc.scope: Deactivated successfully.
Nov 29 03:16:35 np0005539563 podman[328585]: 2025-11-29 08:16:35.492545234 +0000 UTC m=+0.052416601 container remove a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.498 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd5d6ee-984b-4049-a1e3-be3e7bfa196b]: (4, ('Sat Nov 29 08:16:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc)\na466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc\nSat Nov 29 08:16:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 (a466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc)\na466cb3aeceb307f2db33423735297c4be9a6dcd62d4b9bbb39f800729cf70fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.500 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9b923d-8929-4f7d-9826-8f73d7a115be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.502 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d5b8c11-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:35 np0005539563 kernel: tap4d5b8c11-b0: left promiscuous mode
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.509 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c78547a1-2867-4bf1-9655-d6ee2106391d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.531 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.532 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aba21b33-9b99-4ce3-a132-615accea0730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.533 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[242fe5f5-467f-427a-ae4b-250e6df6e110]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.554 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa4e4a1-1003-4e0a-89a0-97a7027738b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 686547, 'reachable_time': 31606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328618, 'error': None, 'target': 'ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.557 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d5b8c11-b69e-4a74-846b-03943fb29a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:16:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:35.557 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a571b4-5644-4299-ad45-913676f3d1eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:35 np0005539563 systemd[1]: run-netns-ovnmeta\x2d4d5b8c11\x2db69e\x2d4a74\x2d846b\x2d03943fb29a81.mount: Deactivated successfully.
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.924 252257 INFO nova.virt.libvirt.driver [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Deleting instance files /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634_del#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.926 252257 INFO nova.virt.libvirt.driver [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Deletion of /var/lib/nova/instances/98453ec7-fbda-42ae-8624-8aa5921fd634_del complete#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.981 252257 INFO nova.compute.manager [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.982 252257 DEBUG oslo.service.loopingcall [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.983 252257 DEBUG nova.compute.manager [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:35 np0005539563 nova_compute[252253]: 2025-11-29 08:16:35.983 252257 DEBUG nova.network.neutron [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 262 op/s
Nov 29 03:16:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:36.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.268 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:37.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.336 252257 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-vif-unplugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.337 252257 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.337 252257 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.337 252257 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.337 252257 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] No waiting events found dispatching network-vif-unplugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.337 252257 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-vif-unplugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.337 252257 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.338 252257 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.338 252257 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.338 252257 DEBUG oslo_concurrency.lockutils [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.338 252257 DEBUG nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] No waiting events found dispatching network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.338 252257 WARNING nova.compute.manager [req-43bd0361-cbe8-41d3-8bd1-18efdf9fc31e req-2e1eefa2-7201-4820-9a1f-40693e7daf21 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received unexpected event network-vif-plugged-5a778f1e-9dbc-422a-b415-d2ea4fecdaac for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.535 252257 DEBUG nova.network.neutron [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.556 252257 INFO nova.compute.manager [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Took 1.57 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.606 252257 DEBUG nova.compute.manager [req-ac898344-ed6f-4d82-8205-47070e1ef361 req-9c0cd04a-1665-439f-9eba-bfb9817ba144 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Received event network-vif-deleted-5a778f1e-9dbc-422a-b415-d2ea4fecdaac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.609 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.609 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.667 252257 DEBUG oslo_concurrency.processutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.700 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.717 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:16:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:37.918 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:37.920 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:16:37 np0005539563 nova_compute[252253]: 2025-11-29 08:16:37.969 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182341685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:38 np0005539563 nova_compute[252253]: 2025-11-29 08:16:38.146 252257 DEBUG oslo_concurrency.processutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:38 np0005539563 nova_compute[252253]: 2025-11-29 08:16:38.152 252257 DEBUG nova.compute.provider_tree [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:38 np0005539563 nova_compute[252253]: 2025-11-29 08:16:38.173 252257 DEBUG nova.scheduler.client.report [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:38 np0005539563 nova_compute[252253]: 2025-11-29 08:16:38.196 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:38 np0005539563 nova_compute[252253]: 2025-11-29 08:16:38.231 252257 INFO nova.scheduler.client.report [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Deleted allocations for instance 98453ec7-fbda-42ae-8624-8aa5921fd634#033[00m
Nov 29 03:16:38 np0005539563 nova_compute[252253]: 2025-11-29 08:16:38.292 252257 DEBUG oslo_concurrency.lockutils [None req-e73ff4f2-043f-4579-a366-c7c98b0392b9 ca93c8e3eac142c0aa6b61807727dea2 ba867fac17034bb28fe2cdb0fff3af2b - - default default] Lock "98453ec7-fbda-42ae-8624-8aa5921fd634" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 299 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 265 op/s
Nov 29 03:16:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:39.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 269 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 301 op/s
Nov 29 03:16:40 np0005539563 nova_compute[252253]: 2025-11-29 08:16:40.458 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:16:40.922 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:41 np0005539563 nova_compute[252253]: 2025-11-29 08:16:41.031 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:42 np0005539563 nova_compute[252253]: 2025-11-29 08:16:42.270 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 269 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 301 op/s
Nov 29 03:16:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:42 np0005539563 nova_compute[252253]: 2025-11-29 08:16:42.689 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:43 np0005539563 nova_compute[252253]: 2025-11-29 08:16:43.177 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:43.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:43 np0005539563 nova_compute[252253]: 2025-11-29 08:16:43.382 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 293 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 252 op/s
Nov 29 03:16:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:45.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:45.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:45 np0005539563 nova_compute[252253]: 2025-11-29 08:16:45.460 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 293 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 248 op/s
Nov 29 03:16:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:47.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:47 np0005539563 nova_compute[252253]: 2025-11-29 08:16:47.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:47.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 261 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Nov 29 03:16:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:49.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 214 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Nov 29 03:16:50 np0005539563 nova_compute[252253]: 2025-11-29 08:16:50.375 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404195.374594, 98453ec7-fbda-42ae-8624-8aa5921fd634 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:50 np0005539563 nova_compute[252253]: 2025-11-29 08:16:50.376 252257 INFO nova.compute.manager [-] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:50 np0005539563 nova_compute[252253]: 2025-11-29 08:16:50.404 252257 DEBUG nova.compute.manager [None req-4b98072e-7505-4d8a-a5a5-ed1ccdbedf8d - - - - - -] [instance: 98453ec7-fbda-42ae-8624-8aa5921fd634] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:50 np0005539563 nova_compute[252253]: 2025-11-29 08:16:50.463 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:16:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:16:51 np0005539563 nova_compute[252253]: 2025-11-29 08:16:51.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:51 np0005539563 nova_compute[252253]: 2025-11-29 08:16:51.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.275 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 214 MiB data, 980 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Nov 29 03:16:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.700 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:16:52 np0005539563 nova_compute[252253]: 2025-11-29 08:16:52.700 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230588409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.150 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.246 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.247 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.247 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:16:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.397 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.398 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4188MB free_disk=20.900981903076172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.398 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.399 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.587 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 194fa81a-dfa7-4c98-9fd0-7d20d250db7c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.587 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.588 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:16:53 np0005539563 nova_compute[252253]: 2025-11-29 08:16:53.708 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1579080136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:54 np0005539563 nova_compute[252253]: 2025-11-29 08:16:54.149 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:54 np0005539563 nova_compute[252253]: 2025-11-29 08:16:54.157 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:54 np0005539563 nova_compute[252253]: 2025-11-29 08:16:54.176 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:54 np0005539563 nova_compute[252253]: 2025-11-29 08:16:54.197 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:16:54 np0005539563 nova_compute[252253]: 2025-11-29 08:16:54.198 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 214 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Nov 29 03:16:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:55 np0005539563 nova_compute[252253]: 2025-11-29 08:16:55.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:56 np0005539563 nova_compute[252253]: 2025-11-29 08:16:56.198 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:56 np0005539563 nova_compute[252253]: 2025-11-29 08:16:56.199 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:16:56 np0005539563 nova_compute[252253]: 2025-11-29 08:16:56.231 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:16:56 np0005539563 nova_compute[252253]: 2025-11-29 08:16:56.231 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 215 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 167 op/s
Nov 29 03:16:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:57.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:57 np0005539563 nova_compute[252253]: 2025-11-29 08:16:57.307 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:16:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:16:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 221 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 833 KiB/s wr, 169 op/s
Nov 29 03:16:58 np0005539563 podman[328750]: 2025-11-29 08:16:58.507717706 +0000 UTC m=+0.061543118 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:16:58 np0005539563 podman[328751]: 2025-11-29 08:16:58.523343529 +0000 UTC m=+0.077107069 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:16:58 np0005539563 podman[328752]: 2025-11-29 08:16:58.571557245 +0000 UTC m=+0.123619670 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:16:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:59.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:16:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:59.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:59 np0005539563 nova_compute[252253]: 2025-11-29 08:16:59.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:59 np0005539563 nova_compute[252253]: 2025-11-29 08:16:59.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:59 np0005539563 nova_compute[252253]: 2025-11-29 08:16:59.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:17:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 245 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Nov 29 03:17:00 np0005539563 nova_compute[252253]: 2025-11-29 08:17:00.469 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:01.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:01.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:01 np0005539563 nova_compute[252253]: 2025-11-29 08:17:01.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 245 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.614 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.615 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.667 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.735 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.736 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.747 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.748 252257 INFO nova.compute.claims [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:17:02 np0005539563 nova_compute[252253]: 2025-11-29 08:17:02.871 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:03.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038493462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.302 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.309 252257 DEBUG nova.compute.provider_tree [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.331 252257 DEBUG nova.scheduler.client.report [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:17:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:03.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.371 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.372 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.449 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.449 252257 DEBUG nova.network.neutron [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.473 252257 INFO nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.504 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.630 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.631 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.632 252257 INFO nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Creating image(s)
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.652 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.676 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.700 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.704 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.767 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.770 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.771 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.771 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.802 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.807 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:03 np0005539563 nova_compute[252253]: 2025-11-29 08:17:03.847 252257 DEBUG nova.policy [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97c94eba4968495e9801d5e3afc94a72', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d1a4708cff44cd7853545efb6ccc406', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.087 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.152 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] resizing rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.240 252257 DEBUG nova.objects.instance [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lazy-loading 'migration_context' on Instance uuid 02b0884a-8366-4c63-ae13-b1bed91e9f04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.296 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.297 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Ensure instance console log exists: /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.297 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.298 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.298 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 270 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 201 KiB/s rd, 3.1 MiB/s wr, 61 op/s
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:04 np0005539563 nova_compute[252253]: 2025-11-29 08:17:04.797 252257 DEBUG nova.network.neutron [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Successfully created port: 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:17:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:04.923 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:04.923 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:04.923 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:05.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:05.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:05 np0005539563 nova_compute[252253]: 2025-11-29 08:17:05.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 377 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 6.9 MiB/s wr, 107 op/s
Nov 29 03:17:06 np0005539563 nova_compute[252253]: 2025-11-29 08:17:06.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:06 np0005539563 nova_compute[252253]: 2025-11-29 08:17:06.979 252257 DEBUG nova.network.neutron [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Successfully updated port: 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:17:06 np0005539563 nova_compute[252253]: 2025-11-29 08:17:06.992 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "refresh_cache-02b0884a-8366-4c63-ae13-b1bed91e9f04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:17:06 np0005539563 nova_compute[252253]: 2025-11-29 08:17:06.993 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquired lock "refresh_cache-02b0884a-8366-4c63-ae13-b1bed91e9f04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:17:06 np0005539563 nova_compute[252253]: 2025-11-29 08:17:06.993 252257 DEBUG nova.network.neutron [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:17:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:07.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:07.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:07 np0005539563 nova_compute[252253]: 2025-11-29 08:17:07.350 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:07 np0005539563 nova_compute[252253]: 2025-11-29 08:17:07.520 252257 DEBUG nova.network.neutron [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:07 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 293687ef-e23f-4bd2-acd8-6d2f579d1294 does not exist
Nov 29 03:17:07 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 085e2d9b-b32f-4e84-9460-ecb5ddbd4259 does not exist
Nov 29 03:17:07 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2b026320-f356-4fd7-8e46-ffff4a5ed7de does not exist
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:17:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:17:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 7.5 MiB/s wr, 143 op/s
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.245671295 +0000 UTC m=+0.021349200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.386221752 +0000 UTC m=+0.161899647 container create 42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:17:08 np0005539563 systemd[1]: Started libpod-conmon-42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51.scope.
Nov 29 03:17:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.481624287 +0000 UTC m=+0.257302202 container init 42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.489318866 +0000 UTC m=+0.264996761 container start 42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:17:08 np0005539563 fervent_hawking[329293]: 167 167
Nov 29 03:17:08 np0005539563 systemd[1]: libpod-42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51.scope: Deactivated successfully.
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.53524226 +0000 UTC m=+0.310920145 container attach 42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hawking, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.536321659 +0000 UTC m=+0.311999554 container died 42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hawking, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:17:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7588f16a3c718d5d3fd6076bd7dcebc0a11ac65897124b1a25ed181b3f43c968-merged.mount: Deactivated successfully.
Nov 29 03:17:08 np0005539563 podman[329277]: 2025-11-29 08:17:08.603889349 +0000 UTC m=+0.379567244 container remove 42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:17:08 np0005539563 systemd[1]: libpod-conmon-42df94324d63cf1cd3011e038136dc62eab2823f2ebd3eb9ae36fdd2c3db0f51.scope: Deactivated successfully.
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.653 252257 DEBUG nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-changed-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.654 252257 DEBUG nova.compute.manager [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Refreshing instance network info cache due to event network-changed-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.655 252257 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-02b0884a-8366-4c63-ae13-b1bed91e9f04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:08 np0005539563 podman[329319]: 2025-11-29 08:17:08.774283016 +0000 UTC m=+0.051598349 container create 696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:17:08 np0005539563 systemd[1]: Started libpod-conmon-696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e.scope.
Nov 29 03:17:08 np0005539563 podman[329319]: 2025-11-29 08:17:08.752725912 +0000 UTC m=+0.030041265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884194f2a129c6fb8c6b0acc15827de35464d93922a745a79b4a0e4968871af1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884194f2a129c6fb8c6b0acc15827de35464d93922a745a79b4a0e4968871af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884194f2a129c6fb8c6b0acc15827de35464d93922a745a79b4a0e4968871af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884194f2a129c6fb8c6b0acc15827de35464d93922a745a79b4a0e4968871af1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/884194f2a129c6fb8c6b0acc15827de35464d93922a745a79b4a0e4968871af1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.875 252257 DEBUG nova.network.neutron [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Updating instance_info_cache with network_info: [{"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:08 np0005539563 podman[329319]: 2025-11-29 08:17:08.880650978 +0000 UTC m=+0.157966341 container init 696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:17:08 np0005539563 podman[329319]: 2025-11-29 08:17:08.888456789 +0000 UTC m=+0.165772152 container start 696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:17:08 np0005539563 podman[329319]: 2025-11-29 08:17:08.894358939 +0000 UTC m=+0.171674302 container attach 696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.914 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Releasing lock "refresh_cache-02b0884a-8366-4c63-ae13-b1bed91e9f04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.914 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Instance network_info: |[{"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.915 252257 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-02b0884a-8366-4c63-ae13-b1bed91e9f04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.915 252257 DEBUG nova.network.neutron [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Refreshing network info cache for port 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.918 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Start _get_guest_xml network_info=[{"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.923 252257 WARNING nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.929 252257 DEBUG nova.virt.libvirt.host [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.930 252257 DEBUG nova.virt.libvirt.host [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.937 252257 DEBUG nova.virt.libvirt.host [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.937 252257 DEBUG nova.virt.libvirt.host [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.938 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.938 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.939 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.939 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.939 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.939 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.939 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.940 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.940 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.940 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.940 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.940 252257 DEBUG nova.virt.hardware [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:17:08 np0005539563 nova_compute[252253]: 2025-11-29 08:17:08.943 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:09.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:09.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812517582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.412 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.445 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.451 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:09 np0005539563 practical_bartik[329336]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:17:09 np0005539563 practical_bartik[329336]: --> relative data size: 1.0
Nov 29 03:17:09 np0005539563 practical_bartik[329336]: --> All data devices are unavailable
Nov 29 03:17:09 np0005539563 systemd[1]: libpod-696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e.scope: Deactivated successfully.
Nov 29 03:17:09 np0005539563 podman[329319]: 2025-11-29 08:17:09.789682827 +0000 UTC m=+1.066998180 container died 696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:17:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-884194f2a129c6fb8c6b0acc15827de35464d93922a745a79b4a0e4968871af1-merged.mount: Deactivated successfully.
Nov 29 03:17:09 np0005539563 podman[329319]: 2025-11-29 08:17:09.876376176 +0000 UTC m=+1.153691519 container remove 696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:17:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1621604132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:09 np0005539563 systemd[1]: libpod-conmon-696ad0adba1171e10343eec333a55ad73d90939c83d4b065755753d56fb1c68e.scope: Deactivated successfully.
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.923 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.926 252257 DEBUG nova.virt.libvirt.vif [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1151973140',display_name='tempest-ServerPasswordTestJSON-server-1151973140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1151973140',id=124,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d1a4708cff44cd7853545efb6ccc406',ramdisk_id='',reservation_id='r-7e6fjv9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1059040424',owner_user_name='tempest-ServerPasswordTestJSON-1059040424-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:03Z,user_data=None,user_id='97c94eba4968495e9801d5e3afc94a72',uuid=02b0884a-8366-4c63-ae13-b1bed91e9f04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.927 252257 DEBUG nova.network.os_vif_util [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Converting VIF {"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.929 252257 DEBUG nova.network.os_vif_util [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.932 252257 DEBUG nova.objects.instance [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 02b0884a-8366-4c63-ae13-b1bed91e9f04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.953 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <uuid>02b0884a-8366-4c63-ae13-b1bed91e9f04</uuid>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <name>instance-0000007c</name>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerPasswordTestJSON-server-1151973140</nova:name>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:17:08</nova:creationTime>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:user uuid="97c94eba4968495e9801d5e3afc94a72">tempest-ServerPasswordTestJSON-1059040424-project-member</nova:user>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:project uuid="3d1a4708cff44cd7853545efb6ccc406">tempest-ServerPasswordTestJSON-1059040424</nova:project>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <nova:port uuid="8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <entry name="serial">02b0884a-8366-4c63-ae13-b1bed91e9f04</entry>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <entry name="uuid">02b0884a-8366-4c63-ae13-b1bed91e9f04</entry>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/02b0884a-8366-4c63-ae13-b1bed91e9f04_disk">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/02b0884a-8366-4c63-ae13-b1bed91e9f04_disk.config">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:96:87:0f"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <target dev="tap8c7b6566-ec"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/console.log" append="off"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:17:09 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:17:09 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:17:09 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:17:09 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.954 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Preparing to wait for external event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.955 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.955 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.955 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.956 252257 DEBUG nova.virt.libvirt.vif [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1151973140',display_name='tempest-ServerPasswordTestJSON-server-1151973140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1151973140',id=124,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d1a4708cff44cd7853545efb6ccc406',ramdisk_id='',reservation_id='r-7e6fjv9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1059040424',owner_user_name='tempest-ServerPasswordTestJSON-1059040424-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:03Z,user_data=None,user_id='97c94eba4968495e9801d5e3afc94a72',uuid=02b0884a-8366-4c63-ae13-b1bed91e9f04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.956 252257 DEBUG nova.network.os_vif_util [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Converting VIF {"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.957 252257 DEBUG nova.network.os_vif_util [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.958 252257 DEBUG os_vif [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.959 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.960 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.963 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.963 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c7b6566-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.963 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c7b6566-ec, col_values=(('external_ids', {'iface-id': '8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:87:0f', 'vm-uuid': '02b0884a-8366-4c63-ae13-b1bed91e9f04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.965 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539563 NetworkManager[48981]: <info>  [1764404229.9668] manager: (tap8c7b6566-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.974 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:09 np0005539563 nova_compute[252253]: 2025-11-29 08:17:09.975 252257 INFO os_vif [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec')#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.042 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.042 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.042 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] No VIF found with MAC fa:16:3e:96:87:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.043 252257 INFO nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Using config drive#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.070 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.7 MiB/s wr, 221 op/s
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.534 252257 INFO nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Creating config drive at /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/disk.config#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.539 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtmxoa3f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.570975725 +0000 UTC m=+0.041106875 container create f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:10 np0005539563 systemd[1]: Started libpod-conmon-f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d.scope.
Nov 29 03:17:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.552694299 +0000 UTC m=+0.022825469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.648980789 +0000 UTC m=+0.119111949 container init f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.661856517 +0000 UTC m=+0.131987677 container start f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.665851616 +0000 UTC m=+0.135982786 container attach f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:17:10 np0005539563 loving_faraday[329603]: 167 167
Nov 29 03:17:10 np0005539563 systemd[1]: libpod-f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d.scope: Deactivated successfully.
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.667821169 +0000 UTC m=+0.137952389 container died f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.676 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtmxoa3f" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a46f557e5cac7a3e594c3a5e6cbcf60811edd6a724bca3df39c7536328648824-merged.mount: Deactivated successfully.
Nov 29 03:17:10 np0005539563 podman[329586]: 2025-11-29 08:17:10.708173672 +0000 UTC m=+0.178304832 container remove f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:17:10 np0005539563 systemd[1]: libpod-conmon-f567f05b6512b689b2b330730ab34993bdd901bac075ec2d250fd1f49076214d.scope: Deactivated successfully.
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.716 252257 DEBUG nova.storage.rbd_utils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] rbd image 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.721 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/disk.config 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:10 np0005539563 podman[329660]: 2025-11-29 08:17:10.88305073 +0000 UTC m=+0.049413070 container create d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.887 252257 DEBUG oslo_concurrency.processutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/disk.config 02b0884a-8366-4c63-ae13-b1bed91e9f04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.888 252257 INFO nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Deleting local config drive /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04/disk.config because it was imported into RBD.#033[00m
Nov 29 03:17:10 np0005539563 systemd[1]: Started libpod-conmon-d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda.scope.
Nov 29 03:17:10 np0005539563 kernel: tap8c7b6566-ec: entered promiscuous mode
Nov 29 03:17:10 np0005539563 NetworkManager[48981]: <info>  [1764404230.9404] manager: (tap8c7b6566-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Nov 29 03:17:10 np0005539563 podman[329660]: 2025-11-29 08:17:10.857391205 +0000 UTC m=+0.023753485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.986 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:10Z|00499|binding|INFO|Claiming lport 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 for this chassis.
Nov 29 03:17:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:10Z|00500|binding|INFO|8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084: Claiming fa:16:3e:96:87:0f 10.100.0.12
Nov 29 03:17:10 np0005539563 nova_compute[252253]: 2025-11-29 08:17:10.990 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:10.998 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:87:0f 10.100.0.12'], port_security=['fa:16:3e:96:87:0f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '02b0884a-8366-4c63-ae13-b1bed91e9f04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d1a4708cff44cd7853545efb6ccc406', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20f2e7b8-8585-4fa6-b674-cba9fb6dd4db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7d9ceab-e368-4b47-bdd6-052680d60acd, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:10.999 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 in datapath 9a8a029b-96ed-437e-9595-0ab7ef872b99 bound to our chassis#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.000 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a8a029b-96ed-437e-9595-0ab7ef872b99#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.013 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d281be3f-35e6-4baa-bb3f-6a3489ae644c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.014 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a8a029b-91 in ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.016 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a8a029b-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.016 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7cf6d5-51f8-42da-8b19-a175b40dd35b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.017 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c25f2793-6598-457e-990b-cebb039f369c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674ce2d64346cd8cc22c3ca5900c94e08e9dd3d1659d2c7b2f4f497cd3b39e70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674ce2d64346cd8cc22c3ca5900c94e08e9dd3d1659d2c7b2f4f497cd3b39e70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674ce2d64346cd8cc22c3ca5900c94e08e9dd3d1659d2c7b2f4f497cd3b39e70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674ce2d64346cd8cc22c3ca5900c94e08e9dd3d1659d2c7b2f4f497cd3b39e70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.032 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3504c9e0-ad0a-45a5-be81-bd269df75f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 systemd-machined[213024]: New machine qemu-57-instance-0000007c.
Nov 29 03:17:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:11 np0005539563 systemd[1]: Started Virtual Machine qemu-57-instance-0000007c.
Nov 29 03:17:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:11.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:11 np0005539563 podman[329660]: 2025-11-29 08:17:11.048143273 +0000 UTC m=+0.214505523 container init d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:17:11 np0005539563 systemd-udevd[329699]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:11 np0005539563 podman[329660]: 2025-11-29 08:17:11.057539347 +0000 UTC m=+0.223901577 container start d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.059 252257 DEBUG nova.network.neutron [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Updated VIF entry in instance network info cache for port 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:11Z|00501|binding|INFO|Setting lport 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 ovn-installed in OVS
Nov 29 03:17:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:11Z|00502|binding|INFO|Setting lport 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 up in Southbound
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.060 252257 DEBUG nova.network.neutron [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Updating instance_info_cache with network_info: [{"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:11 np0005539563 podman[329660]: 2025-11-29 08:17:11.061058233 +0000 UTC m=+0.227420483 container attach d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_nightingale, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.060 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[635aeb85-6cfd-4d87-b359-42831dff3713]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.063 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539563 NetworkManager[48981]: <info>  [1764404231.0768] device (tap8c7b6566-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:11 np0005539563 NetworkManager[48981]: <info>  [1764404231.0776] device (tap8c7b6566-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.090 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc48188-e0ed-4d74-8c0e-b63ff95e8f3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 NetworkManager[48981]: <info>  [1764404231.0980] manager: (tap9a8a029b-90): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.097 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[005367fe-f9ca-484b-a558-33f0ca941f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 systemd-udevd[329704]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.114 252257 DEBUG oslo_concurrency.lockutils [req-dc431073-e36a-4118-9c95-d625fee3a48d req-31333904-648c-4693-a910-0e34c3db56ab 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-02b0884a-8366-4c63-ae13-b1bed91e9f04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.130 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d94cdec3-fc19-4b80-9e05-77a160381d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.134 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae8a843-3608-4a70-9f0c-eb7b8ff550ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 NetworkManager[48981]: <info>  [1764404231.1600] device (tap9a8a029b-90): carrier: link connected
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.166 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[86475f40-3887-4aaa-82e4-19f9a10704a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.182 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1f12204c-bce4-4f16-909a-1bb2688ea2b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a8a029b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:4a:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719893, 'reachable_time': 41916, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329732, 'error': None, 'target': 'ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.197 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c7160568-d072-433f-8954-741a0c050358]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:4a9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719893, 'tstamp': 719893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329733, 'error': None, 'target': 'ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.215 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2d5a44-b30e-4fbd-8946-5888d42fd2b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a8a029b-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:4a:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719893, 'reachable_time': 41916, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329734, 'error': None, 'target': 'ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.247 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b8acab68-e911-4490-bade-b6e3742a5abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.304 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bd40a738-ed74-4456-8146-6bcb6d70ed1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.305 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a8a029b-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.306 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.306 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a8a029b-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539563 NetworkManager[48981]: <info>  [1764404231.3088] manager: (tap9a8a029b-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Nov 29 03:17:11 np0005539563 kernel: tap9a8a029b-90: entered promiscuous mode
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.308 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.314 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a8a029b-90, col_values=(('external_ids', {'iface-id': '06bcc80b-80ee-442c-8dce-e6b71c0e7c9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:11Z|00503|binding|INFO|Releasing lport 06bcc80b-80ee-442c-8dce-e6b71c0e7c9a from this chassis (sb_readonly=0)
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.317 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a8a029b-96ed-437e-9595-0ab7ef872b99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a8a029b-96ed-437e-9595-0ab7ef872b99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.318 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[68ae2156-2cfd-46a5-9bb6-1b192c17e8d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.319 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-9a8a029b-96ed-437e-9595-0ab7ef872b99
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/9a8a029b-96ed-437e-9595-0ab7ef872b99.pid.haproxy
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 9a8a029b-96ed-437e-9595-0ab7ef872b99
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:11.320 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'env', 'PROCESS_TAG=haproxy-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a8a029b-96ed-437e-9595-0ab7ef872b99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.332 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:11.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.398 252257 DEBUG nova.compute.manager [req-86b50433-83e3-4a52-90b7-bb15d5ce9184 req-ed0740b0-13de-475e-aa7e-8cf7ee3cf7ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.398 252257 DEBUG oslo_concurrency.lockutils [req-86b50433-83e3-4a52-90b7-bb15d5ce9184 req-ed0740b0-13de-475e-aa7e-8cf7ee3cf7ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.398 252257 DEBUG oslo_concurrency.lockutils [req-86b50433-83e3-4a52-90b7-bb15d5ce9184 req-ed0740b0-13de-475e-aa7e-8cf7ee3cf7ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.399 252257 DEBUG oslo_concurrency.lockutils [req-86b50433-83e3-4a52-90b7-bb15d5ce9184 req-ed0740b0-13de-475e-aa7e-8cf7ee3cf7ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.399 252257 DEBUG nova.compute.manager [req-86b50433-83e3-4a52-90b7-bb15d5ce9184 req-ed0740b0-13de-475e-aa7e-8cf7ee3cf7ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Processing event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.448 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.449 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404231.4488022, 02b0884a-8366-4c63-ae13-b1bed91e9f04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.449 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.451 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.454 252257 INFO nova.virt.libvirt.driver [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Instance spawned successfully.#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.454 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.469 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.475 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.479 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.480 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.480 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.481 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.481 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.482 252257 DEBUG nova.virt.libvirt.driver [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.506 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.506 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404231.4490745, 02b0884a-8366-4c63-ae13-b1bed91e9f04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.507 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.540 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.543 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404231.4512932, 02b0884a-8366-4c63-ae13-b1bed91e9f04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.543 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.575 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.578 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.580 252257 INFO nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Took 7.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.581 252257 DEBUG nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.609 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.668 252257 INFO nova.compute.manager [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Took 8.96 seconds to build instance.#033[00m
Nov 29 03:17:11 np0005539563 nova_compute[252253]: 2025-11-29 08:17:11.690 252257 DEBUG oslo_concurrency.lockutils [None req-f2b41b2e-e17f-46c7-be49-a71f2dd3259c 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:11 np0005539563 podman[329808]: 2025-11-29 08:17:11.698054951 +0000 UTC m=+0.046967563 container create e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:11 np0005539563 systemd[1]: Started libpod-conmon-e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e.scope.
Nov 29 03:17:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2928b0d25951a664362754ed386b22653f6cf017f17a7e49a2f606409f1d16b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:11 np0005539563 podman[329808]: 2025-11-29 08:17:11.671989915 +0000 UTC m=+0.020902547 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:11 np0005539563 podman[329808]: 2025-11-29 08:17:11.781058571 +0000 UTC m=+0.129971203 container init e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:11 np0005539563 podman[329808]: 2025-11-29 08:17:11.788488331 +0000 UTC m=+0.137400943 container start e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:11 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [NOTICE]   (329832) : New worker (329834) forked
Nov 29 03:17:11 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [NOTICE]   (329832) : Loading success.
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]: {
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:    "0": [
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:        {
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "devices": [
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "/dev/loop3"
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            ],
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "lv_name": "ceph_lv0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "lv_size": "7511998464",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "name": "ceph_lv0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "tags": {
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.cluster_name": "ceph",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.crush_device_class": "",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.encrypted": "0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.osd_id": "0",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.type": "block",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:                "ceph.vdo": "0"
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            },
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "type": "block",
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:            "vg_name": "ceph_vg0"
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:        }
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]:    ]
Nov 29 03:17:11 np0005539563 ecstatic_nightingale[329686]: }
Nov 29 03:17:11 np0005539563 systemd[1]: libpod-d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda.scope: Deactivated successfully.
Nov 29 03:17:11 np0005539563 podman[329660]: 2025-11-29 08:17:11.83678019 +0000 UTC m=+1.003142440 container died d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:17:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-674ce2d64346cd8cc22c3ca5900c94e08e9dd3d1659d2c7b2f4f497cd3b39e70-merged.mount: Deactivated successfully.
Nov 29 03:17:11 np0005539563 podman[329660]: 2025-11-29 08:17:11.893051535 +0000 UTC m=+1.059413765 container remove d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:17:11 np0005539563 systemd[1]: libpod-conmon-d6f448392638a93c641b284c9c89968c60c5f91544f531d242cf6f49f51d0fda.scope: Deactivated successfully.
Nov 29 03:17:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Nov 29 03:17:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:12 np0005539563 nova_compute[252253]: 2025-11-29 08:17:12.352 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.523485916 +0000 UTC m=+0.036835869 container create 2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:12 np0005539563 systemd[1]: Started libpod-conmon-2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718.scope.
Nov 29 03:17:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.602954738 +0000 UTC m=+0.116304741 container init 2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_matsumoto, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.50778104 +0000 UTC m=+0.021131013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.610177204 +0000 UTC m=+0.123527157 container start 2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_matsumoto, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.613934216 +0000 UTC m=+0.127284189 container attach 2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:12 np0005539563 pedantic_matsumoto[330011]: 167 167
Nov 29 03:17:12 np0005539563 systemd[1]: libpod-2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718.scope: Deactivated successfully.
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.615327893 +0000 UTC m=+0.128677846 container died 2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:17:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8cf45f0b7be17e21ce545c4e97430dc43ba48120fd9966d84bf6569b38377e15-merged.mount: Deactivated successfully.
Nov 29 03:17:12 np0005539563 podman[329995]: 2025-11-29 08:17:12.658971066 +0000 UTC m=+0.172321029 container remove 2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:17:12 np0005539563 systemd[1]: libpod-conmon-2cb6b8d2e71f707b0335c2903c7b479e1bcc6761a36cd7a35ba192ed75d6c718.scope: Deactivated successfully.
Nov 29 03:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:17:12
Nov 29 03:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 03:17:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:17:12 np0005539563 podman[330034]: 2025-11-29 08:17:12.894546028 +0000 UTC m=+0.079836343 container create 3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:17:12 np0005539563 systemd[1]: Started libpod-conmon-3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8.scope.
Nov 29 03:17:12 np0005539563 podman[330034]: 2025-11-29 08:17:12.854890614 +0000 UTC m=+0.040180959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8051db565fac8646f918793a41121ca8bcbcb8d5621d42d7e430465b702dbf0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8051db565fac8646f918793a41121ca8bcbcb8d5621d42d7e430465b702dbf0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8051db565fac8646f918793a41121ca8bcbcb8d5621d42d7e430465b702dbf0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8051db565fac8646f918793a41121ca8bcbcb8d5621d42d7e430465b702dbf0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:12 np0005539563 podman[330034]: 2025-11-29 08:17:12.976375255 +0000 UTC m=+0.161665580 container init 3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:17:12 np0005539563 podman[330034]: 2025-11-29 08:17:12.986780097 +0000 UTC m=+0.172070372 container start 3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:17:12 np0005539563 podman[330034]: 2025-11-29 08:17:12.990417096 +0000 UTC m=+0.175707391 container attach 3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:17:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:13.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:13 np0005539563 nova_compute[252253]: 2025-11-29 08:17:13.509 252257 DEBUG nova.compute.manager [req-eaa247e4-57d0-40e4-9b1a-e1d32ed710c9 req-f461beff-ac78-4ec2-833c-1e1cb8b1878f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:13 np0005539563 nova_compute[252253]: 2025-11-29 08:17:13.509 252257 DEBUG oslo_concurrency.lockutils [req-eaa247e4-57d0-40e4-9b1a-e1d32ed710c9 req-f461beff-ac78-4ec2-833c-1e1cb8b1878f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:13 np0005539563 nova_compute[252253]: 2025-11-29 08:17:13.509 252257 DEBUG oslo_concurrency.lockutils [req-eaa247e4-57d0-40e4-9b1a-e1d32ed710c9 req-f461beff-ac78-4ec2-833c-1e1cb8b1878f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:13 np0005539563 nova_compute[252253]: 2025-11-29 08:17:13.510 252257 DEBUG oslo_concurrency.lockutils [req-eaa247e4-57d0-40e4-9b1a-e1d32ed710c9 req-f461beff-ac78-4ec2-833c-1e1cb8b1878f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:13 np0005539563 nova_compute[252253]: 2025-11-29 08:17:13.510 252257 DEBUG nova.compute.manager [req-eaa247e4-57d0-40e4-9b1a-e1d32ed710c9 req-f461beff-ac78-4ec2-833c-1e1cb8b1878f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] No waiting events found dispatching network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:13 np0005539563 nova_compute[252253]: 2025-11-29 08:17:13.510 252257 WARNING nova.compute.manager [req-eaa247e4-57d0-40e4-9b1a-e1d32ed710c9 req-f461beff-ac78-4ec2-833c-1e1cb8b1878f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received unexpected event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]: {
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:        "osd_id": 0,
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:        "type": "bluestore"
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]:    }
Nov 29 03:17:13 np0005539563 xenodochial_mccarthy[330052]: }
Nov 29 03:17:13 np0005539563 systemd[1]: libpod-3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8.scope: Deactivated successfully.
Nov 29 03:17:13 np0005539563 podman[330034]: 2025-11-29 08:17:13.887617805 +0000 UTC m=+1.072908090 container died 3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:17:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8051db565fac8646f918793a41121ca8bcbcb8d5621d42d7e430465b702dbf0d-merged.mount: Deactivated successfully.
Nov 29 03:17:13 np0005539563 podman[330034]: 2025-11-29 08:17:13.953750006 +0000 UTC m=+1.139040301 container remove 3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mccarthy, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:17:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:17:13 np0005539563 systemd[1]: libpod-conmon-3dc44cc8236e7115c635817a2b46c00cebcae95104475267c7fb9cfc95e3c2b8.scope: Deactivated successfully.
Nov 29 03:17:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:17:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:17:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fe5e70d7-2e2d-4748-9890-e1db24de640b does not exist
Nov 29 03:17:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3970bcc7-1de7-49f9-8214-8bf7b3292c90 does not exist
Nov 29 03:17:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 89bfb09a-591e-4e01-8ab9-ce8d84834356 does not exist
Nov 29 03:17:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:17:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 365 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 234 op/s
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.540 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.541 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.542 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.542 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.542 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.543 252257 INFO nova.compute.manager [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Terminating instance#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.545 252257 DEBUG nova.compute.manager [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:17:14 np0005539563 kernel: tap8c7b6566-ec (unregistering): left promiscuous mode
Nov 29 03:17:14 np0005539563 NetworkManager[48981]: <info>  [1764404234.5903] device (tap8c7b6566-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.600 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:14Z|00504|binding|INFO|Releasing lport 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 from this chassis (sb_readonly=0)
Nov 29 03:17:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:14Z|00505|binding|INFO|Setting lport 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 down in Southbound
Nov 29 03:17:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:14Z|00506|binding|INFO|Removing iface tap8c7b6566-ec ovn-installed in OVS
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.602 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.610 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:87:0f 10.100.0.12'], port_security=['fa:16:3e:96:87:0f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '02b0884a-8366-4c63-ae13-b1bed91e9f04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d1a4708cff44cd7853545efb6ccc406', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20f2e7b8-8585-4fa6-b674-cba9fb6dd4db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7d9ceab-e368-4b47-bdd6-052680d60acd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.613 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 in datapath 9a8a029b-96ed-437e-9595-0ab7ef872b99 unbound from our chassis#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.616 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a8a029b-96ed-437e-9595-0ab7ef872b99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.618 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dab7d344-bb49-4481-a3ff-e5531bf2825d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.619 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99 namespace which is not needed anymore#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.620 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 29 03:17:14 np0005539563 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007c.scope: Consumed 3.465s CPU time.
Nov 29 03:17:14 np0005539563 systemd-machined[213024]: Machine qemu-57-instance-0000007c terminated.
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.796 252257 INFO nova.virt.libvirt.driver [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Instance destroyed successfully.#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.797 252257 DEBUG nova.objects.instance [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lazy-loading 'resources' on Instance uuid 02b0884a-8366-4c63-ae13-b1bed91e9f04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:14 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [NOTICE]   (329832) : haproxy version is 2.8.14-c23fe91
Nov 29 03:17:14 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [NOTICE]   (329832) : path to executable is /usr/sbin/haproxy
Nov 29 03:17:14 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [WARNING]  (329832) : Exiting Master process...
Nov 29 03:17:14 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [ALERT]    (329832) : Current worker (329834) exited with code 143 (Terminated)
Nov 29 03:17:14 np0005539563 neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99[329826]: [WARNING]  (329832) : All workers exited. Exiting... (0)
Nov 29 03:17:14 np0005539563 systemd[1]: libpod-e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e.scope: Deactivated successfully.
Nov 29 03:17:14 np0005539563 podman[330209]: 2025-11-29 08:17:14.813458038 +0000 UTC m=+0.065254378 container died e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.814 252257 DEBUG nova.virt.libvirt.vif [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1151973140',display_name='tempest-ServerPasswordTestJSON-server-1151973140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1151973140',id=124,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d1a4708cff44cd7853545efb6ccc406',ramdisk_id='',reservation_id='r-7e6fjv9f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1059040424',owner_user_name='tempest-ServerPasswordTestJSON-1059040424-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:14Z,user_data=None,user_id='97c94eba4968495e9801d5e3afc94a72',uuid=02b0884a-8366-4c63-ae13-b1bed91e9f04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.814 252257 DEBUG nova.network.os_vif_util [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Converting VIF {"id": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "address": "fa:16:3e:96:87:0f", "network": {"id": "9a8a029b-96ed-437e-9595-0ab7ef872b99", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-175254642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d1a4708cff44cd7853545efb6ccc406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c7b6566-ec", "ovs_interfaceid": "8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.815 252257 DEBUG nova.network.os_vif_util [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.815 252257 DEBUG os_vif [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.817 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.818 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c7b6566-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.819 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.825 252257 INFO os_vif [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:87:0f,bridge_name='br-int',has_traffic_filtering=True,id=8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084,network=Network(9a8a029b-96ed-437e-9595-0ab7ef872b99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c7b6566-ec')#033[00m
Nov 29 03:17:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2928b0d25951a664362754ed386b22653f6cf017f17a7e49a2f606409f1d16b6-merged.mount: Deactivated successfully.
Nov 29 03:17:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:17:14 np0005539563 podman[330209]: 2025-11-29 08:17:14.858294493 +0000 UTC m=+0.110090813 container cleanup e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:17:14 np0005539563 systemd[1]: libpod-conmon-e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e.scope: Deactivated successfully.
Nov 29 03:17:14 np0005539563 podman[330260]: 2025-11-29 08:17:14.938564908 +0000 UTC m=+0.054477027 container remove e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.944 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[047f905e-e4e7-457a-ab0b-6a1b990ec076]: (4, ('Sat Nov 29 08:17:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99 (e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e)\ne045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e\nSat Nov 29 08:17:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99 (e045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e)\ne045ed899f34ec23c14659d278726b406ad722a8fadedeee58b52a8154565e1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.946 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5da44c-2267-45bd-90a3-930bf89ef381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.947 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a8a029b-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.949 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 kernel: tap9a8a029b-90: left promiscuous mode
Nov 29 03:17:14 np0005539563 nova_compute[252253]: 2025-11-29 08:17:14.965 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.968 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4c679ab6-4d87-4b61-b8b8-7b6cffb6e780]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.985 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd411ae-04cb-4ae8-b6eb-ca6433c24769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:14.986 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f8a474-19db-4dd0-81bd-a36378744291]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:15.001 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1988630b-bbd1-4dfe-9e56-e1e597ed5fa5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719886, 'reachable_time': 38983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330280, 'error': None, 'target': 'ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:15 np0005539563 systemd[1]: run-netns-ovnmeta\x2d9a8a029b\x2d96ed\x2d437e\x2d9595\x2d0ab7ef872b99.mount: Deactivated successfully.
Nov 29 03:17:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:15.006 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a8a029b-96ed-437e-9595-0ab7ef872b99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:17:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:15.006 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[993806ce-3f76-4e6e-85c5-9d4103f0b2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:15.019 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.019 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:15.020 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:17:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.268 252257 INFO nova.virt.libvirt.driver [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Deleting instance files /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04_del#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.269 252257 INFO nova.virt.libvirt.driver [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Deletion of /var/lib/nova/instances/02b0884a-8366-4c63-ae13-b1bed91e9f04_del complete#033[00m
Nov 29 03:17:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:15.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.350 252257 INFO nova.compute.manager [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.351 252257 DEBUG oslo.service.loopingcall [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.352 252257 DEBUG nova.compute.manager [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.352 252257 DEBUG nova.network.neutron [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.622 252257 DEBUG nova.compute.manager [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-vif-unplugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.623 252257 DEBUG oslo_concurrency.lockutils [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.623 252257 DEBUG oslo_concurrency.lockutils [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.624 252257 DEBUG oslo_concurrency.lockutils [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.624 252257 DEBUG nova.compute.manager [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] No waiting events found dispatching network-vif-unplugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.625 252257 DEBUG nova.compute.manager [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-vif-unplugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.625 252257 DEBUG nova.compute.manager [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.626 252257 DEBUG oslo_concurrency.lockutils [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.626 252257 DEBUG oslo_concurrency.lockutils [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.627 252257 DEBUG oslo_concurrency.lockutils [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.627 252257 DEBUG nova.compute.manager [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] No waiting events found dispatching network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.628 252257 WARNING nova.compute.manager [req-1a8877d8-65b2-4010-8c05-e7756a6eaf69 req-cefc5d02-0ae2-4ad2-a63f-4dfafec9c627 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received unexpected event network-vif-plugged-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.914 252257 DEBUG nova.network.neutron [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.949 252257 INFO nova.compute.manager [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Took 0.60 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.993 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:15 np0005539563 nova_compute[252253]: 2025-11-29 08:17:15.994 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.136 252257 DEBUG oslo_concurrency.processutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 258 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.4 MiB/s wr, 381 op/s
Nov 29 03:17:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384987786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.600 252257 DEBUG oslo_concurrency.processutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.610 252257 DEBUG nova.compute.provider_tree [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.638 252257 DEBUG nova.scheduler.client.report [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.667 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.695 252257 INFO nova.scheduler.client.report [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Deleted allocations for instance 02b0884a-8366-4c63-ae13-b1bed91e9f04#033[00m
Nov 29 03:17:16 np0005539563 nova_compute[252253]: 2025-11-29 08:17:16.770 252257 DEBUG oslo_concurrency.lockutils [None req-050e34f5-1b6a-4df1-9293-dd72dc1a9809 97c94eba4968495e9801d5e3afc94a72 3d1a4708cff44cd7853545efb6ccc406 - - default default] Lock "02b0884a-8366-4c63-ae13-b1bed91e9f04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:17.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.321 252257 DEBUG nova.compute.manager [req-05fae664-519a-4064-b074-445f5028504f req-ab8784d7-249c-4c22-adba-eb8432ee9c16 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Received event network-vif-deleted-8c7b6566-ec90-4ceb-9c3d-ad7b43c3b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:17.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.356 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.822 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.822 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.822 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.823 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.823 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.824 252257 INFO nova.compute.manager [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Terminating instance#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.825 252257 DEBUG nova.compute.manager [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:17:17 np0005539563 kernel: tap53db9057-1d (unregistering): left promiscuous mode
Nov 29 03:17:17 np0005539563 NetworkManager[48981]: <info>  [1764404237.9087] device (tap53db9057-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:17Z|00507|binding|INFO|Releasing lport 53db9057-1d66-479a-9112-5c451e2dc1c8 from this chassis (sb_readonly=0)
Nov 29 03:17:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:17Z|00508|binding|INFO|Setting lport 53db9057-1d66-479a-9112-5c451e2dc1c8 down in Southbound
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.970 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:17Z|00509|binding|INFO|Removing iface tap53db9057-1d ovn-installed in OVS
Nov 29 03:17:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:17.979 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:62:62 10.100.0.8'], port_security=['fa:16:3e:c4:62:62 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '194fa81a-dfa7-4c98-9fd0-7d20d250db7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a48f340-3ab0-428a-8b80-75fcf0f9f3f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6025758b69854406b221c47d9ef59dea', 'neutron:revision_number': '6', 'neutron:security_group_ids': '75bc8da6-dde6-455d-b531-bf85392bb032', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=344a2810-fa48-40b2-8837-e84899a18cc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=53db9057-1d66-479a-9112-5c451e2dc1c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:17.981 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 53db9057-1d66-479a-9112-5c451e2dc1c8 in datapath 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 unbound from our chassis#033[00m
Nov 29 03:17:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:17.982 158990 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2a48f340-3ab0-428a-8b80-75fcf0f9f3f2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:17:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:17.982 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b935caea-f8f8-4168-beb1-a81a53035f83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:17 np0005539563 nova_compute[252253]: 2025-11-29 08:17:17.994 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 29 03:17:18 np0005539563 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000078.scope: Consumed 15.525s CPU time.
Nov 29 03:17:18 np0005539563 systemd-machined[213024]: Machine qemu-56-instance-00000078 terminated.
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.054 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.068 252257 INFO nova.virt.libvirt.driver [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Instance destroyed successfully.#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.069 252257 DEBUG nova.objects.instance [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lazy-loading 'resources' on Instance uuid 194fa81a-dfa7-4c98-9fd0-7d20d250db7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.108 252257 DEBUG nova.virt.libvirt.vif [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:15:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1466602938',display_name='tempest-ServerRescueTestJSON-server-1466602938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1466602938',id=120,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6025758b69854406b221c47d9ef59dea',ramdisk_id='',reservation_id='r-2lp5obe1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-597299129',owner_user_name='tempest-ServerRescueTestJSON-597299129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:33Z,user_data=None,user_id='7bb4a89eea4e4166a7a1c5e3135cb182',uuid=194fa81a-dfa7-4c98-9fd0-7d20d250db7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.108 252257 DEBUG nova.network.os_vif_util [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converting VIF {"id": "53db9057-1d66-479a-9112-5c451e2dc1c8", "address": "fa:16:3e:c4:62:62", "network": {"id": "2a48f340-3ab0-428a-8b80-75fcf0f9f3f2", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1392053617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6025758b69854406b221c47d9ef59dea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53db9057-1d", "ovs_interfaceid": "53db9057-1d66-479a-9112-5c451e2dc1c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.109 252257 DEBUG nova.network.os_vif_util [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.109 252257 DEBUG os_vif [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.111 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.111 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53db9057-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.113 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.114 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.114 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.116 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.118 252257 INFO os_vif [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:62:62,bridge_name='br-int',has_traffic_filtering=True,id=53db9057-1d66-479a-9112-5c451e2dc1c8,network=Network(2a48f340-3ab0-428a-8b80-75fcf0f9f3f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53db9057-1d')#033[00m
Nov 29 03:17:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 234 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.4 MiB/s wr, 399 op/s
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.996 252257 INFO nova.virt.libvirt.driver [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Deleting instance files /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_del#033[00m
Nov 29 03:17:18 np0005539563 nova_compute[252253]: 2025-11-29 08:17:18.997 252257 INFO nova.virt.libvirt.driver [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Deletion of /var/lib/nova/instances/194fa81a-dfa7-4c98-9fd0-7d20d250db7c_del complete#033[00m
Nov 29 03:17:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.064 252257 INFO nova.compute.manager [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.065 252257 DEBUG oslo.service.loopingcall [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.066 252257 DEBUG nova.compute.manager [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.067 252257 DEBUG nova.network.neutron [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:17:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:19.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.463 252257 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-unplugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.464 252257 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.465 252257 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.466 252257 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.466 252257 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-unplugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.467 252257 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-unplugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.467 252257 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.467 252257 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.468 252257 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.468 252257 DEBUG oslo_concurrency.lockutils [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.469 252257 DEBUG nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] No waiting events found dispatching network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:19 np0005539563 nova_compute[252253]: 2025-11-29 08:17:19.469 252257 WARNING nova.compute.manager [req-f5d69788-4526-4c35-ac85-6e9d319b12c2 req-2170f6ea-1564-4944-ae95-ec3f6a802bb9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received unexpected event network-vif-plugged-53db9057-1d66-479a-9112-5c451e2dc1c8 for instance with vm_state rescued and task_state deleting.#033[00m
Nov 29 03:17:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 130 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 1.8 MiB/s wr, 466 op/s
Nov 29 03:17:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:21.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:21 np0005539563 nova_compute[252253]: 2025-11-29 08:17:21.814 252257 DEBUG nova.network.neutron [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:21 np0005539563 nova_compute[252253]: 2025-11-29 08:17:21.845 252257 INFO nova.compute.manager [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Took 2.78 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:21 np0005539563 nova_compute[252253]: 2025-11-29 08:17:21.921 252257 DEBUG nova.compute.manager [req-9b246977-b71f-444d-a36f-e3ad20a67c4c req-ae6b466d-1c77-4ab2-9425-9b0e4d5762ce 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Received event network-vif-deleted-53db9057-1d66-479a-9112-5c451e2dc1c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:21 np0005539563 nova_compute[252253]: 2025-11-29 08:17:21.924 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:21 np0005539563 nova_compute[252253]: 2025-11-29 08:17:21.925 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.081 252257 DEBUG oslo_concurrency.processutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 130 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 384 op/s
Nov 29 03:17:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345798867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.541 252257 DEBUG oslo_concurrency.processutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.550 252257 DEBUG nova.compute.provider_tree [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.573 252257 DEBUG nova.scheduler.client.report [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.598 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.632 252257 INFO nova.scheduler.client.report [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Deleted allocations for instance 194fa81a-dfa7-4c98-9fd0-7d20d250db7c#033[00m
Nov 29 03:17:22 np0005539563 nova_compute[252253]: 2025-11-29 08:17:22.708 252257 DEBUG oslo_concurrency.lockutils [None req-6f518cdb-60b6-4aee-bdf7-a4752eb44004 7bb4a89eea4e4166a7a1c5e3135cb182 6025758b69854406b221c47d9ef59dea - - default default] Lock "194fa81a-dfa7-4c98-9fd0-7d20d250db7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:23.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:23 np0005539563 nova_compute[252253]: 2025-11-29 08:17:23.114 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:23.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021286990508096034 of space, bias 1.0, pg target 0.638609715242881 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:17:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 88 MiB data, 905 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 385 op/s
Nov 29 03:17:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:25.022 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:25.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:25.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Nov 29 03:17:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:27.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:27.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:27 np0005539563 nova_compute[252253]: 2025-11-29 08:17:27.386 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:28 np0005539563 nova_compute[252253]: 2025-11-29 08:17:28.118 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 29 03:17:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:29.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:29.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:29 np0005539563 podman[330369]: 2025-11-29 08:17:29.551494605 +0000 UTC m=+0.096475535 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:29 np0005539563 podman[330370]: 2025-11-29 08:17:29.593246935 +0000 UTC m=+0.130669461 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:17:29 np0005539563 podman[330371]: 2025-11-29 08:17:29.594055217 +0000 UTC m=+0.136365225 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:17:29 np0005539563 nova_compute[252253]: 2025-11-29 08:17:29.794 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404234.7922926, 02b0884a-8366-4c63-ae13-b1bed91e9f04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:29 np0005539563 nova_compute[252253]: 2025-11-29 08:17:29.796 252257 INFO nova.compute.manager [-] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:29 np0005539563 nova_compute[252253]: 2025-11-29 08:17:29.817 252257 DEBUG nova.compute.manager [None req-943ec1c5-df4c-4ef7-86c8-a65c8d58114a - - - - - -] [instance: 02b0884a-8366-4c63-ae13-b1bed91e9f04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1006 KiB/s wr, 179 op/s
Nov 29 03:17:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:31.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:31.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 29 03:17:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:32 np0005539563 nova_compute[252253]: 2025-11-29 08:17:32.387 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:33 np0005539563 nova_compute[252253]: 2025-11-29 08:17:33.067 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404238.065031, 194fa81a-dfa7-4c98-9fd0-7d20d250db7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:33 np0005539563 nova_compute[252253]: 2025-11-29 08:17:33.067 252257 INFO nova.compute.manager [-] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:33.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:33 np0005539563 nova_compute[252253]: 2025-11-29 08:17:33.087 252257 DEBUG nova.compute.manager [None req-32edb3e1-6f08-43cc-a928-310146d97e4f - - - - - -] [instance: 194fa81a-dfa7-4c98-9fd0-7d20d250db7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:33 np0005539563 nova_compute[252253]: 2025-11-29 08:17:33.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 29 03:17:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:35.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:35.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 60 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Nov 29 03:17:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:37.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:37.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:37 np0005539563 nova_compute[252253]: 2025-11-29 08:17:37.390 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.124 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.530 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.531 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.557 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.640 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.641 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.649 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.650 252257 INFO nova.compute.claims [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:17:38 np0005539563 nova_compute[252253]: 2025-11-29 08:17:38.769 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:39.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243143571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.191 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.199 252257 DEBUG nova.compute.provider_tree [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.219 252257 DEBUG nova.scheduler.client.report [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.249 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.251 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.343 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.344 252257 DEBUG nova.network.neutron [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.359 252257 INFO nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:17:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:39.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.382 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.496 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.497 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.497 252257 INFO nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Creating image(s)
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.530 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.570 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.609 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.614 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.647 252257 DEBUG nova.policy [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9d3b2212bcd144cbb0c1abdeb25b9998', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '53808986ef2542a6b39d9d28957c85c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.699 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.700 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.701 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.702 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.735 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:39 np0005539563 nova_compute[252253]: 2025-11-29 08:17:39.739 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.264 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.306 252257 DEBUG nova.network.neutron [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Successfully created port: b49b18e2-089e-4f82-96c6-39227d328cf6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:17:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 72 op/s
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.360 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] resizing rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.479 252257 DEBUG nova.objects.instance [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lazy-loading 'migration_context' on Instance uuid d19e7e7b-5493-4a97-a39b-ebaa9f201a68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.514 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.515 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Ensure instance console log exists: /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.515 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.516 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:40 np0005539563 nova_compute[252253]: 2025-11-29 08:17:40.516 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:41.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.341 252257 DEBUG nova.network.neutron [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Successfully updated port: b49b18e2-089e-4f82-96c6-39227d328cf6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.358 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "refresh_cache-d19e7e7b-5493-4a97-a39b-ebaa9f201a68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.358 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquired lock "refresh_cache-d19e7e7b-5493-4a97-a39b-ebaa9f201a68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.358 252257 DEBUG nova.network.neutron [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:17:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:41.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.509 252257 DEBUG nova.compute.manager [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-changed-b49b18e2-089e-4f82-96c6-39227d328cf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.510 252257 DEBUG nova.compute.manager [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Refreshing instance network info cache due to event network-changed-b49b18e2-089e-4f82-96c6-39227d328cf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.510 252257 DEBUG oslo_concurrency.lockutils [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d19e7e7b-5493-4a97-a39b-ebaa9f201a68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:17:41 np0005539563 nova_compute[252253]: 2025-11-29 08:17:41.603 252257 DEBUG nova.network.neutron [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:17:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Nov 29 03:17:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.393 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.515 252257 DEBUG nova.network.neutron [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Updating instance_info_cache with network_info: [{"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.533 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Releasing lock "refresh_cache-d19e7e7b-5493-4a97-a39b-ebaa9f201a68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.533 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Instance network_info: |[{"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.535 252257 DEBUG oslo_concurrency.lockutils [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d19e7e7b-5493-4a97-a39b-ebaa9f201a68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.535 252257 DEBUG nova.network.neutron [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Refreshing network info cache for port b49b18e2-089e-4f82-96c6-39227d328cf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.540 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Start _get_guest_xml network_info=[{"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.547 252257 WARNING nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.560 252257 DEBUG nova.virt.libvirt.host [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.561 252257 DEBUG nova.virt.libvirt.host [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.566 252257 DEBUG nova.virt.libvirt.host [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.566 252257 DEBUG nova.virt.libvirt.host [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.568 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.568 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.569 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.569 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.569 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:17:42 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.570 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.570 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.570 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.571 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.571 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.571 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.572 252257 DEBUG nova.virt.hardware [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.574 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:42 np0005539563 nova_compute[252253]: 2025-11-29 08:17:42.697 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:17:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262924247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.055 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.078 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.082 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:43.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:43.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260172181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.540 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.543 252257 DEBUG nova.virt.libvirt.vif [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-970709490',display_name='tempest-ServerMetadataTestJSON-server-970709490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-970709490',id=126,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='53808986ef2542a6b39d9d28957c85c7',ramdisk_id='',reservation_id='r-9u1a4qf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1347032547',owner_user_name='tempest-ServerMetadataTestJSON-1347032547-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:39Z,user_data=None,user_id='9d3b2212bcd144cbb0c1abdeb25b9998',uuid=d19e7e7b-5493-4a97-a39b-ebaa9f201a68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.544 252257 DEBUG nova.network.os_vif_util [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Converting VIF {"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.545 252257 DEBUG nova.network.os_vif_util [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.548 252257 DEBUG nova.objects.instance [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid d19e7e7b-5493-4a97-a39b-ebaa9f201a68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.621 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <uuid>d19e7e7b-5493-4a97-a39b-ebaa9f201a68</uuid>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <name>instance-0000007e</name>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerMetadataTestJSON-server-970709490</nova:name>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:17:42</nova:creationTime>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:user uuid="9d3b2212bcd144cbb0c1abdeb25b9998">tempest-ServerMetadataTestJSON-1347032547-project-member</nova:user>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:project uuid="53808986ef2542a6b39d9d28957c85c7">tempest-ServerMetadataTestJSON-1347032547</nova:project>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <nova:port uuid="b49b18e2-089e-4f82-96c6-39227d328cf6">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <entry name="serial">d19e7e7b-5493-4a97-a39b-ebaa9f201a68</entry>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <entry name="uuid">d19e7e7b-5493-4a97-a39b-ebaa9f201a68</entry>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk.config">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:b0:77:91"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <target dev="tapb49b18e2-08"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/console.log" append="off"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:17:43 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:17:43 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:17:43 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:17:43 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.624 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Preparing to wait for external event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.625 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.626 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.626 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.627 252257 DEBUG nova.virt.libvirt.vif [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-970709490',display_name='tempest-ServerMetadataTestJSON-server-970709490',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-970709490',id=126,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='53808986ef2542a6b39d9d28957c85c7',ramdisk_id='',reservation_id='r-9u1a4qf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1347032547',owner_user_name='tempest-ServerMetadataTestJSON-1347032547-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:39Z,user_data=None,user_id='9d3b2212bcd144cbb0c1abdeb25b9998',uuid=d19e7e7b-5493-4a97-a39b-ebaa9f201a68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.628 252257 DEBUG nova.network.os_vif_util [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Converting VIF {"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.629 252257 DEBUG nova.network.os_vif_util [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.630 252257 DEBUG os_vif [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.631 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.631 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.632 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.636 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.637 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb49b18e2-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.638 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb49b18e2-08, col_values=(('external_ids', {'iface-id': 'b49b18e2-089e-4f82-96c6-39227d328cf6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:77:91', 'vm-uuid': 'd19e7e7b-5493-4a97-a39b-ebaa9f201a68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.640 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:43 np0005539563 NetworkManager[48981]: <info>  [1764404263.6418] manager: (tapb49b18e2-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.643 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.646 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:43 np0005539563 nova_compute[252253]: 2025-11-29 08:17:43.648 252257 INFO os_vif [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08')#033[00m
Nov 29 03:17:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 83 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 03:17:44 np0005539563 nova_compute[252253]: 2025-11-29 08:17:44.371 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:44 np0005539563 nova_compute[252253]: 2025-11-29 08:17:44.372 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:44 np0005539563 nova_compute[252253]: 2025-11-29 08:17:44.372 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] No VIF found with MAC fa:16:3e:b0:77:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:44 np0005539563 nova_compute[252253]: 2025-11-29 08:17:44.373 252257 INFO nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Using config drive#033[00m
Nov 29 03:17:44 np0005539563 nova_compute[252253]: 2025-11-29 08:17:44.406 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.047 252257 INFO nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Creating config drive at /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/disk.config#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.052 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcalhvy5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:17:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:45.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.162 252257 DEBUG nova.network.neutron [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Updated VIF entry in instance network info cache for port b49b18e2-089e-4f82-96c6-39227d328cf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.163 252257 DEBUG nova.network.neutron [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Updating instance_info_cache with network_info: [{"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.199 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcalhvy5" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.231 252257 DEBUG nova.storage.rbd_utils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] rbd image d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.236 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/disk.config d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:45 np0005539563 nova_compute[252253]: 2025-11-29 08:17:45.270 252257 DEBUG oslo_concurrency.lockutils [req-04f099b2-5648-42d5-9ce6-5d3220cf5065 req-1f28a2c2-cf39-4713-b67c-a3f5a1be14df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d19e7e7b-5493-4a97-a39b-ebaa9f201a68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:45.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.175 252257 DEBUG oslo_concurrency.processutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/disk.config d19e7e7b-5493-4a97-a39b-ebaa9f201a68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.176 252257 INFO nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Deleting local config drive /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68/disk.config because it was imported into RBD.#033[00m
Nov 29 03:17:46 np0005539563 kernel: tapb49b18e2-08: entered promiscuous mode
Nov 29 03:17:46 np0005539563 NetworkManager[48981]: <info>  [1764404266.2319] manager: (tapb49b18e2-08): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.232 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:46Z|00510|binding|INFO|Claiming lport b49b18e2-089e-4f82-96c6-39227d328cf6 for this chassis.
Nov 29 03:17:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:46Z|00511|binding|INFO|b49b18e2-089e-4f82-96c6-39227d328cf6: Claiming fa:16:3e:b0:77:91 10.100.0.7
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.237 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.245 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:77:91 10.100.0.7'], port_security=['fa:16:3e:b0:77:91 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd19e7e7b-5493-4a97-a39b-ebaa9f201a68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97a517fe-79d8-476a-86af-94d084440c69', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '53808986ef2542a6b39d9d28957c85c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c87fb8e0-aca3-4e27-bb10-f86edfd4b40f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9976bd9-eee5-4043-9726-7dbf010a8174, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=b49b18e2-089e-4f82-96c6-39227d328cf6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.247 158990 INFO neutron.agent.ovn.metadata.agent [-] Port b49b18e2-089e-4f82-96c6-39227d328cf6 in datapath 97a517fe-79d8-476a-86af-94d084440c69 bound to our chassis#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.248 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97a517fe-79d8-476a-86af-94d084440c69#033[00m
Nov 29 03:17:46 np0005539563 systemd-udevd[330818]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.259 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4171817c-b275-4142-8fae-5e2925387541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.262 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97a517fe-71 in ovnmeta-97a517fe-79d8-476a-86af-94d084440c69 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.263 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97a517fe-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.263 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b57a32b4-01ca-4e97-a1a0-f6e380b7c437]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.265 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bb256ea7-6dd7-4c6c-9491-3e4cb5b042a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 systemd-machined[213024]: New machine qemu-58-instance-0000007e.
Nov 29 03:17:46 np0005539563 NetworkManager[48981]: <info>  [1764404266.2724] device (tapb49b18e2-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:46 np0005539563 NetworkManager[48981]: <info>  [1764404266.2739] device (tapb49b18e2-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.275 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[45454a58-caf9-48bf-a7ef-420add38f219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 systemd[1]: Started Virtual Machine qemu-58-instance-0000007e.
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.298 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa16e69-d990-4431-aa90-69639716275b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.323 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.323 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2df55a14-e85a-4d0a-9740-24b5c0d131b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 NetworkManager[48981]: <info>  [1764404266.3304] manager: (tap97a517fe-70): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.329 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[921a1bc7-ba4b-4194-bbe7-e4fb6265fb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 180 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.3 MiB/s wr, 149 op/s
Nov 29 03:17:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:46Z|00512|binding|INFO|Setting lport b49b18e2-089e-4f82-96c6-39227d328cf6 ovn-installed in OVS
Nov 29 03:17:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:46Z|00513|binding|INFO|Setting lport b49b18e2-089e-4f82-96c6-39227d328cf6 up in Southbound
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.337 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.357 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f59bcff6-cac4-4c56-bf89-6f60ca8c3b9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.360 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[af48d1c2-3226-4a8f-a750-49a3f472e237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 NetworkManager[48981]: <info>  [1764404266.3776] device (tap97a517fe-70): carrier: link connected
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.382 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[37cf6bdd-c59d-456e-b8fc-20f2404a638e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.396 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9e095b9b-36c7-40bb-81d2-6b3b343ef493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97a517fe-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:e7:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723415, 'reachable_time': 16228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330851, 'error': None, 'target': 'ovnmeta-97a517fe-79d8-476a-86af-94d084440c69', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.408 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bedf82bc-c460-4688-90b1-42ddd86aea8a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:e777'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723415, 'tstamp': 723415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330852, 'error': None, 'target': 'ovnmeta-97a517fe-79d8-476a-86af-94d084440c69', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.422 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1ecf6a-d60a-4c80-a802-a3e77b10911d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97a517fe-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:e7:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723415, 'reachable_time': 16228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330853, 'error': None, 'target': 'ovnmeta-97a517fe-79d8-476a-86af-94d084440c69', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.445 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ce75aea2-0756-4655-89f0-8e3759a39938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.495 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[efe9efc8-21bd-4853-9585-e984325cefd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.496 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97a517fe-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.496 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.496 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97a517fe-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.498 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 NetworkManager[48981]: <info>  [1764404266.4989] manager: (tap97a517fe-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 29 03:17:46 np0005539563 kernel: tap97a517fe-70: entered promiscuous mode
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.501 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.502 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97a517fe-70, col_values=(('external_ids', {'iface-id': 'e7a07c29-f687-41e9-a782-9ca131c91d94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.503 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:46Z|00514|binding|INFO|Releasing lport e7a07c29-f687-41e9-a782-9ca131c91d94 from this chassis (sb_readonly=0)
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.537 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97a517fe-79d8-476a-86af-94d084440c69.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97a517fe-79d8-476a-86af-94d084440c69.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.537 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1914bd5d-1e5f-4810-bf18-5076cbf2e5af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.538 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-97a517fe-79d8-476a-86af-94d084440c69
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/97a517fe-79d8-476a-86af-94d084440c69.pid.haproxy
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 97a517fe-79d8-476a-86af-94d084440c69
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:17:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:46.539 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97a517fe-79d8-476a-86af-94d084440c69', 'env', 'PROCESS_TAG=haproxy-97a517fe-79d8-476a-86af-94d084440c69', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97a517fe-79d8-476a-86af-94d084440c69.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.895 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404266.8944793, d19e7e7b-5493-4a97-a39b-ebaa9f201a68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.895 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.916 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.920 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404266.895869, d19e7e7b-5493-4a97-a39b-ebaa9f201a68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.920 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.939 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.942 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:46 np0005539563 nova_compute[252253]: 2025-11-29 08:17:46.963 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:46 np0005539563 podman[330927]: 2025-11-29 08:17:46.890848098 +0000 UTC m=+0.027409674 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:17:47 np0005539563 podman[330927]: 2025-11-29 08:17:47.053473024 +0000 UTC m=+0.190034580 container create ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.073 252257 DEBUG nova.compute.manager [req-4232b2b0-8a67-4602-8256-6fed5e735bca req-32b8ca2c-b77a-4896-b396-5075511f75b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.074 252257 DEBUG oslo_concurrency.lockutils [req-4232b2b0-8a67-4602-8256-6fed5e735bca req-32b8ca2c-b77a-4896-b396-5075511f75b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.074 252257 DEBUG oslo_concurrency.lockutils [req-4232b2b0-8a67-4602-8256-6fed5e735bca req-32b8ca2c-b77a-4896-b396-5075511f75b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.075 252257 DEBUG oslo_concurrency.lockutils [req-4232b2b0-8a67-4602-8256-6fed5e735bca req-32b8ca2c-b77a-4896-b396-5075511f75b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.075 252257 DEBUG nova.compute.manager [req-4232b2b0-8a67-4602-8256-6fed5e735bca req-32b8ca2c-b77a-4896-b396-5075511f75b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Processing event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.075 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.079 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.080 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404267.0792396, d19e7e7b-5493-4a97-a39b-ebaa9f201a68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.081 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.086 252257 INFO nova.virt.libvirt.driver [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Instance spawned successfully.#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.087 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:17:47 np0005539563 systemd[1]: Started libpod-conmon-ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0.scope.
Nov 29 03:17:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:47.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.124 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.130 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.133 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.134 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.134 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.135 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb740ebb034f5c8eb764de7ebf20b9af2a45b6aa7c2b75afeb8fafbec0c9635d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.135 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.136 252257 DEBUG nova.virt.libvirt.driver [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.171 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.224 252257 INFO nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Took 7.73 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.226 252257 DEBUG nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:47 np0005539563 podman[330927]: 2025-11-29 08:17:47.251418266 +0000 UTC m=+0.387979842 container init ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:17:47 np0005539563 podman[330927]: 2025-11-29 08:17:47.256430612 +0000 UTC m=+0.392992168 container start ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:17:47 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [NOTICE]   (330947) : New worker (330949) forked
Nov 29 03:17:47 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [NOTICE]   (330947) : Loading success.
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.306 252257 INFO nova.compute.manager [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Took 8.70 seconds to build instance.#033[00m
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.327 252257 DEBUG oslo_concurrency.lockutils [None req-f236f331-9e15-4835-998b-55e2a9927d3f 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:47.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:47 np0005539563 nova_compute[252253]: 2025-11-29 08:17:47.395 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 180 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 136 op/s
Nov 29 03:17:48 np0005539563 nova_compute[252253]: 2025-11-29 08:17:48.640 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:49.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:49 np0005539563 nova_compute[252253]: 2025-11-29 08:17:49.232 252257 DEBUG nova.compute.manager [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:49 np0005539563 nova_compute[252253]: 2025-11-29 08:17:49.233 252257 DEBUG oslo_concurrency.lockutils [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:49 np0005539563 nova_compute[252253]: 2025-11-29 08:17:49.233 252257 DEBUG oslo_concurrency.lockutils [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:49 np0005539563 nova_compute[252253]: 2025-11-29 08:17:49.233 252257 DEBUG oslo_concurrency.lockutils [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:49 np0005539563 nova_compute[252253]: 2025-11-29 08:17:49.234 252257 DEBUG nova.compute.manager [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] No waiting events found dispatching network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:49 np0005539563 nova_compute[252253]: 2025-11-29 08:17:49.234 252257 WARNING nova.compute.manager [req-9e0ca293-f8d5-4f39-8a87-f88e7d892cd0 req-b8c7393b-d956-4db0-afd7-5bf1fc53e204 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received unexpected event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:49.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 181 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 233 op/s
Nov 29 03:17:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:51.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 181 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Nov 29 03:17:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.547 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.567 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid d19e7e7b-5493-4a97-a39b-ebaa9f201a68 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.567 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.568 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.590 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:52 np0005539563 nova_compute[252253]: 2025-11-29 08:17:52.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.105 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.106 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.106 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.107 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.107 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.109 252257 INFO nova.compute.manager [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Terminating instance#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.110 252257 DEBUG nova.compute.manager [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:17:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:53.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:53 np0005539563 kernel: tapb49b18e2-08 (unregistering): left promiscuous mode
Nov 29 03:17:53 np0005539563 NetworkManager[48981]: <info>  [1764404273.1572] device (tapb49b18e2-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.163 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:53Z|00515|binding|INFO|Releasing lport b49b18e2-089e-4f82-96c6-39227d328cf6 from this chassis (sb_readonly=0)
Nov 29 03:17:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:53Z|00516|binding|INFO|Setting lport b49b18e2-089e-4f82-96c6-39227d328cf6 down in Southbound
Nov 29 03:17:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:17:53Z|00517|binding|INFO|Removing iface tapb49b18e2-08 ovn-installed in OVS
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.166 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.175 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:77:91 10.100.0.7'], port_security=['fa:16:3e:b0:77:91 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd19e7e7b-5493-4a97-a39b-ebaa9f201a68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97a517fe-79d8-476a-86af-94d084440c69', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '53808986ef2542a6b39d9d28957c85c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c87fb8e0-aca3-4e27-bb10-f86edfd4b40f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9976bd9-eee5-4043-9726-7dbf010a8174, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=b49b18e2-089e-4f82-96c6-39227d328cf6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.178 158990 INFO neutron.agent.ovn.metadata.agent [-] Port b49b18e2-089e-4f82-96c6-39227d328cf6 in datapath 97a517fe-79d8-476a-86af-94d084440c69 unbound from our chassis#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.180 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97a517fe-79d8-476a-86af-94d084440c69, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.182 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[981b078e-1ef8-4044-a242-871f7f3861b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.183 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97a517fe-79d8-476a-86af-94d084440c69 namespace which is not needed anymore#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.224 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 29 03:17:53 np0005539563 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007e.scope: Consumed 6.849s CPU time.
Nov 29 03:17:53 np0005539563 systemd-machined[213024]: Machine qemu-58-instance-0000007e terminated.
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.333 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.338 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [NOTICE]   (330947) : haproxy version is 2.8.14-c23fe91
Nov 29 03:17:53 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [NOTICE]   (330947) : path to executable is /usr/sbin/haproxy
Nov 29 03:17:53 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [WARNING]  (330947) : Exiting Master process...
Nov 29 03:17:53 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [WARNING]  (330947) : Exiting Master process...
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.348 252257 INFO nova.virt.libvirt.driver [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Instance destroyed successfully.#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.349 252257 DEBUG nova.objects.instance [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lazy-loading 'resources' on Instance uuid d19e7e7b-5493-4a97-a39b-ebaa9f201a68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:53 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [ALERT]    (330947) : Current worker (330949) exited with code 143 (Terminated)
Nov 29 03:17:53 np0005539563 neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69[330943]: [WARNING]  (330947) : All workers exited. Exiting... (0)
Nov 29 03:17:53 np0005539563 systemd[1]: libpod-ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0.scope: Deactivated successfully.
Nov 29 03:17:53 np0005539563 podman[330986]: 2025-11-29 08:17:53.36001225 +0000 UTC m=+0.045774981 container died ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.373 252257 DEBUG nova.virt.libvirt.vif [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-970709490',display_name='tempest-ServerMetadataTestJSON-server-970709490',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-970709490',id=126,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='53808986ef2542a6b39d9d28957c85c7',ramdisk_id='',reservation_id='r-9u1a4qf6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1347032547',owner_user_name='tempest-ServerMetadataTestJSON-1347032547-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:52Z,user_data=None,user_id='9d3b2212bcd144cbb0c1abdeb25b9998',uuid=d19e7e7b-5493-4a97-a39b-ebaa9f201a68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.374 252257 DEBUG nova.network.os_vif_util [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Converting VIF {"id": "b49b18e2-089e-4f82-96c6-39227d328cf6", "address": "fa:16:3e:b0:77:91", "network": {"id": "97a517fe-79d8-476a-86af-94d084440c69", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-2029188772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "53808986ef2542a6b39d9d28957c85c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb49b18e2-08", "ovs_interfaceid": "b49b18e2-089e-4f82-96c6-39227d328cf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.375 252257 DEBUG nova.network.os_vif_util [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.375 252257 DEBUG os_vif [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.377 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.377 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb49b18e2-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.378 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.380 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:53.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.382 252257 INFO os_vif [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:77:91,bridge_name='br-int',has_traffic_filtering=True,id=b49b18e2-089e-4f82-96c6-39227d328cf6,network=Network(97a517fe-79d8-476a-86af-94d084440c69),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb49b18e2-08')#033[00m
Nov 29 03:17:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0-userdata-shm.mount: Deactivated successfully.
Nov 29 03:17:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bb740ebb034f5c8eb764de7ebf20b9af2a45b6aa7c2b75afeb8fafbec0c9635d-merged.mount: Deactivated successfully.
Nov 29 03:17:53 np0005539563 podman[330986]: 2025-11-29 08:17:53.405856942 +0000 UTC m=+0.091619673 container cleanup ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:17:53 np0005539563 systemd[1]: libpod-conmon-ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0.scope: Deactivated successfully.
Nov 29 03:17:53 np0005539563 podman[331038]: 2025-11-29 08:17:53.498530482 +0000 UTC m=+0.064325293 container remove ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.505 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8f4fa4-ed74-4ccc-af99-2563aa29d1ab]: (4, ('Sat Nov 29 08:17:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69 (ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0)\nffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0\nSat Nov 29 08:17:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97a517fe-79d8-476a-86af-94d084440c69 (ffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0)\nffd640ac72b711e543a044f372a0c8f7c041a1aae72fe974bf65ee4b12ebfcd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.508 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6751a29-9d48-475a-b03b-8900df36d0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.509 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97a517fe-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:53 np0005539563 kernel: tap97a517fe-70: left promiscuous mode
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.512 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.591 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7cde281e-de59-46ef-bf89-8bd299d8ec68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.606 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a098d07e-7be8-43dd-89e4-cb60043eb5a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.607 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f20f38a5-05bf-4208-89c1-1240e4da4b9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.624 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[57dda276-2c27-4893-b1fd-9d09c14eff96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723409, 'reachable_time': 17880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331090, 'error': None, 'target': 'ovnmeta-97a517fe-79d8-476a-86af-94d084440c69', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.627 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97a517fe-79d8-476a-86af-94d084440c69 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:17:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:17:53.627 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[05075bdb-2b49-41b5-bd72-6be25fe369b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539563 systemd[1]: run-netns-ovnmeta\x2d97a517fe\x2d79d8\x2d476a\x2d86af\x2d94d084440c69.mount: Deactivated successfully.
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.918 252257 INFO nova.virt.libvirt.driver [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Deleting instance files /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68_del#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.919 252257 INFO nova.virt.libvirt.driver [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Deletion of /var/lib/nova/instances/d19e7e7b-5493-4a97-a39b-ebaa9f201a68_del complete#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.992 252257 INFO nova.compute.manager [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.993 252257 DEBUG oslo.service.loopingcall [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.993 252257 DEBUG nova.compute.manager [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:17:53 np0005539563 nova_compute[252253]: 2025-11-29 08:17:53.993 252257 DEBUG nova.network.neutron [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:17:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 181 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.4 MiB/s wr, 273 op/s
Nov 29 03:17:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 29 03:17:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 03:17:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.697 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.720 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.721 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.721 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.722 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:17:54 np0005539563 nova_compute[252253]: 2025-11-29 08:17:54.722 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:55.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/805839888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.205 252257 DEBUG nova.compute.manager [req-c9718e26-0c53-4ecd-8075-920de0106d67 req-1770862d-c45f-4477-bbd4-561202756689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-vif-unplugged-b49b18e2-089e-4f82-96c6-39227d328cf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.206 252257 DEBUG oslo_concurrency.lockutils [req-c9718e26-0c53-4ecd-8075-920de0106d67 req-1770862d-c45f-4477-bbd4-561202756689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.206 252257 DEBUG oslo_concurrency.lockutils [req-c9718e26-0c53-4ecd-8075-920de0106d67 req-1770862d-c45f-4477-bbd4-561202756689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.206 252257 DEBUG oslo_concurrency.lockutils [req-c9718e26-0c53-4ecd-8075-920de0106d67 req-1770862d-c45f-4477-bbd4-561202756689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.206 252257 DEBUG nova.compute.manager [req-c9718e26-0c53-4ecd-8075-920de0106d67 req-1770862d-c45f-4477-bbd4-561202756689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] No waiting events found dispatching network-vif-unplugged-b49b18e2-089e-4f82-96c6-39227d328cf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.207 252257 DEBUG nova.compute.manager [req-c9718e26-0c53-4ecd-8075-920de0106d67 req-1770862d-c45f-4477-bbd4-561202756689 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-vif-unplugged-b49b18e2-089e-4f82-96c6-39227d328cf6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.207 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.373 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.374 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4389MB free_disk=20.925586700439453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.374 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.375 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.507 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance d19e7e7b-5493-4a97-a39b-ebaa9f201a68 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.507 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.507 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.520 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:17:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 29 03:17:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 03:17:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.817 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.817 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.832 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.866 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:17:55 np0005539563 nova_compute[252253]: 2025-11-29 08:17:55.897 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.107 252257 DEBUG nova.network.neutron [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.133 252257 INFO nova.compute.manager [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Took 2.14 seconds to deallocate network for instance.#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.195 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066631849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.321 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.329 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 152 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 924 KiB/s wr, 298 op/s
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.352 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.380 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.380 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.381 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.478 252257 DEBUG oslo_concurrency.processutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832711309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.941 252257 DEBUG oslo_concurrency.processutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.945 252257 DEBUG nova.compute.provider_tree [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.964 252257 DEBUG nova.scheduler.client.report [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:56 np0005539563 nova_compute[252253]: 2025-11-29 08:17:56.996 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.020 252257 INFO nova.scheduler.client.report [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Deleted allocations for instance d19e7e7b-5493-4a97-a39b-ebaa9f201a68#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.081 252257 DEBUG oslo_concurrency.lockutils [None req-834de00d-38e5-47a7-8ac9-0cb39742e960 9d3b2212bcd144cbb0c1abdeb25b9998 53808986ef2542a6b39d9d28957c85c7 - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:17:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:57.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.298 252257 DEBUG nova.compute.manager [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.299 252257 DEBUG oslo_concurrency.lockutils [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.299 252257 DEBUG oslo_concurrency.lockutils [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.300 252257 DEBUG oslo_concurrency.lockutils [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d19e7e7b-5493-4a97-a39b-ebaa9f201a68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.300 252257 DEBUG nova.compute.manager [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] No waiting events found dispatching network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.300 252257 WARNING nova.compute.manager [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received unexpected event network-vif-plugged-b49b18e2-089e-4f82-96c6-39227d328cf6 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.301 252257 DEBUG nova.compute.manager [req-e0887ab2-f635-42c8-96c6-dc8619cca8ef req-c403e741-b197-48d7-b962-2986cb860cf9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Received event network-vif-deleted-b49b18e2-089e-4f82-96c6-39227d328cf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.361 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:57.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:57 np0005539563 nova_compute[252253]: 2025-11-29 08:17:57.402 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 150 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.7 MiB/s wr, 316 op/s
Nov 29 03:17:58 np0005539563 nova_compute[252253]: 2025-11-29 08:17:58.380 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:17:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:59.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 206 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 7.7 MiB/s wr, 333 op/s
Nov 29 03:18:00 np0005539563 podman[331181]: 2025-11-29 08:18:00.536111697 +0000 UTC m=+0.073219465 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:00 np0005539563 podman[331180]: 2025-11-29 08:18:00.544231566 +0000 UTC m=+0.078702343 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:18:00 np0005539563 podman[331182]: 2025-11-29 08:18:00.610543633 +0000 UTC m=+0.135853081 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:18:00 np0005539563 nova_compute[252253]: 2025-11-29 08:18:00.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:01.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 206 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.9 MiB/s wr, 223 op/s
Nov 29 03:18:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 29 03:18:02 np0005539563 nova_compute[252253]: 2025-11-29 08:18:02.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:02 np0005539563 nova_compute[252253]: 2025-11-29 08:18:02.667 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:02 np0005539563 nova_compute[252253]: 2025-11-29 08:18:02.668 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:02 np0005539563 nova_compute[252253]: 2025-11-29 08:18:02.799 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:18:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 03:18:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 29 03:18:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:03 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 29 03:18:03 np0005539563 nova_compute[252253]: 2025-11-29 08:18:03.328 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:03 np0005539563 nova_compute[252253]: 2025-11-29 08:18:03.329 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:03 np0005539563 nova_compute[252253]: 2025-11-29 08:18:03.335 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:18:03 np0005539563 nova_compute[252253]: 2025-11-29 08:18:03.335 252257 INFO nova.compute.claims [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:18:03 np0005539563 nova_compute[252253]: 2025-11-29 08:18:03.383 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:03.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:03 np0005539563 nova_compute[252253]: 2025-11-29 08:18:03.584 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.003 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427365914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.047 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.057 252257 DEBUG nova.compute.provider_tree [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.080 252257 DEBUG nova.scheduler.client.report [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.104 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.105 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.167 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.168 252257 DEBUG nova.network.neutron [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.188 252257 INFO nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.205 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.311 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.313 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.314 252257 INFO nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Creating image(s)#033[00m
Nov 29 03:18:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 206 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 167 op/s
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.350 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:04.385 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:04.386 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.448 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.501 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.506 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.549 252257 DEBUG nova.policy [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64c0341c4b844cbc9fd50d485aa954a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4fb4eb1c2db40a5b320f63884fe8888', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.579 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.580 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.580 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:04 np0005539563 nova_compute[252253]: 2025-11-29 08:18:04.581 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:04.923 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:04.924 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:04.925 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:05 np0005539563 nova_compute[252253]: 2025-11-29 08:18:05.051 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:05 np0005539563 nova_compute[252253]: 2025-11-29 08:18:05.056 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:05.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:05.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:05 np0005539563 nova_compute[252253]: 2025-11-29 08:18:05.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 233 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 MiB/s wr, 199 op/s
Nov 29 03:18:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:06.388 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:06 np0005539563 nova_compute[252253]: 2025-11-29 08:18:06.916 252257 DEBUG nova.network.neutron [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Successfully created port: 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:18:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:07.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:07 np0005539563 nova_compute[252253]: 2025-11-29 08:18:07.406 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.241 252257 DEBUG nova.network.neutron [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Successfully updated port: 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.259 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "refresh_cache-d5c8d04c-2432-48d5-b0c7-f39c9d462798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.260 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquired lock "refresh_cache-d5c8d04c-2432-48d5-b0c7-f39c9d462798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.260 252257 DEBUG nova.network.neutron [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.7 MiB/s wr, 141 op/s
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.347 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404273.3459358, d19e7e7b-5493-4a97-a39b-ebaa9f201a68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.347 252257 INFO nova.compute.manager [-] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.367 252257 DEBUG nova.compute.manager [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-changed-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.368 252257 DEBUG nova.compute.manager [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Refreshing instance network info cache due to event network-changed-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.368 252257 DEBUG oslo_concurrency.lockutils [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-d5c8d04c-2432-48d5-b0c7-f39c9d462798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.372 252257 DEBUG nova.compute.manager [None req-59232519-057e-4259-b47e-8fd2818b5186 - - - - - -] [instance: d19e7e7b-5493-4a97-a39b-ebaa9f201a68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:08 np0005539563 nova_compute[252253]: 2025-11-29 08:18:08.498 252257 DEBUG nova.network.neutron [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:18:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.091 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:09.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.180 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] resizing rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.299 252257 DEBUG nova.objects.instance [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lazy-loading 'migration_context' on Instance uuid d5c8d04c-2432-48d5-b0c7-f39c9d462798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.310 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.311 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Ensure instance console log exists: /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.311 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.311 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.312 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:09.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.552 252257 DEBUG nova.network.neutron [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Updating instance_info_cache with network_info: [{"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.582 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Releasing lock "refresh_cache-d5c8d04c-2432-48d5-b0c7-f39c9d462798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.583 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Instance network_info: |[{"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.584 252257 DEBUG oslo_concurrency.lockutils [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-d5c8d04c-2432-48d5-b0c7-f39c9d462798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.584 252257 DEBUG nova.network.neutron [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Refreshing network info cache for port 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.589 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Start _get_guest_xml network_info=[{"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.595 252257 WARNING nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.602 252257 DEBUG nova.virt.libvirt.host [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.603 252257 DEBUG nova.virt.libvirt.host [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.613 252257 DEBUG nova.virt.libvirt.host [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.614 252257 DEBUG nova.virt.libvirt.host [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.617 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.617 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.618 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.619 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.619 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.620 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.620 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.620 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.621 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.622 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.622 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.622 252257 DEBUG nova.virt.hardware [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:09 np0005539563 nova_compute[252253]: 2025-11-29 08:18:09.628 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478945521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.098 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.144 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.148 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 228 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 474 KiB/s rd, 4.9 MiB/s wr, 149 op/s
Nov 29 03:18:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604072690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.598 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.600 252257 DEBUG nova.virt.libvirt.vif [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-662063696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-662063696',id=129,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4fb4eb1c2db40a5b320f63884fe8888',ramdisk_id='',reservation_id='r-1e8umi2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-201520601',owner_user_name='tempest-ServerTagsTestJSON-201520601-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:04Z,user_data=None,user_id='64c0341c4b844cbc9fd50d485aa954a0',uuid=d5c8d04c-2432-48d5-b0c7-f39c9d462798,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.600 252257 DEBUG nova.network.os_vif_util [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Converting VIF {"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.601 252257 DEBUG nova.network.os_vif_util [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.602 252257 DEBUG nova.objects.instance [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5c8d04c-2432-48d5-b0c7-f39c9d462798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.618 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <uuid>d5c8d04c-2432-48d5-b0c7-f39c9d462798</uuid>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <name>instance-00000081</name>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerTagsTestJSON-server-662063696</nova:name>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:18:09</nova:creationTime>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:user uuid="64c0341c4b844cbc9fd50d485aa954a0">tempest-ServerTagsTestJSON-201520601-project-member</nova:user>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:project uuid="b4fb4eb1c2db40a5b320f63884fe8888">tempest-ServerTagsTestJSON-201520601</nova:project>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <nova:port uuid="255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <entry name="serial">d5c8d04c-2432-48d5-b0c7-f39c9d462798</entry>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <entry name="uuid">d5c8d04c-2432-48d5-b0c7-f39c9d462798</entry>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk.config">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:70:39:5f"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <target dev="tap255c51bf-e9"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/console.log" append="off"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:18:10 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:18:10 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:18:10 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:18:10 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.619 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Preparing to wait for external event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.619 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.619 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.619 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.620 252257 DEBUG nova.virt.libvirt.vif [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-662063696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-662063696',id=129,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4fb4eb1c2db40a5b320f63884fe8888',ramdisk_id='',reservation_id='r-1e8umi2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-201520601',owner_user_name='tempest-ServerTagsTestJSON-201520601-project-member'},tags=TagList,task_s
tate='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:04Z,user_data=None,user_id='64c0341c4b844cbc9fd50d485aa954a0',uuid=d5c8d04c-2432-48d5-b0c7-f39c9d462798,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.620 252257 DEBUG nova.network.os_vif_util [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Converting VIF {"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.621 252257 DEBUG nova.network.os_vif_util [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.621 252257 DEBUG os_vif [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.622 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.622 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.622 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.626 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap255c51bf-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.627 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap255c51bf-e9, col_values=(('external_ids', {'iface-id': '255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:39:5f', 'vm-uuid': 'd5c8d04c-2432-48d5-b0c7-f39c9d462798'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.628 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539563 NetworkManager[48981]: <info>  [1764404290.6290] manager: (tap255c51bf-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.630 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.634 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.634 252257 INFO os_vif [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9')#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.688 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.688 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.688 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] No VIF found with MAC fa:16:3e:70:39:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.688 252257 INFO nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Using config drive#033[00m
Nov 29 03:18:10 np0005539563 nova_compute[252253]: 2025-11-29 08:18:10.713 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.137 252257 INFO nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Creating config drive at /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/disk.config#033[00m
Nov 29 03:18:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:11.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.148 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp708z99x4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.306 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp708z99x4" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.342 252257 DEBUG nova.storage.rbd_utils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] rbd image d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.345 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/disk.config d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:11.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.557 252257 DEBUG oslo_concurrency.processutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/disk.config d5c8d04c-2432-48d5-b0c7-f39c9d462798_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.558 252257 INFO nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Deleting local config drive /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798/disk.config because it was imported into RBD.#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.616 252257 DEBUG nova.network.neutron [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Updated VIF entry in instance network info cache for port 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.616 252257 DEBUG nova.network.neutron [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Updating instance_info_cache with network_info: [{"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:11 np0005539563 kernel: tap255c51bf-e9: entered promiscuous mode
Nov 29 03:18:11 np0005539563 NetworkManager[48981]: <info>  [1764404291.6326] manager: (tap255c51bf-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Nov 29 03:18:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:11Z|00518|binding|INFO|Claiming lport 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b for this chassis.
Nov 29 03:18:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:11Z|00519|binding|INFO|255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b: Claiming fa:16:3e:70:39:5f 10.100.0.12
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.633 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.638 252257 DEBUG oslo_concurrency.lockutils [req-595c68e3-3855-4c9e-af6d-23dfbec429c2 req-5c8e1abb-ecac-44cb-861b-c9f56c0f998e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-d5c8d04c-2432-48d5-b0c7-f39c9d462798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.639 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.653 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:39:5f 10.100.0.12'], port_security=['fa:16:3e:70:39:5f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd5c8d04c-2432-48d5-b0c7-f39c9d462798', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4fb4eb1c2db40a5b320f63884fe8888', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85cacc75-6995-43c1-9bbb-4a645e28b940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8000cbf-f4a4-48b5-a551-0a7108781ed2, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.654 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b in datapath fe001af2-79a4-4d02-a5b3-cecabe1e65de bound to our chassis#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.656 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe001af2-79a4-4d02-a5b3-cecabe1e65de#033[00m
Nov 29 03:18:11 np0005539563 systemd-machined[213024]: New machine qemu-59-instance-00000081.
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.667 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2e400d1d-5902-480c-aa0d-ff559170d538]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.668 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe001af2-71 in ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.670 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe001af2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.670 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfae652-5673-408b-9363-51440f16c7df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.671 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[99fb1a36-0b8d-404c-ab25-9dc4d213ad5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.681 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0e38b434-c08a-434d-b05a-c938f3759eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 systemd[1]: Started Virtual Machine qemu-59-instance-00000081.
Nov 29 03:18:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3901207140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:11 np0005539563 systemd-udevd[331576]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.704 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[03ac15ab-39b7-42a3-af4f-3330d5d9d311]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.706 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:11Z|00520|binding|INFO|Setting lport 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b ovn-installed in OVS
Nov 29 03:18:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:11Z|00521|binding|INFO|Setting lport 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b up in Southbound
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.711 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 NetworkManager[48981]: <info>  [1764404291.7164] device (tap255c51bf-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:11 np0005539563 NetworkManager[48981]: <info>  [1764404291.7174] device (tap255c51bf-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.734 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[83870047-dea4-447a-89a9-c7a146726ce7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.738 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a523fe1f-71ce-4278-9310-608a34cf3e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 NetworkManager[48981]: <info>  [1764404291.7397] manager: (tapfe001af2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.767 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e328710d-9cac-467b-80fd-ac5bcf9f4ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.770 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4d75a9cb-33c7-4ac7-b45b-a3bbbad73983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 NetworkManager[48981]: <info>  [1764404291.7906] device (tapfe001af2-70): carrier: link connected
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.795 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b61a0554-5df9-497e-a869-273484825f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.811 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[348f62ea-042f-45de-8feb-da8ffa78216e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe001af2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:f7:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725956, 'reachable_time': 24838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331606, 'error': None, 'target': 'ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.828 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[00857a0d-1f32-47ca-be9b-51e314d6af20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:f7c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725956, 'tstamp': 725956}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331607, 'error': None, 'target': 'ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.846 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[974c5787-c2dc-4eb5-a7b7-96730002f4c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe001af2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:f7:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725956, 'reachable_time': 24838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331608, 'error': None, 'target': 'ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.883 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[28d391e4-06d9-4d78-83fc-b84ae1bbb89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.949 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad7def1-fe4d-4b49-8063-2b757cc2737c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.950 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe001af2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.950 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.950 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe001af2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.952 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 NetworkManager[48981]: <info>  [1764404291.9535] manager: (tapfe001af2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 29 03:18:11 np0005539563 kernel: tapfe001af2-70: entered promiscuous mode
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.955 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.959 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe001af2-70, col_values=(('external_ids', {'iface-id': '26544536-0cc5-4de0-a584-882f10440439'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.961 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:11Z|00522|binding|INFO|Releasing lport 26544536-0cc5-4de0-a584-882f10440439 from this chassis (sb_readonly=0)
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.962 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.976 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe001af2-79a4-4d02-a5b3-cecabe1e65de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe001af2-79a4-4d02-a5b3-cecabe1e65de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:11 np0005539563 nova_compute[252253]: 2025-11-29 08:18:11.977 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.979 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcd1de3-e15a-4075-a937-f3a7a2a806f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.979 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-fe001af2-79a4-4d02-a5b3-cecabe1e65de
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/fe001af2-79a4-4d02-a5b3-cecabe1e65de.pid.haproxy
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID fe001af2-79a4-4d02-a5b3-cecabe1e65de
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:11.980 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'env', 'PROCESS_TAG=haproxy-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe001af2-79a4-4d02-a5b3-cecabe1e65de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.167 252257 DEBUG nova.compute.manager [req-1baf3b33-ddce-433c-875c-7706dc22c42b req-b69b79a2-4c39-477b-bf8e-3149d3d49243 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.167 252257 DEBUG oslo_concurrency.lockutils [req-1baf3b33-ddce-433c-875c-7706dc22c42b req-b69b79a2-4c39-477b-bf8e-3149d3d49243 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.168 252257 DEBUG oslo_concurrency.lockutils [req-1baf3b33-ddce-433c-875c-7706dc22c42b req-b69b79a2-4c39-477b-bf8e-3149d3d49243 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.168 252257 DEBUG oslo_concurrency.lockutils [req-1baf3b33-ddce-433c-875c-7706dc22c42b req-b69b79a2-4c39-477b-bf8e-3149d3d49243 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.168 252257 DEBUG nova.compute.manager [req-1baf3b33-ddce-433c-875c-7706dc22c42b req-b69b79a2-4c39-477b-bf8e-3149d3d49243 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Processing event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.336 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404292.3360925, d5c8d04c-2432-48d5-b0c7-f39c9d462798 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.337 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.339 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.342 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:18:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 228 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 474 KiB/s rd, 4.9 MiB/s wr, 149 op/s
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.346 252257 INFO nova.virt.libvirt.driver [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Instance spawned successfully.#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.347 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.364 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.369 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.371 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.372 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.372 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.372 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.373 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.373 252257 DEBUG nova.virt.libvirt.driver [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:12 np0005539563 podman[331681]: 2025-11-29 08:18:12.382606607 +0000 UTC m=+0.070701846 container create 88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:18:12 np0005539563 podman[331681]: 2025-11-29 08:18:12.343135728 +0000 UTC m=+0.031231077 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.455 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.459 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.460 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404292.3363166, d5c8d04c-2432-48d5-b0c7-f39c9d462798 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.460 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.477 252257 INFO nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Took 8.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.478 252257 DEBUG nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:12 np0005539563 systemd[1]: Started libpod-conmon-88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d.scope.
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.486 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.489 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404292.341724, d5c8d04c-2432-48d5-b0c7-f39c9d462798 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.489 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.510 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d949bfe2f8470ea80df764dbe042d0417970b813c0c760672de59cdd8bac7666/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.512 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:12 np0005539563 podman[331681]: 2025-11-29 08:18:12.525686314 +0000 UTC m=+0.213781563 container init 88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:12 np0005539563 podman[331681]: 2025-11-29 08:18:12.530867744 +0000 UTC m=+0.218962993 container start 88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.541 252257 INFO nova.compute.manager [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Took 9.24 seconds to build instance.#033[00m
Nov 29 03:18:12 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [NOTICE]   (331699) : New worker (331701) forked
Nov 29 03:18:12 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [NOTICE]   (331699) : Loading success.
Nov 29 03:18:12 np0005539563 nova_compute[252253]: 2025-11-29 08:18:12.557 252257 DEBUG oslo_concurrency.lockutils [None req-3624e0aa-a011-4efe-be67-8c973f79ea3b 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:18:12
Nov 29 03:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes']
Nov 29 03:18:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:18:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:13.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:13.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.750266) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293750399, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1665, "num_deletes": 257, "total_data_size": 2634063, "memory_usage": 2678064, "flush_reason": "Manual Compaction"}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293761409, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 1642320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44903, "largest_seqno": 46567, "table_properties": {"data_size": 1636270, "index_size": 3060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16587, "raw_average_key_size": 21, "raw_value_size": 1622724, "raw_average_value_size": 2112, "num_data_blocks": 134, "num_entries": 768, "num_filter_entries": 768, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404157, "oldest_key_time": 1764404157, "file_creation_time": 1764404293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 11157 microseconds, and 5110 cpu microseconds.
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.761477) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 1642320 bytes OK
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.761504) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.762909) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.762932) EVENT_LOG_v1 {"time_micros": 1764404293762926, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.762950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2626867, prev total WAL file size 2626867, number of live WAL files 2.
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.764237) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353131' seq:72057594037927935, type:22 .. '6D6772737461740031373635' seq:0, type:0; will stop at (end)
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(1603KB)], [95(11MB)]
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293764373, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 13755605, "oldest_snapshot_seqno": -1}
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 8107 keys, 10792061 bytes, temperature: kUnknown
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293888254, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10792061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10739844, "index_size": 30857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208750, "raw_average_key_size": 25, "raw_value_size": 10597357, "raw_average_value_size": 1307, "num_data_blocks": 1213, "num_entries": 8107, "num_filter_entries": 8107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.888512) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10792061 bytes
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.891262) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.0 rd, 87.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(14.9) write-amplify(6.6) OK, records in: 8576, records dropped: 469 output_compression: NoCompression
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.891287) EVENT_LOG_v1 {"time_micros": 1764404293891277, "job": 56, "event": "compaction_finished", "compaction_time_micros": 123949, "compaction_time_cpu_micros": 41527, "output_level": 6, "num_output_files": 1, "total_output_size": 10792061, "num_input_records": 8576, "num_output_records": 8107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293891896, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404293894551, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.763724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.894749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.894753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.894755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.894757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:13.894758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:18:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:18:14 np0005539563 nova_compute[252253]: 2025-11-29 08:18:14.345 252257 DEBUG nova.compute.manager [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:14 np0005539563 nova_compute[252253]: 2025-11-29 08:18:14.346 252257 DEBUG oslo_concurrency.lockutils [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:14 np0005539563 nova_compute[252253]: 2025-11-29 08:18:14.347 252257 DEBUG oslo_concurrency.lockutils [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:14 np0005539563 nova_compute[252253]: 2025-11-29 08:18:14.347 252257 DEBUG oslo_concurrency.lockutils [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:14 np0005539563 nova_compute[252253]: 2025-11-29 08:18:14.347 252257 DEBUG nova.compute.manager [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] No waiting events found dispatching network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:14 np0005539563 nova_compute[252253]: 2025-11-29 08:18:14.348 252257 WARNING nova.compute.manager [req-fa348c62-10f4-4875-babf-d4da5726b25f req-0753dee6-9856-4e81-b0eb-9a0471a86a88 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received unexpected event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 434 KiB/s rd, 4.6 MiB/s wr, 153 op/s
Nov 29 03:18:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:15.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a275b5cc-4bff-4e53-8af8-4785b6ef4933 does not exist
Nov 29 03:18:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4c5b6b29-8a7b-4bdf-b75d-800299236b8b does not exist
Nov 29 03:18:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a3a3c9f2-8870-4180-b462-1c142791dd61 does not exist
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:18:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:18:15 np0005539563 nova_compute[252253]: 2025-11-29 08:18:15.629 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.089203023 +0000 UTC m=+0.021299807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 271 op/s
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.350588836 +0000 UTC m=+0.282685680 container create 0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:18:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:18:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:16 np0005539563 systemd[1]: Started libpod-conmon-0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e.scope.
Nov 29 03:18:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.48032533 +0000 UTC m=+0.412422154 container init 0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.490597679 +0000 UTC m=+0.422694443 container start 0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.493817246 +0000 UTC m=+0.425914070 container attach 0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:18:16 np0005539563 nervous_cray[332048]: 167 167
Nov 29 03:18:16 np0005539563 systemd[1]: libpod-0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e.scope: Deactivated successfully.
Nov 29 03:18:16 np0005539563 conmon[332048]: conmon 0cde869501453a955fa3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e.scope/container/memory.events
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.501609348 +0000 UTC m=+0.433706142 container died 0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:18:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c3786fcb5e8470f1b15bd6ca5391b1944a9ae0af3f8948ff56833e71841d950e-merged.mount: Deactivated successfully.
Nov 29 03:18:16 np0005539563 podman[332032]: 2025-11-29 08:18:16.546658268 +0000 UTC m=+0.478755042 container remove 0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cray, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:16 np0005539563 systemd[1]: libpod-conmon-0cde869501453a955fa3781f81df3a622f79742234ad26d02401365c5b6ff54e.scope: Deactivated successfully.
Nov 29 03:18:16 np0005539563 podman[332072]: 2025-11-29 08:18:16.763357309 +0000 UTC m=+0.054504788 container create 8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_payne, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:16 np0005539563 systemd[1]: Started libpod-conmon-8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd.scope.
Nov 29 03:18:16 np0005539563 podman[332072]: 2025-11-29 08:18:16.736618365 +0000 UTC m=+0.027765854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4b05d27fb8730553565e3e9519b7ce4d500ccfab6b3fd67addbd502eb5e67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4b05d27fb8730553565e3e9519b7ce4d500ccfab6b3fd67addbd502eb5e67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4b05d27fb8730553565e3e9519b7ce4d500ccfab6b3fd67addbd502eb5e67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4b05d27fb8730553565e3e9519b7ce4d500ccfab6b3fd67addbd502eb5e67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce4b05d27fb8730553565e3e9519b7ce4d500ccfab6b3fd67addbd502eb5e67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:16 np0005539563 podman[332072]: 2025-11-29 08:18:16.862104044 +0000 UTC m=+0.153251543 container init 8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_payne, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:18:16 np0005539563 podman[332072]: 2025-11-29 08:18:16.872346012 +0000 UTC m=+0.163493481 container start 8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_payne, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:16 np0005539563 podman[332072]: 2025-11-29 08:18:16.876107413 +0000 UTC m=+0.167254912 container attach 8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:18:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:18:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 46K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1788 writes, 7901 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 11.36 MB, 0.02 MB/s#012Interval WAL: 1788 writes, 1788 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     11.1      5.46              0.22        28    0.195       0      0       0.0       0.0#012  L6      1/0   10.29 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.2     23.6     19.7     12.85              0.85        27    0.476    168K    15K       0.0       0.0#012 Sum      1/0   10.29 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.2     16.6     17.1     18.31              1.07        55    0.333    168K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8     62.9     62.8      1.15              0.23        12    0.096     48K   2999       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     23.6     19.7     12.85              0.85        27    0.476    168K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     11.1      5.45              0.22        27    0.202       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.059, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.31 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 18.3 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 36.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000375 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2016,35.06 MB,11.532%) FilterBlock(56,482.61 KB,0.155032%) IndexBlock(56,801.72 KB,0.257542%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.088 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.093 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.094 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.094 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.095 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.096 252257 INFO nova.compute.manager [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Terminating instance#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.098 252257 DEBUG nova.compute.manager [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:18:17 np0005539563 kernel: tap255c51bf-e9 (unregistering): left promiscuous mode
Nov 29 03:18:17 np0005539563 NetworkManager[48981]: <info>  [1764404297.1569] device (tap255c51bf-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:17Z|00523|binding|INFO|Releasing lport 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b from this chassis (sb_readonly=0)
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.171 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:17Z|00524|binding|INFO|Setting lport 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b down in Southbound
Nov 29 03:18:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:17Z|00525|binding|INFO|Removing iface tap255c51bf-e9 ovn-installed in OVS
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.176 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.187 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:39:5f 10.100.0.12'], port_security=['fa:16:3e:70:39:5f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd5c8d04c-2432-48d5-b0c7-f39c9d462798', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4fb4eb1c2db40a5b320f63884fe8888', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85cacc75-6995-43c1-9bbb-4a645e28b940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8000cbf-f4a4-48b5-a551-0a7108781ed2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.189 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b in datapath fe001af2-79a4-4d02-a5b3-cecabe1e65de unbound from our chassis#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.190 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe001af2-79a4-4d02-a5b3-cecabe1e65de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.193 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[96e493e5-389b-4626-8cf4-f51c9101ba3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.194 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de namespace which is not needed anymore#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.209 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539563 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000081.scope: Consumed 5.450s CPU time.
Nov 29 03:18:17 np0005539563 systemd-machined[213024]: Machine qemu-59-instance-00000081 terminated.
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.332 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.350 252257 INFO nova.virt.libvirt.driver [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Instance destroyed successfully.#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.351 252257 DEBUG nova.objects.instance [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lazy-loading 'resources' on Instance uuid d5c8d04c-2432-48d5-b0c7-f39c9d462798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.372 252257 DEBUG nova.virt.libvirt.vif [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-662063696',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-662063696',id=129,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4fb4eb1c2db40a5b320f63884fe8888',ramdisk_id='',reservation_id='r-1e8umi2o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-201520601',owner_user_name='tempest-ServerTagsTestJSON-201520601-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:12Z,user_data=None,user_id='64c0341c4b844cbc9fd50d485aa954a0',uuid=d5c8d04c-2432-48d5-b0c7-f39c9d462798,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.373 252257 DEBUG nova.network.os_vif_util [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Converting VIF {"id": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "address": "fa:16:3e:70:39:5f", "network": {"id": "fe001af2-79a4-4d02-a5b3-cecabe1e65de", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-210114522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4fb4eb1c2db40a5b320f63884fe8888", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap255c51bf-e9", "ovs_interfaceid": "255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.374 252257 DEBUG nova.network.os_vif_util [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.374 252257 DEBUG os_vif [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.376 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.377 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap255c51bf-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.424 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.427 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.430 252257 INFO os_vif [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:39:5f,bridge_name='br-int',has_traffic_filtering=True,id=255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b,network=Network(fe001af2-79a4-4d02-a5b3-cecabe1e65de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap255c51bf-e9')#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.456 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.693 252257 DEBUG nova.compute.manager [req-e47c0d62-1d42-40c7-817e-c076f011330e req-72a98816-ec98-4eba-a7e1-509857d6891b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-vif-unplugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.695 252257 DEBUG oslo_concurrency.lockutils [req-e47c0d62-1d42-40c7-817e-c076f011330e req-72a98816-ec98-4eba-a7e1-509857d6891b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.695 252257 DEBUG oslo_concurrency.lockutils [req-e47c0d62-1d42-40c7-817e-c076f011330e req-72a98816-ec98-4eba-a7e1-509857d6891b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.696 252257 DEBUG oslo_concurrency.lockutils [req-e47c0d62-1d42-40c7-817e-c076f011330e req-72a98816-ec98-4eba-a7e1-509857d6891b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.696 252257 DEBUG nova.compute.manager [req-e47c0d62-1d42-40c7-817e-c076f011330e req-72a98816-ec98-4eba-a7e1-509857d6891b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] No waiting events found dispatching network-vif-unplugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.696 252257 DEBUG nova.compute.manager [req-e47c0d62-1d42-40c7-817e-c076f011330e req-72a98816-ec98-4eba-a7e1-509857d6891b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-vif-unplugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:18:17 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [NOTICE]   (331699) : haproxy version is 2.8.14-c23fe91
Nov 29 03:18:17 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [NOTICE]   (331699) : path to executable is /usr/sbin/haproxy
Nov 29 03:18:17 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [WARNING]  (331699) : Exiting Master process...
Nov 29 03:18:17 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [WARNING]  (331699) : Exiting Master process...
Nov 29 03:18:17 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [ALERT]    (331699) : Current worker (331701) exited with code 143 (Terminated)
Nov 29 03:18:17 np0005539563 neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de[331695]: [WARNING]  (331699) : All workers exited. Exiting... (0)
Nov 29 03:18:17 np0005539563 systemd[1]: libpod-88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539563 beautiful_payne[332087]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:18:17 np0005539563 beautiful_payne[332087]: --> relative data size: 1.0
Nov 29 03:18:17 np0005539563 beautiful_payne[332087]: --> All data devices are unavailable
Nov 29 03:18:17 np0005539563 podman[332119]: 2025-11-29 08:18:17.749524477 +0000 UTC m=+0.445048579 container died 88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:18:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d949bfe2f8470ea80df764dbe042d0417970b813c0c760672de59cdd8bac7666-merged.mount: Deactivated successfully.
Nov 29 03:18:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:18:17 np0005539563 systemd[1]: libpod-8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539563 podman[332072]: 2025-11-29 08:18:17.810564242 +0000 UTC m=+1.101711761 container died 8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_payne, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:18:17 np0005539563 podman[332119]: 2025-11-29 08:18:17.845645051 +0000 UTC m=+0.541169163 container cleanup 88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:17 np0005539563 systemd[1]: libpod-conmon-88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8ce4b05d27fb8730553565e3e9519b7ce4d500ccfab6b3fd67addbd502eb5e67-merged.mount: Deactivated successfully.
Nov 29 03:18:17 np0005539563 podman[332072]: 2025-11-29 08:18:17.895357149 +0000 UTC m=+1.186504628 container remove 8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:18:17 np0005539563 systemd[1]: libpod-conmon-8d4e80264225f43bdc09348d866004728404f29436566bca37271b4e38c97bcd.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539563 podman[332197]: 2025-11-29 08:18:17.955328244 +0000 UTC m=+0.069361820 container remove 88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.964 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d268d-7531-47d9-b498-c277c1e7f4e9]: (4, ('Sat Nov 29 08:18:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de (88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d)\n88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d\nSat Nov 29 08:18:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de (88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d)\n88bd6ef11e38a1951bc87d3b717542c18eadee22b0e2847a5402728d5fbaa99d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.968 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[88f3c31f-6347-4921-b9da-cb830d3f4c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:17.969 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe001af2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539563 kernel: tapfe001af2-70: left promiscuous mode
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:17.998 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:18.002 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d883af80-e2c7-4bcb-9269-0a5556c08376]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:18.018 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae5caab-7ae6-40dc-8ec2-307a6c9916dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:18.021 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7d916468-ef6d-4c6a-bba8-6fa22a4cdd5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:18.040 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ac45329f-9bdc-43a1-8596-b6df5d9c3732]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725950, 'reachable_time': 35332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332242, 'error': None, 'target': 'ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539563 systemd[1]: run-netns-ovnmeta\x2dfe001af2\x2d79a4\x2d4d02\x2da5b3\x2dcecabe1e65de.mount: Deactivated successfully.
Nov 29 03:18:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:18.046 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe001af2-79a4-4d02-a5b3-cecabe1e65de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:18:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:18.046 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[8241c088-2a7d-4f8b-9c8c-d0dba9971ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:18.240 252257 INFO nova.virt.libvirt.driver [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Deleting instance files /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798_del#033[00m
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:18.241 252257 INFO nova.virt.libvirt.driver [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Deletion of /var/lib/nova/instances/d5c8d04c-2432-48d5-b0c7-f39c9d462798_del complete#033[00m
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:18.320 252257 INFO nova.compute.manager [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:18.321 252257 DEBUG oslo.service.loopingcall [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:18.322 252257 DEBUG nova.compute.manager [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:18:18 np0005539563 nova_compute[252253]: 2025-11-29 08:18:18.322 252257 DEBUG nova.network.neutron [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:18:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 247 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.3 MiB/s wr, 253 op/s
Nov 29 03:18:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:18 np0005539563 podman[332359]: 2025-11-29 08:18:18.668701351 +0000 UTC m=+0.039680046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:18 np0005539563 podman[332359]: 2025-11-29 08:18:18.839205751 +0000 UTC m=+0.210184436 container create 11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_liskov, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:18 np0005539563 systemd[1]: Started libpod-conmon-11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4.scope.
Nov 29 03:18:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:18 np0005539563 podman[332359]: 2025-11-29 08:18:18.955028089 +0000 UTC m=+0.326006814 container init 11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:18:18 np0005539563 podman[332359]: 2025-11-29 08:18:18.969259525 +0000 UTC m=+0.340238210 container start 11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_liskov, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:18:18 np0005539563 podman[332359]: 2025-11-29 08:18:18.973374106 +0000 UTC m=+0.344352841 container attach 11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_liskov, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:18:18 np0005539563 hardcore_liskov[332376]: 167 167
Nov 29 03:18:18 np0005539563 systemd[1]: libpod-11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4.scope: Deactivated successfully.
Nov 29 03:18:18 np0005539563 conmon[332376]: conmon 11e666911e1e339ed7fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4.scope/container/memory.events
Nov 29 03:18:18 np0005539563 podman[332359]: 2025-11-29 08:18:18.981375263 +0000 UTC m=+0.352353938 container died 11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:18:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-86112b0176c0336f1c3d37da7dd823640bd5cd70083e227ca4d8c723c3677821-merged.mount: Deactivated successfully.
Nov 29 03:18:19 np0005539563 podman[332359]: 2025-11-29 08:18:19.041975295 +0000 UTC m=+0.412953970 container remove 11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:18:19 np0005539563 systemd[1]: libpod-conmon-11e666911e1e339ed7fb442166dccc8059f8aa7ba8f20dba49e9c25171736aa4.scope: Deactivated successfully.
Nov 29 03:18:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:19.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.229 252257 DEBUG nova.network.neutron [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:19 np0005539563 podman[332401]: 2025-11-29 08:18:19.247174075 +0000 UTC m=+0.054671553 container create b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.263 252257 INFO nova.compute.manager [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Took 0.94 seconds to deallocate network for instance.#033[00m
Nov 29 03:18:19 np0005539563 systemd[1]: Started libpod-conmon-b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8.scope.
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.311 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.312 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539563 podman[332401]: 2025-11-29 08:18:19.226479533 +0000 UTC m=+0.033976981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7ba1efca8351c371b2e95e90f293f09b432acf985212c950a9deb0ff65a688/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7ba1efca8351c371b2e95e90f293f09b432acf985212c950a9deb0ff65a688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7ba1efca8351c371b2e95e90f293f09b432acf985212c950a9deb0ff65a688/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db7ba1efca8351c371b2e95e90f293f09b432acf985212c950a9deb0ff65a688/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:19 np0005539563 podman[332401]: 2025-11-29 08:18:19.372280744 +0000 UTC m=+0.179778242 container init b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_robinson, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.376 252257 DEBUG oslo_concurrency.processutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:19 np0005539563 podman[332401]: 2025-11-29 08:18:19.385102791 +0000 UTC m=+0.192600259 container start b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:18:19 np0005539563 podman[332401]: 2025-11-29 08:18:19.38946294 +0000 UTC m=+0.196960468 container attach b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_robinson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:18:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.422 252257 DEBUG nova.compute.manager [req-c13b5598-4f18-4689-92ec-09d44bac2e87 req-87d596bc-2f88-4bfc-9230-19a9e401beed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-vif-deleted-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47726654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.853 252257 DEBUG oslo_concurrency.processutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.863 252257 DEBUG nova.compute.provider_tree [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.890 252257 DEBUG nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.891 252257 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.891 252257 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.891 252257 DEBUG oslo_concurrency.lockutils [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.891 252257 DEBUG nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] No waiting events found dispatching network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.892 252257 WARNING nova.compute.manager [req-143fcf43-7234-4bc3-947c-204ae971b60f req-e0d399df-73f3-4f89-94f9-d69da869cdf1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Received unexpected event network-vif-plugged-255c51bf-e9c4-4e6b-b2d6-a989fae5aa8b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.894 252257 DEBUG nova.scheduler.client.report [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.920 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:19 np0005539563 nova_compute[252253]: 2025-11-29 08:18:19.961 252257 INFO nova.scheduler.client.report [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Deleted allocations for instance d5c8d04c-2432-48d5-b0c7-f39c9d462798#033[00m
Nov 29 03:18:20 np0005539563 nova_compute[252253]: 2025-11-29 08:18:20.051 252257 DEBUG oslo_concurrency.lockutils [None req-b555c286-0fe2-45e6-b6b4-238674290f2c 64c0341c4b844cbc9fd50d485aa954a0 b4fb4eb1c2db40a5b320f63884fe8888 - - default default] Lock "d5c8d04c-2432-48d5-b0c7-f39c9d462798" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]: {
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:    "0": [
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:        {
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "devices": [
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "/dev/loop3"
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            ],
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "lv_name": "ceph_lv0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "lv_size": "7511998464",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "name": "ceph_lv0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "tags": {
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.cluster_name": "ceph",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.crush_device_class": "",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.encrypted": "0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.osd_id": "0",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.type": "block",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:                "ceph.vdo": "0"
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            },
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "type": "block",
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:            "vg_name": "ceph_vg0"
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:        }
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]:    ]
Nov 29 03:18:20 np0005539563 hopeful_robinson[332417]: }
Nov 29 03:18:20 np0005539563 systemd[1]: libpod-b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8.scope: Deactivated successfully.
Nov 29 03:18:20 np0005539563 podman[332401]: 2025-11-29 08:18:20.251791403 +0000 UTC m=+1.059288941 container died b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:18:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-db7ba1efca8351c371b2e95e90f293f09b432acf985212c950a9deb0ff65a688-merged.mount: Deactivated successfully.
Nov 29 03:18:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 169 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 302 op/s
Nov 29 03:18:20 np0005539563 podman[332401]: 2025-11-29 08:18:20.365457453 +0000 UTC m=+1.172954891 container remove b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:18:20 np0005539563 systemd[1]: libpod-conmon-b05e54d09dcf57ffe9a023f82757473988f23b51c62b9fa36bbc9392cd3ef8e8.scope: Deactivated successfully.
Nov 29 03:18:20 np0005539563 podman[332602]: 2025-11-29 08:18:20.937357967 +0000 UTC m=+0.044565618 container create 0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_taussig, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:18:20 np0005539563 systemd[1]: Started libpod-conmon-0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a.scope.
Nov 29 03:18:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:21 np0005539563 podman[332602]: 2025-11-29 08:18:20.916094052 +0000 UTC m=+0.023301753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:21 np0005539563 podman[332602]: 2025-11-29 08:18:21.01350915 +0000 UTC m=+0.120716841 container init 0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:18:21 np0005539563 podman[332602]: 2025-11-29 08:18:21.020766077 +0000 UTC m=+0.127973728 container start 0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:21 np0005539563 podman[332602]: 2025-11-29 08:18:21.025107924 +0000 UTC m=+0.132315675 container attach 0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:18:21 np0005539563 recursing_taussig[332619]: 167 167
Nov 29 03:18:21 np0005539563 systemd[1]: libpod-0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a.scope: Deactivated successfully.
Nov 29 03:18:21 np0005539563 podman[332624]: 2025-11-29 08:18:21.079487288 +0000 UTC m=+0.038706529 container died 0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_taussig, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-45dc034985ad667d849634eb602aa4138b42607374232b52bbda565ea23a9444-merged.mount: Deactivated successfully.
Nov 29 03:18:21 np0005539563 podman[332624]: 2025-11-29 08:18:21.116764898 +0000 UTC m=+0.075984139 container remove 0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_taussig, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:18:21 np0005539563 systemd[1]: libpod-conmon-0902e8e56b210e412c0aa91a6ad87733a80c63d11ce73f27ee0935c65050130a.scope: Deactivated successfully.
Nov 29 03:18:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:21.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:21 np0005539563 podman[332646]: 2025-11-29 08:18:21.318445472 +0000 UTC m=+0.039026508 container create a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:18:21 np0005539563 systemd[1]: Started libpod-conmon-a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a.scope.
Nov 29 03:18:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0e636ad4309810f1f690841a9e694ea4a8260a5369f19c6b0d2a9befa4d8c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0e636ad4309810f1f690841a9e694ea4a8260a5369f19c6b0d2a9befa4d8c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0e636ad4309810f1f690841a9e694ea4a8260a5369f19c6b0d2a9befa4d8c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0e636ad4309810f1f690841a9e694ea4a8260a5369f19c6b0d2a9befa4d8c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:21 np0005539563 podman[332646]: 2025-11-29 08:18:21.396940499 +0000 UTC m=+0.117521565 container init a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:18:21 np0005539563 podman[332646]: 2025-11-29 08:18:21.302031137 +0000 UTC m=+0.022612203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:21 np0005539563 podman[332646]: 2025-11-29 08:18:21.404403402 +0000 UTC m=+0.124984438 container start a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:21 np0005539563 podman[332646]: 2025-11-29 08:18:21.408602505 +0000 UTC m=+0.129183571 container attach a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]: {
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:        "osd_id": 0,
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:        "type": "bluestore"
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]:    }
Nov 29 03:18:22 np0005539563 laughing_faraday[332662]: }
Nov 29 03:18:22 np0005539563 systemd[1]: libpod-a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a.scope: Deactivated successfully.
Nov 29 03:18:22 np0005539563 podman[332646]: 2025-11-29 08:18:22.317664855 +0000 UTC m=+1.038245911 container died a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dc0e636ad4309810f1f690841a9e694ea4a8260a5369f19c6b0d2a9befa4d8c9-merged.mount: Deactivated successfully.
Nov 29 03:18:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 169 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 244 op/s
Nov 29 03:18:22 np0005539563 podman[332646]: 2025-11-29 08:18:22.374045802 +0000 UTC m=+1.094626838 container remove a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_faraday, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:18:22 np0005539563 systemd[1]: libpod-conmon-a687aa84ecd27716ffd3bad5f167771b0005e8c5bcce88bd978068b5eb78241a.scope: Deactivated successfully.
Nov 29 03:18:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:18:22 np0005539563 nova_compute[252253]: 2025-11-29 08:18:22.428 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:22 np0005539563 nova_compute[252253]: 2025-11-29 08:18:22.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:18:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c779eb50-b07b-49e4-997e-5db7edd18f20 does not exist
Nov 29 03:18:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 05982d03-c764-4d2b-b8d3-1078fdc07372 does not exist
Nov 29 03:18:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4d1a3dd3-b815-4b05-9956-233e933954a3 does not exist
Nov 29 03:18:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:23.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:23.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021741374697561884 of space, bias 1.0, pg target 0.6522412409268565 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:18:23 np0005539563 nova_compute[252253]: 2025-11-29 08:18:23.585 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 167 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 29 03:18:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:25.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 167 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 222 op/s
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.515 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.516 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.554 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.688 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.689 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.699 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.700 252257 INFO nova.compute.claims [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:18:26 np0005539563 nova_compute[252253]: 2025-11-29 08:18:26.810 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976345814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.265 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.277 252257 DEBUG nova.compute.provider_tree [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.300 252257 DEBUG nova.scheduler.client.report [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.325 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.327 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.388 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.388 252257 DEBUG nova.network.neutron [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:18:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:27.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.419 252257 INFO nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.439 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.545 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.547 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.548 252257 INFO nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Creating image(s)#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.596 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.644 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.697 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.705 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.758 252257 DEBUG nova.policy [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b52040d601a4a56abcaf3f046f1e349', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '358970eca7ad4b05b70f43e5507ac052', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.800 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.802 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.802 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.803 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.840 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:27 np0005539563 nova_compute[252253]: 2025-11-29 08:18:27.846 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1fad2d6f-5a00-43ad-af43-00916509fc61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 167 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 27 KiB/s wr, 96 op/s
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.587 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 1fad2d6f-5a00-43ad-af43-00916509fc61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.647 252257 DEBUG nova.network.neutron [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Successfully created port: 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.665903) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308665947, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 423, "num_deletes": 255, "total_data_size": 333954, "memory_usage": 342168, "flush_reason": "Manual Compaction"}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308670265, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 331057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46568, "largest_seqno": 46990, "table_properties": {"data_size": 328498, "index_size": 595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6166, "raw_average_key_size": 18, "raw_value_size": 323375, "raw_average_value_size": 971, "num_data_blocks": 25, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404294, "oldest_key_time": 1764404294, "file_creation_time": 1764404308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 4387 microseconds, and 1677 cpu microseconds.
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.670293) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 331057 bytes OK
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.670306) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.671683) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.671696) EVENT_LOG_v1 {"time_micros": 1764404308671691, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.671710) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 331302, prev total WAL file size 331302, number of live WAL files 2.
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.672082) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353133' seq:72057594037927935, type:22 .. '6C6F676D0031373634' seq:0, type:0; will stop at (end)
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(323KB)], [98(10MB)]
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308672169, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11123118, "oldest_snapshot_seqno": -1}
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.714 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] resizing rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7918 keys, 10984284 bytes, temperature: kUnknown
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308827996, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10984284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10932619, "index_size": 30753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205795, "raw_average_key_size": 25, "raw_value_size": 10792693, "raw_average_value_size": 1363, "num_data_blocks": 1206, "num_entries": 7918, "num_filter_entries": 7918, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404308, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.828223) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10984284 bytes
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.829484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.4 rd, 70.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.3 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(66.8) write-amplify(33.2) OK, records in: 8440, records dropped: 522 output_compression: NoCompression
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.829502) EVENT_LOG_v1 {"time_micros": 1764404308829493, "job": 58, "event": "compaction_finished", "compaction_time_micros": 155891, "compaction_time_cpu_micros": 37501, "output_level": 6, "num_output_files": 1, "total_output_size": 10984284, "num_input_records": 8440, "num_output_records": 7918, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308829646, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404308831098, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.671969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.831185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.831195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.831198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.831202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:18:28.831205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.877 252257 DEBUG nova.objects.instance [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.892 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.893 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Ensure instance console log exists: /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.894 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.895 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:28 np0005539563 nova_compute[252253]: 2025-11-29 08:18:28.895 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:29.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:29.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.729 252257 DEBUG nova.network.neutron [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Successfully updated port: 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.747 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.747 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.748 252257 DEBUG nova.network.neutron [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.868 252257 DEBUG nova.compute.manager [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-changed-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.869 252257 DEBUG nova.compute.manager [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Refreshing instance network info cache due to event network-changed-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.869 252257 DEBUG oslo_concurrency.lockutils [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:29 np0005539563 nova_compute[252253]: 2025-11-29 08:18:29.982 252257 DEBUG nova.network.neutron [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:18:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 197 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 849 KiB/s wr, 91 op/s
Nov 29 03:18:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:31.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.309 252257 DEBUG nova.network.neutron [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.373 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.374 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance network_info: |[{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.375 252257 DEBUG oslo_concurrency.lockutils [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.375 252257 DEBUG nova.network.neutron [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Refreshing network info cache for port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.381 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Start _get_guest_xml network_info=[{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.386 252257 WARNING nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.495 252257 DEBUG nova.virt.libvirt.host [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.496 252257 DEBUG nova.virt.libvirt.host [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.502 252257 DEBUG nova.virt.libvirt.host [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.503 252257 DEBUG nova.virt.libvirt.host [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.505 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.506 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.507 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.507 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.508 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.508 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.509 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.509 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.510 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.510 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.511 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.511 252257 DEBUG nova.virt.hardware [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.519 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:31 np0005539563 podman[332937]: 2025-11-29 08:18:31.52775346 +0000 UTC m=+0.068661532 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:31 np0005539563 podman[332936]: 2025-11-29 08:18:31.543824325 +0000 UTC m=+0.091916262 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 29 03:18:31 np0005539563 podman[332938]: 2025-11-29 08:18:31.579753928 +0000 UTC m=+0.123054265 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:18:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056697421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.938 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.967 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:31 np0005539563 nova_compute[252253]: 2025-11-29 08:18:31.971 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.349 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404297.3466072, d5c8d04c-2432-48d5-b0c7-f39c9d462798 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.350 252257 INFO nova.compute.manager [-] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:18:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 197 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 834 KiB/s wr, 29 op/s
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.378 252257 DEBUG nova.compute.manager [None req-584255af-2ed6-48e0-aac3-b1c74d80047f - - - - - -] [instance: d5c8d04c-2432-48d5-b0c7-f39c9d462798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581710571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.421 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.422 252257 DEBUG nova.virt.libvirt.vif [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1419637348',display_name='tempest-ServerStableDeviceRescueTest-server-1419637348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1419637348',id=131,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-z40kifsz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:27Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=1fad2d6f-5a00-43ad-af43-00916509fc61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.422 252257 DEBUG nova.network.os_vif_util [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.423 252257 DEBUG nova.network.os_vif_util [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.424 252257 DEBUG nova.objects.instance [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.435 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.441 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <uuid>1fad2d6f-5a00-43ad-af43-00916509fc61</uuid>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <name>instance-00000083</name>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1419637348</nova:name>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:18:31</nova:creationTime>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <nova:port uuid="96eb3aec-07ea-42dc-8983-3d61e9f8b5fc">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <entry name="serial">1fad2d6f-5a00-43ad-af43-00916509fc61</entry>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <entry name="uuid">1fad2d6f-5a00-43ad-af43-00916509fc61</entry>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1fad2d6f-5a00-43ad-af43-00916509fc61_disk">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:5b:36:6b"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <target dev="tap96eb3aec-07"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/console.log" append="off"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:18:32 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:18:32 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:18:32 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:18:32 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.442 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Preparing to wait for external event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.442 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.443 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.443 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.444 252257 DEBUG nova.virt.libvirt.vif [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1419637348',display_name='tempest-ServerStableDeviceRescueTest-server-1419637348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1419637348',id=131,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-z40kifsz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:27Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=1fad2d6f-5a00-43ad-af43-00916509fc61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.444 252257 DEBUG nova.network.os_vif_util [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.445 252257 DEBUG nova.network.os_vif_util [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.446 252257 DEBUG os_vif [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.447 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.448 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.451 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.451 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96eb3aec-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.452 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96eb3aec-07, col_values=(('external_ids', {'iface-id': '96eb3aec-07ea-42dc-8983-3d61e9f8b5fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:36:6b', 'vm-uuid': '1fad2d6f-5a00-43ad-af43-00916509fc61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:32 np0005539563 NetworkManager[48981]: <info>  [1764404312.4544] manager: (tap96eb3aec-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.453 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.456 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.464 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.465 252257 INFO os_vif [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07')#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.519 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.519 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.519 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:5b:36:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.520 252257 INFO nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Using config drive#033[00m
Nov 29 03:18:32 np0005539563 nova_compute[252253]: 2025-11-29 08:18:32.547 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.027 252257 INFO nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Creating config drive at /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.036 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4y3ng4uu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.180 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4y3ng4uu" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:33.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.227 252257 DEBUG nova.storage.rbd_utils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.231 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.279 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "b91293df-e55b-4e94-92e0-99e06f96a4d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.280 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "b91293df-e55b-4e94-92e0-99e06f96a4d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.305 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.421 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.422 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.428 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.429 252257 INFO nova.compute.claims [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.466 252257 DEBUG oslo_concurrency.processutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.467 252257 INFO nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Deleting local config drive /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config because it was imported into RBD.#033[00m
Nov 29 03:18:33 np0005539563 NetworkManager[48981]: <info>  [1764404313.5230] manager: (tap96eb3aec-07): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Nov 29 03:18:33 np0005539563 kernel: tap96eb3aec-07: entered promiscuous mode
Nov 29 03:18:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:33Z|00526|binding|INFO|Claiming lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for this chassis.
Nov 29 03:18:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:33Z|00527|binding|INFO|96eb3aec-07ea-42dc-8983-3d61e9f8b5fc: Claiming fa:16:3e:5b:36:6b 10.100.0.6
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.524 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.545 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:36:6b 10.100.0.6'], port_security=['fa:16:3e:5b:36:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1fad2d6f-5a00-43ad-af43-00916509fc61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.546 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.547 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.560 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f04819e4-1394-49d6-8ddb-58f81ab2110b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.560 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32485b0e-11 in ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:33 np0005539563 systemd-udevd[333139]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.562 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32485b0e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.562 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[710a0727-f399-4d54-9609-89c45d1bd0e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.562 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[09b5c242-ee2d-485a-95a8-bd1934452cda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.564 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:33 np0005539563 systemd-machined[213024]: New machine qemu-60-instance-00000083.
Nov 29 03:18:33 np0005539563 NetworkManager[48981]: <info>  [1764404313.5816] device (tap96eb3aec-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:33 np0005539563 NetworkManager[48981]: <info>  [1764404313.5829] device (tap96eb3aec-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.579 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0afa7c-7abd-4f25-a6f2-74f16313c917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 systemd[1]: Started Virtual Machine qemu-60-instance-00000083.
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.604 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eae67a96-48d4-4166-bad1-06e2dc5f5a3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:33Z|00528|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc ovn-installed in OVS
Nov 29 03:18:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:33Z|00529|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc up in Southbound
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.633 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.642 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca8e09f-c51e-481d-bafd-916755abdf46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 NetworkManager[48981]: <info>  [1764404313.6508] manager: (tap32485b0e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Nov 29 03:18:33 np0005539563 systemd-udevd[333142]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.655 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1e8641-d9f3-4ab8-ad93-e6edd65c7920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.690 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[52274674-2870-43aa-b986-2dabe3a90e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.693 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4d13792d-2363-407b-b809-5613ce5165d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 NetworkManager[48981]: <info>  [1764404313.7193] device (tap32485b0e-10): carrier: link connected
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.730 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[081bf80c-9f64-41c3-86c6-5c215d72d939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.753 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52d8cb95-88b4-4428-bfd6-3e612a74bc97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728149, 'reachable_time': 25294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333191, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.773 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[95141caa-2512-482d-8059-a0afa5482791]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:4406'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728149, 'tstamp': 728149}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333192, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.799 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35515e07-e601-4c69-b98c-7627723f61d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 158], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728149, 'reachable_time': 25294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333193, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.842 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ef882b73-581c-4274-8253-383db8801ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.907 252257 DEBUG nova.network.neutron [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updated VIF entry in instance network info cache for port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.908 252257 DEBUG nova.network.neutron [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.938 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d93fb1-7598-4048-880b-cd938232bf0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.940 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.940 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.941 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:33 np0005539563 kernel: tap32485b0e-10: entered promiscuous mode
Nov 29 03:18:33 np0005539563 NetworkManager[48981]: <info>  [1764404313.9440] manager: (tap32485b0e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.945 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.948 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:33Z|00530|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.950 252257 DEBUG oslo_concurrency.lockutils [req-5ce0cccd-cd67-4356-8237-efebdd5745c2 req-8ff8b67a-7c58-409e-a176-44b22736c511 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.952 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.954 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f79cb651-6f00-48d7-a18f-448dc79f687f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.955 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:33.956 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'env', 'PROCESS_TAG=haproxy-32485b0e-177b-4dfd-a55a-0249528f32e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32485b0e-177b-4dfd-a55a-0249528f32e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:18:33 np0005539563 nova_compute[252253]: 2025-11-29 08:18:33.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124941999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.066 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.075 252257 DEBUG nova.compute.provider_tree [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.094 252257 DEBUG nova.scheduler.client.report [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.100 252257 DEBUG nova.compute.manager [req-6abe3416-c01f-4640-b630-7d2d5e2d0998 req-5a335ebf-2915-41c0-8dce-f56e89c55bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.100 252257 DEBUG oslo_concurrency.lockutils [req-6abe3416-c01f-4640-b630-7d2d5e2d0998 req-5a335ebf-2915-41c0-8dce-f56e89c55bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.100 252257 DEBUG oslo_concurrency.lockutils [req-6abe3416-c01f-4640-b630-7d2d5e2d0998 req-5a335ebf-2915-41c0-8dce-f56e89c55bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.101 252257 DEBUG oslo_concurrency.lockutils [req-6abe3416-c01f-4640-b630-7d2d5e2d0998 req-5a335ebf-2915-41c0-8dce-f56e89c55bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.101 252257 DEBUG nova.compute.manager [req-6abe3416-c01f-4640-b630-7d2d5e2d0998 req-5a335ebf-2915-41c0-8dce-f56e89c55bbb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Processing event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.121 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.122 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.169 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.188 252257 INFO nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.211 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.296 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.299 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.300 252257 INFO nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Creating image(s)#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.324 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 213 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.356 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.390 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.398 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.461 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.462 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.463 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.464 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:34 np0005539563 podman[333317]: 2025-11-29 08:18:34.39081719 +0000 UTC m=+0.027616389 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.498 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.503 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b91293df-e55b-4e94-92e0-99e06f96a4d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1164412559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.693 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.695 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404314.692103, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.695 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.698 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.703 252257 INFO nova.virt.libvirt.driver [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance spawned successfully.#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.704 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.718 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.729 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.732 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.732 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.733 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.733 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.734 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.734 252257 DEBUG nova.virt.libvirt.driver [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.778 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.779 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404314.6939921, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.779 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.810 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.814 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404314.6977997, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.815 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.820 252257 INFO nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Took 7.27 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.821 252257 DEBUG nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.833 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.836 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.861 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.878 252257 INFO nova.compute.manager [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Took 8.22 seconds to build instance.#033[00m
Nov 29 03:18:34 np0005539563 nova_compute[252253]: 2025-11-29 08:18:34.929 252257 DEBUG oslo_concurrency.lockutils [None req-2dbfff2e-289c-4689-9d0b-99352acdd289 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:35 np0005539563 podman[333317]: 2025-11-29 08:18:35.108432913 +0000 UTC m=+0.745232092 container create cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:18:35 np0005539563 systemd[1]: Started libpod-conmon-cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317.scope.
Nov 29 03:18:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:18:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3588c9ae35099fba03a365ea0b3a9123d5109b1f4eea6316f6c4188d5a25019/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539563 podman[333317]: 2025-11-29 08:18:35.222937065 +0000 UTC m=+0.859736244 container init cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:35 np0005539563 podman[333317]: 2025-11-29 08:18:35.233308206 +0000 UTC m=+0.870107385 container start cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:18:35 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [NOTICE]   (333433) : New worker (333435) forked
Nov 29 03:18:35 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [NOTICE]   (333433) : Loading success.
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.329 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf b91293df-e55b-4e94-92e0-99e06f96a4d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.825s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.411 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] resizing rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:18:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.561 252257 DEBUG nova.objects.instance [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'migration_context' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.576 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.577 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Ensure instance console log exists: /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.578 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.578 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.578 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.579 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.584 252257 WARNING nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.590 252257 DEBUG nova.virt.libvirt.host [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.590 252257 DEBUG nova.virt.libvirt.host [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.593 252257 DEBUG nova.virt.libvirt.host [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.593 252257 DEBUG nova.virt.libvirt.host [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.595 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.595 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.596 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.596 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.596 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.596 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.597 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.597 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.597 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.597 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.598 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.598 252257 DEBUG nova.virt.hardware [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:18:35 np0005539563 nova_compute[252253]: 2025-11-29 08:18:35.601 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052598900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.105 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.141 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.147 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.213 252257 DEBUG nova.compute.manager [req-2ace1250-bde5-4fdc-bb44-aacb144500f7 req-b8cbd597-0d77-40bc-95e0-0c652ca1f763 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.214 252257 DEBUG oslo_concurrency.lockutils [req-2ace1250-bde5-4fdc-bb44-aacb144500f7 req-b8cbd597-0d77-40bc-95e0-0c652ca1f763 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.215 252257 DEBUG oslo_concurrency.lockutils [req-2ace1250-bde5-4fdc-bb44-aacb144500f7 req-b8cbd597-0d77-40bc-95e0-0c652ca1f763 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.215 252257 DEBUG oslo_concurrency.lockutils [req-2ace1250-bde5-4fdc-bb44-aacb144500f7 req-b8cbd597-0d77-40bc-95e0-0c652ca1f763 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.216 252257 DEBUG nova.compute.manager [req-2ace1250-bde5-4fdc-bb44-aacb144500f7 req-b8cbd597-0d77-40bc-95e0-0c652ca1f763 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.216 252257 WARNING nova.compute.manager [req-2ace1250-bde5-4fdc-bb44-aacb144500f7 req-b8cbd597-0d77-40bc-95e0-0c652ca1f763 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state active and task_state None.
Nov 29 03:18:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 251 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.2 MiB/s wr, 62 op/s
Nov 29 03:18:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036624833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.600 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.603 252257 DEBUG nova.objects.instance [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'pci_devices' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.638 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <uuid>b91293df-e55b-4e94-92e0-99e06f96a4d0</uuid>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <name>instance-00000085</name>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerShowV247Test-server-1186309825</nova:name>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:18:35</nova:creationTime>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:user uuid="d043b72e9a1f4575835e938f1a090e3a">tempest-ServerShowV247Test-1340079126-project-member</nova:user>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <nova:project uuid="4b2d7c5689334b7eb116fab1fd5dedac">tempest-ServerShowV247Test-1340079126</nova:project>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <entry name="serial">b91293df-e55b-4e94-92e0-99e06f96a4d0</entry>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <entry name="uuid">b91293df-e55b-4e94-92e0-99e06f96a4d0</entry>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/b91293df-e55b-4e94-92e0-99e06f96a4d0_disk">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/console.log" append="off"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:18:36 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:18:36 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:18:36 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:18:36 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.721 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.722 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.724 252257 INFO nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Using config drive
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.772 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.965 252257 DEBUG nova.compute.manager [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.980 252257 INFO nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Creating config drive at /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config
Nov 29 03:18:36 np0005539563 nova_compute[252253]: 2025-11-29 08:18:36.986 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8r43wulo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.036 252257 INFO nova.compute.manager [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] instance snapshotting
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.129 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8r43wulo" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.162 252257 DEBUG nova.storage.rbd_utils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.168 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:37.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.328 252257 INFO nova.virt.libvirt.driver [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Beginning live snapshot process
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.376 252257 DEBUG oslo_concurrency.processutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.377 252257 INFO nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deleting local config drive /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config because it was imported into RBD.
Nov 29 03:18:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:37 np0005539563 systemd-machined[213024]: New machine qemu-61-instance-00000085.
Nov 29 03:18:37 np0005539563 systemd[1]: Started Virtual Machine qemu-61-instance-00000085.
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.523 252257 DEBUG nova.virt.libvirt.imagebackend [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:18:37 np0005539563 nova_compute[252253]: 2025-11-29 08:18:37.739 252257 DEBUG nova.storage.rbd_utils [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(867e1f07bb2e4158bee11a5b67c846e5) on rbd image(1fad2d6f-5a00-43ad-af43-00916509fc61_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:18:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 29 03:18:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 03:18:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 29 03:18:38 np0005539563 nova_compute[252253]: 2025-11-29 08:18:38.220 252257 DEBUG nova.storage.rbd_utils [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning vms/1fad2d6f-5a00-43ad-af43-00916509fc61_disk@867e1f07bb2e4158bee11a5b67c846e5 to images/b01ef350-a5c1-4fa3-afd2-96dd9326703d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:18:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 281 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.2 MiB/s wr, 148 op/s
Nov 29 03:18:38 np0005539563 nova_compute[252253]: 2025-11-29 08:18:38.375 252257 DEBUG nova.storage.rbd_utils [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] flattening images/b01ef350-a5c1-4fa3-afd2-96dd9326703d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:18:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:38 np0005539563 nova_compute[252253]: 2025-11-29 08:18:38.750 252257 DEBUG nova.storage.rbd_utils [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] removing snapshot(867e1f07bb2e4158bee11a5b67c846e5) on rbd image(1fad2d6f-5a00-43ad-af43-00916509fc61_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:18:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 29 03:18:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 03:18:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 29 03:18:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:39.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.217 252257 DEBUG nova.storage.rbd_utils [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(snap) on rbd image(b01ef350-a5c1-4fa3-afd2-96dd9326703d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.373 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404319.3731363, b91293df-e55b-4e94-92e0-99e06f96a4d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.374 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] VM Resumed (Lifecycle Event)
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.376 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.377 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.380 252257 INFO nova.virt.libvirt.driver [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance spawned successfully.
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.381 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.420 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:39.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.431 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.436 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.437 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.438 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.438 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.439 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.440 252257 DEBUG nova.virt.libvirt.driver [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.470 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.471 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404319.3748198, b91293df-e55b-4e94-92e0-99e06f96a4d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.471 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] VM Started (Lifecycle Event)
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.497 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.501 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.530 252257 INFO nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Took 5.23 seconds to spawn the instance on the hypervisor.
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.531 252257 DEBUG nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.538 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.595 252257 INFO nova.compute.manager [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Took 6.22 seconds to build instance.
Nov 29 03:18:39 np0005539563 nova_compute[252253]: 2025-11-29 08:18:39.611 252257 DEBUG oslo_concurrency.lockutils [None req-67ed1a58-5782-4925-a12e-a0e65ce72651 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "b91293df-e55b-4e94-92e0-99e06f96a4d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 29 03:18:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 03:18:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 29 03:18:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 340 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 9.6 MiB/s wr, 494 op/s
Nov 29 03:18:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:41.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.065 252257 INFO nova.virt.libvirt.driver [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Snapshot image upload complete
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.066 252257 INFO nova.compute.manager [None req-1327dde5-ba80-4809-91d2-85780459e76f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Took 5.03 seconds to snapshot the instance on the hypervisor.
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.165 252257 INFO nova.compute.manager [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Rebuilding instance
Nov 29 03:18:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 340 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.7 MiB/s wr, 423 op/s
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.382 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'trusted_certs' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.408 252257 DEBUG nova.compute.manager [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.470 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.476 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'pci_requests' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.492 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'pci_devices' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.504 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'resources' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.515 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'migration_context' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.525 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.530 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:18:42 np0005539563 nova_compute[252253]: 2025-11-29 08:18:42.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:43.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 353 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 5.4 MiB/s wr, 375 op/s
Nov 29 03:18:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:45.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:45 np0005539563 nova_compute[252253]: 2025-11-29 08:18:45.692 252257 INFO nova.compute.manager [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Rescuing
Nov 29 03:18:45 np0005539563 nova_compute[252253]: 2025-11-29 08:18:45.693 252257 DEBUG oslo_concurrency.lockutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:18:45 np0005539563 nova_compute[252253]: 2025-11-29 08:18:45.693 252257 DEBUG oslo_concurrency.lockutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:18:45 np0005539563 nova_compute[252253]: 2025-11-29 08:18:45.693 252257 DEBUG nova.network.neutron [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:18:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 4.2 MiB/s wr, 368 op/s
Nov 29 03:18:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:47.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:47 np0005539563 nova_compute[252253]: 2025-11-29 08:18:47.354 252257 DEBUG nova.network.neutron [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:18:47 np0005539563 nova_compute[252253]: 2025-11-29 08:18:47.377 252257 DEBUG oslo_concurrency.lockutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:18:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:47 np0005539563 nova_compute[252253]: 2025-11-29 08:18:47.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:47 np0005539563 nova_compute[252253]: 2025-11-29 08:18:47.519 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:47 np0005539563 nova_compute[252253]: 2025-11-29 08:18:47.700 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:18:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:47Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:36:6b 10.100.0.6
Nov 29 03:18:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:18:47Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:36:6b 10.100.0.6
Nov 29 03:18:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:47.952 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:47.953 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:18:47 np0005539563 nova_compute[252253]: 2025-11-29 08:18:47.954 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 4.5 MiB/s wr, 337 op/s
Nov 29 03:18:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 29 03:18:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 03:18:48 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 29 03:18:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:49.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:49.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 447 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 259 op/s
Nov 29 03:18:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:51.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:51.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 447 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 259 op/s
Nov 29 03:18:52 np0005539563 nova_compute[252253]: 2025-11-29 08:18:52.475 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539563 nova_compute[252253]: 2025-11-29 08:18:52.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539563 nova_compute[252253]: 2025-11-29 08:18:52.572 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:18:52 np0005539563 nova_compute[252253]: 2025-11-29 08:18:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:52 np0005539563 nova_compute[252253]: 2025-11-29 08:18:52.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:18:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:18:52.955 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:53.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:53 np0005539563 nova_compute[252253]: 2025-11-29 08:18:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.7 MiB/s wr, 243 op/s
Nov 29 03:18:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:55.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:55.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.703 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:18:55 np0005539563 nova_compute[252253]: 2025-11-29 08:18:55.704 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 9.8 MiB/s wr, 251 op/s
Nov 29 03:18:56 np0005539563 nova_compute[252253]: 2025-11-29 08:18:56.984 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:56 np0005539563 nova_compute[252253]: 2025-11-29 08:18:56.997 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:56 np0005539563 nova_compute[252253]: 2025-11-29 08:18:56.998 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:18:56 np0005539563 nova_compute[252253]: 2025-11-29 08:18:56.998 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:56 np0005539563 nova_compute[252253]: 2025-11-29 08:18:56.998 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:56 np0005539563 nova_compute[252253]: 2025-11-29 08:18:56.999 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.020 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.021 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.021 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.022 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.023 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:57.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.478 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548226146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.519 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.521 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.600 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.600 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.603 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.604 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.746 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.778 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.780 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3997MB free_disk=20.785663604736328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.781 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.781 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.895 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 1fad2d6f-5a00-43ad-af43-00916509fc61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.896 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance b91293df-e55b-4e94-92e0-99e06f96a4d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.896 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.897 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:18:57 np0005539563 nova_compute[252253]: 2025-11-29 08:18:57.984 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630120659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 9.0 MiB/s wr, 266 op/s
Nov 29 03:18:58 np0005539563 nova_compute[252253]: 2025-11-29 08:18:58.382 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:58 np0005539563 nova_compute[252253]: 2025-11-29 08:18:58.393 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:58 np0005539563 nova_compute[252253]: 2025-11-29 08:18:58.423 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:58 np0005539563 nova_compute[252253]: 2025-11-29 08:18:58.445 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:18:58 np0005539563 nova_compute[252253]: 2025-11-29 08:18:58.446 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:59.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:18:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:18:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:00 np0005539563 kernel: tap96eb3aec-07 (unregistering): left promiscuous mode
Nov 29 03:19:00 np0005539563 NetworkManager[48981]: <info>  [1764404340.0868] device (tap96eb3aec-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:00Z|00531|binding|INFO|Releasing lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc from this chassis (sb_readonly=0)
Nov 29 03:19:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:00Z|00532|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc down in Southbound
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:00Z|00533|binding|INFO|Removing iface tap96eb3aec-07 ovn-installed in OVS
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.097 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.103 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:36:6b 10.100.0.6'], port_security=['fa:16:3e:5b:36:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1fad2d6f-5a00-43ad-af43-00916509fc61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.104 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.105 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.106 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1780ca-8d09-4975-9010-1ee4c34478dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.106 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace which is not needed anymore#033[00m
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:00 np0005539563 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 29 03:19:00 np0005539563 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000083.scope: Consumed 15.577s CPU time.
Nov 29 03:19:00 np0005539563 systemd-machined[213024]: Machine qemu-60-instance-00000083 terminated.
Nov 29 03:19:00 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [NOTICE]   (333433) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:00 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [NOTICE]   (333433) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:00 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [WARNING]  (333433) : Exiting Master process...
Nov 29 03:19:00 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [ALERT]    (333433) : Current worker (333435) exited with code 143 (Terminated)
Nov 29 03:19:00 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[333429]: [WARNING]  (333433) : All workers exited. Exiting... (0)
Nov 29 03:19:00 np0005539563 systemd[1]: libpod-cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317.scope: Deactivated successfully.
Nov 29 03:19:00 np0005539563 podman[333968]: 2025-11-29 08:19:00.302140485 +0000 UTC m=+0.048834815 container died cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e3588c9ae35099fba03a365ea0b3a9123d5109b1f4eea6316f6c4188d5a25019-merged.mount: Deactivated successfully.
Nov 29 03:19:00 np0005539563 podman[333968]: 2025-11-29 08:19:00.358107473 +0000 UTC m=+0.104801763 container cleanup cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:00 np0005539563 systemd[1]: libpod-conmon-cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317.scope: Deactivated successfully.
Nov 29 03:19:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.8 MiB/s wr, 287 op/s
Nov 29 03:19:00 np0005539563 podman[334005]: 2025-11-29 08:19:00.441080503 +0000 UTC m=+0.060146712 container remove cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.449 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[23eff7d0-4ec5-4f08-8925-d6fd27ea8de9]: (4, ('Sat Nov 29 08:19:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317)\ncb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317\nSat Nov 29 08:19:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (cb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317)\ncb2e24680d480205f4ba93ddd5a315f3cfa42e0def553b16c77f3c7f7865a317\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.451 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[41eb9cd9-f7df-461d-9b23-5423b3ce4b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.452 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:00 np0005539563 kernel: tap32485b0e-10: left promiscuous mode
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.479 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.482 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4f2b0b-5489-49cd-8267-66f48ad14054]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.495 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cc060b2a-2ee1-4f1b-bce0-d02957101f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.495 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfd787b-effb-4780-8de7-d1e080f5d321]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.513 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d0323afb-b3bb-4229-ba43-027e830bdea6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728141, 'reachable_time': 43020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334023, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.516 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:00 np0005539563 systemd[1]: run-netns-ovnmeta\x2d32485b0e\x2d177b\x2d4dfd\x2da55a\x2d0249528f32e1.mount: Deactivated successfully.
Nov 29 03:19:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:00.517 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[fe09b390-4110-4997-9838-04595d5c2ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.761 252257 INFO nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.769 252257 INFO nova.virt.libvirt.driver [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance destroyed successfully.#033[00m
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.769 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:00 np0005539563 nova_compute[252253]: 2025-11-29 08:19:00.804 252257 INFO nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Attempting a stable device rescue#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.187 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.194 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.198 252257 INFO nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Creating image(s)#033[00m
Nov 29 03:19:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.232 252257 DEBUG nova.storage.rbd_utils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.238 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.299 252257 DEBUG nova.storage.rbd_utils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.342 252257 DEBUG nova.storage.rbd_utils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.346 252257 DEBUG oslo_concurrency.lockutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "96d13c6f24c13a933cd8bf6fac89f5da81f23eb4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.347 252257 DEBUG oslo_concurrency.lockutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "96d13c6f24c13a933cd8bf6fac89f5da81f23eb4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.654 252257 DEBUG nova.virt.libvirt.imagebackend [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/b01ef350-a5c1-4fa3-afd2-96dd9326703d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/b01ef350-a5c1-4fa3-afd2-96dd9326703d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.740 252257 DEBUG nova.virt.libvirt.imagebackend [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/b01ef350-a5c1-4fa3-afd2-96dd9326703d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.741 252257 DEBUG nova.storage.rbd_utils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning images/b01ef350-a5c1-4fa3-afd2-96dd9326703d@snap to None/1fad2d6f-5a00-43ad-af43-00916509fc61_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:19:01 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.925 252257 DEBUG oslo_concurrency.lockutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "96d13c6f24c13a933cd8bf6fac89f5da81f23eb4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:01.999 252257 DEBUG nova.compute.manager [req-807a49bf-805f-4e95-b8a5-0ba470a7797a req-6038f930-c6f9-4997-bc4c-96967fe69d3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.000 252257 DEBUG oslo_concurrency.lockutils [req-807a49bf-805f-4e95-b8a5-0ba470a7797a req-6038f930-c6f9-4997-bc4c-96967fe69d3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.001 252257 DEBUG oslo_concurrency.lockutils [req-807a49bf-805f-4e95-b8a5-0ba470a7797a req-6038f930-c6f9-4997-bc4c-96967fe69d3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.001 252257 DEBUG oslo_concurrency.lockutils [req-807a49bf-805f-4e95-b8a5-0ba470a7797a req-6038f930-c6f9-4997-bc4c-96967fe69d3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.001 252257 DEBUG nova.compute.manager [req-807a49bf-805f-4e95-b8a5-0ba470a7797a req-6038f930-c6f9-4997-bc4c-96967fe69d3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.002 252257 WARNING nova.compute.manager [req-807a49bf-805f-4e95-b8a5-0ba470a7797a req-6038f930-c6f9-4997-bc4c-96967fe69d3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.011 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.033 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.036 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Start _get_guest_xml network_info=[{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:5b:36:6b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'b01ef350-a5c1-4fa3-afd2-96dd9326703d', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.037 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.060 252257 WARNING nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.068 252257 DEBUG nova.virt.libvirt.host [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.069 252257 DEBUG nova.virt.libvirt.host [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.075 252257 DEBUG nova.virt.libvirt.host [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.076 252257 DEBUG nova.virt.libvirt.host [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.078 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.079 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.080 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.080 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.080 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.081 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.081 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.081 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.082 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.082 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.082 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.083 252257 DEBUG nova.virt.hardware [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.083 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.101 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.154 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 170 op/s
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.481 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.523 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539563 podman[334187]: 2025-11-29 08:19:02.53392978 +0000 UTC m=+0.083728512 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:19:02 np0005539563 podman[334186]: 2025-11-29 08:19:02.557855058 +0000 UTC m=+0.107650510 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:19:02 np0005539563 podman[334188]: 2025-11-29 08:19:02.581708796 +0000 UTC m=+0.114396264 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 03:19:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709759204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.620 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:02 np0005539563 nova_compute[252253]: 2025-11-29 08:19:02.662 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846496273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.124 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.127 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:03.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058452510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.621 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.624 252257 DEBUG nova.virt.libvirt.vif [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1419637348',display_name='tempest-ServerStableDeviceRescueTest-server-1419637348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1419637348',id=131,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-z40kifsz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:42Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=1fad2d6f-5a00-43ad-af43-00916509fc61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:5b:36:6b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.625 252257 DEBUG nova.network.os_vif_util [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:5b:36:6b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.627 252257 DEBUG nova.network.os_vif_util [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.634 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.639 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.659 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <uuid>1fad2d6f-5a00-43ad-af43-00916509fc61</uuid>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <name>instance-00000083</name>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1419637348</nova:name>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:19:02</nova:creationTime>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <nova:port uuid="96eb3aec-07ea-42dc-8983-3d61e9f8b5fc">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <entry name="serial">1fad2d6f-5a00-43ad-af43-00916509fc61</entry>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <entry name="uuid">1fad2d6f-5a00-43ad-af43-00916509fc61</entry>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1fad2d6f-5a00-43ad-af43-00916509fc61_disk">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/1fad2d6f-5a00-43ad-af43-00916509fc61_disk.rescue">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <target dev="sdb" bus="scsi"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <boot order="1"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:5b:36:6b"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <target dev="tap96eb3aec-07"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/console.log" append="off"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:19:03 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:19:03 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:19:03 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:19:03 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.675 252257 INFO nova.virt.libvirt.driver [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance destroyed successfully.#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.748 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.748 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.749 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.749 252257 DEBUG nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:5b:36:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.750 252257 INFO nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Using config drive#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.790 252257 DEBUG nova.storage.rbd_utils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.814 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:03 np0005539563 nova_compute[252253]: 2025-11-29 08:19:03.848 252257 DEBUG nova.objects.instance [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'keypairs' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.108 252257 DEBUG nova.compute.manager [req-0e0e16a4-4500-466e-a861-bc608ac77dd1 req-e6301c9d-600f-49ed-a16f-3a04bf24be13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.108 252257 DEBUG oslo_concurrency.lockutils [req-0e0e16a4-4500-466e-a861-bc608ac77dd1 req-e6301c9d-600f-49ed-a16f-3a04bf24be13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.110 252257 DEBUG oslo_concurrency.lockutils [req-0e0e16a4-4500-466e-a861-bc608ac77dd1 req-e6301c9d-600f-49ed-a16f-3a04bf24be13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.111 252257 DEBUG oslo_concurrency.lockutils [req-0e0e16a4-4500-466e-a861-bc608ac77dd1 req-e6301c9d-600f-49ed-a16f-3a04bf24be13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.111 252257 DEBUG nova.compute.manager [req-0e0e16a4-4500-466e-a861-bc608ac77dd1 req-e6301c9d-600f-49ed-a16f-3a04bf24be13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.111 252257 WARNING nova.compute.manager [req-0e0e16a4-4500-466e-a861-bc608ac77dd1 req-e6301c9d-600f-49ed-a16f-3a04bf24be13 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.192 252257 INFO nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Creating config drive at /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config.rescue#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.197 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkg439udh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.345 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkg439udh" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 179 op/s
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.397 252257 DEBUG nova.storage.rbd_utils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.402 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config.rescue 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.637 252257 DEBUG oslo_concurrency.processutils [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config.rescue 1fad2d6f-5a00-43ad-af43-00916509fc61_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.640 252257 INFO nova.virt.libvirt.driver [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Deleting local config drive /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:19:04 np0005539563 kernel: tap96eb3aec-07: entered promiscuous mode
Nov 29 03:19:04 np0005539563 NetworkManager[48981]: <info>  [1764404344.7118] manager: (tap96eb3aec-07): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.713 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:04Z|00534|binding|INFO|Claiming lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for this chassis.
Nov 29 03:19:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:04Z|00535|binding|INFO|96eb3aec-07ea-42dc-8983-3d61e9f8b5fc: Claiming fa:16:3e:5b:36:6b 10.100.0.6
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.722 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:36:6b 10.100.0.6'], port_security=['fa:16:3e:5b:36:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1fad2d6f-5a00-43ad-af43-00916509fc61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '5', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.724 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.726 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:19:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:04Z|00536|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc ovn-installed in OVS
Nov 29 03:19:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:04Z|00537|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc up in Southbound
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:04 np0005539563 nova_compute[252253]: 2025-11-29 08:19:04.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.745 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cdb21c-67ae-4e63-945b-aaff10d1ae32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.746 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32485b0e-11 in ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.749 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32485b0e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.749 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc07ea29-3a54-414f-9a71-92d9a5d30350]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.750 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[846fb6bc-a7a8-4813-80a1-5ea741d5c56f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 systemd-machined[213024]: New machine qemu-62-instance-00000083.
Nov 29 03:19:04 np0005539563 systemd-udevd[334390]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.771 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[fbceaf34-acd1-47b5-92ec-a6d1594a1e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 systemd[1]: Started Virtual Machine qemu-62-instance-00000083.
Nov 29 03:19:04 np0005539563 NetworkManager[48981]: <info>  [1764404344.7893] device (tap96eb3aec-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:04 np0005539563 NetworkManager[48981]: <info>  [1764404344.7901] device (tap96eb3aec-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.806 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8b212753-77a2-4f4a-9dc2-16db84cc1218]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.849 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e5a3cd-957c-4788-acba-b2d6b549f2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.856 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e13909cb-e6e2-427d-be44-b43d1b77b070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 NetworkManager[48981]: <info>  [1764404344.8575] manager: (tap32485b0e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.894 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9738f3-1975-4a78-9954-fb0a78c7adef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.899 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[af6f01f3-a093-4855-bf36-4d21b2bb51fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.924 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.924 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.924 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:04 np0005539563 NetworkManager[48981]: <info>  [1764404344.9316] device (tap32485b0e-10): carrier: link connected
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.940 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9eaaa686-173b-4a15-9f75-03baf275474c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.964 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[630ba558-01b8-4715-814d-cda057524172]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731270, 'reachable_time': 36421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334422, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:04.988 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81b65511-35bf-45c5-ad1e-eb0f11abaedc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:4406'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731270, 'tstamp': 731270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334423, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.010 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2e83fb-2d80-4082-a257-55f309b3bf82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731270, 'reachable_time': 36421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334424, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.045 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f26eeb9f-2b68-4b95-a184-88b5852b5665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.124 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[714807f1-5da0-4f89-9e86-f74f0e5028fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.125 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.126 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.126 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:05 np0005539563 NetworkManager[48981]: <info>  [1764404345.1292] manager: (tap32485b0e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:05 np0005539563 kernel: tap32485b0e-10: entered promiscuous mode
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.132 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:05 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:05Z|00538|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.172 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.173 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.174 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd0d2b7-1b6a-4003-9275-c61886e3fe69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.174 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:05.175 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'env', 'PROCESS_TAG=haproxy-32485b0e-177b-4dfd-a55a-0249528f32e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32485b0e-177b-4dfd-a55a-0249528f32e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:05.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.394 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 1fad2d6f-5a00-43ad-af43-00916509fc61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.395 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404345.3929908, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.396 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.413 252257 DEBUG nova.compute.manager [None req-19c0d90c-a132-4f61-b81f-b945bf68e9ef 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.430 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.440 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:05.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.483 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.484 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404345.3945286, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.484 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.519 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.535 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:05 np0005539563 podman[334516]: 2025-11-29 08:19:05.644572579 +0000 UTC m=+0.069281630 container create 8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:19:05 np0005539563 nova_compute[252253]: 2025-11-29 08:19:05.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:05 np0005539563 systemd[1]: Started libpod-conmon-8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df.scope.
Nov 29 03:19:05 np0005539563 podman[334516]: 2025-11-29 08:19:05.609981711 +0000 UTC m=+0.034690722 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f296093a9c6fa03bf4e5e60f198c131eb082a5598ef9039b689935c9d2583ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:05 np0005539563 podman[334516]: 2025-11-29 08:19:05.753431501 +0000 UTC m=+0.178140512 container init 8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:19:05 np0005539563 podman[334516]: 2025-11-29 08:19:05.759141956 +0000 UTC m=+0.183850967 container start 8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:19:05 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [NOTICE]   (334536) : New worker (334538) forked
Nov 29 03:19:05 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [NOTICE]   (334536) : Loading success.
Nov 29 03:19:05 np0005539563 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 29 03:19:05 np0005539563 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000085.scope: Consumed 16.098s CPU time.
Nov 29 03:19:05 np0005539563 systemd-machined[213024]: Machine qemu-61-instance-00000085 terminated.
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.195 252257 DEBUG nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.197 252257 DEBUG oslo_concurrency.lockutils [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.199 252257 DEBUG oslo_concurrency.lockutils [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.200 252257 DEBUG oslo_concurrency.lockutils [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.201 252257 DEBUG nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.201 252257 WARNING nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.201 252257 DEBUG nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.202 252257 DEBUG oslo_concurrency.lockutils [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.202 252257 DEBUG oslo_concurrency.lockutils [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.202 252257 DEBUG oslo_concurrency.lockutils [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.202 252257 DEBUG nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.203 252257 WARNING nova.compute.manager [req-7cba0c2b-a996-4327-878f-c206cc5a05d5 req-5492e6ce-ca32-4531-861b-1d00da0f008a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:19:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.661 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance shutdown successfully after 24 seconds.#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.670 252257 INFO nova.virt.libvirt.driver [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance destroyed successfully.#033[00m
Nov 29 03:19:06 np0005539563 nova_compute[252253]: 2025-11-29 08:19:06.676 252257 INFO nova.virt.libvirt.driver [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance destroyed successfully.#033[00m
Nov 29 03:19:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:07.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.521 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.617 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deleting instance files /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0_del#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.618 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deletion of /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0_del complete#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.775 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.776 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Creating image(s)#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.815 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.856 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.886 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.890 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.954 252257 INFO nova.compute.manager [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Unrescuing#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.955 252257 DEBUG oslo_concurrency.lockutils [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.955 252257 DEBUG oslo_concurrency.lockutils [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.956 252257 DEBUG nova.network.neutron [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.965 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.966 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.966 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:07 np0005539563 nova_compute[252253]: 2025-11-29 08:19:07.967 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.001 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.005 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 b91293df-e55b-4e94-92e0-99e06f96a4d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.180760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348180908, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 650, "num_deletes": 252, "total_data_size": 718607, "memory_usage": 730968, "flush_reason": "Manual Compaction"}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348190915, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 709396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46991, "largest_seqno": 47640, "table_properties": {"data_size": 706014, "index_size": 1226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8177, "raw_average_key_size": 19, "raw_value_size": 699105, "raw_average_value_size": 1676, "num_data_blocks": 54, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404309, "oldest_key_time": 1764404309, "file_creation_time": 1764404348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 10166 microseconds, and 4213 cpu microseconds.
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.190992) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 709396 bytes OK
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.191023) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.192847) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.192870) EVENT_LOG_v1 {"time_micros": 1764404348192864, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.192889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 715173, prev total WAL file size 715173, number of live WAL files 2.
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.193777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(692KB)], [101(10MB)]
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348193850, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11693680, "oldest_snapshot_seqno": -1}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7819 keys, 9808128 bytes, temperature: kUnknown
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348252438, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9808128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9758258, "index_size": 29214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 204516, "raw_average_key_size": 26, "raw_value_size": 9621011, "raw_average_value_size": 1230, "num_data_blocks": 1134, "num_entries": 7819, "num_filter_entries": 7819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.252829) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9808128 bytes
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.253885) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.1 rd, 167.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(30.3) write-amplify(13.8) OK, records in: 8335, records dropped: 516 output_compression: NoCompression
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.253906) EVENT_LOG_v1 {"time_micros": 1764404348253897, "job": 60, "event": "compaction_finished", "compaction_time_micros": 58724, "compaction_time_cpu_micros": 24386, "output_level": 6, "num_output_files": 1, "total_output_size": 9808128, "num_input_records": 8335, "num_output_records": 7819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348254150, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404348255667, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.193584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.255829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.255837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.255841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.255844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:19:08.255847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:19:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 83 KiB/s wr, 158 op/s
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.442 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 b91293df-e55b-4e94-92e0-99e06f96a4d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.516 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] resizing rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.627 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.628 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Ensure instance console log exists: /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.628 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.629 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.629 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.630 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.634 252257 WARNING nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.640 252257 DEBUG nova.virt.libvirt.host [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.640 252257 DEBUG nova.virt.libvirt.host [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.644 252257 DEBUG nova.virt.libvirt.host [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.645 252257 DEBUG nova.virt.libvirt.host [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.646 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.646 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.647 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.647 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.647 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.647 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.647 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.648 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.648 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.648 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.648 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.648 252257 DEBUG nova.virt.hardware [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.649 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'vcpu_model' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:08 np0005539563 nova_compute[252253]: 2025-11-29 08:19:08.667 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847169781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.127 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.153 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.158 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.194 252257 DEBUG nova.network.neutron [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.220 252257 DEBUG oslo_concurrency.lockutils [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.222 252257 DEBUG nova.objects.instance [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:09.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:09 np0005539563 kernel: tap96eb3aec-07 (unregistering): left promiscuous mode
Nov 29 03:19:09 np0005539563 NetworkManager[48981]: <info>  [1764404349.2962] device (tap96eb3aec-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00539|binding|INFO|Releasing lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc from this chassis (sb_readonly=0)
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00540|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc down in Southbound
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00541|binding|INFO|Removing iface tap96eb3aec-07 ovn-installed in OVS
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.316 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:36:6b 10.100.0.6'], port_security=['fa:16:3e:5b:36:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1fad2d6f-5a00-43ad-af43-00916509fc61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '6', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.317 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.319 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.320 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[15fb9e58-86c3-4230-a02f-b0aa4551ef3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.321 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace which is not needed anymore#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 29 03:19:09 np0005539563 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000083.scope: Consumed 4.603s CPU time.
Nov 29 03:19:09 np0005539563 systemd-machined[213024]: Machine qemu-62-instance-00000083 terminated.
Nov 29 03:19:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.482 252257 INFO nova.virt.libvirt.driver [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance destroyed successfully.#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.483 252257 DEBUG nova.objects.instance [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:09 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [NOTICE]   (334536) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:09 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [NOTICE]   (334536) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:09 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [WARNING]  (334536) : Exiting Master process...
Nov 29 03:19:09 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [WARNING]  (334536) : Exiting Master process...
Nov 29 03:19:09 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [ALERT]    (334536) : Current worker (334538) exited with code 143 (Terminated)
Nov 29 03:19:09 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[334532]: [WARNING]  (334536) : All workers exited. Exiting... (0)
Nov 29 03:19:09 np0005539563 systemd[1]: libpod-8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df.scope: Deactivated successfully.
Nov 29 03:19:09 np0005539563 podman[334820]: 2025-11-29 08:19:09.512451244 +0000 UTC m=+0.072640531 container died 8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:19:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6f296093a9c6fa03bf4e5e60f198c131eb082a5598ef9039b689935c9d2583ae-merged.mount: Deactivated successfully.
Nov 29 03:19:09 np0005539563 podman[334820]: 2025-11-29 08:19:09.55394154 +0000 UTC m=+0.114130817 container cleanup 8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.570 252257 DEBUG nova.compute.manager [req-feaa2007-9595-4521-9565-1b64137ee736 req-f152e04a-6e15-4db4-8019-ec88af3cc73b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.571 252257 DEBUG oslo_concurrency.lockutils [req-feaa2007-9595-4521-9565-1b64137ee736 req-f152e04a-6e15-4db4-8019-ec88af3cc73b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.571 252257 DEBUG oslo_concurrency.lockutils [req-feaa2007-9595-4521-9565-1b64137ee736 req-f152e04a-6e15-4db4-8019-ec88af3cc73b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.571 252257 DEBUG oslo_concurrency.lockutils [req-feaa2007-9595-4521-9565-1b64137ee736 req-f152e04a-6e15-4db4-8019-ec88af3cc73b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.572 252257 DEBUG nova.compute.manager [req-feaa2007-9595-4521-9565-1b64137ee736 req-f152e04a-6e15-4db4-8019-ec88af3cc73b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.572 252257 WARNING nova.compute.manager [req-feaa2007-9595-4521-9565-1b64137ee736 req-f152e04a-6e15-4db4-8019-ec88af3cc73b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:19:09 np0005539563 kernel: tap96eb3aec-07: entered promiscuous mode
Nov 29 03:19:09 np0005539563 systemd-udevd[334801]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:09 np0005539563 NetworkManager[48981]: <info>  [1764404349.5789] manager: (tap96eb3aec-07): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Nov 29 03:19:09 np0005539563 systemd[1]: libpod-conmon-8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df.scope: Deactivated successfully.
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00542|binding|INFO|Claiming lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for this chassis.
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00543|binding|INFO|96eb3aec-07ea-42dc-8983-3d61e9f8b5fc: Claiming fa:16:3e:5b:36:6b 10.100.0.6
Nov 29 03:19:09 np0005539563 NetworkManager[48981]: <info>  [1764404349.5904] device (tap96eb3aec-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:09 np0005539563 NetworkManager[48981]: <info>  [1764404349.5916] device (tap96eb3aec-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.592 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:36:6b 10.100.0.6'], port_security=['fa:16:3e:5b:36:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1fad2d6f-5a00-43ad-af43-00916509fc61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '7', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00544|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc ovn-installed in OVS
Nov 29 03:19:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:09Z|00545|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc up in Southbound
Nov 29 03:19:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600629391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.613 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.616 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 systemd-machined[213024]: New machine qemu-63-instance-00000083.
Nov 29 03:19:09 np0005539563 systemd[1]: Started Virtual Machine qemu-63-instance-00000083.
Nov 29 03:19:09 np0005539563 podman[334868]: 2025-11-29 08:19:09.635125231 +0000 UTC m=+0.042139093 container remove 8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.637 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.640 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e6a535-3112-4f4b-87a8-50efa3825bf3]: (4, ('Sat Nov 29 08:19:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df)\n8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df\nSat Nov 29 08:19:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df)\n8eada95fb1aea719471f26287095dc579652c578bcf2e86f5ff8dcbb7e2760df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.641 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[568d1e8d-768a-4a51-9b5e-0283b57161ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.642 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.642 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <uuid>b91293df-e55b-4e94-92e0-99e06f96a4d0</uuid>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <name>instance-00000085</name>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerShowV247Test-server-1186309825</nova:name>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:19:08</nova:creationTime>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:user uuid="d043b72e9a1f4575835e938f1a090e3a">tempest-ServerShowV247Test-1340079126-project-member</nova:user>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <nova:project uuid="4b2d7c5689334b7eb116fab1fd5dedac">tempest-ServerShowV247Test-1340079126</nova:project>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <nova:ports/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <entry name="serial">b91293df-e55b-4e94-92e0-99e06f96a4d0</entry>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <entry name="uuid">b91293df-e55b-4e94-92e0-99e06f96a4d0</entry>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:19:09 np0005539563 kernel: tap32485b0e-10: left promiscuous mode
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/b91293df-e55b-4e94-92e0-99e06f96a4d0_disk">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/console.log" append="off"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:19:09 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:19:09 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:19:09 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:19:09 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.646 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.661 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.663 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bc4630-1332-49ec-92f4-a688bea416a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.678 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[addb9744-a823-470d-98bf-6a16fb108ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.679 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5ba676-9028-48e4-b2a8-f79450a4c5c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.692 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b0409421-db5d-4a21-a4e6-98aa7e755628]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731261, 'reachable_time': 18811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334893, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 systemd[1]: run-netns-ovnmeta\x2d32485b0e\x2d177b\x2d4dfd\x2da55a\x2d0249528f32e1.mount: Deactivated successfully.
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.695 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.695 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[29afdb90-ce76-41d7-b44a-c1ee9a3f369c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.696 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.697 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.703 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.703 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.704 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Using config drive#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.708 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e976bf05-491e-488b-bc33-726fa24d244b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.709 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32485b0e-11 in ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.711 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32485b0e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.712 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2fdb6a-e74f-4ab8-99e2-fac6484bbbed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.713 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ff65fdf3-64e3-4c61-9a05-1f5352d4aedc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.730 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[28a32144-f0d1-4e72-b860-518e5958b769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.737 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.743 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0db29e26-bccc-43fe-afbe-2d987978491f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.766 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'ec2_ids' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.782 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[dba82803-62c5-4f5f-b56b-2709a87aa62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 NetworkManager[48981]: <info>  [1764404349.7910] manager: (tap32485b0e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/244)
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.789 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d46aa1a1-4a56-4cd2-8497-ec2d1e37153a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.802 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'keypairs' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.824 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6be4b8b5-7e35-4feb-87f3-a05381fadb2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.826 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[51aacb16-510c-4070-a1a5-585c514a7c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 NetworkManager[48981]: <info>  [1764404349.8499] device (tap32485b0e-10): carrier: link connected
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.858 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e2725193-6d22-40c3-99c9-bb410c9fcdb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.881 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6a6c35-432d-4261-b2e3-76594696b6e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334937, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.912 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[39b2b38d-d80a-44e7-80a1-7bd023d39267]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:4406'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731762, 'tstamp': 731762}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334938, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.938 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[19a52c06-7f37-4163-8091-c2023e6c7b96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334939, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:09.979 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13c18e51-8217-4613-8c8e-0c276744cf9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.982 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Creating config drive at /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config#033[00m
Nov 29 03:19:09 np0005539563 nova_compute[252253]: 2025-11-29 08:19:09.987 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp03cre43n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.060 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2257aa7e-3184-47e3-a856-f4c1c14e99fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.062 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.062 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.063 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:10 np0005539563 NetworkManager[48981]: <info>  [1764404350.0658] manager: (tap32485b0e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.065 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:10 np0005539563 kernel: tap32485b0e-10: entered promiscuous mode
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.068 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:10Z|00546|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.072 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.074 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b342f9e5-b6a8-40a5-9596-fbe9562fdbf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.075 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/32485b0e-177b-4dfd-a55a-0249528f32e1.pid.haproxy
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:10.076 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'env', 'PROCESS_TAG=haproxy-32485b0e-177b-4dfd-a55a-0249528f32e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32485b0e-177b-4dfd-a55a-0249528f32e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.088 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.141 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp03cre43n" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.173 252257 DEBUG nova.storage.rbd_utils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] rbd image b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.181 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.230 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 1fad2d6f-5a00-43ad-af43-00916509fc61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.231 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404350.2301056, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.231 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.277 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.286 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.321 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.321 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404350.2445602, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.322 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.350 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.352 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 416 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.383 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.386 252257 DEBUG oslo_concurrency.processutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config b91293df-e55b-4e94-92e0-99e06f96a4d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.386 252257 INFO nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deleting local config drive /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:10 np0005539563 systemd-machined[213024]: New machine qemu-64-instance-00000085.
Nov 29 03:19:10 np0005539563 systemd[1]: Started Virtual Machine qemu-64-instance-00000085.
Nov 29 03:19:10 np0005539563 podman[335074]: 2025-11-29 08:19:10.482411889 +0000 UTC m=+0.052481685 container create 08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:19:10 np0005539563 systemd[1]: Started libpod-conmon-08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c.scope.
Nov 29 03:19:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:10 np0005539563 podman[335074]: 2025-11-29 08:19:10.454957884 +0000 UTC m=+0.025027710 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf9bf3f82f6d56245d299b13d38fd0940f3b33cd648a8d98518033fca0e3b4b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:10 np0005539563 podman[335074]: 2025-11-29 08:19:10.560699742 +0000 UTC m=+0.130769568 container init 08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:19:10 np0005539563 podman[335074]: 2025-11-29 08:19:10.568534715 +0000 UTC m=+0.138604511 container start 08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:19:10 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [NOTICE]   (335106) : New worker (335108) forked
Nov 29 03:19:10 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [NOTICE]   (335106) : Loading success.
Nov 29 03:19:10 np0005539563 nova_compute[252253]: 2025-11-29 08:19:10.681 252257 DEBUG nova.compute.manager [None req-7acc7d77-a444-467f-9b35-792005e0fcac 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.011 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for b91293df-e55b-4e94-92e0-99e06f96a4d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.011 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404351.0109053, b91293df-e55b-4e94-92e0-99e06f96a4d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.011 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.013 252257 DEBUG nova.compute.manager [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.014 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.016 252257 INFO nova.virt.libvirt.driver [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance spawned successfully.#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.017 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.046 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.049 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.056 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.056 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.056 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.057 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.057 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.057 252257 DEBUG nova.virt.libvirt.driver [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:11.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.279 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.280 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404351.0118759, b91293df-e55b-4e94-92e0-99e06f96a4d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.280 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.303 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.306 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.329 252257 DEBUG nova.compute.manager [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.330 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.374 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.374 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.375 252257 DEBUG nova.objects.instance [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.443 252257 DEBUG oslo_concurrency.lockutils [None req-2ac064cb-3e4c-4da7-bb2e-864cb7a0333d d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.662 252257 DEBUG nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.663 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.663 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.664 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.664 252257 DEBUG nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.665 252257 WARNING nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.666 252257 DEBUG nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.666 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.667 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.667 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.668 252257 DEBUG nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.668 252257 WARNING nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.668 252257 DEBUG nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.669 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.669 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.669 252257 DEBUG oslo_concurrency.lockutils [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.670 252257 DEBUG nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.670 252257 WARNING nova.compute.manager [req-0565d98e-a7dd-457c-97c5-374d8a1e6be5 req-33f85508-ac19-43b0-a8d7-9693c2587022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:11 np0005539563 nova_compute[252253]: 2025-11-29 08:19:11.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 416 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 184 op/s
Nov 29 03:19:12 np0005539563 nova_compute[252253]: 2025-11-29 08:19:12.523 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:12 np0005539563 nova_compute[252253]: 2025-11-29 08:19:12.527 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:19:12
Nov 29 03:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Nov 29 03:19:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:13.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.470 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "b91293df-e55b-4e94-92e0-99e06f96a4d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.471 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "b91293df-e55b-4e94-92e0-99e06f96a4d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.471 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "b91293df-e55b-4e94-92e0-99e06f96a4d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.471 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "b91293df-e55b-4e94-92e0-99e06f96a4d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.472 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "b91293df-e55b-4e94-92e0-99e06f96a4d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.473 252257 INFO nova.compute.manager [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Terminating instance#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.474 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "refresh_cache-b91293df-e55b-4e94-92e0-99e06f96a4d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.474 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquired lock "refresh_cache-b91293df-e55b-4e94-92e0-99e06f96a4d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:13 np0005539563 nova_compute[252253]: 2025-11-29 08:19:13.474 252257 DEBUG nova.network.neutron [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:19:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:19:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 419 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Nov 29 03:19:14 np0005539563 nova_compute[252253]: 2025-11-29 08:19:14.792 252257 DEBUG nova.network.neutron [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.193 252257 DEBUG nova.network.neutron [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.208 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Releasing lock "refresh_cache-b91293df-e55b-4e94-92e0-99e06f96a4d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.209 252257 DEBUG nova.compute.manager [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:19:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:15.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:15 np0005539563 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 29 03:19:15 np0005539563 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000085.scope: Consumed 4.903s CPU time.
Nov 29 03:19:15 np0005539563 systemd-machined[213024]: Machine qemu-64-instance-00000085 terminated.
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.431 252257 INFO nova.virt.libvirt.driver [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance destroyed successfully.#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.431 252257 DEBUG nova.objects.instance [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lazy-loading 'resources' on Instance uuid b91293df-e55b-4e94-92e0-99e06f96a4d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.926 252257 INFO nova.virt.libvirt.driver [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deleting instance files /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0_del#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.928 252257 INFO nova.virt.libvirt.driver [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deletion of /var/lib/nova/instances/b91293df-e55b-4e94-92e0-99e06f96a4d0_del complete#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.990 252257 INFO nova.compute.manager [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.991 252257 DEBUG oslo.service.loopingcall [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.992 252257 DEBUG nova.compute.manager [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:19:15 np0005539563 nova_compute[252253]: 2025-11-29 08:19:15.992 252257 DEBUG nova.network.neutron [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:19:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 341 op/s
Nov 29 03:19:16 np0005539563 nova_compute[252253]: 2025-11-29 08:19:16.787 252257 DEBUG nova.network.neutron [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:16 np0005539563 nova_compute[252253]: 2025-11-29 08:19:16.805 252257 DEBUG nova.network.neutron [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:16 np0005539563 nova_compute[252253]: 2025-11-29 08:19:16.821 252257 INFO nova.compute.manager [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Took 0.83 seconds to deallocate network for instance.#033[00m
Nov 29 03:19:16 np0005539563 nova_compute[252253]: 2025-11-29 08:19:16.863 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:16 np0005539563 nova_compute[252253]: 2025-11-29 08:19:16.864 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:16 np0005539563 nova_compute[252253]: 2025-11-29 08:19:16.954 252257 DEBUG oslo_concurrency.processutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:17.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650006821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.412 252257 DEBUG oslo_concurrency.processutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.421 252257 DEBUG nova.compute.provider_tree [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.436 252257 DEBUG nova.scheduler.client.report [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.459 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:17.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.505 252257 INFO nova.scheduler.client.report [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Deleted allocations for instance b91293df-e55b-4e94-92e0-99e06f96a4d0#033[00m
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.528 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:17 np0005539563 nova_compute[252253]: 2025-11-29 08:19:17.565 252257 DEBUG oslo_concurrency.lockutils [None req-9d3ff6e2-b2b1-4316-ae99-7185d26b1961 d043b72e9a1f4575835e938f1a090e3a 4b2d7c5689334b7eb116fab1fd5dedac - - default default] Lock "b91293df-e55b-4e94-92e0-99e06f96a4d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.2 MiB/s wr, 298 op/s
Nov 29 03:19:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:19.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:19.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 343 op/s
Nov 29 03:19:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:21.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 234 op/s
Nov 29 03:19:22 np0005539563 nova_compute[252253]: 2025-11-29 08:19:22.527 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539563 nova_compute[252253]: 2025-11-29 08:19:22.530 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:22Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:36:6b 10.100.0.6
Nov 29 03:19:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005871916003629257 of space, bias 1.0, pg target 1.761574801088777 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:19:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 812c7207-3e91-470c-a0f8-d459aa4cce1d does not exist
Nov 29 03:19:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e55b002c-e0a7-4e93-b591-5101068ed629 does not exist
Nov 29 03:19:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0381fc40-a333-4c63-8cd6-5154f51cff90 does not exist
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:19:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 257 op/s
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:24 np0005539563 podman[335529]: 2025-11-29 08:19:24.963903241 +0000 UTC m=+0.066012301 container create 58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:19:25 np0005539563 systemd[1]: Started libpod-conmon-58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d.scope.
Nov 29 03:19:25 np0005539563 podman[335529]: 2025-11-29 08:19:24.927137584 +0000 UTC m=+0.029246744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:25 np0005539563 podman[335529]: 2025-11-29 08:19:25.09107237 +0000 UTC m=+0.193181440 container init 58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:19:25 np0005539563 podman[335529]: 2025-11-29 08:19:25.098769529 +0000 UTC m=+0.200878579 container start 58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:25 np0005539563 podman[335529]: 2025-11-29 08:19:25.102689255 +0000 UTC m=+0.204798305 container attach 58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:19:25 np0005539563 romantic_bassi[335547]: 167 167
Nov 29 03:19:25 np0005539563 systemd[1]: libpod-58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d.scope: Deactivated successfully.
Nov 29 03:19:25 np0005539563 podman[335529]: 2025-11-29 08:19:25.107353622 +0000 UTC m=+0.209462662 container died 58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:19:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d9cbb97769370d425955f14b33d7b9426645546eae3335bc469901e2e791db29-merged.mount: Deactivated successfully.
Nov 29 03:19:25 np0005539563 podman[335529]: 2025-11-29 08:19:25.142624628 +0000 UTC m=+0.244733678 container remove 58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:19:25 np0005539563 systemd[1]: libpod-conmon-58c26931083ee68a1c97c21da95dde5f7213ecc63709d4214716f4de20a5104d.scope: Deactivated successfully.
Nov 29 03:19:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:25.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:25 np0005539563 podman[335570]: 2025-11-29 08:19:25.353290961 +0000 UTC m=+0.078321645 container create b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:25 np0005539563 systemd[1]: Started libpod-conmon-b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0.scope.
Nov 29 03:19:25 np0005539563 podman[335570]: 2025-11-29 08:19:25.322120466 +0000 UTC m=+0.047151230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf502b3e37b735885abb86288d911e6f3599f19bc4a90a82b43e0a902953cab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf502b3e37b735885abb86288d911e6f3599f19bc4a90a82b43e0a902953cab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf502b3e37b735885abb86288d911e6f3599f19bc4a90a82b43e0a902953cab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf502b3e37b735885abb86288d911e6f3599f19bc4a90a82b43e0a902953cab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf502b3e37b735885abb86288d911e6f3599f19bc4a90a82b43e0a902953cab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:25 np0005539563 podman[335570]: 2025-11-29 08:19:25.467696874 +0000 UTC m=+0.192727558 container init b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_stonebraker, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:19:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:25 np0005539563 podman[335570]: 2025-11-29 08:19:25.4774847 +0000 UTC m=+0.202515364 container start b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:19:25 np0005539563 podman[335570]: 2025-11-29 08:19:25.481125758 +0000 UTC m=+0.206156472 container attach b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:26 np0005539563 naughty_stonebraker[335586]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:19:26 np0005539563 naughty_stonebraker[335586]: --> relative data size: 1.0
Nov 29 03:19:26 np0005539563 naughty_stonebraker[335586]: --> All data devices are unavailable
Nov 29 03:19:26 np0005539563 systemd[1]: libpod-b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0.scope: Deactivated successfully.
Nov 29 03:19:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 283 op/s
Nov 29 03:19:26 np0005539563 podman[335601]: 2025-11-29 08:19:26.413909185 +0000 UTC m=+0.026397497 container died b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:19:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-daf502b3e37b735885abb86288d911e6f3599f19bc4a90a82b43e0a902953cab-merged.mount: Deactivated successfully.
Nov 29 03:19:26 np0005539563 podman[335601]: 2025-11-29 08:19:26.496561057 +0000 UTC m=+0.109049359 container remove b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:19:26 np0005539563 systemd[1]: libpod-conmon-b75d5893cc4d552626833d7e8955b3ec4c1a88754a65ecae152ab2b7fdf81db0.scope: Deactivated successfully.
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.226919893 +0000 UTC m=+0.060781519 container create 5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:19:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:27 np0005539563 systemd[1]: Started libpod-conmon-5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1.scope.
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.197418564 +0000 UTC m=+0.031280250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.328767786 +0000 UTC m=+0.162629412 container init 5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.340929515 +0000 UTC m=+0.174791111 container start 5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.344412939 +0000 UTC m=+0.178274555 container attach 5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:19:27 np0005539563 intelligent_kare[335774]: 167 167
Nov 29 03:19:27 np0005539563 systemd[1]: libpod-5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1.scope: Deactivated successfully.
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.349180919 +0000 UTC m=+0.183042575 container died 5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:19:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9243021a0b3f1ac6aaabdb09bb005d4021728d8e1f5e79a509558a4d6f566560-merged.mount: Deactivated successfully.
Nov 29 03:19:27 np0005539563 podman[335758]: 2025-11-29 08:19:27.401250381 +0000 UTC m=+0.235112017 container remove 5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:27 np0005539563 systemd[1]: libpod-conmon-5c195b8c3d740a9467487d794943ed38e478d4f8db8ba012ab10fe6574f946f1.scope: Deactivated successfully.
Nov 29 03:19:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:27.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:27 np0005539563 nova_compute[252253]: 2025-11-29 08:19:27.531 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 29 03:19:27 np0005539563 podman[335798]: 2025-11-29 08:19:27.604087472 +0000 UTC m=+0.050447859 container create c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:19:27 np0005539563 systemd[1]: Started libpod-conmon-c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192.scope.
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 29 03:19:27 np0005539563 podman[335798]: 2025-11-29 08:19:27.583103592 +0000 UTC m=+0.029463979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa4b7f693e49fb28721fc93e473bb17d3a1da591fd1bc2f1888f0ef303f7f9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa4b7f693e49fb28721fc93e473bb17d3a1da591fd1bc2f1888f0ef303f7f9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa4b7f693e49fb28721fc93e473bb17d3a1da591fd1bc2f1888f0ef303f7f9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa4b7f693e49fb28721fc93e473bb17d3a1da591fd1bc2f1888f0ef303f7f9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:27 np0005539563 podman[335798]: 2025-11-29 08:19:27.711664889 +0000 UTC m=+0.158025276 container init c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_golick, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:19:27 np0005539563 podman[335798]: 2025-11-29 08:19:27.722885973 +0000 UTC m=+0.169246330 container start c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:19:27 np0005539563 podman[335798]: 2025-11-29 08:19:27.72607371 +0000 UTC m=+0.172434097 container attach c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1255212359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:19:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1255212359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:19:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 218 op/s
Nov 29 03:19:28 np0005539563 zen_golick[335814]: {
Nov 29 03:19:28 np0005539563 zen_golick[335814]:    "0": [
Nov 29 03:19:28 np0005539563 zen_golick[335814]:        {
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "devices": [
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "/dev/loop3"
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            ],
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "lv_name": "ceph_lv0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "lv_size": "7511998464",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "name": "ceph_lv0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "tags": {
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.cluster_name": "ceph",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.crush_device_class": "",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.encrypted": "0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.osd_id": "0",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.type": "block",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:                "ceph.vdo": "0"
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            },
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "type": "block",
Nov 29 03:19:28 np0005539563 zen_golick[335814]:            "vg_name": "ceph_vg0"
Nov 29 03:19:28 np0005539563 zen_golick[335814]:        }
Nov 29 03:19:28 np0005539563 zen_golick[335814]:    ]
Nov 29 03:19:28 np0005539563 zen_golick[335814]: }
Nov 29 03:19:28 np0005539563 systemd[1]: libpod-c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192.scope: Deactivated successfully.
Nov 29 03:19:28 np0005539563 podman[335798]: 2025-11-29 08:19:28.500356459 +0000 UTC m=+0.946716816 container died c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:19:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ffa4b7f693e49fb28721fc93e473bb17d3a1da591fd1bc2f1888f0ef303f7f9d-merged.mount: Deactivated successfully.
Nov 29 03:19:28 np0005539563 podman[335798]: 2025-11-29 08:19:28.562408101 +0000 UTC m=+1.008768498 container remove c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_golick, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:28 np0005539563 systemd[1]: libpod-conmon-c579bd4562bcc35ccb54956a471d1ce2f98feaa224025fa28db3ca69d3e4b192.scope: Deactivated successfully.
Nov 29 03:19:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 29 03:19:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 03:19:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 29 03:19:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:29.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.322783021 +0000 UTC m=+0.037637411 container create 8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:19:29 np0005539563 systemd[1]: Started libpod-conmon-8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e.scope.
Nov 29 03:19:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.306710656 +0000 UTC m=+0.021565066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.415574688 +0000 UTC m=+0.130429108 container init 8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.422921087 +0000 UTC m=+0.137775477 container start 8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.425948539 +0000 UTC m=+0.140802929 container attach 8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:19:29 np0005539563 gifted_bartik[335993]: 167 167
Nov 29 03:19:29 np0005539563 systemd[1]: libpod-8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e.scope: Deactivated successfully.
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.429822485 +0000 UTC m=+0.144676885 container died 8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ba62a33dce7cf5a8f87ad97af1b6a2e1ec46ecc4d7d66bc001f3cb01c06bceb2-merged.mount: Deactivated successfully.
Nov 29 03:19:29 np0005539563 podman[335976]: 2025-11-29 08:19:29.467452675 +0000 UTC m=+0.182307065 container remove 8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:29.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:29 np0005539563 systemd[1]: libpod-conmon-8f751f2806a922018652e52b4ea162a20d94b5b7c7f654e504f27d60f084170e.scope: Deactivated successfully.
Nov 29 03:19:29 np0005539563 podman[336016]: 2025-11-29 08:19:29.678943121 +0000 UTC m=+0.063030321 container create b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:19:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 29 03:19:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 03:19:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 29 03:19:29 np0005539563 systemd[1]: Started libpod-conmon-b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495.scope.
Nov 29 03:19:29 np0005539563 podman[336016]: 2025-11-29 08:19:29.657798127 +0000 UTC m=+0.041885337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6128d9466f8c81fc8642742c79d20bbda6ef6f767d1b6d163bdd180576d38f5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6128d9466f8c81fc8642742c79d20bbda6ef6f767d1b6d163bdd180576d38f5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6128d9466f8c81fc8642742c79d20bbda6ef6f767d1b6d163bdd180576d38f5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6128d9466f8c81fc8642742c79d20bbda6ef6f767d1b6d163bdd180576d38f5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:29 np0005539563 podman[336016]: 2025-11-29 08:19:29.786679912 +0000 UTC m=+0.170767152 container init b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:19:29 np0005539563 podman[336016]: 2025-11-29 08:19:29.795981474 +0000 UTC m=+0.180068684 container start b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:29 np0005539563 podman[336016]: 2025-11-29 08:19:29.799449499 +0000 UTC m=+0.183536699 container attach b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:19:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.9 MiB/s wr, 329 op/s
Nov 29 03:19:30 np0005539563 nova_compute[252253]: 2025-11-29 08:19:30.431 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404355.429301, b91293df-e55b-4e94-92e0-99e06f96a4d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:30 np0005539563 nova_compute[252253]: 2025-11-29 08:19:30.432 252257 INFO nova.compute.manager [-] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:19:30 np0005539563 nova_compute[252253]: 2025-11-29 08:19:30.456 252257 DEBUG nova.compute.manager [None req-53b33ba0-c4aa-4e1a-907e-4eb4b351acf5 - - - - - -] [instance: b91293df-e55b-4e94-92e0-99e06f96a4d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:30 np0005539563 competent_tesla[336033]: {
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:        "osd_id": 0,
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:        "type": "bluestore"
Nov 29 03:19:30 np0005539563 competent_tesla[336033]:    }
Nov 29 03:19:30 np0005539563 competent_tesla[336033]: }
Nov 29 03:19:30 np0005539563 systemd[1]: libpod-b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495.scope: Deactivated successfully.
Nov 29 03:19:30 np0005539563 podman[336016]: 2025-11-29 08:19:30.677694387 +0000 UTC m=+1.061781567 container died b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6128d9466f8c81fc8642742c79d20bbda6ef6f767d1b6d163bdd180576d38f5a-merged.mount: Deactivated successfully.
Nov 29 03:19:30 np0005539563 podman[336016]: 2025-11-29 08:19:30.734923958 +0000 UTC m=+1.119011148 container remove b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tesla, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:19:30 np0005539563 systemd[1]: libpod-conmon-b89b088b9ec3aa019514acf494bccea586e80113ba776bccea69a549aed98495.scope: Deactivated successfully.
Nov 29 03:19:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:19:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:19:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 79c1a714-f42d-459d-ad30-297a17fd5966 does not exist
Nov 29 03:19:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 01408762-e086-43dd-addc-ec3092a3df1d does not exist
Nov 29 03:19:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4acfad62-c449-42ab-b371-48413dd2f22d does not exist
Nov 29 03:19:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:31.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:19:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 374 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 192 op/s
Nov 29 03:19:32 np0005539563 nova_compute[252253]: 2025-11-29 08:19:32.534 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:32 np0005539563 nova_compute[252253]: 2025-11-29 08:19:32.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:32 np0005539563 nova_compute[252253]: 2025-11-29 08:19:32.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 03:19:32 np0005539563 nova_compute[252253]: 2025-11-29 08:19:32.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:19:32 np0005539563 nova_compute[252253]: 2025-11-29 08:19:32.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 03:19:32 np0005539563 nova_compute[252253]: 2025-11-29 08:19:32.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:33.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:33 np0005539563 podman[336118]: 2025-11-29 08:19:33.519871365 +0000 UTC m=+0.072310953 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 03:19:33 np0005539563 podman[336119]: 2025-11-29 08:19:33.524281914 +0000 UTC m=+0.075457658 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:33 np0005539563 podman[336120]: 2025-11-29 08:19:33.567461715 +0000 UTC m=+0.102236654 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:19:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 387 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.2 MiB/s wr, 211 op/s
Nov 29 03:19:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:35.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:35.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Nov 29 03:19:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:37.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:37.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:37 np0005539563 nova_compute[252253]: 2025-11-29 08:19:37.539 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.200 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.201 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.234 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.324 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.325 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.335 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.336 252257 INFO nova.compute.claims [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:19:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 276 op/s
Nov 29 03:19:38 np0005539563 nova_compute[252253]: 2025-11-29 08:19:38.525 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 29 03:19:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 03:19:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 29 03:19:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043866743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.089 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.098 252257 DEBUG nova.compute.provider_tree [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.117 252257 DEBUG nova.scheduler.client.report [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.155 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.157 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.232 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.233 252257 DEBUG nova.network.neutron [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.262 252257 INFO nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:19:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.302 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:19:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.634 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.636 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.636 252257 INFO nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Creating image(s)
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.667 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.706 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.737 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.742 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.805 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.806 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.807 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.807 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.829 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.832 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:39 np0005539563 nova_compute[252253]: 2025-11-29 08:19:39.859 252257 DEBUG nova.policy [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:19:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.420 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.515 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.701 252257 DEBUG nova.objects.instance [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d1eac76-3b6b-4734-a481-9b315b2ae484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.744 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.745 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Ensure instance console log exists: /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.745 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.746 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.746 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:40 np0005539563 nova_compute[252253]: 2025-11-29 08:19:40.955 252257 DEBUG nova.network.neutron [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Successfully created port: 65163519-df32-4076-bfa2-5a804018b2e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:19:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:41.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.282 252257 DEBUG nova.network.neutron [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Successfully updated port: 65163519-df32-4076-bfa2-5a804018b2e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.300 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.301 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.301 252257 DEBUG nova.network.neutron [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:19:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.487 252257 DEBUG nova.network.neutron [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.542 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:42 np0005539563 nova_compute[252253]: 2025-11-29 08:19:42.696 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.051 252257 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-changed-65163519-df32-4076-bfa2-5a804018b2e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.052 252257 DEBUG nova.compute.manager [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Refreshing instance network info cache due to event network-changed-65163519-df32-4076-bfa2-5a804018b2e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.052 252257 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:43.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.303 252257 DEBUG nova.network.neutron [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Updating instance_info_cache with network_info: [{"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.339 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.339 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Instance network_info: |[{"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.340 252257 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.340 252257 DEBUG nova.network.neutron [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Refreshing network info cache for port 65163519-df32-4076-bfa2-5a804018b2e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.344 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Start _get_guest_xml network_info=[{"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.352 252257 WARNING nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.360 252257 DEBUG nova.virt.libvirt.host [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.362 252257 DEBUG nova.virt.libvirt.host [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.366 252257 DEBUG nova.virt.libvirt.host [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.367 252257 DEBUG nova.virt.libvirt.host [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.370 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.370 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.371 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.372 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.373 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.373 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.374 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.374 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.375 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.376 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.376 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.377 252257 DEBUG nova.virt.hardware [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.382 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:43.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260853582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.854 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.894 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:43 np0005539563 nova_compute[252253]: 2025-11-29 08:19:43.900 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142059694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.374 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.377 252257 DEBUG nova.virt.libvirt.vif [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1325725551',display_name='tempest-₡-1325725551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1325725551',id=137,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-39umlpeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_
certs=None,updated_at=2025-11-29T08:19:39Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=0d1eac76-3b6b-4734-a481-9b315b2ae484,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.377 252257 DEBUG nova.network.os_vif_util [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.378 252257 DEBUG nova.network.os_vif_util [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.380 252257 DEBUG nova.objects.instance [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d1eac76-3b6b-4734-a481-9b315b2ae484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 259 op/s
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.409 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <uuid>0d1eac76-3b6b-4734-a481-9b315b2ae484</uuid>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <name>instance-00000089</name>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:name>tempest-₡-1325725551</nova:name>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:19:43</nova:creationTime>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <nova:port uuid="65163519-df32-4076-bfa2-5a804018b2e9">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <entry name="serial">0d1eac76-3b6b-4734-a481-9b315b2ae484</entry>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <entry name="uuid">0d1eac76-3b6b-4734-a481-9b315b2ae484</entry>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/0d1eac76-3b6b-4734-a481-9b315b2ae484_disk">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/0d1eac76-3b6b-4734-a481-9b315b2ae484_disk.config">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e1:8e:91"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <target dev="tap65163519-df"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/console.log" append="off"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:19:44 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:19:44 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:19:44 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:19:44 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.411 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Preparing to wait for external event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.412 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.413 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.413 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.414 252257 DEBUG nova.virt.libvirt.vif [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1325725551',display_name='tempest-₡-1325725551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1325725551',id=137,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-39umlpeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=Non
e,trusted_certs=None,updated_at=2025-11-29T08:19:39Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=0d1eac76-3b6b-4734-a481-9b315b2ae484,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.414 252257 DEBUG nova.network.os_vif_util [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.415 252257 DEBUG nova.network.os_vif_util [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.416 252257 DEBUG os_vif [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.421 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.422 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.422 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.430 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65163519-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.431 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65163519-df, col_values=(('external_ids', {'iface-id': '65163519-df32-4076-bfa2-5a804018b2e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:8e:91', 'vm-uuid': '0d1eac76-3b6b-4734-a481-9b315b2ae484'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.467 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:44 np0005539563 NetworkManager[48981]: <info>  [1764404384.4691] manager: (tap65163519-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.481 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.481 252257 INFO os_vif [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df')#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.554 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.554 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.554 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:e1:8e:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.555 252257 INFO nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Using config drive#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.586 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.877 252257 DEBUG nova.network.neutron [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Updated VIF entry in instance network info cache for port 65163519-df32-4076-bfa2-5a804018b2e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.878 252257 DEBUG nova.network.neutron [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Updating instance_info_cache with network_info: [{"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:44 np0005539563 nova_compute[252253]: 2025-11-29 08:19:44.901 252257 DEBUG oslo_concurrency.lockutils [req-98ec09e9-dbac-4a34-8e57-d0de130c8be2 req-a096fdba-940d-49ba-a3fe-f8d94aab7520 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.090 252257 INFO nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Creating config drive at /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/disk.config#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.097 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbxkf9s1_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.237 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbxkf9s1_" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.269 252257 DEBUG nova.storage.rbd_utils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.274 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/disk.config 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.464 252257 DEBUG oslo_concurrency.processutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/disk.config 0d1eac76-3b6b-4734-a481-9b315b2ae484_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.465 252257 INFO nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Deleting local config drive /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484/disk.config because it was imported into RBD.#033[00m
Nov 29 03:19:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:45.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:45 np0005539563 kernel: tap65163519-df: entered promiscuous mode
Nov 29 03:19:45 np0005539563 NetworkManager[48981]: <info>  [1764404385.5395] manager: (tap65163519-df): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Nov 29 03:19:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:45Z|00547|binding|INFO|Claiming lport 65163519-df32-4076-bfa2-5a804018b2e9 for this chassis.
Nov 29 03:19:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:45Z|00548|binding|INFO|65163519-df32-4076-bfa2-5a804018b2e9: Claiming fa:16:3e:e1:8e:91 10.100.0.6
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.541 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.556 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:8e:91 10.100.0.6'], port_security=['fa:16:3e:e1:8e:91 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0d1eac76-3b6b-4734-a481-9b315b2ae484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=65163519-df32-4076-bfa2-5a804018b2e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.558 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 65163519-df32-4076-bfa2-5a804018b2e9 in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.560 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.579 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e99ab8f-abf9-453e-86da-e2996d847161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.580 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap97e6ef02-61 in ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.582 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap97e6ef02-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.583 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[79066eb3-5719-4d14-ab95-18d389e341fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.585 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[29010a19-e87c-4a89-8eac-e1af7fd89c81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.596 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ddc296-8a3d-4974-a10e-7f7c4251b731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 systemd-udevd[336567]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:45 np0005539563 systemd-machined[213024]: New machine qemu-65-instance-00000089.
Nov 29 03:19:45 np0005539563 NetworkManager[48981]: <info>  [1764404385.6155] device (tap65163519-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:45 np0005539563 NetworkManager[48981]: <info>  [1764404385.6165] device (tap65163519-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.624 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[08de18dd-45c9-4592-b642-7c5886c23aeb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 systemd[1]: Started Virtual Machine qemu-65-instance-00000089.
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.645 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:45Z|00549|binding|INFO|Setting lport 65163519-df32-4076-bfa2-5a804018b2e9 ovn-installed in OVS
Nov 29 03:19:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:45Z|00550|binding|INFO|Setting lport 65163519-df32-4076-bfa2-5a804018b2e9 up in Southbound
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.649 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.667 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[713fcb9a-420d-42e9-912e-060c9945d9a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 NetworkManager[48981]: <info>  [1764404385.6777] manager: (tap97e6ef02-60): new Veth device (/org/freedesktop/NetworkManager/Devices/248)
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.676 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2057c6c7-dcb4-40fa-a29e-19178fb2a38d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 systemd-udevd[336570]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.714 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[64803059-c4a6-4ce2-9ac5-405a36a0dbce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.717 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[25df06dc-9e39-45a8-9bb9-1301e0b5fcd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 NetworkManager[48981]: <info>  [1764404385.7391] device (tap97e6ef02-60): carrier: link connected
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.745 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c800a9e5-ebfe-4f42-8bc9-8a86f9c2299a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.765 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f72eb906-315c-4f9e-ba5f-d91a39b0ad5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735351, 'reachable_time': 23061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336598, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.786 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc39a4e-5718-4a9c-a96f-8d25ee6352a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:de28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735351, 'tstamp': 735351}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336599, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.804 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb28567-5c47-4ab4-9993-a84adb507d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735351, 'reachable_time': 23061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336600, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.842 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4f58f406-e43a-4ead-88a0-9a0a5731bb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.929 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8310d6-b2ad-478a-a687-ce6fdb04486e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.930 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.931 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.931 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 NetworkManager[48981]: <info>  [1764404385.9339] manager: (tap97e6ef02-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Nov 29 03:19:45 np0005539563 kernel: tap97e6ef02-60: entered promiscuous mode
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.935 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.938 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.939 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:19:45Z|00551|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=0)
Nov 29 03:19:45 np0005539563 nova_compute[252253]: 2025-11-29 08:19:45.959 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.961 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.961 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c35f1e-5203-466d-ba27-b1e7d04e5936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.962 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/97e6ef02-6896-45a2-9eb9-28926c1a7400.pid.haproxy
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 97e6ef02-6896-45a2-9eb9-28926c1a7400
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:19:45.963 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'env', 'PROCESS_TAG=haproxy-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/97e6ef02-6896-45a2-9eb9-28926c1a7400.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.091 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404386.0911171, 0d1eac76-3b6b-4734-a481-9b315b2ae484 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.092 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.108 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.112 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404386.0924006, 0d1eac76-3b6b-4734-a481-9b315b2ae484 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.112 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.147 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.151 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:46 np0005539563 nova_compute[252253]: 2025-11-29 08:19:46.171 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:46 np0005539563 podman[336674]: 2025-11-29 08:19:46.359586062 +0000 UTC m=+0.060169573 container create ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:19:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.2 MiB/s wr, 263 op/s
Nov 29 03:19:46 np0005539563 systemd[1]: Started libpod-conmon-ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a.scope.
Nov 29 03:19:46 np0005539563 podman[336674]: 2025-11-29 08:19:46.331941002 +0000 UTC m=+0.032524523 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:19:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b48a202685550ff50547b1421dae2ea545e29e2a16f929d9532075f9e09af3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:46 np0005539563 podman[336674]: 2025-11-29 08:19:46.453914891 +0000 UTC m=+0.154498392 container init ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:19:46 np0005539563 podman[336674]: 2025-11-29 08:19:46.459234815 +0000 UTC m=+0.159818316 container start ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:19:46 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [NOTICE]   (336694) : New worker (336696) forked
Nov 29 03:19:46 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [NOTICE]   (336694) : Loading success.
Nov 29 03:19:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:47.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:47.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:47 np0005539563 nova_compute[252253]: 2025-11-29 08:19:47.543 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.1 MiB/s wr, 249 op/s
Nov 29 03:19:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:49.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.470 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:49.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.961 252257 DEBUG nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.961 252257 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.961 252257 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.962 252257 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.962 252257 DEBUG nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Processing event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.962 252257 DEBUG nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.962 252257 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.963 252257 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.963 252257 DEBUG oslo_concurrency.lockutils [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.963 252257 DEBUG nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] No waiting events found dispatching network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.963 252257 WARNING nova.compute.manager [req-2636bae9-8c78-4881-97e0-36610b882648 req-c30d4dd0-7b15-45e2-b7a7-9f7d48c2b464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received unexpected event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.964 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.970 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.971 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404389.9713256, 0d1eac76-3b6b-4734-a481-9b315b2ae484 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.971 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.976 252257 INFO nova.virt.libvirt.driver [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Instance spawned successfully.#033[00m
Nov 29 03:19:49 np0005539563 nova_compute[252253]: 2025-11-29 08:19:49.976 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.009 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.014 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.014 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.015 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.015 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.016 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.016 252257 DEBUG nova.virt.libvirt.driver [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.021 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.058 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.090 252257 INFO nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Took 10.46 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.091 252257 DEBUG nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.269 252257 INFO nova.compute.manager [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Took 11.98 seconds to build instance.#033[00m
Nov 29 03:19:50 np0005539563 nova_compute[252253]: 2025-11-29 08:19:50.285 252257 DEBUG oslo_concurrency.lockutils [None req-504b79a0-8870-4331-91d9-7505910c799e 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 559 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 259 op/s
Nov 29 03:19:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:51.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:51.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 559 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Nov 29 03:19:52 np0005539563 nova_compute[252253]: 2025-11-29 08:19:52.544 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539563 nova_compute[252253]: 2025-11-29 08:19:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:52 np0005539563 nova_compute[252253]: 2025-11-29 08:19:52.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:19:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:53.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 560 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 29 03:19:54 np0005539563 nova_compute[252253]: 2025-11-29 08:19:54.472 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:54 np0005539563 nova_compute[252253]: 2025-11-29 08:19:54.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:55.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:55.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 560 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 330 op/s
Nov 29 03:19:56 np0005539563 nova_compute[252253]: 2025-11-29 08:19:56.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:56 np0005539563 nova_compute[252253]: 2025-11-29 08:19:56.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Nov 29 03:19:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 03:19:57 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Nov 29 03:19:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:57.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:57.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:57 np0005539563 nova_compute[252253]: 2025-11-29 08:19:57.546 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539563 nova_compute[252253]: 2025-11-29 08:19:57.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:57 np0005539563 nova_compute[252253]: 2025-11-29 08:19:57.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:19:57 np0005539563 nova_compute[252253]: 2025-11-29 08:19:57.701 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:19:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Nov 29 03:19:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 03:19:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Nov 29 03:19:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 587 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 2.0 MiB/s wr, 328 op/s
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.557 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.558 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.572 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.665 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.666 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.676 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.676 252257 INFO nova.compute.claims [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.718 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:58 np0005539563 nova_compute[252253]: 2025-11-29 08:19:58.859 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Nov 29 03:19:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 03:19:59 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Nov 29 03:19:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266005841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:19:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:59.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:19:59 np0005539563 nova_compute[252253]: 2025-11-29 08:19:59.344 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:59 np0005539563 nova_compute[252253]: 2025-11-29 08:19:59.350 252257 DEBUG nova.compute.provider_tree [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:59 np0005539563 nova_compute[252253]: 2025-11-29 08:19:59.486 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:19:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:59.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:20:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:20:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 639 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 7.8 MiB/s wr, 666 op/s
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.494 252257 DEBUG nova.scheduler.client.report [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.536 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.538 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.546 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.547 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.547 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.548 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.638 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.639 252257 DEBUG nova.network.neutron [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.671 252257 INFO nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.690 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.727 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.728 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.748 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.788 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.790 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.790 252257 INFO nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Creating image(s)#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.829 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.871 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.911 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539563 nova_compute[252253]: 2025-11-29 08:20:00.916 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.013 252257 DEBUG nova.policy [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.034 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.036 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.036 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.037 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.068 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.074 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf eef70661-378c-4187-b4b0-f0cfb9dc585e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490009090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.220 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.276 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.278 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.285 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.286 252257 INFO nova.compute.claims [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:20:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:01.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.327 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.328 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.332 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.332 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.458 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.499 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf eef70661-378c-4187-b4b0-f0cfb9dc585e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.582 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.690 252257 DEBUG nova.objects.instance [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid eef70661-378c-4187-b4b0-f0cfb9dc585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.703 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.703 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Ensure instance console log exists: /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.704 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.704 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.704 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.730 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.731 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3977MB free_disk=20.784988403320312GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.732 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623551928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.953 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.958 252257 DEBUG nova.compute.provider_tree [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.972 252257 DEBUG nova.scheduler.client.report [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:01 np0005539563 nova_compute[252253]: 2025-11-29 08:20:01.994 252257 DEBUG nova.network.neutron [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Successfully created port: 4d931b08-d631-4f22-9ea9-f6907c63e9da _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.006 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.007 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.012 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.086 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.086 252257 DEBUG nova.network.neutron [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.114 252257 INFO nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.122 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 1fad2d6f-5a00-43ad-af43-00916509fc61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.122 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 0d1eac76-3b6b-4734-a481-9b315b2ae484 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.122 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance eef70661-378c-4187-b4b0-f0cfb9dc585e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.123 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 9b6f3346-1230-472f-bd04-791d2367bebb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.123 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.123 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.150 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.274 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.312 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.315 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.315 252257 INFO nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Creating image(s)#033[00m
Nov 29 03:20:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 639 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 371 op/s
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.420 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.448 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.476 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.481 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.515 252257 DEBUG nova.policy [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b52040d601a4a56abcaf3f046f1e349', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '358970eca7ad4b05b70f43e5507ac052', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.548 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.553 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.554 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.555 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.555 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.587 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.591 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9b6f3346-1230-472f-bd04-791d2367bebb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550358599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.903 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.911 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.917 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 9b6f3346-1230-472f-bd04-791d2367bebb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.950 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.985 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.986 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:02 np0005539563 nova_compute[252253]: 2025-11-29 08:20:02.991 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] resizing rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.097 252257 DEBUG nova.objects.instance [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.115 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.116 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Ensure instance console log exists: /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.116 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.117 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.117 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:03.258 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:03.259 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.287 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:03.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:03 np0005539563 nova_compute[252253]: 2025-11-29 08:20:03.372 252257 DEBUG nova.network.neutron [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Successfully created port: 7d3e9f63-03fd-471c-8eeb-dba78634e033 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:20:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:03.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:03Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:8e:91 10.100.0.6
Nov 29 03:20:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:03Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:8e:91 10.100.0.6
Nov 29 03:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:04.263 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 9.2 MiB/s wr, 349 op/s
Nov 29 03:20:04 np0005539563 nova_compute[252253]: 2025-11-29 08:20:04.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:04 np0005539563 podman[337186]: 2025-11-29 08:20:04.549681159 +0000 UTC m=+0.088577232 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 03:20:04 np0005539563 podman[337187]: 2025-11-29 08:20:04.58955369 +0000 UTC m=+0.129195004 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:20:04 np0005539563 podman[337188]: 2025-11-29 08:20:04.598291018 +0000 UTC m=+0.134764237 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:20:04 np0005539563 nova_compute[252253]: 2025-11-29 08:20:04.862 252257 DEBUG nova.network.neutron [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Successfully updated port: 4d931b08-d631-4f22-9ea9-f6907c63e9da _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:20:04 np0005539563 nova_compute[252253]: 2025-11-29 08:20:04.885 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-eef70661-378c-4187-b4b0-f0cfb9dc585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:04 np0005539563 nova_compute[252253]: 2025-11-29 08:20:04.885 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-eef70661-378c-4187-b4b0-f0cfb9dc585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:04 np0005539563 nova_compute[252253]: 2025-11-29 08:20:04.885 252257 DEBUG nova.network.neutron [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:04.925 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:04.925 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:04.926 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.065 252257 DEBUG nova.compute.manager [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-changed-4d931b08-d631-4f22-9ea9-f6907c63e9da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.065 252257 DEBUG nova.compute.manager [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Refreshing instance network info cache due to event network-changed-4d931b08-d631-4f22-9ea9-f6907c63e9da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.065 252257 DEBUG oslo_concurrency.lockutils [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-eef70661-378c-4187-b4b0-f0cfb9dc585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.106 252257 DEBUG nova.network.neutron [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:20:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:05.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.422 252257 DEBUG nova.network.neutron [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Successfully updated port: 7d3e9f63-03fd-471c-8eeb-dba78634e033 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.457 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.457 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.458 252257 DEBUG nova.network.neutron [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:05.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.538 252257 DEBUG nova.compute.manager [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-changed-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.538 252257 DEBUG nova.compute.manager [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Refreshing instance network info cache due to event network-changed-7d3e9f63-03fd-471c-8eeb-dba78634e033. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.539 252257 DEBUG oslo_concurrency.lockutils [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:05 np0005539563 nova_compute[252253]: 2025-11-29 08:20:05.856 252257 DEBUG nova.network.neutron [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:20:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 781 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 14 MiB/s wr, 395 op/s
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.665 252257 DEBUG nova.network.neutron [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Updating instance_info_cache with network_info: [{"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.690 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-eef70661-378c-4187-b4b0-f0cfb9dc585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.691 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Instance network_info: |[{"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.692 252257 DEBUG oslo_concurrency.lockutils [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-eef70661-378c-4187-b4b0-f0cfb9dc585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.692 252257 DEBUG nova.network.neutron [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Refreshing network info cache for port 4d931b08-d631-4f22-9ea9-f6907c63e9da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.698 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Start _get_guest_xml network_info=[{"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.704 252257 WARNING nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.708 252257 DEBUG nova.virt.libvirt.host [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.709 252257 DEBUG nova.virt.libvirt.host [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.714 252257 DEBUG nova.virt.libvirt.host [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.715 252257 DEBUG nova.virt.libvirt.host [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.716 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.716 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.717 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.717 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.717 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.718 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.718 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.718 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.718 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.719 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.719 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.719 252257 DEBUG nova.virt.hardware [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.722 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.984 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:06 np0005539563 nova_compute[252253]: 2025-11-29 08:20:06.985 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.105 252257 DEBUG nova.network.neutron [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Updating instance_info_cache with network_info: [{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.123 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.124 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance network_info: |[{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.126 252257 DEBUG oslo_concurrency.lockutils [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.127 252257 DEBUG nova.network.neutron [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Refreshing network info cache for port 7d3e9f63-03fd-471c-8eeb-dba78634e033 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.131 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Start _get_guest_xml network_info=[{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.137 252257 WARNING nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.142 252257 DEBUG nova.virt.libvirt.host [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.143 252257 DEBUG nova.virt.libvirt.host [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.151 252257 DEBUG nova.virt.libvirt.host [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.152 252257 DEBUG nova.virt.libvirt.host [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.153 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.153 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.154 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.154 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.154 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.155 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.155 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.155 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.156 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.156 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.156 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.156 252257 DEBUG nova.virt.hardware [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.159 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081443638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.192 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.224 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.229 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:07.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.551 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317666613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.603 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.633 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.637 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768765722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.668 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.671 252257 DEBUG nova.virt.libvirt.vif [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-787239480',display_name='tempest-ServersTestJSON-server-787239480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-787239480',id=139,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-qz0qiq38',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:00Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=eef70661-378c-4187-b4b0-f0cfb9dc585e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.671 252257 DEBUG nova.network.os_vif_util [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.672 252257 DEBUG nova.network.os_vif_util [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.673 252257 DEBUG nova.objects.instance [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid eef70661-378c-4187-b4b0-f0cfb9dc585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.693 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <uuid>eef70661-378c-4187-b4b0-f0cfb9dc585e</uuid>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <name>instance-0000008b</name>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersTestJSON-server-787239480</nova:name>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:20:06</nova:creationTime>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <nova:port uuid="4d931b08-d631-4f22-9ea9-f6907c63e9da">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <entry name="serial">eef70661-378c-4187-b4b0-f0cfb9dc585e</entry>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <entry name="uuid">eef70661-378c-4187-b4b0-f0cfb9dc585e</entry>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/eef70661-378c-4187-b4b0-f0cfb9dc585e_disk">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/eef70661-378c-4187-b4b0-f0cfb9dc585e_disk.config">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:84:90:aa"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <target dev="tap4d931b08-d6"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/console.log" append="off"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:20:07 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:20:07 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:20:07 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:20:07 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.694 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Preparing to wait for external event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.694 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.694 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.695 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.695 252257 DEBUG nova.virt.libvirt.vif [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-787239480',display_name='tempest-ServersTestJSON-server-787239480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-787239480',id=139,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-qz0qiq38',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:00Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=eef70661-378c-4187-b4b0-f0cfb9dc585e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.696 252257 DEBUG nova.network.os_vif_util [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.696 252257 DEBUG nova.network.os_vif_util [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.697 252257 DEBUG os_vif [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.697 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.698 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.698 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.702 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d931b08-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.702 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d931b08-d6, col_values=(('external_ids', {'iface-id': '4d931b08-d631-4f22-9ea9-f6907c63e9da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:90:aa', 'vm-uuid': 'eef70661-378c-4187-b4b0-f0cfb9dc585e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539563 NetworkManager[48981]: <info>  [1764404407.7045] manager: (tap4d931b08-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.706 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.711 252257 INFO os_vif [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6')#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.783 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.784 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.784 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:84:90:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.784 252257 INFO nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Using config drive#033[00m
Nov 29 03:20:07 np0005539563 nova_compute[252253]: 2025-11-29 08:20:07.816 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334364862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.100 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.102 252257 DEBUG nova.virt.libvirt.vif [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1647701311',display_name='tempest-ServerStableDeviceRescueTest-server-1647701311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1647701311',id=140,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-dl1duw1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:02Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=9b6f3346-1230-472f-bd04-791d2367bebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.103 252257 DEBUG nova.network.os_vif_util [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.104 252257 DEBUG nova.network.os_vif_util [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.105 252257 DEBUG nova.objects.instance [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.148 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <uuid>9b6f3346-1230-472f-bd04-791d2367bebb</uuid>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <name>instance-0000008c</name>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1647701311</nova:name>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:20:07</nova:creationTime>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <nova:port uuid="7d3e9f63-03fd-471c-8eeb-dba78634e033">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <entry name="serial">9b6f3346-1230-472f-bd04-791d2367bebb</entry>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <entry name="uuid">9b6f3346-1230-472f-bd04-791d2367bebb</entry>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9b6f3346-1230-472f-bd04-791d2367bebb_disk">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9b6f3346-1230-472f-bd04-791d2367bebb_disk.config">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e1:6f:ea"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <target dev="tap7d3e9f63-03"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/console.log" append="off"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:20:08 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:20:08 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:20:08 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:20:08 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.148 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Preparing to wait for external event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.149 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.149 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.150 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.151 252257 DEBUG nova.virt.libvirt.vif [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1647701311',display_name='tempest-ServerStableDeviceRescueTest-server-1647701311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1647701311',id=140,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-dl1duw1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner
_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:02Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=9b6f3346-1230-472f-bd04-791d2367bebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.151 252257 DEBUG nova.network.os_vif_util [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.152 252257 DEBUG nova.network.os_vif_util [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.153 252257 DEBUG os_vif [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.153 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.154 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.154 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.159 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d3e9f63-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.159 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d3e9f63-03, col_values=(('external_ids', {'iface-id': '7d3e9f63-03fd-471c-8eeb-dba78634e033', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:6f:ea', 'vm-uuid': '9b6f3346-1230-472f-bd04-791d2367bebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 NetworkManager[48981]: <info>  [1764404408.1629] manager: (tap7d3e9f63-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.164 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.174 252257 INFO os_vif [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03')#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.227 252257 INFO nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Creating config drive at /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/disk.config#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.235 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsij4mrc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.273 252257 DEBUG nova.network.neutron [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Updated VIF entry in instance network info cache for port 4d931b08-d631-4f22-9ea9-f6907c63e9da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.275 252257 DEBUG nova.network.neutron [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Updating instance_info_cache with network_info: [{"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.286 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.286 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.287 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:e1:6f:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.288 252257 INFO nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Using config drive#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.324 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.331 252257 DEBUG oslo_concurrency.lockutils [req-5ad26615-63e3-4f7e-a5d6-c00c938b93ce req-aae6facf-d223-48ae-ae26-a80635e45c3b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-eef70661-378c-4187-b4b0-f0cfb9dc585e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.379 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbsij4mrc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 788 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 MiB/s wr, 382 op/s
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.415 252257 DEBUG nova.storage.rbd_utils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image eef70661-378c-4187-b4b0-f0cfb9dc585e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.419 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/disk.config eef70661-378c-4187-b4b0-f0cfb9dc585e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.613 252257 DEBUG oslo_concurrency.processutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/disk.config eef70661-378c-4187-b4b0-f0cfb9dc585e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.614 252257 INFO nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Deleting local config drive /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:20:08 np0005539563 NetworkManager[48981]: <info>  [1764404408.6842] manager: (tap4d931b08-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Nov 29 03:20:08 np0005539563 kernel: tap4d931b08-d6: entered promiscuous mode
Nov 29 03:20:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:08Z|00552|binding|INFO|Claiming lport 4d931b08-d631-4f22-9ea9-f6907c63e9da for this chassis.
Nov 29 03:20:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:08Z|00553|binding|INFO|4d931b08-d631-4f22-9ea9-f6907c63e9da: Claiming fa:16:3e:84:90:aa 10.100.0.10
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Nov 29 03:20:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:08Z|00554|binding|INFO|Setting lport 4d931b08-d631-4f22-9ea9-f6907c63e9da ovn-installed in OVS
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.721 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.727 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 systemd-machined[213024]: New machine qemu-66-instance-0000008b.
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.743 252257 INFO nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Creating config drive at /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config#033[00m
Nov 29 03:20:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:08Z|00555|binding|INFO|Setting lport 4d931b08-d631-4f22-9ea9-f6907c63e9da up in Southbound
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.747 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:90:aa 10.100.0.10'], port_security=['fa:16:3e:84:90:aa 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'eef70661-378c-4187-b4b0-f0cfb9dc585e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=4d931b08-d631-4f22-9ea9-f6907c63e9da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.748 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 4d931b08-d631-4f22-9ea9-f6907c63e9da in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis#033[00m
Nov 29 03:20:08 np0005539563 systemd[1]: Started Virtual Machine qemu-66-instance-0000008b.
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.749 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.755 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuq3zjbtg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:08 np0005539563 systemd-udevd[337470]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 03:20:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Nov 29 03:20:08 np0005539563 NetworkManager[48981]: <info>  [1764404408.7700] device (tap4d931b08-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:08 np0005539563 NetworkManager[48981]: <info>  [1764404408.7717] device (tap4d931b08-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.769 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[62697f3b-b6fa-4421-a7dd-677d844f724f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.790 252257 DEBUG nova.network.neutron [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Updated VIF entry in instance network info cache for port 7d3e9f63-03fd-471c-8eeb-dba78634e033. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.792 252257 DEBUG nova.network.neutron [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Updating instance_info_cache with network_info: [{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.799 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f53cd665-ffbf-4f9d-8124-b83f9c3f9a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.802 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[26464136-34f9-4ba3-b38d-4fc5a74081ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.824 252257 DEBUG oslo_concurrency.lockutils [req-1e5bfdb7-e696-4cfe-be39-87cfb146f8e2 req-f7700e6e-e37c-4894-af08-8c2949dd2935 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.831 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6e1b22-ea95-4d3a-bbb4-c67ddc8c8c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.851 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[76f42db5-ab9d-4756-8cbe-6dcd9c43be51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735351, 'reachable_time': 23061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337486, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.871 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[057ce527-9f26-4eda-88f9-6efc4f5a5141]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735365, 'tstamp': 735365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337488, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735369, 'tstamp': 735369}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337488, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.873 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.876 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.883 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.884 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.884 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:08.885 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.895 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuq3zjbtg" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.935 252257 DEBUG nova.storage.rbd_utils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:08 np0005539563 nova_compute[252253]: 2025-11-29 08:20:08.947 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.045 252257 DEBUG nova.compute.manager [req-4ff8cd96-2c44-499c-8c92-834f5e7c567d req-247f6831-cbb5-44d3-9b25-0c0843260fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.046 252257 DEBUG oslo_concurrency.lockutils [req-4ff8cd96-2c44-499c-8c92-834f5e7c567d req-247f6831-cbb5-44d3-9b25-0c0843260fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.047 252257 DEBUG oslo_concurrency.lockutils [req-4ff8cd96-2c44-499c-8c92-834f5e7c567d req-247f6831-cbb5-44d3-9b25-0c0843260fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.047 252257 DEBUG oslo_concurrency.lockutils [req-4ff8cd96-2c44-499c-8c92-834f5e7c567d req-247f6831-cbb5-44d3-9b25-0c0843260fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.048 252257 DEBUG nova.compute.manager [req-4ff8cd96-2c44-499c-8c92-834f5e7c567d req-247f6831-cbb5-44d3-9b25-0c0843260fac 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Processing event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.148 252257 DEBUG oslo_concurrency.processutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.149 252257 INFO nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Deleting local config drive /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config because it was imported into RBD.#033[00m
Nov 29 03:20:09 np0005539563 kernel: tap7d3e9f63-03: entered promiscuous mode
Nov 29 03:20:09 np0005539563 NetworkManager[48981]: <info>  [1764404409.2174] manager: (tap7d3e9f63-03): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Nov 29 03:20:09 np0005539563 systemd-udevd[337473]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:09Z|00556|binding|INFO|Claiming lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 for this chassis.
Nov 29 03:20:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:09Z|00557|binding|INFO|7d3e9f63-03fd-471c-8eeb-dba78634e033: Claiming fa:16:3e:e1:6f:ea 10.100.0.3
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.223 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:09 np0005539563 NetworkManager[48981]: <info>  [1764404409.2332] device (tap7d3e9f63-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:09 np0005539563 NetworkManager[48981]: <info>  [1764404409.2349] device (tap7d3e9f63-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.236 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:6f:ea 10.100.0.3'], port_security=['fa:16:3e:e1:6f:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b6f3346-1230-472f-bd04-791d2367bebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7d3e9f63-03fd-471c-8eeb-dba78634e033) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.237 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7d3e9f63-03fd-471c-8eeb-dba78634e033 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.238 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.254 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[344f6afd-f9d0-4331-8e1d-fe4000fa2a8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539563 systemd-machined[213024]: New machine qemu-67-instance-0000008c.
Nov 29 03:20:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:09Z|00558|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 ovn-installed in OVS
Nov 29 03:20:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:09Z|00559|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 up in Southbound
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.267 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.268 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:09 np0005539563 systemd[1]: Started Virtual Machine qemu-67-instance-0000008c.
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.290 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8500806f-fcfa-4bd5-ac4a-40b16ea33638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.293 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0243fe60-1fb6-435a-8230-373e2edf99b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.294 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404409.2944448, eef70661-378c-4187-b4b0-f0cfb9dc585e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.295 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.297 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.304 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.310 252257 INFO nova.virt.libvirt.driver [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Instance spawned successfully.#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.310 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.319 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.325 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.329 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ec68c708-cf93-43bd-9cc4-1f100a6e7d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.331 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.331 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.332 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.333 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.333 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.334 252257 DEBUG nova.virt.libvirt.driver [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:09.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.342 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.342 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404409.296709, eef70661-378c-4187-b4b0-f0cfb9dc585e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.342 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.350 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b711790d-726e-4e5a-b9b4-c380fc821e30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337595, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.366 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.371 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404409.3030944, eef70661-378c-4187-b4b0-f0cfb9dc585e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.373 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.373 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d81f957f-be2c-46ab-9900-2b972f39f463]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731778, 'tstamp': 731778}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337597, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731782, 'tstamp': 731782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337597, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.379 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.381 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.382 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.386 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.386 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.387 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:09.388 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.398 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.403 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.414 252257 INFO nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.415 252257 DEBUG nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.425 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.483 252257 INFO nova.compute.manager [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Took 10.86 seconds to build instance.#033[00m
Nov 29 03:20:09 np0005539563 nova_compute[252253]: 2025-11-29 08:20:09.499 252257 DEBUG oslo_concurrency.lockutils [None req-8bfe62fa-a626-4fc0-9097-c4ecb752a6c3 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.020 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404410.020021, 9b6f3346-1230-472f-bd04-791d2367bebb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.020 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.045 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.049 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404410.0228183, 9b6f3346-1230-472f-bd04-791d2367bebb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.049 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.067 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.071 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.089 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 797 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 9.4 MiB/s wr, 303 op/s
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.708 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.709 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.710 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.710 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.710 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.712 252257 INFO nova.compute.manager [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Terminating instance#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.714 252257 DEBUG nova.compute.manager [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:20:10 np0005539563 kernel: tap4d931b08-d6 (unregistering): left promiscuous mode
Nov 29 03:20:10 np0005539563 NetworkManager[48981]: <info>  [1764404410.7579] device (tap4d931b08-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:10Z|00560|binding|INFO|Releasing lport 4d931b08-d631-4f22-9ea9-f6907c63e9da from this chassis (sb_readonly=0)
Nov 29 03:20:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:10Z|00561|binding|INFO|Setting lport 4d931b08-d631-4f22-9ea9-f6907c63e9da down in Southbound
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.767 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:10Z|00562|binding|INFO|Removing iface tap4d931b08-d6 ovn-installed in OVS
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.769 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.792 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539563 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 29 03:20:10 np0005539563 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008b.scope: Consumed 1.930s CPU time.
Nov 29 03:20:10 np0005539563 systemd-machined[213024]: Machine qemu-66-instance-0000008b terminated.
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.889 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:90:aa 10.100.0.10'], port_security=['fa:16:3e:84:90:aa 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'eef70661-378c-4187-b4b0-f0cfb9dc585e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=4d931b08-d631-4f22-9ea9-f6907c63e9da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.891 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 4d931b08-d631-4f22-9ea9-f6907c63e9da in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.892 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.907 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce9f9fa-dd9f-4069-8b78-d05f7b7ba8eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.941 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b3965c-90c7-4267-8d37-7a07248cd135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.944 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a23f55-b1bc-4bbe-9203-70d10603d11f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.948 252257 INFO nova.virt.libvirt.driver [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Instance destroyed successfully.#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.949 252257 DEBUG nova.objects.instance [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid eef70661-378c-4187-b4b0-f0cfb9dc585e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.969 252257 DEBUG nova.virt.libvirt.vif [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-787239480',display_name='tempest-ServersTestJSON-server-787239480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-787239480',id=139,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-qz0qiq38',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:09Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=eef70661-378c-4187-b4b0-f0cfb9dc585e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.969 252257 DEBUG nova.network.os_vif_util [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "address": "fa:16:3e:84:90:aa", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d931b08-d6", "ovs_interfaceid": "4d931b08-d631-4f22-9ea9-f6907c63e9da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.970 252257 DEBUG nova.network.os_vif_util [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.971 252257 DEBUG os_vif [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.972 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.973 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d931b08-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.974 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.976 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.977 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bafe3bc0-74a8-4786-a417-4a191c40d8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:10 np0005539563 nova_compute[252253]: 2025-11-29 08:20:10.978 252257 INFO os_vif [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:90:aa,bridge_name='br-int',has_traffic_filtering=True,id=4d931b08-d631-4f22-9ea9-f6907c63e9da,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d931b08-d6')#033[00m
Nov 29 03:20:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:10.998 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c99ae00f-8a7b-4bb8-b946-250c74703244]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735351, 'reachable_time': 23061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337661, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:11.016 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f199d945-d6b3-4feb-922a-4b07436b8d01]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735365, 'tstamp': 735365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337677, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735369, 'tstamp': 735369}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337677, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:11.017 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.018 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.019 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:11.020 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:11.020 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:11.021 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:11.022 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.203 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.204 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.204 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.204 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.204 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] No waiting events found dispatching network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.204 252257 WARNING nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received unexpected event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da for instance with vm_state active and task_state deleting.
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.205 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.205 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.205 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.205 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.205 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Processing event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.205 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.206 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.206 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.206 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.206 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.206 252257 WARNING nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state building and task_state spawning.
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.206 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-vif-unplugged-4d931b08-d631-4f22-9ea9-f6907c63e9da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.207 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.207 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.207 252257 DEBUG oslo_concurrency.lockutils [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.207 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] No waiting events found dispatching network-vif-unplugged-4d931b08-d631-4f22-9ea9-f6907c63e9da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.207 252257 DEBUG nova.compute.manager [req-f322d2af-2752-4270-a153-e3f6b7e843f2 req-2b1a7930-09a1-4f4e-846e-073e1b8c80ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-vif-unplugged-4d931b08-d631-4f22-9ea9-f6907c63e9da for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.208 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.212 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404411.211855, 9b6f3346-1230-472f-bd04-791d2367bebb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.212 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Resumed (Lifecycle Event)
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.213 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.216 252257 INFO nova.virt.libvirt.driver [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance spawned successfully.
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.217 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:20:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:11.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.370 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.373 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.509 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:20:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.516 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.518 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.520 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.521 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.521 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.522 252257 DEBUG nova.virt.libvirt.driver [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.747 252257 INFO nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Took 9.43 seconds to spawn the instance on the hypervisor.
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.748 252257 DEBUG nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.861 252257 INFO nova.compute.manager [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Took 10.84 seconds to build instance.
Nov 29 03:20:11 np0005539563 nova_compute[252253]: 2025-11-29 08:20:11.992 252257 DEBUG oslo_concurrency.lockutils [None req-c43797aa-4abe-4083-9349-ded3fe3d50bd 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.305 252257 INFO nova.virt.libvirt.driver [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Deleting instance files /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e_del
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.306 252257 INFO nova.virt.libvirt.driver [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Deletion of /var/lib/nova/instances/eef70661-378c-4187-b4b0-f0cfb9dc585e_del complete
Nov 29 03:20:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 797 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 9.4 MiB/s wr, 303 op/s
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.600 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.602 252257 INFO nova.compute.manager [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Took 1.89 seconds to destroy the instance on the hypervisor.
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.603 252257 DEBUG oslo.service.loopingcall [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.603 252257 DEBUG nova.compute.manager [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:20:12 np0005539563 nova_compute[252253]: 2025-11-29 08:20:12.604 252257 DEBUG nova.network.neutron [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:20:12
Nov 29 03:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Nov 29 03:20:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.157 252257 DEBUG nova.network.neutron [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.190 252257 INFO nova.compute.manager [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Took 0.59 seconds to deallocate network for instance.
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.237 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.238 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.313 252257 DEBUG nova.compute.manager [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.321 252257 DEBUG nova.compute.manager [req-4762a8e6-b1cb-434e-805c-ca21f4666871 req-cbedd4c5-c011-4024-8174-96038477f1df 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-vif-deleted-4d931b08-d631-4f22-9ea9-f6907c63e9da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.342 252257 DEBUG nova.compute.manager [req-0a488dfb-abed-43b0-b8eb-0d2a6d5ec55e req-b78973a3-b8b1-454d-a55a-3ee1ce215736 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.342 252257 DEBUG oslo_concurrency.lockutils [req-0a488dfb-abed-43b0-b8eb-0d2a6d5ec55e req-b78973a3-b8b1-454d-a55a-3ee1ce215736 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.343 252257 DEBUG oslo_concurrency.lockutils [req-0a488dfb-abed-43b0-b8eb-0d2a6d5ec55e req-b78973a3-b8b1-454d-a55a-3ee1ce215736 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.343 252257 DEBUG oslo_concurrency.lockutils [req-0a488dfb-abed-43b0-b8eb-0d2a6d5ec55e req-b78973a3-b8b1-454d-a55a-3ee1ce215736 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.343 252257 DEBUG nova.compute.manager [req-0a488dfb-abed-43b0-b8eb-0d2a6d5ec55e req-b78973a3-b8b1-454d-a55a-3ee1ce215736 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] No waiting events found dispatching network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.344 252257 WARNING nova.compute.manager [req-0a488dfb-abed-43b0-b8eb-0d2a6d5ec55e req-b78973a3-b8b1-454d-a55a-3ee1ce215736 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Received unexpected event network-vif-plugged-4d931b08-d631-4f22-9ea9-f6907c63e9da for instance with vm_state deleted and task_state None.
Nov 29 03:20:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:13.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.404 252257 INFO nova.compute.manager [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] instance snapshotting
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.435 252257 DEBUG oslo_concurrency.processutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.627 252257 INFO nova.virt.libvirt.driver [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Beginning live snapshot process
Nov 29 03:20:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.805 252257 DEBUG nova.virt.libvirt.imagebackend [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:20:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508581070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.911 252257 DEBUG oslo_concurrency.processutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.922 252257 DEBUG nova.compute.provider_tree [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.960 252257 DEBUG nova.scheduler.client.report [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:20:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:20:13 np0005539563 nova_compute[252253]: 2025-11-29 08:20:13.991 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:14 np0005539563 nova_compute[252253]: 2025-11-29 08:20:14.020 252257 INFO nova.scheduler.client.report [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance eef70661-378c-4187-b4b0-f0cfb9dc585e
Nov 29 03:20:14 np0005539563 nova_compute[252253]: 2025-11-29 08:20:14.027 252257 DEBUG nova.storage.rbd_utils [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(c0dcbb43347f4b518455d99b06e7fa5a) on rbd image(9b6f3346-1230-472f-bd04-791d2367bebb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:20:14 np0005539563 nova_compute[252253]: 2025-11-29 08:20:14.112 252257 DEBUG oslo_concurrency.lockutils [None req-1068543f-c530-491c-8da7-d2100788980a 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "eef70661-378c-4187-b4b0-f0cfb9dc585e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 787 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.3 MiB/s wr, 339 op/s
Nov 29 03:20:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Nov 29 03:20:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 03:20:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Nov 29 03:20:15 np0005539563 nova_compute[252253]: 2025-11-29 08:20:15.229 252257 DEBUG nova.storage.rbd_utils [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning vms/9b6f3346-1230-472f-bd04-791d2367bebb_disk@c0dcbb43347f4b518455d99b06e7fa5a to images/5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:20:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:15.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:15 np0005539563 nova_compute[252253]: 2025-11-29 08:20:15.365 252257 DEBUG nova.storage.rbd_utils [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] flattening images/5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:20:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:15 np0005539563 nova_compute[252253]: 2025-11-29 08:20:15.810 252257 DEBUG nova.storage.rbd_utils [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] removing snapshot(c0dcbb43347f4b518455d99b06e7fa5a) on rbd image(9b6f3346-1230-472f-bd04-791d2367bebb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:20:15 np0005539563 nova_compute[252253]: 2025-11-29 08:20:15.975 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Nov 29 03:20:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 03:20:16 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Nov 29 03:20:16 np0005539563 nova_compute[252253]: 2025-11-29 08:20:16.098 252257 DEBUG nova.storage.rbd_utils [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] creating snapshot(snap) on rbd image(5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:20:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 768 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 1.7 MiB/s wr, 361 op/s
Nov 29 03:20:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Nov 29 03:20:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 03:20:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Nov 29 03:20:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:17 np0005539563 nova_compute[252253]: 2025-11-29 08:20:17.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.0 MiB/s wr, 453 op/s
Nov 29 03:20:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:18 np0005539563 nova_compute[252253]: 2025-11-29 08:20:18.910 252257 INFO nova.virt.libvirt.driver [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Snapshot image upload complete
Nov 29 03:20:18 np0005539563 nova_compute[252253]: 2025-11-29 08:20:18.911 252257 INFO nova.compute.manager [None req-a0ac1e84-fdf4-4aa8-beb3-e187c111d367 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Took 5.50 seconds to snapshot the instance on the hypervisor.
Nov 29 03:20:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:19.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:19.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 891 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 433 op/s
Nov 29 03:20:20 np0005539563 nova_compute[252253]: 2025-11-29 08:20:20.977 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:21 np0005539563 nova_compute[252253]: 2025-11-29 08:20:21.027 252257 INFO nova.compute.manager [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Rescuing
Nov 29 03:20:21 np0005539563 nova_compute[252253]: 2025-11-29 08:20:21.028 252257 DEBUG oslo_concurrency.lockutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:20:21 np0005539563 nova_compute[252253]: 2025-11-29 08:20:21.028 252257 DEBUG oslo_concurrency.lockutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:20:21 np0005539563 nova_compute[252253]: 2025-11-29 08:20:21.029 252257 DEBUG nova.network.neutron [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:20:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:21.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:21.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 891 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 8.7 MiB/s wr, 290 op/s
Nov 29 03:20:22 np0005539563 nova_compute[252253]: 2025-11-29 08:20:22.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:22 np0005539563 nova_compute[252253]: 2025-11-29 08:20:22.633 252257 DEBUG nova.network.neutron [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Updating instance_info_cache with network_info: [{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:20:22 np0005539563 nova_compute[252253]: 2025-11-29 08:20:22.652 252257 DEBUG oslo_concurrency.lockutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:20:22 np0005539563 nova_compute[252253]: 2025-11-29 08:20:22.873 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:20:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:23.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:23.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.015005038211892797 of space, bias 1.0, pg target 4.501511463567839 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2929360343383584 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006847751488926112 of space, bias 1.0, pg target 2.0269344407221292 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003206134840871022 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Nov 29 03:20:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Nov 29 03:20:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 03:20:23 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Nov 29 03:20:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 902 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 200 op/s
Nov 29 03:20:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:25.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:25Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:6f:ea 10.100.0.3
Nov 29 03:20:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:25Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:6f:ea 10.100.0.3
Nov 29 03:20:25 np0005539563 nova_compute[252253]: 2025-11-29 08:20:25.947 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404410.946635, eef70661-378c-4187-b4b0-f0cfb9dc585e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:20:25 np0005539563 nova_compute[252253]: 2025-11-29 08:20:25.948 252257 INFO nova.compute.manager [-] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] VM Stopped (Lifecycle Event)
Nov 29 03:20:25 np0005539563 nova_compute[252253]: 2025-11-29 08:20:25.974 252257 DEBUG nova.compute.manager [None req-1925c8cb-76b2-4e72-b2f5-ea0a88401f15 - - - - - -] [instance: eef70661-378c-4187-b4b0-f0cfb9dc585e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:25 np0005539563 nova_compute[252253]: 2025-11-29 08:20:25.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 955 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 10 MiB/s wr, 339 op/s
Nov 29 03:20:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:27.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:27 np0005539563 nova_compute[252253]: 2025-11-29 08:20:27.607 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 963 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.0 MiB/s wr, 300 op/s
Nov 29 03:20:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 971 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 340 op/s
Nov 29 03:20:30 np0005539563 nova_compute[252253]: 2025-11-29 08:20:30.982 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:32 np0005539563 podman[338076]: 2025-11-29 08:20:32.130839368 +0000 UTC m=+0.061678174 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:32 np0005539563 podman[338076]: 2025-11-29 08:20:32.251193112 +0000 UTC m=+0.182031948 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:20:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 971 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 340 op/s
Nov 29 03:20:32 np0005539563 nova_compute[252253]: 2025-11-29 08:20:32.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:32 np0005539563 podman[338232]: 2025-11-29 08:20:32.874416813 +0000 UTC m=+0.072191799 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:20:32 np0005539563 podman[338232]: 2025-11-29 08:20:32.881642479 +0000 UTC m=+0.079417365 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:20:32 np0005539563 nova_compute[252253]: 2025-11-29 08:20:32.924 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:20:33 np0005539563 podman[338298]: 2025-11-29 08:20:33.10920221 +0000 UTC m=+0.053095721 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, name=keepalived, release=1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.buildah.version=1.28.2)
Nov 29 03:20:33 np0005539563 podman[338298]: 2025-11-29 08:20:33.124173976 +0000 UTC m=+0.068067507 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, name=keepalived, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 29 03:20:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:20:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:20:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2c816fe3-95ac-4ca7-9715-9152ccd45807 does not exist
Nov 29 03:20:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 360d7e27-d087-4e3b-bd29-cec2e7345266 does not exist
Nov 29 03:20:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 34f435f1-8985-40b0-91b4-ce12b9452080 does not exist
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 960 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.7 MiB/s wr, 311 op/s
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.6298361 +0000 UTC m=+0.061479399 container create ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:20:34 np0005539563 systemd[1]: Started libpod-conmon-ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357.scope.
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.611603945 +0000 UTC m=+0.043247274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.719046769 +0000 UTC m=+0.150690108 container init ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.728259549 +0000 UTC m=+0.159902848 container start ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.73270313 +0000 UTC m=+0.164346439 container attach ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:20:34 np0005539563 optimistic_shtern[338618]: 167 167
Nov 29 03:20:34 np0005539563 systemd[1]: libpod-ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357.scope: Deactivated successfully.
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.736123912 +0000 UTC m=+0.167767211 container died ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:20:34 np0005539563 podman[338616]: 2025-11-29 08:20:34.747645304 +0000 UTC m=+0.069396023 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 03:20:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-721fe7525eeeeb4887e594ff007196a497584cab9cc636f3f33e6110313c1b49-merged.mount: Deactivated successfully.
Nov 29 03:20:34 np0005539563 podman[338613]: 2025-11-29 08:20:34.789090959 +0000 UTC m=+0.106561761 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:20:34 np0005539563 podman[338599]: 2025-11-29 08:20:34.794879345 +0000 UTC m=+0.226522644 container remove ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:20:34 np0005539563 podman[338617]: 2025-11-29 08:20:34.800674043 +0000 UTC m=+0.121024413 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:20:34 np0005539563 systemd[1]: libpod-conmon-ad63333caf93e086315a2c207b77558d93d66d69d5053927db27da0b20d85357.scope: Deactivated successfully.
Nov 29 03:20:34 np0005539563 podman[338700]: 2025-11-29 08:20:34.982812892 +0000 UTC m=+0.037329274 container create 7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:20:35 np0005539563 systemd[1]: Started libpod-conmon-7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf.scope.
Nov 29 03:20:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:20:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694ea9876d3056bd8e01666a8640c14213f016b1a8362328af2c6735b594222d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694ea9876d3056bd8e01666a8640c14213f016b1a8362328af2c6735b594222d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694ea9876d3056bd8e01666a8640c14213f016b1a8362328af2c6735b594222d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694ea9876d3056bd8e01666a8640c14213f016b1a8362328af2c6735b594222d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694ea9876d3056bd8e01666a8640c14213f016b1a8362328af2c6735b594222d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:35 np0005539563 podman[338700]: 2025-11-29 08:20:34.968686539 +0000 UTC m=+0.023202951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:35 np0005539563 podman[338700]: 2025-11-29 08:20:35.068137416 +0000 UTC m=+0.122653818 container init 7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:20:35 np0005539563 podman[338700]: 2025-11-29 08:20:35.077269354 +0000 UTC m=+0.131785736 container start 7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:20:35 np0005539563 podman[338700]: 2025-11-29 08:20:35.0812114 +0000 UTC m=+0.135727782 container attach 7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dewdney, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:20:35 np0005539563 kernel: tap7d3e9f63-03 (unregistering): left promiscuous mode
Nov 29 03:20:35 np0005539563 NetworkManager[48981]: <info>  [1764404435.2027] device (tap7d3e9f63-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:35Z|00563|binding|INFO|Releasing lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 from this chassis (sb_readonly=0)
Nov 29 03:20:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:35Z|00564|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 down in Southbound
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.223 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:35Z|00565|binding|INFO|Removing iface tap7d3e9f63-03 ovn-installed in OVS
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.231 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:6f:ea 10.100.0.3'], port_security=['fa:16:3e:e1:6f:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b6f3346-1230-472f-bd04-791d2367bebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7d3e9f63-03fd-471c-8eeb-dba78634e033) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.232 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7d3e9f63-03fd-471c-8eeb-dba78634e033 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.234 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.250 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.253 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a57a90c1-20f9-463b-b10c-a6c83763cc68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:35 np0005539563 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 29 03:20:35 np0005539563 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008c.scope: Consumed 15.173s CPU time.
Nov 29 03:20:35 np0005539563 systemd-machined[213024]: Machine qemu-67-instance-0000008c terminated.
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.284 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ee86d7-23b6-4db5-9a10-c1d19a6f9fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.286 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7c59e3-cd8b-4687-be0b-c2141c2b7f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.312 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ec68bd-bfd1-4ce2-a9aa-71caef331ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.326 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1897cbb1-1083-4ded-9d04-bfae3679ca9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338732, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.341 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ccb702-7354-4cf5-b2e3-cfcc7b7aacca]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731778, 'tstamp': 731778}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338733, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731782, 'tstamp': 731782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338733, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.342 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.348 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.349 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.349 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:35.349 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:35.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:35.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.675 252257 DEBUG nova.compute.manager [req-4694d010-e6f9-4710-9f21-e6da4b2911bf req-5837f7b0-618b-4980-929c-06df9a1b3a46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.677 252257 DEBUG oslo_concurrency.lockutils [req-4694d010-e6f9-4710-9f21-e6da4b2911bf req-5837f7b0-618b-4980-929c-06df9a1b3a46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.677 252257 DEBUG oslo_concurrency.lockutils [req-4694d010-e6f9-4710-9f21-e6da4b2911bf req-5837f7b0-618b-4980-929c-06df9a1b3a46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.677 252257 DEBUG oslo_concurrency.lockutils [req-4694d010-e6f9-4710-9f21-e6da4b2911bf req-5837f7b0-618b-4980-929c-06df9a1b3a46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.678 252257 DEBUG nova.compute.manager [req-4694d010-e6f9-4710-9f21-e6da4b2911bf req-5837f7b0-618b-4980-929c-06df9a1b3a46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.678 252257 WARNING nova.compute.manager [req-4694d010-e6f9-4710-9f21-e6da4b2911bf req-5837f7b0-618b-4980-929c-06df9a1b3a46 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:20:35 np0005539563 eloquent_dewdney[338716]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:20:35 np0005539563 eloquent_dewdney[338716]: --> relative data size: 1.0
Nov 29 03:20:35 np0005539563 eloquent_dewdney[338716]: --> All data devices are unavailable
Nov 29 03:20:35 np0005539563 systemd[1]: libpod-7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf.scope: Deactivated successfully.
Nov 29 03:20:35 np0005539563 podman[338700]: 2025-11-29 08:20:35.894344612 +0000 UTC m=+0.948860994 container died 7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:20:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-694ea9876d3056bd8e01666a8640c14213f016b1a8362328af2c6735b594222d-merged.mount: Deactivated successfully.
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.942 252257 INFO nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.948 252257 INFO nova.virt.libvirt.driver [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance destroyed successfully.#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.948 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:35 np0005539563 podman[338700]: 2025-11-29 08:20:35.952251253 +0000 UTC m=+1.006767625 container remove 7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:20:35 np0005539563 systemd[1]: libpod-conmon-7a01b6b8c18fbf49ccc868e5f4552bcd404136aaad2bd84f04afc56d24d89ebf.scope: Deactivated successfully.
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.965 252257 INFO nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Attempting a stable device rescue#033[00m
Nov 29 03:20:35 np0005539563 nova_compute[252253]: 2025-11-29 08:20:35.984 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.282 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.286 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.287 252257 INFO nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Creating image(s)#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.316 252257 DEBUG nova.storage.rbd_utils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.320 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 933 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.0 MiB/s wr, 320 op/s
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.433 252257 DEBUG nova.storage.rbd_utils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.464 252257 DEBUG nova.storage.rbd_utils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.469 252257 DEBUG oslo_concurrency.lockutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "81e8e531cffa2fb2565c287853dbb5c4412043aa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.470 252257 DEBUG oslo_concurrency.lockutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "81e8e531cffa2fb2565c287853dbb5c4412043aa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.541127053 +0000 UTC m=+0.039110062 container create 5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:36 np0005539563 systemd[1]: Started libpod-conmon-5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d.scope.
Nov 29 03:20:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.526567738 +0000 UTC m=+0.024550767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.627006422 +0000 UTC m=+0.124989531 container init 5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.636129229 +0000 UTC m=+0.134112238 container start 5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.640045415 +0000 UTC m=+0.138028454 container attach 5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:20:36 np0005539563 competent_tu[339026]: 167 167
Nov 29 03:20:36 np0005539563 systemd[1]: libpod-5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d.scope: Deactivated successfully.
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.64280956 +0000 UTC m=+0.140792579 container died 5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c3b20bb4182fe8e24a43d8007fde79badcce1a78e40dfc53085269ad8feef490-merged.mount: Deactivated successfully.
Nov 29 03:20:36 np0005539563 podman[339008]: 2025-11-29 08:20:36.682096695 +0000 UTC m=+0.180079704 container remove 5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:20:36 np0005539563 systemd[1]: libpod-conmon-5d2280901a17cc6e01b9bde65f14afeadd1e90a10f56864d728a9758236b5e1d.scope: Deactivated successfully.
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.718 252257 DEBUG nova.virt.libvirt.imagebackend [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.767 252257 DEBUG nova.virt.libvirt.imagebackend [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.767 252257 DEBUG nova.storage.rbd_utils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] cloning images/5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c@snap to None/9b6f3346-1230-472f-bd04-791d2367bebb_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:20:36 np0005539563 podman[339117]: 2025-11-29 08:20:36.855916869 +0000 UTC m=+0.037494527 container create 8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.876 252257 DEBUG oslo_concurrency.lockutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "81e8e531cffa2fb2565c287853dbb5c4412043aa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:36 np0005539563 systemd[1]: Started libpod-conmon-8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096.scope.
Nov 29 03:20:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:20:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b534e73ce2697df4e10daa6380156b69574519f3cbc52cb3a602fa08931b3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b534e73ce2697df4e10daa6380156b69574519f3cbc52cb3a602fa08931b3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b534e73ce2697df4e10daa6380156b69574519f3cbc52cb3a602fa08931b3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:36 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b534e73ce2697df4e10daa6380156b69574519f3cbc52cb3a602fa08931b3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:36 np0005539563 podman[339117]: 2025-11-29 08:20:36.930095461 +0000 UTC m=+0.111673139 container init 8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.933 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:36 np0005539563 podman[339117]: 2025-11-29 08:20:36.839533576 +0000 UTC m=+0.021111254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:36 np0005539563 podman[339117]: 2025-11-29 08:20:36.938205811 +0000 UTC m=+0.119783469 container start 8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:20:36 np0005539563 podman[339117]: 2025-11-29 08:20:36.941676755 +0000 UTC m=+0.123254413 container attach 8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.946 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.948 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Start _get_guest_xml network_info=[{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:e1:6f:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '5b6878f5-ee6a-4f36-b7ce-cdb9894ce54c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.948 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.964 252257 WARNING nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.969 252257 DEBUG nova.virt.libvirt.host [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.970 252257 DEBUG nova.virt.libvirt.host [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.973 252257 DEBUG nova.virt.libvirt.host [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.973 252257 DEBUG nova.virt.libvirt.host [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.974 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.975 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.975 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.975 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.975 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.976 252257 DEBUG nova.virt.hardware [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.977 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:36 np0005539563 nova_compute[252253]: 2025-11-29 08:20:36.991 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:37.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670188589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.481 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.514 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:37.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.612 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]: {
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:    "0": [
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:        {
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "devices": [
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "/dev/loop3"
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            ],
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "lv_name": "ceph_lv0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "lv_size": "7511998464",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "name": "ceph_lv0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "tags": {
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.cluster_name": "ceph",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.crush_device_class": "",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.encrypted": "0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.osd_id": "0",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.type": "block",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:                "ceph.vdo": "0"
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            },
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "type": "block",
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:            "vg_name": "ceph_vg0"
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:        }
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]:    ]
Nov 29 03:20:37 np0005539563 elegant_thompson[339151]: }
Nov 29 03:20:37 np0005539563 systemd[1]: libpod-8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096.scope: Deactivated successfully.
Nov 29 03:20:37 np0005539563 podman[339117]: 2025-11-29 08:20:37.735480283 +0000 UTC m=+0.917057941 container died 8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:20:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a5b534e73ce2697df4e10daa6380156b69574519f3cbc52cb3a602fa08931b3f-merged.mount: Deactivated successfully.
Nov 29 03:20:37 np0005539563 podman[339117]: 2025-11-29 08:20:37.786894108 +0000 UTC m=+0.968471766 container remove 8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_thompson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.817 252257 DEBUG nova.compute.manager [req-0030971f-4f1a-473d-91da-543573ba14e7 req-1fdea58a-cc08-472a-a7bd-29247c1dd8d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.818 252257 DEBUG oslo_concurrency.lockutils [req-0030971f-4f1a-473d-91da-543573ba14e7 req-1fdea58a-cc08-472a-a7bd-29247c1dd8d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.818 252257 DEBUG oslo_concurrency.lockutils [req-0030971f-4f1a-473d-91da-543573ba14e7 req-1fdea58a-cc08-472a-a7bd-29247c1dd8d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.818 252257 DEBUG oslo_concurrency.lockutils [req-0030971f-4f1a-473d-91da-543573ba14e7 req-1fdea58a-cc08-472a-a7bd-29247c1dd8d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.818 252257 DEBUG nova.compute.manager [req-0030971f-4f1a-473d-91da-543573ba14e7 req-1fdea58a-cc08-472a-a7bd-29247c1dd8d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.819 252257 WARNING nova.compute.manager [req-0030971f-4f1a-473d-91da-543573ba14e7 req-1fdea58a-cc08-472a-a7bd-29247c1dd8d1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:20:37 np0005539563 systemd[1]: libpod-conmon-8de2d4aeeec328c5bb0ebf34719c923afc14c7f8b30c5ab95bba4dbdf494d096.scope: Deactivated successfully.
Nov 29 03:20:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328847071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.992 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:37 np0005539563 nova_compute[252253]: 2025-11-29 08:20:37.993 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105626513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.409561524 +0000 UTC m=+0.048883466 container create a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:20:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 965 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 224 op/s
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.422 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.424 252257 DEBUG nova.virt.libvirt.vif [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1647701311',display_name='tempest-ServerStableDeviceRescueTest-server-1647701311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1647701311',id=140,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-dl1duw1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:19Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=9b6f3346-1230-472f-bd04-791d2367bebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:e1:6f:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.424 252257 DEBUG nova.network.os_vif_util [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "vif_mac": "fa:16:3e:e1:6f:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.425 252257 DEBUG nova.network.os_vif_util [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.427 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.442 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <uuid>9b6f3346-1230-472f-bd04-791d2367bebb</uuid>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <name>instance-0000008c</name>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1647701311</nova:name>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:20:36</nova:creationTime>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:user uuid="3b52040d601a4a56abcaf3f046f1e349">tempest-ServerStableDeviceRescueTest-1105304301-project-member</nova:user>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:project uuid="358970eca7ad4b05b70f43e5507ac052">tempest-ServerStableDeviceRescueTest-1105304301</nova:project>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <nova:port uuid="7d3e9f63-03fd-471c-8eeb-dba78634e033">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <entry name="serial">9b6f3346-1230-472f-bd04-791d2367bebb</entry>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <entry name="uuid">9b6f3346-1230-472f-bd04-791d2367bebb</entry>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9b6f3346-1230-472f-bd04-791d2367bebb_disk">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9b6f3346-1230-472f-bd04-791d2367bebb_disk.config">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9b6f3346-1230-472f-bd04-791d2367bebb_disk.rescue">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <boot order="1"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e1:6f:ea"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:38 np0005539563 systemd[1]: Started libpod-conmon-a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805.scope.
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <target dev="tap7d3e9f63-03"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/console.log" append="off"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:20:38 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:20:38 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:20:38 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:20:38 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.451 252257 INFO nova.virt.libvirt.driver [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance destroyed successfully.
Nov 29 03:20:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.382921862 +0000 UTC m=+0.022243824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.48244956 +0000 UTC m=+0.121771502 container init a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.491102585 +0000 UTC m=+0.130424517 container start a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.495194867 +0000 UTC m=+0.134516809 container attach a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:20:38 np0005539563 hardcore_allen[339416]: 167 167
Nov 29 03:20:38 np0005539563 systemd[1]: libpod-a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805.scope: Deactivated successfully.
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.497772727 +0000 UTC m=+0.137094669 container died a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:20:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c9f747543a63c491b203e34d88bb0962a3c04b0324de1745cbd2b7ffa5009562-merged.mount: Deactivated successfully.
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.526 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.527 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.527 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.527 252257 DEBUG nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] No VIF found with MAC fa:16:3e:e1:6f:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.528 252257 INFO nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Using config drive
Nov 29 03:20:38 np0005539563 podman[339398]: 2025-11-29 08:20:38.537604436 +0000 UTC m=+0.176926378 container remove a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_allen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:20:38 np0005539563 systemd[1]: libpod-conmon-a2e99f4514f981e7da5669eedd9fe89c73db1cc118180546b1884c14564d5805.scope: Deactivated successfully.
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.554 252257 DEBUG nova.storage.rbd_utils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.580 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.618 252257 DEBUG nova.objects.instance [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'keypairs' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:38 np0005539563 podman[339458]: 2025-11-29 08:20:38.699384023 +0000 UTC m=+0.039571134 container create 8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:38 np0005539563 systemd[1]: Started libpod-conmon-8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3.scope.
Nov 29 03:20:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:20:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ad712a548b4b664fe8498771f2ddf767858816ef66288e1dbd6f7283ac54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ad712a548b4b664fe8498771f2ddf767858816ef66288e1dbd6f7283ac54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ad712a548b4b664fe8498771f2ddf767858816ef66288e1dbd6f7283ac54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ad712a548b4b664fe8498771f2ddf767858816ef66288e1dbd6f7283ac54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:38 np0005539563 podman[339458]: 2025-11-29 08:20:38.773931875 +0000 UTC m=+0.114118996 container init 8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:20:38 np0005539563 podman[339458]: 2025-11-29 08:20:38.682754583 +0000 UTC m=+0.022941724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:38 np0005539563 podman[339458]: 2025-11-29 08:20:38.780900745 +0000 UTC m=+0.121087856 container start 8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:20:38 np0005539563 podman[339458]: 2025-11-29 08:20:38.784167293 +0000 UTC m=+0.124354424 container attach 8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:20:38 np0005539563 nova_compute[252253]: 2025-11-29 08:20:38.996 252257 INFO nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Creating config drive at /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config.rescue
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.001 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq7cvm_px execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.140 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq7cvm_px" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.167 252257 DEBUG nova.storage.rbd_utils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] rbd image 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.171 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config.rescue 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.329 252257 DEBUG oslo_concurrency.processutils [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config.rescue 9b6f3346-1230-472f-bd04-791d2367bebb_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.330 252257 INFO nova.virt.libvirt.driver [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Deleting local config drive /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb/disk.config.rescue because it was imported into RBD.
Nov 29 03:20:39 np0005539563 kernel: tap7d3e9f63-03: entered promiscuous mode
Nov 29 03:20:39 np0005539563 NetworkManager[48981]: <info>  [1764404439.3908] manager: (tap7d3e9f63-03): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Nov 29 03:20:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:39Z|00566|binding|INFO|Claiming lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 for this chassis.
Nov 29 03:20:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:39Z|00567|binding|INFO|7d3e9f63-03fd-471c-8eeb-dba78634e033: Claiming fa:16:3e:e1:6f:ea 10.100.0.3
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.453 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.470 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:6f:ea 10.100.0.3'], port_security=['fa:16:3e:e1:6f:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b6f3346-1230-472f-bd04-791d2367bebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '5', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7d3e9f63-03fd-471c-8eeb-dba78634e033) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.471 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7d3e9f63-03fd-471c-8eeb-dba78634e033 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.473 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1
Nov 29 03:20:39 np0005539563 systemd-udevd[339535]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:39Z|00568|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 ovn-installed in OVS
Nov 29 03:20:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:39Z|00569|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 up in Southbound
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.480 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:39 np0005539563 systemd-machined[213024]: New machine qemu-68-instance-0000008c.
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.490 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[80c6e518-736f-4935-87f9-27e1837357dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:39 np0005539563 systemd[1]: Started Virtual Machine qemu-68-instance-0000008c.
Nov 29 03:20:39 np0005539563 NetworkManager[48981]: <info>  [1764404439.4954] device (tap7d3e9f63-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:39 np0005539563 NetworkManager[48981]: <info>  [1764404439.4967] device (tap7d3e9f63-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.516 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[acc2d49a-5b33-4a75-a25f-64de13f912d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.519 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[53aa7745-ae99-4b45-9a98-1b067e22bdc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.542 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d268de-8b1a-401b-90a5-3b06dc5c1cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:39.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.557 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[70f90ec2-61d7-4ea4-8634-249caf4f00cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339555, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.577 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[10f00c47-3a32-4807-ab60-5859c1876073]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731778, 'tstamp': 731778}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339560, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731782, 'tstamp': 731782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339560, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.579 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.581 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:39 np0005539563 nova_compute[252253]: 2025-11-29 08:20:39.582 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.584 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.585 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.585 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:39.585 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]: {
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:        "osd_id": 0,
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:        "type": "bluestore"
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]:    }
Nov 29 03:20:39 np0005539563 affectionate_mclean[339475]: }
Nov 29 03:20:39 np0005539563 systemd[1]: libpod-8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3.scope: Deactivated successfully.
Nov 29 03:20:39 np0005539563 podman[339458]: 2025-11-29 08:20:39.647384173 +0000 UTC m=+0.987571284 container died 8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3e38ad712a548b4b664fe8498771f2ddf767858816ef66288e1dbd6f7283ac54-merged.mount: Deactivated successfully.
Nov 29 03:20:39 np0005539563 podman[339458]: 2025-11-29 08:20:39.698777246 +0000 UTC m=+1.038964357 container remove 8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:20:39 np0005539563 systemd[1]: libpod-conmon-8d3b0ccfc51645c9bf80dad1a1935073b10ceb7643e797f4bf0d6a9fdca7e6b3.scope: Deactivated successfully.
Nov 29 03:20:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:20:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:20:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:39 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 429be2dd-5d49-4b8f-92ae-cb1bb0a49afd does not exist
Nov 29 03:20:39 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 39043e37-49b2-4c3a-841a-a15d7d014470 does not exist
Nov 29 03:20:39 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bf16059a-e150-499f-9a6c-448e23ec35eb does not exist
Nov 29 03:20:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.291 252257 DEBUG nova.compute.manager [req-ff002c78-caa7-408b-9619-d3894ff340c8 req-0309c53c-ef70-43db-bc99-63cfea1a6cee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.292 252257 DEBUG oslo_concurrency.lockutils [req-ff002c78-caa7-408b-9619-d3894ff340c8 req-0309c53c-ef70-43db-bc99-63cfea1a6cee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.292 252257 DEBUG oslo_concurrency.lockutils [req-ff002c78-caa7-408b-9619-d3894ff340c8 req-0309c53c-ef70-43db-bc99-63cfea1a6cee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.292 252257 DEBUG oslo_concurrency.lockutils [req-ff002c78-caa7-408b-9619-d3894ff340c8 req-0309c53c-ef70-43db-bc99-63cfea1a6cee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.292 252257 DEBUG nova.compute.manager [req-ff002c78-caa7-408b-9619-d3894ff340c8 req-0309c53c-ef70-43db-bc99-63cfea1a6cee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.293 252257 WARNING nova.compute.manager [req-ff002c78-caa7-408b-9619-d3894ff340c8 req-0309c53c-ef70-43db-bc99-63cfea1a6cee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.385 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 9b6f3346-1230-472f-bd04-791d2367bebb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.385 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404440.3845534, 9b6f3346-1230-472f-bd04-791d2367bebb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.385 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.389 252257 DEBUG nova.compute.manager [None req-2143d7fd-cae6-42de-8a51-36be34954a31 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.2 MiB/s wr, 298 op/s
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.435 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.439 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.482 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.483 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404440.3872166, 9b6f3346-1230-472f-bd04-791d2367bebb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.483 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.503 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.508 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:40 np0005539563 nova_compute[252253]: 2025-11-29 08:20:40.986 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.120 252257 INFO nova.compute.manager [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Unrescuing#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.121 252257 DEBUG oslo_concurrency.lockutils [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.121 252257 DEBUG oslo_concurrency.lockutils [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquired lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.121 252257 DEBUG nova.network.neutron [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.406 252257 DEBUG nova.compute.manager [req-52c6b425-2fd2-4d6f-b162-009bc4e23c9e req-88691657-3fb1-4808-91f4-65204423ef51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.407 252257 DEBUG oslo_concurrency.lockutils [req-52c6b425-2fd2-4d6f-b162-009bc4e23c9e req-88691657-3fb1-4808-91f4-65204423ef51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.408 252257 DEBUG oslo_concurrency.lockutils [req-52c6b425-2fd2-4d6f-b162-009bc4e23c9e req-88691657-3fb1-4808-91f4-65204423ef51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.408 252257 DEBUG oslo_concurrency.lockutils [req-52c6b425-2fd2-4d6f-b162-009bc4e23c9e req-88691657-3fb1-4808-91f4-65204423ef51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.409 252257 DEBUG nova.compute.manager [req-52c6b425-2fd2-4d6f-b162-009bc4e23c9e req-88691657-3fb1-4808-91f4-65204423ef51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.409 252257 WARNING nova.compute.manager [req-52c6b425-2fd2-4d6f-b162-009bc4e23c9e req-88691657-3fb1-4808-91f4-65204423ef51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:20:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.1 MiB/s wr, 215 op/s
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.643 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:42 np0005539563 nova_compute[252253]: 2025-11-29 08:20:42.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:43.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:20:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.85 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8780 writes, 35K keys, 8780 commit groups, 1.0 writes per commit group, ingest: 35.09 MB, 0.06 MB/s#012Interval WAL: 8780 writes, 3576 syncs, 2.46 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:20:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.350 252257 DEBUG nova.network.neutron [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Updating instance_info_cache with network_info: [{"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.368 252257 DEBUG oslo_concurrency.lockutils [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Releasing lock "refresh_cache-9b6f3346-1230-472f-bd04-791d2367bebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.369 252257 DEBUG nova.objects.instance [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'flavor' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:45.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:45 np0005539563 kernel: tap7d3e9f63-03 (unregistering): left promiscuous mode
Nov 29 03:20:45 np0005539563 NetworkManager[48981]: <info>  [1764404445.4527] device (tap7d3e9f63-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00570|binding|INFO|Releasing lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 from this chassis (sb_readonly=0)
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00571|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 down in Southbound
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00572|binding|INFO|Removing iface tap7d3e9f63-03 ovn-installed in OVS
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.464 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.472 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:6f:ea 10.100.0.3'], port_security=['fa:16:3e:e1:6f:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b6f3346-1230-472f-bd04-791d2367bebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '6', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7d3e9f63-03fd-471c-8eeb-dba78634e033) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.473 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7d3e9f63-03fd-471c-8eeb-dba78634e033 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.475 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.485 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.492 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c60b226b-e26b-41eb-9414-946c2cce14e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 29 03:20:45 np0005539563 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008c.scope: Consumed 5.896s CPU time.
Nov 29 03:20:45 np0005539563 systemd-machined[213024]: Machine qemu-68-instance-0000008c terminated.
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.520 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1a74efc9-f0d5-4b0d-8e03-f990aaeb52a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.523 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a48d934f-0605-4278-9463-b4266ee48838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.548 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a6306de7-b582-480f-bd96-fb23847b33c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.568 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e65d3816-8c21-4934-921d-54c959cdf86c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339704, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.582 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f9868395-6edf-4286-a972-2727693c17cf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731778, 'tstamp': 731778}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339705, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731782, 'tstamp': 731782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339705, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.583 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.585 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.588 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.589 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.589 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.590 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.630 252257 DEBUG nova.compute.manager [req-cc16094e-5e4f-41aa-840b-3e9c4137df87 req-e55a0715-6c97-476e-85a0-45c7f91c6983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.630 252257 DEBUG oslo_concurrency.lockutils [req-cc16094e-5e4f-41aa-840b-3e9c4137df87 req-e55a0715-6c97-476e-85a0-45c7f91c6983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.631 252257 DEBUG oslo_concurrency.lockutils [req-cc16094e-5e4f-41aa-840b-3e9c4137df87 req-e55a0715-6c97-476e-85a0-45c7f91c6983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.631 252257 DEBUG oslo_concurrency.lockutils [req-cc16094e-5e4f-41aa-840b-3e9c4137df87 req-e55a0715-6c97-476e-85a0-45c7f91c6983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.631 252257 DEBUG nova.compute.manager [req-cc16094e-5e4f-41aa-840b-3e9c4137df87 req-e55a0715-6c97-476e-85a0-45c7f91c6983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.631 252257 WARNING nova.compute.manager [req-cc16094e-5e4f-41aa-840b-3e9c4137df87 req-e55a0715-6c97-476e-85a0-45c7f91c6983 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.633 252257 INFO nova.virt.libvirt.driver [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance destroyed successfully.#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.634 252257 DEBUG nova.objects.instance [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:45 np0005539563 kernel: tap7d3e9f63-03: entered promiscuous mode
Nov 29 03:20:45 np0005539563 NetworkManager[48981]: <info>  [1764404445.7241] manager: (tap7d3e9f63-03): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Nov 29 03:20:45 np0005539563 systemd-udevd[339694]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00573|binding|INFO|Claiming lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 for this chassis.
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00574|binding|INFO|7d3e9f63-03fd-471c-8eeb-dba78634e033: Claiming fa:16:3e:e1:6f:ea 10.100.0.3
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.731 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:6f:ea 10.100.0.3'], port_security=['fa:16:3e:e1:6f:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b6f3346-1230-472f-bd04-791d2367bebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '7', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7d3e9f63-03fd-471c-8eeb-dba78634e033) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.731 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7d3e9f63-03fd-471c-8eeb-dba78634e033 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 bound to our chassis#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.733 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:20:45 np0005539563 NetworkManager[48981]: <info>  [1764404445.7379] device (tap7d3e9f63-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:45 np0005539563 NetworkManager[48981]: <info>  [1764404445.7392] device (tap7d3e9f63-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.750 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4b4444-e07e-4f94-8ded-77c34209b9bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00575|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 ovn-installed in OVS
Nov 29 03:20:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:20:45Z|00576|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 up in Southbound
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.756 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 systemd-machined[213024]: New machine qemu-69-instance-0000008c.
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.763 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 systemd[1]: Started Virtual Machine qemu-69-instance-0000008c.
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.788 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[98db6b9e-c759-409d-933f-2b9b67748245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.791 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f1abae9e-201d-4d5c-859a-7effd03f84f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.819 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[331f31e5-062c-4672-b348-34dbb0e2792b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.835 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6469c2cb-64b9-492f-a0f1-94f4cf77841d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 15596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339743, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.848 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[de62f8aa-0049-4cf2-813f-a20309b8c979]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731778, 'tstamp': 731778}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339744, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731782, 'tstamp': 731782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339744, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.850 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.851 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.852 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.853 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.853 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.853 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:20:45.854 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:45 np0005539563 nova_compute[252253]: 2025-11-29 08:20:45.987 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.150 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 9b6f3346-1230-472f-bd04-791d2367bebb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.150 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404446.14915, 9b6f3346-1230-472f-bd04-791d2367bebb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.150 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.175 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.180 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.198 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.199 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404446.152728, 9b6f3346-1230-472f-bd04-791d2367bebb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.199 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.218 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.222 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.240 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:20:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 29 03:20:46 np0005539563 nova_compute[252253]: 2025-11-29 08:20:46.544 252257 DEBUG nova.compute.manager [None req-b08a177e-51fd-4de0-b2b9-1cf9d933d9d7 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:47.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.646 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.738 252257 DEBUG nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.739 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.739 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.739 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.740 252257 DEBUG nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.740 252257 WARNING nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.740 252257 DEBUG nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.740 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.740 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.741 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.741 252257 DEBUG nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.741 252257 WARNING nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.743 252257 DEBUG nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.743 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.744 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.744 252257 DEBUG oslo_concurrency.lockutils [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.745 252257 DEBUG nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:47 np0005539563 nova_compute[252253]: 2025-11-29 08:20:47.745 252257 WARNING nova.compute.manager [req-904aa3e4-706a-41d8-a403-1590da6a5d9d req-c052eca9-8553-4fa2-a8ee-b369bf0fac05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.4 MiB/s wr, 345 op/s
Nov 29 03:20:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:49.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:49.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 3.7 MiB/s wr, 457 op/s
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.515 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.516 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.529 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.604 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.605 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.612 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.612 252257 INFO nova.compute.claims [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.782 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:50 np0005539563 nova_compute[252253]: 2025-11-29 08:20:50.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471549231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.246 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.253 252257 DEBUG nova.compute.provider_tree [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.273 252257 DEBUG nova.scheduler.client.report [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.299 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.300 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.350 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.350 252257 DEBUG nova.network.neutron [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.395 252257 INFO nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.427 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.529 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.530 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.531 252257 INFO nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Creating image(s)#033[00m
Nov 29 03:20:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:51.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.567 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.596 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.622 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.625 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.655 252257 DEBUG nova.policy [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b9a756606a84398819fa76cc6ce9ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a738c288b1654ec58416b0da60aacb69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.701 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.702 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.702 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.703 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.733 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:20:51 np0005539563 nova_compute[252253]: 2025-11-29 08:20:51.737 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.019 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.110 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] resizing rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.277 252257 DEBUG nova.objects.instance [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'migration_context' on Instance uuid 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.294 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.294 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Ensure instance console log exists: /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.295 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.295 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.296 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 97 KiB/s wr, 333 op/s
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.597 252257 DEBUG nova.network.neutron [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Successfully created port: 39bedb41-5eaf-4edd-8178-376b6456b337 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:20:52 np0005539563 nova_compute[252253]: 2025-11-29 08:20:52.648 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:53.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:53.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.702 252257 DEBUG nova.network.neutron [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Successfully updated port: 39bedb41-5eaf-4edd-8178-376b6456b337 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:20:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.732 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "refresh_cache-7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.733 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquired lock "refresh_cache-7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.733 252257 DEBUG nova.network.neutron [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.851 252257 DEBUG nova.compute.manager [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-changed-39bedb41-5eaf-4edd-8178-376b6456b337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.852 252257 DEBUG nova.compute.manager [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Refreshing instance network info cache due to event network-changed-39bedb41-5eaf-4edd-8178-376b6456b337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.853 252257 DEBUG oslo_concurrency.lockutils [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:20:53 np0005539563 nova_compute[252253]: 2025-11-29 08:20:53.873 252257 DEBUG nova.network.neutron [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:20:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 829 KiB/s wr, 399 op/s
Nov 29 03:20:54 np0005539563 nova_compute[252253]: 2025-11-29 08:20:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:20:54 np0005539563 nova_compute[252253]: 2025-11-29 08:20:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:20:54 np0005539563 nova_compute[252253]: 2025-11-29 08:20:54.965 252257 DEBUG nova.network.neutron [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Updating instance_info_cache with network_info: [{"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:20:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 03:20:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:55.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.494 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Releasing lock "refresh_cache-7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.495 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Instance network_info: |[{"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.496 252257 DEBUG oslo_concurrency.lockutils [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.497 252257 DEBUG nova.network.neutron [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Refreshing network info cache for port 39bedb41-5eaf-4edd-8178-376b6456b337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.501 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Start _get_guest_xml network_info=[{"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.507 252257 WARNING nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.512 252257 DEBUG nova.virt.libvirt.host [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.513 252257 DEBUG nova.virt.libvirt.host [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.515 252257 DEBUG nova.virt.libvirt.host [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.516 252257 DEBUG nova.virt.libvirt.host [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.518 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.518 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.519 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.520 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.520 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.520 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.521 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.521 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.522 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.522 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.522 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.523 252257 DEBUG nova.virt.hardware [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.526 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:55.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:55 np0005539563 nova_compute[252253]: 2025-11-29 08:20:55.992 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066490339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.019 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.057 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.063 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 1.1 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.2 MiB/s rd, 1.8 MiB/s wr, 453 op/s
Nov 29 03:20:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242993027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.492 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.495 252257 DEBUG nova.virt.libvirt.vif [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1402160904',display_name='tempest-ServersTestJSON-server-1402160904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1402160904',id=145,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-5rgh1tth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:51Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.496 252257 DEBUG nova.network.os_vif_util [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.498 252257 DEBUG nova.network.os_vif_util [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.500 252257 DEBUG nova.objects.instance [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.592 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <uuid>7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8</uuid>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <name>instance-00000091</name>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersTestJSON-server-1402160904</nova:name>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:20:55</nova:creationTime>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:user uuid="3b9a756606a84398819fa76cc6ce9ecd">tempest-ServersTestJSON-1672739819-project-member</nova:user>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:project uuid="a738c288b1654ec58416b0da60aacb69">tempest-ServersTestJSON-1672739819</nova:project>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <nova:port uuid="39bedb41-5eaf-4edd-8178-376b6456b337">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <entry name="serial">7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8</entry>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <entry name="uuid">7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8</entry>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk.config">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:ad:65:fb"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <target dev="tap39bedb41-5e"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/console.log" append="off"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:20:56 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:20:56 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:20:56 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:20:56 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.600 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Preparing to wait for external event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.601 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.601 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.602 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.602 252257 DEBUG nova.virt.libvirt.vif [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:20:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1402160904',display_name='tempest-ServersTestJSON-server-1402160904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1402160904',id=145,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-5rgh1tth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-proje
ct-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:51Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.603 252257 DEBUG nova.network.os_vif_util [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.603 252257 DEBUG nova.network.os_vif_util [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.604 252257 DEBUG os_vif [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.604 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.605 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.605 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.609 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.609 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39bedb41-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.610 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39bedb41-5e, col_values=(('external_ids', {'iface-id': '39bedb41-5eaf-4edd-8178-376b6456b337', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:65:fb', 'vm-uuid': '7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.611 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:56 np0005539563 NetworkManager[48981]: <info>  [1764404456.6130] manager: (tap39bedb41-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.613 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.625 252257 INFO os_vif [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e')#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.786 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.787 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.787 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] No VIF found with MAC fa:16:3e:ad:65:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.787 252257 INFO nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Using config drive#033[00m
Nov 29 03:20:56 np0005539563 nova_compute[252253]: 2025-11-29 08:20:56.813 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:57.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:57.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:57 np0005539563 nova_compute[252253]: 2025-11-29 08:20:57.650 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 1.1 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 2.2 MiB/s wr, 385 op/s
Nov 29 03:20:58 np0005539563 nova_compute[252253]: 2025-11-29 08:20:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:58 np0005539563 nova_compute[252253]: 2025-11-29 08:20:58.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:20:58 np0005539563 nova_compute[252253]: 2025-11-29 08:20:58.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:20:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:58 np0005539563 nova_compute[252253]: 2025-11-29 08:20:58.890 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:20:59 np0005539563 nova_compute[252253]: 2025-11-29 08:20:59.351 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:59 np0005539563 nova_compute[252253]: 2025-11-29 08:20:59.351 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:59 np0005539563 nova_compute[252253]: 2025-11-29 08:20:59.352 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:20:59 np0005539563 nova_compute[252253]: 2025-11-29 08:20:59.352 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:59.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:20:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:20:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:20:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:59.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:00Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:6f:ea 10.100.0.3
Nov 29 03:21:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 1.1 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.5 MiB/s wr, 396 op/s
Nov 29 03:21:00 np0005539563 nova_compute[252253]: 2025-11-29 08:21:00.943 252257 INFO nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Creating config drive at /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/disk.config#033[00m
Nov 29 03:21:00 np0005539563 nova_compute[252253]: 2025-11-29 08:21:00.951 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpph5b4xvp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:01 np0005539563 nova_compute[252253]: 2025-11-29 08:21:01.089 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpph5b4xvp" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:01.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:01.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:01 np0005539563 nova_compute[252253]: 2025-11-29 08:21:01.665 252257 DEBUG nova.storage.rbd_utils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] rbd image 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:01 np0005539563 nova_compute[252253]: 2025-11-29 08:21:01.669 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/disk.config 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:01 np0005539563 nova_compute[252253]: 2025-11-29 08:21:01.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:01 np0005539563 nova_compute[252253]: 2025-11-29 08:21:01.960 252257 DEBUG oslo_concurrency.processutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/disk.config 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:01 np0005539563 nova_compute[252253]: 2025-11-29 08:21:01.962 252257 INFO nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Deleting local config drive /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:02 np0005539563 NetworkManager[48981]: <info>  [1764404462.0442] manager: (tap39bedb41-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Nov 29 03:21:02 np0005539563 kernel: tap39bedb41-5e: entered promiscuous mode
Nov 29 03:21:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:02Z|00577|binding|INFO|Claiming lport 39bedb41-5eaf-4edd-8178-376b6456b337 for this chassis.
Nov 29 03:21:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:02Z|00578|binding|INFO|39bedb41-5eaf-4edd-8178-376b6456b337: Claiming fa:16:3e:ad:65:fb 10.100.0.4
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.102 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:65:fb 10.100.0.4'], port_security=['fa:16:3e:ad:65:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39bedb41-5eaf-4edd-8178-376b6456b337) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.103 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39bedb41-5eaf-4edd-8178-376b6456b337 in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 bound to our chassis#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.105 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400#033[00m
Nov 29 03:21:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:02Z|00579|binding|INFO|Setting lport 39bedb41-5eaf-4edd-8178-376b6456b337 ovn-installed in OVS
Nov 29 03:21:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:02Z|00580|binding|INFO|Setting lport 39bedb41-5eaf-4edd-8178-376b6456b337 up in Southbound
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.116 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.121 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8afb55b0-601d-4871-8abc-62281f5107b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:02 np0005539563 systemd-udevd[340190]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:02 np0005539563 systemd-machined[213024]: New machine qemu-70-instance-00000091.
Nov 29 03:21:02 np0005539563 NetworkManager[48981]: <info>  [1764404462.1366] device (tap39bedb41-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:02 np0005539563 NetworkManager[48981]: <info>  [1764404462.1373] device (tap39bedb41-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:02 np0005539563 systemd[1]: Started Virtual Machine qemu-70-instance-00000091.
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.153 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd4e32a-da2c-4f7f-a909-ac89451067e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.156 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fc4da9-8fc1-47c0-bbbf-f42454ada254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.181 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffebe73-6904-4df6-9c46-39ef5a5b82af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.197 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d643e4aa-11c2-4d15-8d45-1a40db0671bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735351, 'reachable_time': 23061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340203, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.212 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[deaa53b3-d2e5-4fe1-a3e6-2aaa6a177419]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735365, 'tstamp': 735365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340204, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735369, 'tstamp': 735369}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340204, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.213 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.215 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.218 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.218 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.218 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:02.218 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 1.1 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.5 MiB/s wr, 250 op/s
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.653 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.708 252257 DEBUG nova.network.neutron [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Updated VIF entry in instance network info cache for port 39bedb41-5eaf-4edd-8178-376b6456b337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.708 252257 DEBUG nova.network.neutron [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Updating instance_info_cache with network_info: [{"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Nov 29 03:21:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 03:21:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.973 252257 DEBUG nova.compute.manager [req-843c037f-7a9b-42e2-ab99-6369308e9c9e req-4eaf8186-0d5e-413e-a9de-4ef8a5a4b9bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.974 252257 DEBUG oslo_concurrency.lockutils [req-843c037f-7a9b-42e2-ab99-6369308e9c9e req-4eaf8186-0d5e-413e-a9de-4ef8a5a4b9bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.975 252257 DEBUG oslo_concurrency.lockutils [req-843c037f-7a9b-42e2-ab99-6369308e9c9e req-4eaf8186-0d5e-413e-a9de-4ef8a5a4b9bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.975 252257 DEBUG oslo_concurrency.lockutils [req-843c037f-7a9b-42e2-ab99-6369308e9c9e req-4eaf8186-0d5e-413e-a9de-4ef8a5a4b9bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:02 np0005539563 nova_compute[252253]: 2025-11-29 08:21:02.975 252257 DEBUG nova.compute.manager [req-843c037f-7a9b-42e2-ab99-6369308e9c9e req-4eaf8186-0d5e-413e-a9de-4ef8a5a4b9bc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Processing event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.014 252257 DEBUG oslo_concurrency.lockutils [req-12c2e5fc-9ce0-439a-8468-99014adbd309 req-30f1d9a7-18f7-4ef3-b70c-a2bfc3c71be5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.108 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404463.1076007, 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.108 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.110 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.113 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.117 252257 INFO nova.virt.libvirt.driver [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Instance spawned successfully.#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.118 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.130 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.134 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.147 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.147 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.148 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.148 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.148 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.149 252257 DEBUG nova.virt.libvirt.driver [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.159 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.160 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404463.108513, 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.160 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.195 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.199 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404463.112726, 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.200 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.230 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.234 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.239 252257 INFO nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Took 11.71 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.240 252257 DEBUG nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.256 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.309 252257 INFO nova.compute.manager [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Took 12.73 seconds to build instance.#033[00m
Nov 29 03:21:03 np0005539563 nova_compute[252253]: 2025-11-29 08:21:03.326 252257 DEBUG oslo_concurrency.lockutils [None req-be3d8f41-990e-4c97-b84e-c64a23ed7094 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:03.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:04.217 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:04.217 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:04 np0005539563 nova_compute[252253]: 2025-11-29 08:21:04.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 1.1 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.9 MiB/s wr, 252 op/s
Nov 29 03:21:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:04.926 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:04.927 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:04.928 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:05.219 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.322 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [{"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:05.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:05 np0005539563 podman[340250]: 2025-11-29 08:21:05.521725283 +0000 UTC m=+0.065606920 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:21:05 np0005539563 podman[340251]: 2025-11-29 08:21:05.534685775 +0000 UTC m=+0.079792386 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:21:05 np0005539563 podman[340252]: 2025-11-29 08:21:05.568531042 +0000 UTC m=+0.103912258 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:21:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:05.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.586 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-1fad2d6f-5a00-43ad-af43-00916509fc61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.586 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.587 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.587 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.587 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.588 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.614 252257 DEBUG nova.compute.manager [req-0d13185d-c843-4a13-8686-07c4bad4a9d3 req-f77c1a21-0f32-47c1-b8d0-6f6e2f4fbb51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.614 252257 DEBUG oslo_concurrency.lockutils [req-0d13185d-c843-4a13-8686-07c4bad4a9d3 req-f77c1a21-0f32-47c1-b8d0-6f6e2f4fbb51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.615 252257 DEBUG oslo_concurrency.lockutils [req-0d13185d-c843-4a13-8686-07c4bad4a9d3 req-f77c1a21-0f32-47c1-b8d0-6f6e2f4fbb51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.615 252257 DEBUG oslo_concurrency.lockutils [req-0d13185d-c843-4a13-8686-07c4bad4a9d3 req-f77c1a21-0f32-47c1-b8d0-6f6e2f4fbb51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.615 252257 DEBUG nova.compute.manager [req-0d13185d-c843-4a13-8686-07c4bad4a9d3 req-f77c1a21-0f32-47c1-b8d0-6f6e2f4fbb51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] No waiting events found dispatching network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.616 252257 WARNING nova.compute.manager [req-0d13185d-c843-4a13-8686-07c4bad4a9d3 req-f77c1a21-0f32-47c1-b8d0-6f6e2f4fbb51 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received unexpected event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.622 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.623 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.623 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.623 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:21:05 np0005539563 nova_compute[252253]: 2025-11-29 08:21:05.624 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457919789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.085 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.192 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.192 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.196 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.197 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.201 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.202 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.206 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.206 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:21:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 989 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 308 op/s
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.450 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.451 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=20.551525115966797GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.452 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.452 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.706 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.763 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 1fad2d6f-5a00-43ad-af43-00916509fc61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.764 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 0d1eac76-3b6b-4734-a481-9b315b2ae484 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.764 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 9b6f3346-1230-472f-bd04-791d2367bebb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.764 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.764 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.764 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:21:06 np0005539563 nova_compute[252253]: 2025-11-29 08:21:06.877 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042416663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.316 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.321 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.340 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.375 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.375 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:07.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:07.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.786 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.787 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.787 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.787 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.788 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.789 252257 INFO nova.compute.manager [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Terminating instance#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.790 252257 DEBUG nova.compute.manager [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:21:07 np0005539563 kernel: tap39bedb41-5e (unregistering): left promiscuous mode
Nov 29 03:21:07 np0005539563 NetworkManager[48981]: <info>  [1764404467.8948] device (tap39bedb41-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:07Z|00581|binding|INFO|Releasing lport 39bedb41-5eaf-4edd-8178-376b6456b337 from this chassis (sb_readonly=0)
Nov 29 03:21:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:07Z|00582|binding|INFO|Setting lport 39bedb41-5eaf-4edd-8178-376b6456b337 down in Southbound
Nov 29 03:21:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:07Z|00583|binding|INFO|Removing iface tap39bedb41-5e ovn-installed in OVS
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.899 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.901 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:07.939 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:65:fb 10.100.0.4'], port_security=['fa:16:3e:ad:65:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=39bedb41-5eaf-4edd-8178-376b6456b337) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:07.941 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 39bedb41-5eaf-4edd-8178-376b6456b337 in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis#033[00m
Nov 29 03:21:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:07.946 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 97e6ef02-6896-45a2-9eb9-28926c1a7400#033[00m
Nov 29 03:21:07 np0005539563 nova_compute[252253]: 2025-11-29 08:21:07.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539563 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 29 03:21:07 np0005539563 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000091.scope: Consumed 5.739s CPU time.
Nov 29 03:21:07 np0005539563 systemd-machined[213024]: Machine qemu-70-instance-00000091 terminated.
Nov 29 03:21:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:07.963 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0acf6dab-6569-46b4-a0a8-2c0b05a0fd88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.002 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6184b3e5-b681-41c8-a095-817830ce815f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.004 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e32d14f4-4241-4c86-86d9-db1162c26a4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.030 252257 INFO nova.virt.libvirt.driver [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Instance destroyed successfully.#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.031 252257 DEBUG nova.objects.instance [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.036 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[991f749a-126d-4b50-88d4-ba4f3974919e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.045 252257 DEBUG nova.virt.libvirt.vif [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1402160904',display_name='tempest-ServersTestJSON-server-1402160904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1402160904',id=145,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-5rgh1tth',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:03Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.045 252257 DEBUG nova.network.os_vif_util [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "39bedb41-5eaf-4edd-8178-376b6456b337", "address": "fa:16:3e:ad:65:fb", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39bedb41-5e", "ovs_interfaceid": "39bedb41-5eaf-4edd-8178-376b6456b337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.046 252257 DEBUG nova.network.os_vif_util [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.047 252257 DEBUG os_vif [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.049 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.049 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39bedb41-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.054 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eeb19b-412a-4977-bf25-bc1fd5fdab5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap97e6ef02-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:de:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735351, 'reachable_time': 23061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340379, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.056 252257 INFO os_vif [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:65:fb,bridge_name='br-int',has_traffic_filtering=True,id=39bedb41-5eaf-4edd-8178-376b6456b337,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39bedb41-5e')#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.069 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5caeed-123d-4241-838e-cf442beb4919]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735365, 'tstamp': 735365}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340380, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap97e6ef02-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735369, 'tstamp': 735369}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340380, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.070 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.074 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97e6ef02-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.074 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.075 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap97e6ef02-60, col_values=(('external_ids', {'iface-id': 'ea7a63c4-c071-447c-8225-8a48ff4b56c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:08.076 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.077 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.285 252257 DEBUG nova.compute.manager [req-a819f7e6-ded1-46a4-b573-8bd8288f2516 req-77e82ac7-34c8-4e8a-a178-267633edc8d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-vif-unplugged-39bedb41-5eaf-4edd-8178-376b6456b337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.285 252257 DEBUG oslo_concurrency.lockutils [req-a819f7e6-ded1-46a4-b573-8bd8288f2516 req-77e82ac7-34c8-4e8a-a178-267633edc8d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.285 252257 DEBUG oslo_concurrency.lockutils [req-a819f7e6-ded1-46a4-b573-8bd8288f2516 req-77e82ac7-34c8-4e8a-a178-267633edc8d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.286 252257 DEBUG oslo_concurrency.lockutils [req-a819f7e6-ded1-46a4-b573-8bd8288f2516 req-77e82ac7-34c8-4e8a-a178-267633edc8d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.286 252257 DEBUG nova.compute.manager [req-a819f7e6-ded1-46a4-b573-8bd8288f2516 req-77e82ac7-34c8-4e8a-a178-267633edc8d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] No waiting events found dispatching network-vif-unplugged-39bedb41-5eaf-4edd-8178-376b6456b337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.286 252257 DEBUG nova.compute.manager [req-a819f7e6-ded1-46a4-b573-8bd8288f2516 req-77e82ac7-34c8-4e8a-a178-267633edc8d3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-vif-unplugged-39bedb41-5eaf-4edd-8178-376b6456b337 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:21:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 959 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.2 MiB/s wr, 333 op/s
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.465 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.476 252257 INFO nova.virt.libvirt.driver [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Deleting instance files /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_del
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.477 252257 INFO nova.virt.libvirt.driver [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Deletion of /var/lib/nova/instances/7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8_del complete
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.525 252257 INFO nova.compute.manager [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.526 252257 DEBUG oslo.service.loopingcall [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.526 252257 DEBUG nova.compute.manager [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:21:08 np0005539563 nova_compute[252253]: 2025-11-29 08:21:08.526 252257 DEBUG nova.network.neutron [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:21:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:09.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:09 np0005539563 nova_compute[252253]: 2025-11-29 08:21:09.498 252257 DEBUG nova.network.neutron [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:21:09 np0005539563 nova_compute[252253]: 2025-11-29 08:21:09.516 252257 INFO nova.compute.manager [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Took 0.99 seconds to deallocate network for instance.
Nov 29 03:21:09 np0005539563 nova_compute[252253]: 2025-11-29 08:21:09.565 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:09 np0005539563 nova_compute[252253]: 2025-11-29 08:21:09.565 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:09 np0005539563 nova_compute[252253]: 2025-11-29 08:21:09.657 252257 DEBUG oslo_concurrency.processutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916480110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.085 252257 DEBUG oslo_concurrency.processutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.091 252257 DEBUG nova.compute.provider_tree [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.159 252257 DEBUG nova.scheduler.client.report [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.370 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 928 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 274 KiB/s wr, 275 op/s
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.457 252257 INFO nova.scheduler.client.report [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.589 252257 DEBUG nova.compute.manager [req-80bbccf5-6551-42e5-bb24-004c7fecba3a req-2837b6cf-c9a1-40ab-8412-42ef664b6907 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.590 252257 DEBUG oslo_concurrency.lockutils [req-80bbccf5-6551-42e5-bb24-004c7fecba3a req-2837b6cf-c9a1-40ab-8412-42ef664b6907 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.590 252257 DEBUG oslo_concurrency.lockutils [req-80bbccf5-6551-42e5-bb24-004c7fecba3a req-2837b6cf-c9a1-40ab-8412-42ef664b6907 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.591 252257 DEBUG oslo_concurrency.lockutils [req-80bbccf5-6551-42e5-bb24-004c7fecba3a req-2837b6cf-c9a1-40ab-8412-42ef664b6907 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.591 252257 DEBUG nova.compute.manager [req-80bbccf5-6551-42e5-bb24-004c7fecba3a req-2837b6cf-c9a1-40ab-8412-42ef664b6907 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] No waiting events found dispatching network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.591 252257 WARNING nova.compute.manager [req-80bbccf5-6551-42e5-bb24-004c7fecba3a req-2837b6cf-c9a1-40ab-8412-42ef664b6907 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received unexpected event network-vif-plugged-39bedb41-5eaf-4edd-8178-376b6456b337 for instance with vm_state deleted and task_state None.
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.622 252257 DEBUG oslo_concurrency.lockutils [None req-0ac08135-d467-41d4-88d4-7203617b38c1 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:10 np0005539563 nova_compute[252253]: 2025-11-29 08:21:10.933 252257 DEBUG nova.compute.manager [req-f02e2f73-44d8-499a-9c88-ec7834734253 req-09ac658c-dfdb-4ca3-bc93-9c3039cf0c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Received event network-vif-deleted-39bedb41-5eaf-4edd-8178-376b6456b337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:11.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:11.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 928 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 274 KiB/s wr, 275 op/s
Nov 29 03:21:12 np0005539563 nova_compute[252253]: 2025-11-29 08:21:12.657 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Nov 29 03:21:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:21:12
Nov 29 03:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 29 03:21:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:21:13 np0005539563 nova_compute[252253]: 2025-11-29 08:21:13.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:13.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:13.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:13 np0005539563 nova_compute[252253]: 2025-11-29 08:21:13.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Nov 29 03:21:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 03:21:13 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:21:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:21:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 878 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 31 KiB/s wr, 137 op/s
Nov 29 03:21:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:15.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:16 np0005539563 nova_compute[252253]: 2025-11-29 08:21:16.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:16 np0005539563 NetworkManager[48981]: <info>  [1764404476.0093] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Nov 29 03:21:16 np0005539563 NetworkManager[48981]: <info>  [1764404476.0107] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 29 03:21:16 np0005539563 nova_compute[252253]: 2025-11-29 08:21:16.209 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:16Z|00584|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:21:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:16Z|00585|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=0)
Nov 29 03:21:16 np0005539563 nova_compute[252253]: 2025-11-29 08:21:16.240 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 662 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 60 KiB/s wr, 293 op/s
Nov 29 03:21:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:17 np0005539563 nova_compute[252253]: 2025-11-29 08:21:17.661 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:18 np0005539563 nova_compute[252253]: 2025-11-29 08:21:18.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 628 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 33 KiB/s wr, 274 op/s
Nov 29 03:21:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:19.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 628 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 35 KiB/s wr, 281 op/s
Nov 29 03:21:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:21.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:21.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 628 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 29 KiB/s wr, 230 op/s
Nov 29 03:21:22 np0005539563 nova_compute[252253]: 2025-11-29 08:21:22.663 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:23 np0005539563 nova_compute[252253]: 2025-11-29 08:21:23.026 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404468.0247998, 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:21:23 np0005539563 nova_compute[252253]: 2025-11-29 08:21:23.026 252257 INFO nova.compute.manager [-] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] VM Stopped (Lifecycle Event)
Nov 29 03:21:23 np0005539563 nova_compute[252253]: 2025-11-29 08:21:23.055 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:23 np0005539563 nova_compute[252253]: 2025-11-29 08:21:23.232 252257 DEBUG nova.compute.manager [None req-5df552b1-550e-44c9-9b7f-654f8afbb421 - - - - - -] [instance: 7554a404-c3d0-4ab3-8dca-a13f6f7e6cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:21:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009676020437837335 of space, bias 1.0, pg target 2.9028061313512006 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6443182346919785 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Nov 29 03:21:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:23.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Nov 29 03:21:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 03:21:23 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Nov 29 03:21:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 632 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 60 KiB/s wr, 194 op/s
Nov 29 03:21:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:25Z|00586|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:21:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:25Z|00587|binding|INFO|Releasing lport ea7a63c4-c071-447c-8225-8a48ff4b56c5 from this chassis (sb_readonly=0)
Nov 29 03:21:25 np0005539563 nova_compute[252253]: 2025-11-29 08:21:25.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:25.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:25.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.157 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.157 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.172 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.246 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.246 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.254 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.255 252257 INFO nova.compute.claims [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:21:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.440 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808761927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.895 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.901 252257 DEBUG nova.compute.provider_tree [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.920 252257 DEBUG nova.scheduler.client.report [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.943 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.944 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.986 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:21:26 np0005539563 nova_compute[252253]: 2025-11-29 08:21:26.987 252257 DEBUG nova.network.neutron [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.012 252257 INFO nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.028 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.125 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.126 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.127 252257 INFO nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Creating image(s)#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.154 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.183 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.212 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.218 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.276 252257 DEBUG nova.policy [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09f1f8a0998948b7b96830d8559609f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.305 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.306 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.306 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.307 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.337 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.341 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:27.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:27 np0005539563 nova_compute[252253]: 2025-11-29 08:21:27.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.056 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 690 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.469 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.543 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] resizing rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:21:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.740 252257 DEBUG nova.network.neutron [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Successfully created port: d357d277-1d9f-42be-a5bc-31258c88b186 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.796 252257 DEBUG nova.objects.instance [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'migration_context' on Instance uuid aa6d3201-d2d3-4001-9e68-bd07dcc23b11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.811 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.811 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Ensure instance console log exists: /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.812 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.812 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:28 np0005539563 nova_compute[252253]: 2025-11-29 08:21:28.813 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:29.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.720 252257 DEBUG nova.network.neutron [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Successfully updated port: d357d277-1d9f-42be-a5bc-31258c88b186 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.736 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.736 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquired lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.736 252257 DEBUG nova.network.neutron [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.834 252257 DEBUG nova.compute.manager [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-changed-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.834 252257 DEBUG nova.compute.manager [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Refreshing instance network info cache due to event network-changed-d357d277-1d9f-42be-a5bc-31258c88b186. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.835 252257 DEBUG oslo_concurrency.lockutils [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:29 np0005539563 nova_compute[252253]: 2025-11-29 08:21:29.910 252257 DEBUG nova.network.neutron [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 754 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 6.8 MiB/s wr, 145 op/s
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.748 252257 DEBUG nova.network.neutron [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updating instance_info_cache with network_info: [{"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.766 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Releasing lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.767 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Instance network_info: |[{"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.768 252257 DEBUG oslo_concurrency.lockutils [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.768 252257 DEBUG nova.network.neutron [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Refreshing network info cache for port d357d277-1d9f-42be-a5bc-31258c88b186 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.773 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Start _get_guest_xml network_info=[{"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.781 252257 WARNING nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.792 252257 DEBUG nova.virt.libvirt.host [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.793 252257 DEBUG nova.virt.libvirt.host [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.798 252257 DEBUG nova.virt.libvirt.host [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.798 252257 DEBUG nova.virt.libvirt.host [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.800 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.801 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.802 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.802 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.803 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.803 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.803 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.804 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.804 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.805 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.805 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.806 252257 DEBUG nova.virt.hardware [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:30 np0005539563 nova_compute[252253]: 2025-11-29 08:21:30.810 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401190389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.328 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.354 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.359 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:31.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:31.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1516825404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.960 252257 DEBUG nova.network.neutron [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updated VIF entry in instance network info cache for port d357d277-1d9f-42be-a5bc-31258c88b186. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.961 252257 DEBUG nova.network.neutron [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updating instance_info_cache with network_info: [{"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.977 252257 DEBUG oslo_concurrency.lockutils [req-759b8ec8-e1ce-4d6d-a5a4-7c294b4976e6 req-f87ffc9e-3e62-4b22-b02a-024a80816921 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.979 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.980 252257 DEBUG nova.virt.libvirt.vif [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1696398812',display_name='tempest-AttachVolumeNegativeTest-server-1696398812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1696398812',id=148,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAzCH7tmUqqjE3XxkTuel0a6zJGk+OZ3GNwvJRjSVRO7p+eWVYeTnt0fgHnEypPSzORk1lJIK6LCrtQhpsLfReR+qXoLg/TUUhb1bqOnnBhn1FZUow/HnvDLhop2w1zR1g==',key_name='tempest-keypair-1146956241',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-0pvhzsal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=aa6d3201-d2d3-4001-9e68-bd07dcc23b11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.980 252257 DEBUG nova.network.os_vif_util [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.981 252257 DEBUG nova.network.os_vif_util [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:31 np0005539563 nova_compute[252253]: 2025-11-29 08:21:31.983 252257 DEBUG nova.objects.instance [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'pci_devices' on Instance uuid aa6d3201-d2d3-4001-9e68-bd07dcc23b11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.000 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <uuid>aa6d3201-d2d3-4001-9e68-bd07dcc23b11</uuid>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <name>instance-00000094</name>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1696398812</nova:name>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:21:30</nova:creationTime>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:user uuid="09f1f8a0998948b7b96830d8559609f6">tempest-AttachVolumeNegativeTest-1426807399-project-member</nova:user>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:project uuid="61d8d3b6b31f4b36b5749db9c550c696">tempest-AttachVolumeNegativeTest-1426807399</nova:project>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <nova:port uuid="d357d277-1d9f-42be-a5bc-31258c88b186">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <entry name="serial">aa6d3201-d2d3-4001-9e68-bd07dcc23b11</entry>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <entry name="uuid">aa6d3201-d2d3-4001-9e68-bd07dcc23b11</entry>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk.config">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7a:a1:ed"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <target dev="tapd357d277-1d"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/console.log" append="off"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:21:32 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:21:32 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:21:32 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:21:32 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.001 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Preparing to wait for external event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.001 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.001 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.002 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.002 252257 DEBUG nova.virt.libvirt.vif [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1696398812',display_name='tempest-AttachVolumeNegativeTest-server-1696398812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1696398812',id=148,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAzCH7tmUqqjE3XxkTuel0a6zJGk+OZ3GNwvJRjSVRO7p+eWVYeTnt0fgHnEypPSzORk1lJIK6LCrtQhpsLfReR+qXoLg/TUUhb1bqOnnBhn1FZUow/HnvDLhop2w1zR1g==',key_name='tempest-keypair-1146956241',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-0pvhzsal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=aa6d3201-d2d3-4001-9e68-bd07dcc23b11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.003 252257 DEBUG nova.network.os_vif_util [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.003 252257 DEBUG nova.network.os_vif_util [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.004 252257 DEBUG os_vif [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.004 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.005 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.006 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.009 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.009 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd357d277-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.010 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd357d277-1d, col_values=(('external_ids', {'iface-id': 'd357d277-1d9f-42be-a5bc-31258c88b186', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:a1:ed', 'vm-uuid': 'aa6d3201-d2d3-4001-9e68-bd07dcc23b11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:32 np0005539563 NetworkManager[48981]: <info>  [1764404492.0125] manager: (tapd357d277-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.014 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.020 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.021 252257 INFO os_vif [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d')#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.121 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.122 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.122 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:7a:a1:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.123 252257 INFO nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Using config drive#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.154 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 754 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 6.8 MiB/s wr, 145 op/s
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.741 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.910 252257 INFO nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Creating config drive at /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/disk.config#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.922 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7npdt180 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:32 np0005539563 nova_compute[252253]: 2025-11-29 08:21:32.955 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.064 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7npdt180" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.093 252257 DEBUG nova.storage.rbd_utils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.097 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/disk.config aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.310 252257 DEBUG oslo_concurrency.processutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/disk.config aa6d3201-d2d3-4001-9e68-bd07dcc23b11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.311 252257 INFO nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Deleting local config drive /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11/disk.config because it was imported into RBD.#033[00m
Nov 29 03:21:33 np0005539563 kernel: tapd357d277-1d: entered promiscuous mode
Nov 29 03:21:33 np0005539563 NetworkManager[48981]: <info>  [1764404493.3742] manager: (tapd357d277-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Nov 29 03:21:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:33Z|00588|binding|INFO|Claiming lport d357d277-1d9f-42be-a5bc-31258c88b186 for this chassis.
Nov 29 03:21:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:33Z|00589|binding|INFO|d357d277-1d9f-42be-a5bc-31258c88b186: Claiming fa:16:3e:7a:a1:ed 10.100.0.9
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.399 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:a1:ed 10.100.0.9'], port_security=['fa:16:3e:7a:a1:ed 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aa6d3201-d2d3-4001-9e68-bd07dcc23b11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '114e2ded-8c00-4a31-82b3-b68656218e0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d357d277-1d9f-42be-a5bc-31258c88b186) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.401 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d357d277-1d9f-42be-a5bc-31258c88b186 in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 bound to our chassis#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.403 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8#033[00m
Nov 29 03:21:33 np0005539563 systemd-udevd[340810]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:33Z|00590|binding|INFO|Setting lport d357d277-1d9f-42be-a5bc-31258c88b186 ovn-installed in OVS
Nov 29 03:21:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:33Z|00591|binding|INFO|Setting lport d357d277-1d9f-42be-a5bc-31258c88b186 up in Southbound
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.416 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.424 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2a976492-bc8b-4a23-85fe-373cbcd0110b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.428 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d6ff1b5-e1 in ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:33 np0005539563 NetworkManager[48981]: <info>  [1764404493.4322] device (tapd357d277-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.431 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d6ff1b5-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.432 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa01267-600a-4acd-af2f-7e195517fea3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 NetworkManager[48981]: <info>  [1764404493.4328] device (tapd357d277-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:33 np0005539563 systemd-machined[213024]: New machine qemu-71-instance-00000094.
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.433 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3089c1d8-c448-429c-a968-bc744b96daff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 systemd[1]: Started Virtual Machine qemu-71-instance-00000094.
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.449 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a69e6b-e8be-44cc-b3bd-c456db646397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.469 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce9f9da-f581-4d40-b66c-36a1af35b2fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.506 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2091b384-1394-4994-b986-005ba3336c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 NetworkManager[48981]: <info>  [1764404493.5126] manager: (tap3d6ff1b5-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.512 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a05cc402-4078-4d27-8d02-cdb48cece107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.555 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f46401ce-05cb-4f8c-bf50-788d6de73a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.561 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d1779706-e0c3-466e-8c73-a92bc31f7d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 NetworkManager[48981]: <info>  [1764404493.5940] device (tap3d6ff1b5-e0): carrier: link connected
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.599 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a70d0c5a-69c5-44ae-a0d5-c66efe0c1fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:33.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.617 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ff51fffe-a3c0-47b5-90f4-2f8522d1eabe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746136, 'reachable_time': 27051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340844, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.633 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a3db2b0c-396a-4693-936f-0cd11d777aca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:6a1d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746136, 'tstamp': 746136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340845, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.656 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f0b51a-703d-4c4d-9f7f-1d915774c6ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746136, 'reachable_time': 27051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340846, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.688 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[257fa1e9-02a8-43b8-81c5-b9f416b4faa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.757 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a03d4-83b8-46a1-bb7c-3f6f9f3a8813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.759 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.759 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.759 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.761 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 NetworkManager[48981]: <info>  [1764404493.7618] manager: (tap3d6ff1b5-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 29 03:21:33 np0005539563 kernel: tap3d6ff1b5-e0: entered promiscuous mode
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.763 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.764 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.765 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:33Z|00592|binding|INFO|Releasing lport 54675c6b-d3a2-417c-b976-28c1e010fd1e from this chassis (sb_readonly=0)
Nov 29 03:21:33 np0005539563 nova_compute[252253]: 2025-11-29 08:21:33.784 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.785 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.786 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9d06c7b0-4f8b-4cc1-b847-faff47d13ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.787 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:33.788 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'env', 'PROCESS_TAG=haproxy-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:21:34 np0005539563 podman[340878]: 2025-11-29 08:21:34.190075637 +0000 UTC m=+0.061196201 container create 355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:21:34 np0005539563 systemd[1]: Started libpod-conmon-355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5.scope.
Nov 29 03:21:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34aab37bb6dad53f3e67d6251cb08b4f0ca43b7c991af029726a320bb7fe21f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:34 np0005539563 podman[340878]: 2025-11-29 08:21:34.160034352 +0000 UTC m=+0.031154936 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:34 np0005539563 podman[340878]: 2025-11-29 08:21:34.267249029 +0000 UTC m=+0.138369603 container init 355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:21:34 np0005539563 podman[340878]: 2025-11-29 08:21:34.272955744 +0000 UTC m=+0.144076288 container start 355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:21:34 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [NOTICE]   (340939) : New worker (340941) forked
Nov 29 03:21:34 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [NOTICE]   (340939) : Loading success.
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.310 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404494.309715, aa6d3201-d2d3-4001-9e68-bd07dcc23b11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.311 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.384 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.389 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404494.3110392, aa6d3201-d2d3-4001-9e68-bd07dcc23b11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.389 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.433 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.437 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 754 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.4 MiB/s wr, 170 op/s
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.466 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.618 252257 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.619 252257 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.620 252257 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.620 252257 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.621 252257 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Processing event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.621 252257 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.622 252257 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.622 252257 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.622 252257 DEBUG oslo_concurrency.lockutils [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.623 252257 DEBUG nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] No waiting events found dispatching network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.623 252257 WARNING nova.compute.manager [req-7f8b7a6b-1290-4075-bfb1-b979415ffba8 req-3be55886-ac9f-4679-b974-b25e89f13161 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received unexpected event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.624 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.627 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404494.6276796, aa6d3201-d2d3-4001-9e68-bd07dcc23b11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.628 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.630 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.632 252257 INFO nova.virt.libvirt.driver [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Instance spawned successfully.#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.633 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.704 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.712 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.713 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.714 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.714 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.715 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.715 252257 DEBUG nova.virt.libvirt.driver [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.719 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.759 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.799 252257 INFO nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.799 252257 DEBUG nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.870 252257 INFO nova.compute.manager [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Took 8.65 seconds to build instance.#033[00m
Nov 29 03:21:34 np0005539563 nova_compute[252253]: 2025-11-29 08:21:34.900 252257 DEBUG oslo_concurrency.lockutils [None req-3ed2b713-3604-4453-9544-395aa55c72c6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Nov 29 03:21:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 03:21:35 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Nov 29 03:21:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:35.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:35.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Nov 29 03:21:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 03:21:36 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Nov 29 03:21:36 np0005539563 podman[340976]: 2025-11-29 08:21:36.151212562 +0000 UTC m=+0.066195116 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:21:36 np0005539563 podman[340977]: 2025-11-29 08:21:36.170918536 +0000 UTC m=+0.083615658 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:21:36 np0005539563 podman[340975]: 2025-11-29 08:21:36.172575461 +0000 UTC m=+0.087198756 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:21:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 721 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 269 op/s
Nov 29 03:21:37 np0005539563 nova_compute[252253]: 2025-11-29 08:21:37.056 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Nov 29 03:21:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 03:21:37 np0005539563 nova_compute[252253]: 2025-11-29 08:21:37.098 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Nov 29 03:21:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:37.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:37.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:37 np0005539563 nova_compute[252253]: 2025-11-29 08:21:37.743 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 720 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Nov 29 03:21:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:38 np0005539563 nova_compute[252253]: 2025-11-29 08:21:38.842 252257 DEBUG nova.compute.manager [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-changed-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:38 np0005539563 nova_compute[252253]: 2025-11-29 08:21:38.842 252257 DEBUG nova.compute.manager [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Refreshing instance network info cache due to event network-changed-d357d277-1d9f-42be-a5bc-31258c88b186. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:21:38 np0005539563 nova_compute[252253]: 2025-11-29 08:21:38.843 252257 DEBUG oslo_concurrency.lockutils [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:38 np0005539563 nova_compute[252253]: 2025-11-29 08:21:38.843 252257 DEBUG oslo_concurrency.lockutils [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:21:38 np0005539563 nova_compute[252253]: 2025-11-29 08:21:38.844 252257 DEBUG nova.network.neutron [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Refreshing network info cache for port d357d277-1d9f-42be-a5bc-31258c88b186 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:21:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:39.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:40 np0005539563 nova_compute[252253]: 2025-11-29 08:21:40.073 252257 DEBUG nova.network.neutron [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updated VIF entry in instance network info cache for port d357d277-1d9f-42be-a5bc-31258c88b186. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:21:40 np0005539563 nova_compute[252253]: 2025-11-29 08:21:40.074 252257 DEBUG nova.network.neutron [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updating instance_info_cache with network_info: [{"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:21:40 np0005539563 nova_compute[252253]: 2025-11-29 08:21:40.093 252257 DEBUG oslo_concurrency.lockutils [req-76286a22-00dd-4685-a430-0125833d1698 req-901e9c66-2e0f-4bf8-bf07-d6e7e0ddde4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-aa6d3201-d2d3-4001-9e68-bd07dcc23b11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:21:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.8 MiB/s wr, 459 op/s
Nov 29 03:21:40 np0005539563 nova_compute[252253]: 2025-11-29 08:21:40.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:40 np0005539563 nova_compute[252253]: 2025-11-29 08:21:40.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:21:40 np0005539563 nova_compute[252253]: 2025-11-29 08:21:40.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:21:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:42 np0005539563 nova_compute[252253]: 2025-11-29 08:21:42.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:21:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:21:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.3 MiB/s wr, 299 op/s
Nov 29 03:21:42 np0005539563 nova_compute[252253]: 2025-11-29 08:21:42.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 96e52159-9664-4c41-bc94-ab30b132ca6d does not exist
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 92f5e98c-14d1-4a26-a2bf-5513ac1eba28 does not exist
Nov 29 03:21:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d9b18913-6226-4811-9ab5-5ea722c5f3c0 does not exist
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:21:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:43.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 03:21:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.784892934 +0000 UTC m=+0.057418528 container create ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pasteur, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:21:43 np0005539563 systemd[1]: Started libpod-conmon-ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576.scope.
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.760596345 +0000 UTC m=+0.033121989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.892374459 +0000 UTC m=+0.164900063 container init ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.898927967 +0000 UTC m=+0.171453561 container start ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.90198082 +0000 UTC m=+0.174506414 container attach ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 29 03:21:43 np0005539563 flamboyant_pasteur[341352]: 167 167
Nov 29 03:21:43 np0005539563 systemd[1]: libpod-ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576.scope: Deactivated successfully.
Nov 29 03:21:43 np0005539563 conmon[341352]: conmon ecca639a054ee0041a62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576.scope/container/memory.events
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.908069385 +0000 UTC m=+0.180594969 container died ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pasteur, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:21:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dd0dd51d6d6e9aaed3c129d5da4fe0c4b31cea4e5fc89b1d3afca17df4a513f6-merged.mount: Deactivated successfully.
Nov 29 03:21:43 np0005539563 podman[341338]: 2025-11-29 08:21:43.960665482 +0000 UTC m=+0.233191076 container remove ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:21:43 np0005539563 systemd[1]: libpod-conmon-ecca639a054ee0041a6221da4116f6392d18305921a56d705d2da6fa7ffc8576.scope: Deactivated successfully.
Nov 29 03:21:44 np0005539563 podman[341378]: 2025-11-29 08:21:44.156575905 +0000 UTC m=+0.044018985 container create 23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:21:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:44 np0005539563 systemd[1]: Started libpod-conmon-23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802.scope.
Nov 29 03:21:44 np0005539563 podman[341378]: 2025-11-29 08:21:44.133800806 +0000 UTC m=+0.021243936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a883693725c9a0a184fe0f42d81cc8511c458da98eb216531997e248526188ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a883693725c9a0a184fe0f42d81cc8511c458da98eb216531997e248526188ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a883693725c9a0a184fe0f42d81cc8511c458da98eb216531997e248526188ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a883693725c9a0a184fe0f42d81cc8511c458da98eb216531997e248526188ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a883693725c9a0a184fe0f42d81cc8511c458da98eb216531997e248526188ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:44 np0005539563 podman[341378]: 2025-11-29 08:21:44.269195219 +0000 UTC m=+0.156638379 container init 23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_maxwell, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:21:44 np0005539563 podman[341378]: 2025-11-29 08:21:44.276542977 +0000 UTC m=+0.163986057 container start 23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_maxwell, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:44 np0005539563 podman[341378]: 2025-11-29 08:21:44.282802798 +0000 UTC m=+0.170245938 container attach 23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:21:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 814 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 7.1 MiB/s wr, 216 op/s
Nov 29 03:21:44 np0005539563 nova_compute[252253]: 2025-11-29 08:21:44.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:45 np0005539563 elated_maxwell[341395]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:21:45 np0005539563 elated_maxwell[341395]: --> relative data size: 1.0
Nov 29 03:21:45 np0005539563 elated_maxwell[341395]: --> All data devices are unavailable
Nov 29 03:21:45 np0005539563 systemd[1]: libpod-23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802.scope: Deactivated successfully.
Nov 29 03:21:45 np0005539563 podman[341378]: 2025-11-29 08:21:45.125701996 +0000 UTC m=+1.013145086 container died 23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:21:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a883693725c9a0a184fe0f42d81cc8511c458da98eb216531997e248526188ab-merged.mount: Deactivated successfully.
Nov 29 03:21:45 np0005539563 podman[341378]: 2025-11-29 08:21:45.195041706 +0000 UTC m=+1.082484797 container remove 23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:21:45 np0005539563 systemd[1]: libpod-conmon-23f8350a9e0e68305bad4feacc64937d3697618cd89c717c8a302dcf8409f802.scope: Deactivated successfully.
Nov 29 03:21:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:45 np0005539563 podman[341563]: 2025-11-29 08:21:45.912789781 +0000 UTC m=+0.053675266 container create f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williamson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:21:45 np0005539563 systemd[1]: Started libpod-conmon-f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26.scope.
Nov 29 03:21:45 np0005539563 podman[341563]: 2025-11-29 08:21:45.88544228 +0000 UTC m=+0.026327865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:46 np0005539563 podman[341563]: 2025-11-29 08:21:46.01155017 +0000 UTC m=+0.152435665 container init f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:21:46 np0005539563 podman[341563]: 2025-11-29 08:21:46.018843658 +0000 UTC m=+0.159729143 container start f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:21:46 np0005539563 podman[341563]: 2025-11-29 08:21:46.022163267 +0000 UTC m=+0.163048752 container attach f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williamson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:21:46 np0005539563 zealous_williamson[341578]: 167 167
Nov 29 03:21:46 np0005539563 systemd[1]: libpod-f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26.scope: Deactivated successfully.
Nov 29 03:21:46 np0005539563 conmon[341578]: conmon f87dddd0e70d23b18e8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26.scope/container/memory.events
Nov 29 03:21:46 np0005539563 podman[341563]: 2025-11-29 08:21:46.024949733 +0000 UTC m=+0.165835218 container died f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williamson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-84027f44138568dccf277e2da4f42f8bfcc8519d9cfe440b656fd0ca5d64285a-merged.mount: Deactivated successfully.
Nov 29 03:21:46 np0005539563 podman[341563]: 2025-11-29 08:21:46.06539362 +0000 UTC m=+0.206279105 container remove f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:21:46 np0005539563 systemd[1]: libpod-conmon-f87dddd0e70d23b18e8e17c92c4b5f0c4b40019de05f40f214ce6cff94e7bb26.scope: Deactivated successfully.
Nov 29 03:21:46 np0005539563 podman[341602]: 2025-11-29 08:21:46.270558364 +0000 UTC m=+0.057998584 container create 33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:21:46 np0005539563 systemd[1]: Started libpod-conmon-33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac.scope.
Nov 29 03:21:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ebdc03a78febdfb0b7978f922d7990f9d6660f79e5dce40f77f3a330d0a06d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ebdc03a78febdfb0b7978f922d7990f9d6660f79e5dce40f77f3a330d0a06d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ebdc03a78febdfb0b7978f922d7990f9d6660f79e5dce40f77f3a330d0a06d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ebdc03a78febdfb0b7978f922d7990f9d6660f79e5dce40f77f3a330d0a06d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:46 np0005539563 podman[341602]: 2025-11-29 08:21:46.253543913 +0000 UTC m=+0.040984163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:46 np0005539563 podman[341602]: 2025-11-29 08:21:46.358838279 +0000 UTC m=+0.146278529 container init 33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:21:46 np0005539563 podman[341602]: 2025-11-29 08:21:46.376427386 +0000 UTC m=+0.163867616 container start 33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:21:46 np0005539563 podman[341602]: 2025-11-29 08:21:46.379662603 +0000 UTC m=+0.167102843 container attach 33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:21:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.5 MiB/s wr, 257 op/s
Nov 29 03:21:47 np0005539563 nova_compute[252253]: 2025-11-29 08:21:47.063 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]: {
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:    "0": [
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:        {
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "devices": [
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "/dev/loop3"
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            ],
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "lv_name": "ceph_lv0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "lv_size": "7511998464",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "name": "ceph_lv0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "tags": {
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.cluster_name": "ceph",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.crush_device_class": "",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.encrypted": "0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.osd_id": "0",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.type": "block",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:                "ceph.vdo": "0"
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            },
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "type": "block",
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:            "vg_name": "ceph_vg0"
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:        }
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]:    ]
Nov 29 03:21:47 np0005539563 zealous_snyder[341618]: }
Nov 29 03:21:47 np0005539563 systemd[1]: libpod-33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac.scope: Deactivated successfully.
Nov 29 03:21:47 np0005539563 conmon[341618]: conmon 33bc63714b6fd88913d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac.scope/container/memory.events
Nov 29 03:21:47 np0005539563 podman[341602]: 2025-11-29 08:21:47.207254887 +0000 UTC m=+0.994695117 container died 33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:21:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e9ebdc03a78febdfb0b7978f922d7990f9d6660f79e5dce40f77f3a330d0a06d-merged.mount: Deactivated successfully.
Nov 29 03:21:47 np0005539563 podman[341602]: 2025-11-29 08:21:47.258173918 +0000 UTC m=+1.045614148 container remove 33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_snyder, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:47 np0005539563 systemd[1]: libpod-conmon-33bc63714b6fd88913d5462a1440c6ee7f3185db6ddf1a9ea3aeb6d73588a8ac.scope: Deactivated successfully.
Nov 29 03:21:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:47.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:47 np0005539563 nova_compute[252253]: 2025-11-29 08:21:47.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539563 podman[341781]: 2025-11-29 08:21:47.932488825 +0000 UTC m=+0.053147603 container create 530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:21:47 np0005539563 systemd[1]: Started libpod-conmon-530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364.scope.
Nov 29 03:21:48 np0005539563 podman[341781]: 2025-11-29 08:21:47.906728836 +0000 UTC m=+0.027387644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:48 np0005539563 podman[341781]: 2025-11-29 08:21:48.025286432 +0000 UTC m=+0.145945220 container init 530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:48 np0005539563 podman[341781]: 2025-11-29 08:21:48.032090036 +0000 UTC m=+0.152748794 container start 530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_booth, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:48 np0005539563 podman[341781]: 2025-11-29 08:21:48.035032176 +0000 UTC m=+0.155690954 container attach 530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:48 np0005539563 upbeat_booth[341797]: 167 167
Nov 29 03:21:48 np0005539563 systemd[1]: libpod-530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364.scope: Deactivated successfully.
Nov 29 03:21:48 np0005539563 podman[341781]: 2025-11-29 08:21:48.038221622 +0000 UTC m=+0.158880380 container died 530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_booth, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:21:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-df64b58d54ffb9ae30d4d161a92ad55d45cc09e1ea08777f368fcb6db63fd1d2-merged.mount: Deactivated successfully.
Nov 29 03:21:48 np0005539563 podman[341781]: 2025-11-29 08:21:48.078862595 +0000 UTC m=+0.199521353 container remove 530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:21:48 np0005539563 systemd[1]: libpod-conmon-530cfa01f29283609392fd59481d5896fa7e23fbd54bdbbf23c1ef1bad928364.scope: Deactivated successfully.
Nov 29 03:21:48 np0005539563 podman[341821]: 2025-11-29 08:21:48.247825487 +0000 UTC m=+0.021902945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 8.0 MiB/s wr, 227 op/s
Nov 29 03:21:48 np0005539563 podman[341821]: 2025-11-29 08:21:48.689711931 +0000 UTC m=+0.463789369 container create d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:21:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:49 np0005539563 systemd[1]: Started libpod-conmon-d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a.scope.
Nov 29 03:21:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:21:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d83c005e6f4ac058f314880c2112af82aad47ba3e4c5cddc81a07e4a8849ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d83c005e6f4ac058f314880c2112af82aad47ba3e4c5cddc81a07e4a8849ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d83c005e6f4ac058f314880c2112af82aad47ba3e4c5cddc81a07e4a8849ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d83c005e6f4ac058f314880c2112af82aad47ba3e4c5cddc81a07e4a8849ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:49 np0005539563 podman[341821]: 2025-11-29 08:21:49.198663714 +0000 UTC m=+0.972741242 container init d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:49 np0005539563 podman[341821]: 2025-11-29 08:21:49.215266664 +0000 UTC m=+0.989344112 container start d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:49 np0005539563 podman[341821]: 2025-11-29 08:21:49.261717524 +0000 UTC m=+1.035795082 container attach d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:21:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:50 np0005539563 nervous_colden[341838]: {
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:        "osd_id": 0,
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:        "type": "bluestore"
Nov 29 03:21:50 np0005539563 nervous_colden[341838]:    }
Nov 29 03:21:50 np0005539563 nervous_colden[341838]: }
Nov 29 03:21:50 np0005539563 systemd[1]: libpod-d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a.scope: Deactivated successfully.
Nov 29 03:21:50 np0005539563 podman[341821]: 2025-11-29 08:21:50.100258205 +0000 UTC m=+1.874335653 container died d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:21:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8d83c005e6f4ac058f314880c2112af82aad47ba3e4c5cddc81a07e4a8849ef3-merged.mount: Deactivated successfully.
Nov 29 03:21:50 np0005539563 podman[341821]: 2025-11-29 08:21:50.160330344 +0000 UTC m=+1.934407782 container remove d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:21:50 np0005539563 systemd[1]: libpod-conmon-d1b521e7525199062d9b3a6986d652a8197432662d3f72a8d0ba38b7e095637a.scope: Deactivated successfully.
Nov 29 03:21:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:21:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 899 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 6.4 MiB/s wr, 106 op/s
Nov 29 03:21:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:21:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:52 np0005539563 nova_compute[252253]: 2025-11-29 08:21:52.069 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 899 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 6.4 MiB/s wr, 106 op/s
Nov 29 03:21:52 np0005539563 nova_compute[252253]: 2025-11-29 08:21:52.749 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 25939cf1-6a83-4025-a077-3c3255b7f3ea does not exist
Nov 29 03:21:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d34c9489-f725-4521-8f19-844764c08bb6 does not exist
Nov 29 03:21:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ea9711d7-2a6f-4738-99e8-927c080d4ce4 does not exist
Nov 29 03:21:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:53.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:54 np0005539563 nova_compute[252253]: 2025-11-29 08:21:54.373 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:54.372 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:54.375 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 900 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.1 MiB/s wr, 99 op/s
Nov 29 03:21:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:21:54 np0005539563 nova_compute[252253]: 2025-11-29 08:21:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:54 np0005539563 nova_compute[252253]: 2025-11-29 08:21:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:21:55 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:55Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:a1:ed 10.100.0.9
Nov 29 03:21:55 np0005539563 ovn_controller[148841]: 2025-11-29T08:21:55Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:a1:ed 10.100.0.9
Nov 29 03:21:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:21:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:55.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:21:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:55.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 901 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 4.8 MiB/s wr, 118 op/s
Nov 29 03:21:56 np0005539563 nova_compute[252253]: 2025-11-29 08:21:56.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:57 np0005539563 nova_compute[252253]: 2025-11-29 08:21:57.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:57.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:57.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:57 np0005539563 nova_compute[252253]: 2025-11-29 08:21:57.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:21:58.377 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 905 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 29 03:21:58 np0005539563 nova_compute[252253]: 2025-11-29 08:21:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:58 np0005539563 nova_compute[252253]: 2025-11-29 08:21:58.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:21:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:59.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:59 np0005539563 nova_compute[252253]: 2025-11-29 08:21:59.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:21:59 np0005539563 nova_compute[252253]: 2025-11-29 08:21:59.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:21:59 np0005539563 nova_compute[252253]: 2025-11-29 08:21:59.931 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:59 np0005539563 nova_compute[252253]: 2025-11-29 08:21:59.932 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:59 np0005539563 nova_compute[252253]: 2025-11-29 08:21:59.933 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:22:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 906 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 29 03:22:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:02 np0005539563 nova_compute[252253]: 2025-11-29 08:22:02.077 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 906 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 381 KiB/s wr, 170 op/s
Nov 29 03:22:02 np0005539563 nova_compute[252253]: 2025-11-29 08:22:02.754 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:03 np0005539563 nova_compute[252253]: 2025-11-29 08:22:03.894 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Updating instance_info_cache with network_info: [{"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.021 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-0d1eac76-3b6b-4734-a481-9b315b2ae484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.021 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.021 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.099 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.100 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.101 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.101 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.103 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 907 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 383 KiB/s wr, 176 op/s
Nov 29 03:22:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3785667739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.547 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.678 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.678 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.683 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.684 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.688 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.688 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.695 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.696 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.916 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.917 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3560MB free_disk=20.673580169677734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.917 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:04 np0005539563 nova_compute[252253]: 2025-11-29 08:22:04.917 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:04.926 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:04.927 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:04.928 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.240 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 1fad2d6f-5a00-43ad-af43-00916509fc61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.241 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 0d1eac76-3b6b-4734-a481-9b315b2ae484 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.241 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 9b6f3346-1230-472f-bd04-791d2367bebb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.241 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.241 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.242 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.473 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:05.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454350748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.990 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:05 np0005539563 nova_compute[252253]: 2025-11-29 08:22:05.996 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.022 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.088 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.088 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.089 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 913 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 390 KiB/s wr, 201 op/s
Nov 29 03:22:06 np0005539563 podman[342023]: 2025-11-29 08:22:06.535652516 +0000 UTC m=+0.079030254 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:22:06 np0005539563 podman[342024]: 2025-11-29 08:22:06.560155971 +0000 UTC m=+0.088430650 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:06 np0005539563 podman[342025]: 2025-11-29 08:22:06.592442846 +0000 UTC m=+0.122033020 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.701 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.702 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:06 np0005539563 nova_compute[252253]: 2025-11-29 08:22:06.703 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:22:07 np0005539563 nova_compute[252253]: 2025-11-29 08:22:07.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:07.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:07 np0005539563 nova_compute[252253]: 2025-11-29 08:22:07.718 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:07 np0005539563 nova_compute[252253]: 2025-11-29 08:22:07.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 918 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 780 KiB/s wr, 185 op/s
Nov 29 03:22:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:09.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 955 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 279 op/s
Nov 29 03:22:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:11 np0005539563 nova_compute[252253]: 2025-11-29 08:22:11.555 252257 DEBUG oslo_concurrency.lockutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:11 np0005539563 nova_compute[252253]: 2025-11-29 08:22:11.556 252257 DEBUG oslo_concurrency.lockutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:11 np0005539563 nova_compute[252253]: 2025-11-29 08:22:11.574 252257 DEBUG nova.objects.instance [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid aa6d3201-d2d3-4001-9e68-bd07dcc23b11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:11 np0005539563 nova_compute[252253]: 2025-11-29 08:22:11.621 252257 DEBUG oslo_concurrency.lockutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:11.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:11Z|00593|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.050 252257 DEBUG oslo_concurrency.lockutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.051 252257 DEBUG oslo_concurrency.lockutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.051 252257 INFO nova.compute.manager [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Attaching volume fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e to /dev/vdb#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.080 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.305 252257 DEBUG os_brick.utils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.307 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.319 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.319 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d68462-5831-4856-a035-c8797b289d1f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.321 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.330 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.331 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef994a-5dbc-4c40-bb0f-2976298eb118]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.333 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.344 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.345 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[a76f8faf-73d6-4f1f-bb12-8b0cf041b592]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.347 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb62aef-2353-4877-9c5e-fff4f15c18e6]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.347 252257 DEBUG oslo_concurrency.processutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.385 252257 DEBUG oslo_concurrency.processutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.388 252257 DEBUG os_brick.initiator.connectors.lightos [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.388 252257 DEBUG os_brick.initiator.connectors.lightos [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.388 252257 DEBUG os_brick.initiator.connectors.lightos [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.389 252257 DEBUG os_brick.utils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.389 252257 DEBUG nova.virt.block_device [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updating existing volume attachment record: 0755f089-7b3e-4b7a-a3b7-2ae5d0750ae3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:22:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 955 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Nov 29 03:22:12 np0005539563 nova_compute[252253]: 2025-11-29 08:22:12.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:22:12
Nov 29 03:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'backups']
Nov 29 03:22:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480426081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:22:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:22:13 np0005539563 nova_compute[252253]: 2025-11-29 08:22:13.951 252257 DEBUG nova.objects.instance [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid aa6d3201-d2d3-4001-9e68-bd07dcc23b11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:13 np0005539563 nova_compute[252253]: 2025-11-29 08:22:13.984 252257 DEBUG nova.virt.libvirt.driver [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Attempting to attach volume fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:22:13 np0005539563 nova_compute[252253]: 2025-11-29 08:22:13.987 252257 DEBUG nova.virt.libvirt.guest [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e">
Nov 29 03:22:13 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:22:13 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:13 np0005539563 nova_compute[252253]:  <serial>fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e</serial>
Nov 29 03:22:13 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:22:13 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:22:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:22:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:22:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:22:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:22:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:22:14 np0005539563 nova_compute[252253]: 2025-11-29 08:22:14.121 252257 DEBUG nova.virt.libvirt.driver [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:22:14 np0005539563 nova_compute[252253]: 2025-11-29 08:22:14.121 252257 DEBUG nova.virt.libvirt.driver [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:22:14 np0005539563 nova_compute[252253]: 2025-11-29 08:22:14.121 252257 DEBUG nova.virt.libvirt.driver [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:22:14 np0005539563 nova_compute[252253]: 2025-11-29 08:22:14.122 252257 DEBUG nova.virt.libvirt.driver [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:7a:a1:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:22:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 966 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 225 op/s
Nov 29 03:22:14 np0005539563 nova_compute[252253]: 2025-11-29 08:22:14.518 252257 DEBUG oslo_concurrency.lockutils [None req-ca56d9e9-5900-4e1e-8e7a-df8c8e55e23a 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.672010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534672135, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2270, "num_deletes": 257, "total_data_size": 3721852, "memory_usage": 3774200, "flush_reason": "Manual Compaction"}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534691614, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3640641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47641, "largest_seqno": 49910, "table_properties": {"data_size": 3630466, "index_size": 6413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22445, "raw_average_key_size": 21, "raw_value_size": 3609524, "raw_average_value_size": 3389, "num_data_blocks": 277, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404349, "oldest_key_time": 1764404349, "file_creation_time": 1764404534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 19862 microseconds, and 8619 cpu microseconds.
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.691919) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3640641 bytes OK
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.692022) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.716927) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.717003) EVENT_LOG_v1 {"time_micros": 1764404534716990, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.717033) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3712413, prev total WAL file size 3712413, number of live WAL files 2.
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.719835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3555KB)], [104(9578KB)]
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534719959, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 13448769, "oldest_snapshot_seqno": -1}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 8352 keys, 11592410 bytes, temperature: kUnknown
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534813459, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11592410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11537312, "index_size": 33098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 216834, "raw_average_key_size": 25, "raw_value_size": 11389166, "raw_average_value_size": 1363, "num_data_blocks": 1294, "num_entries": 8352, "num_filter_entries": 8352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.813925) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11592410 bytes
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.886459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.6 rd, 123.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8884, records dropped: 532 output_compression: NoCompression
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.886508) EVENT_LOG_v1 {"time_micros": 1764404534886488, "job": 62, "event": "compaction_finished", "compaction_time_micros": 93626, "compaction_time_cpu_micros": 25316, "output_level": 6, "num_output_files": 1, "total_output_size": 11592410, "num_input_records": 8884, "num_output_records": 8352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534888324, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404534891795, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.719573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.891963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.891978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.891981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.891984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:14 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:22:14.891987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:22:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:15.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 979 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 247 op/s
Nov 29 03:22:16 np0005539563 nova_compute[252253]: 2025-11-29 08:22:16.762 252257 DEBUG oslo_concurrency.lockutils [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:16 np0005539563 nova_compute[252253]: 2025-11-29 08:22:16.763 252257 DEBUG oslo_concurrency.lockutils [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:16 np0005539563 nova_compute[252253]: 2025-11-29 08:22:16.776 252257 INFO nova.compute.manager [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Detaching volume fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:16.999 252257 INFO nova.virt.block_device [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Attempting to driver detach volume fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e from mountpoint /dev/vdb
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.008 252257 DEBUG nova.virt.libvirt.driver [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Attempting to detach device vdb from instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.009 252257 DEBUG nova.virt.libvirt.guest [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e">
Nov 29 03:22:17 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <serial>fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e</serial>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:22:17 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.019 252257 INFO nova.virt.libvirt.driver [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully detached device vdb from instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11 from the persistent domain config.
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.019 252257 DEBUG nova.virt.libvirt.driver [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.020 252257 DEBUG nova.virt.libvirt.guest [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e">
Nov 29 03:22:17 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <serial>fe8a190c-eaa2-4117-ba27-c0ae9ce39d0e</serial>
Nov 29 03:22:17 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:22:17 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:22:17 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.083 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.143 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404537.1429353, aa6d3201-d2d3-4001-9e68-bd07dcc23b11 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.145 252257 DEBUG nova.virt.libvirt.driver [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.147 252257 INFO nova.virt.libvirt.driver [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully detached device vdb from instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11 from the live domain config.
Nov 29 03:22:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:17.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.569 252257 DEBUG nova.objects.instance [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid aa6d3201-d2d3-4001-9e68-bd07dcc23b11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:17.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:17 np0005539563 nova_compute[252253]: 2025-11-29 08:22:17.841 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.138 252257 DEBUG oslo_concurrency.lockutils [None req-e643c261-cae2-44a4-98fa-3da449fa12a9 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 979 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Nov 29 03:22:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.835 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.836 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.836 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.836 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.836 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.837 252257 INFO nova.compute.manager [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Terminating instance#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.838 252257 DEBUG nova.compute.manager [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:18 np0005539563 kernel: tapd357d277-1d (unregistering): left promiscuous mode
Nov 29 03:22:18 np0005539563 NetworkManager[48981]: <info>  [1764404538.8942] device (tapd357d277-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:18Z|00594|binding|INFO|Releasing lport d357d277-1d9f-42be-a5bc-31258c88b186 from this chassis (sb_readonly=0)
Nov 29 03:22:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:18Z|00595|binding|INFO|Setting lport d357d277-1d9f-42be-a5bc-31258c88b186 down in Southbound
Nov 29 03:22:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:18Z|00596|binding|INFO|Removing iface tapd357d277-1d ovn-installed in OVS
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.941 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:18 np0005539563 nova_compute[252253]: 2025-11-29 08:22:18.957 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:18 np0005539563 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 29 03:22:18 np0005539563 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000094.scope: Consumed 15.178s CPU time.
Nov 29 03:22:18 np0005539563 systemd-machined[213024]: Machine qemu-71-instance-00000094 terminated.
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.018 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:a1:ed 10.100.0.9'], port_security=['fa:16:3e:7a:a1:ed 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'aa6d3201-d2d3-4001-9e68-bd07dcc23b11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '114e2ded-8c00-4a31-82b3-b68656218e0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.204'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d357d277-1d9f-42be-a5bc-31258c88b186) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.019 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d357d277-1d9f-42be-a5bc-31258c88b186 in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 unbound from our chassis#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.020 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.021 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe32947-86f4-4293-b673-b415be3b69bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.022 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 namespace which is not needed anymore#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.061 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.067 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.079 252257 INFO nova.virt.libvirt.driver [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Instance destroyed successfully.#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.079 252257 DEBUG nova.objects.instance [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'resources' on Instance uuid aa6d3201-d2d3-4001-9e68-bd07dcc23b11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.097 252257 DEBUG nova.virt.libvirt.vif [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1696398812',display_name='tempest-AttachVolumeNegativeTest-server-1696398812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1696398812',id=148,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAzCH7tmUqqjE3XxkTuel0a6zJGk+OZ3GNwvJRjSVRO7p+eWVYeTnt0fgHnEypPSzORk1lJIK6LCrtQhpsLfReR+qXoLg/TUUhb1bqOnnBhn1FZUow/HnvDLhop2w1zR1g==',key_name='tempest-keypair-1146956241',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-0pvhzsal',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=aa6d3201-d2d3-4001-9e68-bd07dcc23b11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.098 252257 DEBUG nova.network.os_vif_util [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "d357d277-1d9f-42be-a5bc-31258c88b186", "address": "fa:16:3e:7a:a1:ed", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd357d277-1d", "ovs_interfaceid": "d357d277-1d9f-42be-a5bc-31258c88b186", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.099 252257 DEBUG nova.network.os_vif_util [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.099 252257 DEBUG os_vif [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.101 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.101 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd357d277-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.103 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.105 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.108 252257 INFO os_vif [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:a1:ed,bridge_name='br-int',has_traffic_filtering=True,id=d357d277-1d9f-42be-a5bc-31258c88b186,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd357d277-1d')#033[00m
Nov 29 03:22:19 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [NOTICE]   (340939) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:19 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [NOTICE]   (340939) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:19 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [WARNING]  (340939) : Exiting Master process...
Nov 29 03:22:19 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [WARNING]  (340939) : Exiting Master process...
Nov 29 03:22:19 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [ALERT]    (340939) : Current worker (340941) exited with code 143 (Terminated)
Nov 29 03:22:19 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[340933]: [WARNING]  (340939) : All workers exited. Exiting... (0)
Nov 29 03:22:19 np0005539563 systemd[1]: libpod-355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5.scope: Deactivated successfully.
Nov 29 03:22:19 np0005539563 podman[342206]: 2025-11-29 08:22:19.166156342 +0000 UTC m=+0.048957259 container died 355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:22:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-34aab37bb6dad53f3e67d6251cb08b4f0ca43b7c991af029726a320bb7fe21f7-merged.mount: Deactivated successfully.
Nov 29 03:22:19 np0005539563 podman[342206]: 2025-11-29 08:22:19.207203844 +0000 UTC m=+0.090004761 container cleanup 355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:22:19 np0005539563 systemd[1]: libpod-conmon-355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5.scope: Deactivated successfully.
Nov 29 03:22:19 np0005539563 podman[342252]: 2025-11-29 08:22:19.2682475 +0000 UTC m=+0.040557810 container remove 355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.275 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cb106448-f847-4f5e-8d19-102e2c348f74]: (4, ('Sat Nov 29 08:22:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 (355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5)\n355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5\nSat Nov 29 08:22:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 (355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5)\n355ff98b043e762799b947c224448cec150bae3062179268c9adf65733c99fe5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.277 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b1519dbc-a88b-44c8-815c-5f6e16aaca08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.279 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.281 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:19 np0005539563 kernel: tap3d6ff1b5-e0: left promiscuous mode
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.299 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.303 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[88b0f47c-d7ff-4f48-8da8-891c37f701ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.319 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[faeb167c-f90e-473e-ad0d-6ceabad12ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.321 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc55ff65-1de0-4c66-8f16-b4c9163f4868]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.341 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6549d61-0023-455a-b93a-b623b3ee688c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746127, 'reachable_time': 43407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342267, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 systemd[1]: run-netns-ovnmeta\x2d3d6ff1b5\x2de67b\x2d4a23\x2d9145\x2d8139b35e63e8.mount: Deactivated successfully.
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.344 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:19.344 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[6d233aae-de34-45c2-8102-82158ed654dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.395 252257 DEBUG nova.compute.manager [req-ae08ad92-b1ef-4dab-a8cf-30ddfdb0dfcb req-bc3591e8-1f72-4340-ae4b-7790c86ed5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-vif-unplugged-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.395 252257 DEBUG oslo_concurrency.lockutils [req-ae08ad92-b1ef-4dab-a8cf-30ddfdb0dfcb req-bc3591e8-1f72-4340-ae4b-7790c86ed5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.396 252257 DEBUG oslo_concurrency.lockutils [req-ae08ad92-b1ef-4dab-a8cf-30ddfdb0dfcb req-bc3591e8-1f72-4340-ae4b-7790c86ed5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.396 252257 DEBUG oslo_concurrency.lockutils [req-ae08ad92-b1ef-4dab-a8cf-30ddfdb0dfcb req-bc3591e8-1f72-4340-ae4b-7790c86ed5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.396 252257 DEBUG nova.compute.manager [req-ae08ad92-b1ef-4dab-a8cf-30ddfdb0dfcb req-bc3591e8-1f72-4340-ae4b-7790c86ed5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] No waiting events found dispatching network-vif-unplugged-d357d277-1d9f-42be-a5bc-31258c88b186 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.396 252257 DEBUG nova.compute.manager [req-ae08ad92-b1ef-4dab-a8cf-30ddfdb0dfcb req-bc3591e8-1f72-4340-ae4b-7790c86ed5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-vif-unplugged-d357d277-1d9f-42be-a5bc-31258c88b186 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:22:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:19.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.555 252257 INFO nova.virt.libvirt.driver [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Deleting instance files /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11_del#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.556 252257 INFO nova.virt.libvirt.driver [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Deletion of /var/lib/nova/instances/aa6d3201-d2d3-4001-9e68-bd07dcc23b11_del complete#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.611 252257 INFO nova.compute.manager [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.611 252257 DEBUG oslo.service.loopingcall [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.612 252257 DEBUG nova.compute.manager [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:19 np0005539563 nova_compute[252253]: 2025-11-29 08:22:19.612 252257 DEBUG nova.network.neutron [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:19.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.3 MiB/s wr, 360 op/s
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.453 252257 DEBUG nova.network.neutron [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.469 252257 INFO nova.compute.manager [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Took 1.86 seconds to deallocate network for instance.#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.539 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.540 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:21.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.583 252257 DEBUG nova.compute.manager [req-771e0d59-fbfe-4ea1-abff-f94c81f5bf95 req-f3172aa6-dabd-4b86-ad1e-eb25a98b2256 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-vif-deleted-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:21.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.666 252257 DEBUG oslo_concurrency.processutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.797 252257 DEBUG nova.compute.manager [req-47fdb779-c44e-4e95-bd98-7ebda29159e6 req-b0eebaf7-812d-4324-8bb5-3c3edc731eda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.798 252257 DEBUG oslo_concurrency.lockutils [req-47fdb779-c44e-4e95-bd98-7ebda29159e6 req-b0eebaf7-812d-4324-8bb5-3c3edc731eda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.799 252257 DEBUG oslo_concurrency.lockutils [req-47fdb779-c44e-4e95-bd98-7ebda29159e6 req-b0eebaf7-812d-4324-8bb5-3c3edc731eda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.799 252257 DEBUG oslo_concurrency.lockutils [req-47fdb779-c44e-4e95-bd98-7ebda29159e6 req-b0eebaf7-812d-4324-8bb5-3c3edc731eda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.800 252257 DEBUG nova.compute.manager [req-47fdb779-c44e-4e95-bd98-7ebda29159e6 req-b0eebaf7-812d-4324-8bb5-3c3edc731eda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] No waiting events found dispatching network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:21 np0005539563 nova_compute[252253]: 2025-11-29 08:22:21.800 252257 WARNING nova.compute.manager [req-47fdb779-c44e-4e95-bd98-7ebda29159e6 req-b0eebaf7-812d-4324-8bb5-3c3edc731eda 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Received unexpected event network-vif-plugged-d357d277-1d9f-42be-a5bc-31258c88b186 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:22:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2893105580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.130 252257 DEBUG oslo_concurrency.processutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.136 252257 DEBUG nova.compute.provider_tree [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.154 252257 DEBUG nova.scheduler.client.report [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.177 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.213 252257 INFO nova.scheduler.client.report [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Deleted allocations for instance aa6d3201-d2d3-4001-9e68-bd07dcc23b11#033[00m
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.309 252257 DEBUG oslo_concurrency.lockutils [None req-76386bbd-6087-437b-a5be-14053e35b7f6 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "aa6d3201-d2d3-4001-9e68-bd07dcc23b11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 233 op/s
Nov 29 03:22:22 np0005539563 nova_compute[252253]: 2025-11-29 08:22:22.843 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:23.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01534546284664061 of space, bias 1.0, pg target 4.603638853992183 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.6401015494137353 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007033140238228178 of space, bias 1.0, pg target 2.0818095105155408 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003206134840871022 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Nov 29 03:22:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:23.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.104 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 887 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 255 op/s
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.738 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.739 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.739 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.740 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.740 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.742 252257 INFO nova.compute.manager [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Terminating instance#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.743 252257 DEBUG nova.compute.manager [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:24 np0005539563 kernel: tap65163519-df (unregistering): left promiscuous mode
Nov 29 03:22:24 np0005539563 NetworkManager[48981]: <info>  [1764404544.8760] device (tap65163519-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:24Z|00597|binding|INFO|Releasing lport 65163519-df32-4076-bfa2-5a804018b2e9 from this chassis (sb_readonly=0)
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:24Z|00598|binding|INFO|Setting lport 65163519-df32-4076-bfa2-5a804018b2e9 down in Southbound
Nov 29 03:22:24 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:24Z|00599|binding|INFO|Removing iface tap65163519-df ovn-installed in OVS
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.890 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:24.897 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:8e:91 10.100.0.6'], port_security=['fa:16:3e:e1:8e:91 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0d1eac76-3b6b-4734-a481-9b315b2ae484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a738c288b1654ec58416b0da60aacb69', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'edee2156-9188-4700-8452-1d956f3d4c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d677aff-8b0e-4773-b2bd-f6f8dac4947d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=65163519-df32-4076-bfa2-5a804018b2e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:24.899 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 65163519-df32-4076-bfa2-5a804018b2e9 in datapath 97e6ef02-6896-45a2-9eb9-28926c1a7400 unbound from our chassis#033[00m
Nov 29 03:22:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:24.900 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97e6ef02-6896-45a2-9eb9-28926c1a7400, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:24.901 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0688656f-3380-401f-992f-8a6fc6718be0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:24.903 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 namespace which is not needed anymore#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.905 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539563 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 29 03:22:24 np0005539563 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000089.scope: Consumed 20.612s CPU time.
Nov 29 03:22:24 np0005539563 systemd-machined[213024]: Machine qemu-65-instance-00000089 terminated.
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.987 252257 INFO nova.virt.libvirt.driver [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Instance destroyed successfully.#033[00m
Nov 29 03:22:24 np0005539563 nova_compute[252253]: 2025-11-29 08:22:24.987 252257 DEBUG nova.objects.instance [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lazy-loading 'resources' on Instance uuid 0d1eac76-3b6b-4734-a481-9b315b2ae484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.004 252257 DEBUG nova.virt.libvirt.vif [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1325725551',display_name='tempest-₡-1325725551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1325725551',id=137,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a738c288b1654ec58416b0da60aacb69',ramdisk_id='',reservation_id='r-39umlpeh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1672739819',owner_user_name='tempest-ServersTestJSON-1672739819-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:50Z,user_data=None,user_id='3b9a756606a84398819fa76cc6ce9ecd',uuid=0d1eac76-3b6b-4734-a481-9b315b2ae484,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.005 252257 DEBUG nova.network.os_vif_util [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converting VIF {"id": "65163519-df32-4076-bfa2-5a804018b2e9", "address": "fa:16:3e:e1:8e:91", "network": {"id": "97e6ef02-6896-45a2-9eb9-28926c1a7400", "bridge": "br-int", "label": "tempest-ServersTestJSON-1346797520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a738c288b1654ec58416b0da60aacb69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65163519-df", "ovs_interfaceid": "65163519-df32-4076-bfa2-5a804018b2e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.005 252257 DEBUG nova.network.os_vif_util [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.006 252257 DEBUG os_vif [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.009 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.009 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65163519-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.012 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.016 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.018 252257 INFO os_vif [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:8e:91,bridge_name='br-int',has_traffic_filtering=True,id=65163519-df32-4076-bfa2-5a804018b2e9,network=Network(97e6ef02-6896-45a2-9eb9-28926c1a7400),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65163519-df')#033[00m
Nov 29 03:22:25 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [NOTICE]   (336694) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:25 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [NOTICE]   (336694) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:25 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [WARNING]  (336694) : Exiting Master process...
Nov 29 03:22:25 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [WARNING]  (336694) : Exiting Master process...
Nov 29 03:22:25 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [ALERT]    (336694) : Current worker (336696) exited with code 143 (Terminated)
Nov 29 03:22:25 np0005539563 neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400[336690]: [WARNING]  (336694) : All workers exited. Exiting... (0)
Nov 29 03:22:25 np0005539563 systemd[1]: libpod-ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a.scope: Deactivated successfully.
Nov 29 03:22:25 np0005539563 podman[342328]: 2025-11-29 08:22:25.057267627 +0000 UTC m=+0.053386349 container died ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6b48a202685550ff50547b1421dae2ea545e29e2a16f929d9532075f9e09af3a-merged.mount: Deactivated successfully.
Nov 29 03:22:25 np0005539563 podman[342328]: 2025-11-29 08:22:25.252099241 +0000 UTC m=+0.248217963 container cleanup ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:22:25 np0005539563 systemd[1]: libpod-conmon-ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a.scope: Deactivated successfully.
Nov 29 03:22:25 np0005539563 podman[342378]: 2025-11-29 08:22:25.511440174 +0000 UTC m=+0.240253467 container remove ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.519 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[70f4899c-7129-41d6-bd12-5e21b8603ed1]: (4, ('Sat Nov 29 08:22:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a)\nebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a\nSat Nov 29 08:22:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 (ebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a)\nebc93a223332b291d004ed160da9f9809a7d8b86240c4541dcaf9bc935b6c83a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.521 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ab546be1-1a39-4684-b1d8-5f1d6ce4bc2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.522 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97e6ef02-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539563 kernel: tap97e6ef02-60: left promiscuous mode
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.547 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.549 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5545fa-1763-4285-a0e3-0cd26c40900e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:25.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.566 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0da3a38b-d940-4d70-8b63-61b30494d7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.567 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9852ee6f-c207-4830-9939-8141324bff20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.586 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac2ee7e-c69a-47a0-be56-c963cbbf2f33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735343, 'reachable_time': 28364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342393, 'error': None, 'target': 'ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.589 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-97e6ef02-6896-45a2-9eb9-28926c1a7400 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:25 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:25.589 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc7612f-e6d8-437f-83c7-89c4230cd3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539563 systemd[1]: run-netns-ovnmeta\x2d97e6ef02\x2d6896\x2d45a2\x2d9eb9\x2d28926c1a7400.mount: Deactivated successfully.
Nov 29 03:22:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:25.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.816 252257 INFO nova.virt.libvirt.driver [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Deleting instance files /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484_del#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.817 252257 INFO nova.virt.libvirt.driver [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Deletion of /var/lib/nova/instances/0d1eac76-3b6b-4734-a481-9b315b2ae484_del complete#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.898 252257 INFO nova.compute.manager [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.899 252257 DEBUG oslo.service.loopingcall [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.899 252257 DEBUG nova.compute.manager [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:25 np0005539563 nova_compute[252253]: 2025-11-29 08:22:25.899 252257 DEBUG nova.network.neutron [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.342 252257 DEBUG nova.compute.manager [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-vif-unplugged-65163519-df32-4076-bfa2-5a804018b2e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.343 252257 DEBUG oslo_concurrency.lockutils [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.343 252257 DEBUG oslo_concurrency.lockutils [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.344 252257 DEBUG oslo_concurrency.lockutils [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.344 252257 DEBUG nova.compute.manager [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] No waiting events found dispatching network-vif-unplugged-65163519-df32-4076-bfa2-5a804018b2e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.345 252257 DEBUG nova.compute.manager [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-vif-unplugged-65163519-df32-4076-bfa2-5a804018b2e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.345 252257 DEBUG nova.compute.manager [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.345 252257 DEBUG oslo_concurrency.lockutils [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.346 252257 DEBUG oslo_concurrency.lockutils [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.346 252257 DEBUG oslo_concurrency.lockutils [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.347 252257 DEBUG nova.compute.manager [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] No waiting events found dispatching network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.348 252257 WARNING nova.compute.manager [req-bd2b2dd5-2255-44a2-ae6b-93dbe78c120a req-2bb224e1-41d9-45a8-9481-f176a4380663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received unexpected event network-vif-plugged-65163519-df32-4076-bfa2-5a804018b2e9 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:22:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 866 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 271 op/s
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.869 252257 DEBUG nova.network.neutron [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.904 252257 INFO nova.compute.manager [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Took 1.00 seconds to deallocate network for instance.#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.975 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:26 np0005539563 nova_compute[252253]: 2025-11-29 08:22:26.976 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.079 252257 DEBUG oslo_concurrency.processutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688258654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.533 252257 DEBUG oslo_concurrency.processutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.541 252257 DEBUG nova.compute.provider_tree [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.560 252257 DEBUG nova.scheduler.client.report [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:27.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.597 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.644 252257 INFO nova.scheduler.client.report [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Deleted allocations for instance 0d1eac76-3b6b-4734-a481-9b315b2ae484#033[00m
Nov 29 03:22:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:27.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.713 252257 DEBUG oslo_concurrency.lockutils [None req-45dff76a-cc94-4e53-b82c-bbd722e311f6 3b9a756606a84398819fa76cc6ce9ecd a738c288b1654ec58416b0da60aacb69 - - default default] Lock "0d1eac76-3b6b-4734-a481-9b315b2ae484" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:27 np0005539563 nova_compute[252253]: 2025-11-29 08:22:27.845 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 837 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 29 03:22:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:29.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:30 np0005539563 nova_compute[252253]: 2025-11-29 08:22:30.014 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 296 op/s
Nov 29 03:22:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:31.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:31.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:32 np0005539563 nova_compute[252253]: 2025-11-29 08:22:32.094 252257 DEBUG nova.compute.manager [req-2b08c90c-ccce-40ad-8c09-206ad9ee1e19 req-c984a112-30ac-49fc-a483-067c49cfaba2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Received event network-vif-deleted-65163519-df32-4076-bfa2-5a804018b2e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 228 KiB/s wr, 155 op/s
Nov 29 03:22:32 np0005539563 nova_compute[252253]: 2025-11-29 08:22:32.891 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.040 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.040 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.065 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.178 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.179 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.184 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.185 252257 INFO nova.compute.claims [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.425 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:33Z|00600|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:22:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:33.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.617 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114051630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.886 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.892 252257 DEBUG nova.compute.provider_tree [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.921 252257 DEBUG nova.scheduler.client.report [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.956 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:33 np0005539563 nova_compute[252253]: 2025-11-29 08:22:33.958 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.077 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404539.0753543, aa6d3201-d2d3-4001-9e68-bd07dcc23b11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.077 252257 INFO nova.compute.manager [-] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.101 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.101 252257 DEBUG nova.network.neutron [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.126 252257 DEBUG nova.compute.manager [None req-e2408e84-7424-4d06-9f3d-f40a73d834ba - - - - - -] [instance: aa6d3201-d2d3-4001-9e68-bd07dcc23b11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.130 252257 INFO nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.149 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.437 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.438 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.438 252257 INFO nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Creating image(s)#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.463 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 229 KiB/s wr, 158 op/s
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.488 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.513 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.517 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.583 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.584 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.585 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.585 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.612 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.616 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c6849280-963f-4661-bac1-c3655d2dad57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.719 252257 DEBUG nova.policy [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09f1f8a0998948b7b96830d8559609f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:22:34 np0005539563 nova_compute[252253]: 2025-11-29 08:22:34.944 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c6849280-963f-4661-bac1-c3655d2dad57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.037 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.045 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] resizing rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:22:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:35.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.625 252257 DEBUG nova.objects.instance [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'migration_context' on Instance uuid c6849280-963f-4661-bac1-c3655d2dad57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.644 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.645 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Ensure instance console log exists: /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.645 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.645 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.646 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:35.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:35 np0005539563 nova_compute[252253]: 2025-11-29 08:22:35.807 252257 DEBUG nova.network.neutron [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Successfully created port: 9427a372-ceaa-418b-9f5e-699c618df26b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:22:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 811 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.3 MiB/s wr, 182 op/s
Nov 29 03:22:36 np0005539563 podman[342635]: 2025-11-29 08:22:36.755503931 +0000 UTC m=+0.056825933 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:36 np0005539563 podman[342636]: 2025-11-29 08:22:36.785809032 +0000 UTC m=+0.087249767 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:36 np0005539563 podman[342637]: 2025-11-29 08:22:36.811119009 +0000 UTC m=+0.108637957 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:22:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:37.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:37.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:37 np0005539563 nova_compute[252253]: 2025-11-29 08:22:37.931 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 814 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 1.3 MiB/s wr, 125 op/s
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.681 252257 DEBUG nova.network.neutron [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Successfully updated port: 9427a372-ceaa-418b-9f5e-699c618df26b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.701 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.702 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquired lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.702 252257 DEBUG nova.network.neutron [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.861 252257 DEBUG nova.compute.manager [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-changed-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.861 252257 DEBUG nova.compute.manager [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Refreshing instance network info cache due to event network-changed-9427a372-ceaa-418b-9f5e-699c618df26b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.861 252257 DEBUG oslo_concurrency.lockutils [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:38 np0005539563 nova_compute[252253]: 2025-11-29 08:22:38.933 252257 DEBUG nova.network.neutron [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:22:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:39.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:39 np0005539563 nova_compute[252253]: 2025-11-29 08:22:39.985 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404544.9840288, 0d1eac76-3b6b-4734-a481-9b315b2ae484 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:39 np0005539563 nova_compute[252253]: 2025-11-29 08:22:39.986 252257 INFO nova.compute.manager [-] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.010 252257 DEBUG nova.compute.manager [None req-00ccd2aa-2e6d-4e3b-a436-2877b2aff22e - - - - - -] [instance: 0d1eac76-3b6b-4734-a481-9b315b2ae484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.041 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.184 252257 DEBUG nova.network.neutron [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updating instance_info_cache with network_info: [{"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.203 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Releasing lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.203 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Instance network_info: |[{"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.204 252257 DEBUG oslo_concurrency.lockutils [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.204 252257 DEBUG nova.network.neutron [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Refreshing network info cache for port 9427a372-ceaa-418b-9f5e-699c618df26b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.206 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Start _get_guest_xml network_info=[{"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.211 252257 WARNING nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.221 252257 DEBUG nova.virt.libvirt.host [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.222 252257 DEBUG nova.virt.libvirt.host [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.226 252257 DEBUG nova.virt.libvirt.host [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.226 252257 DEBUG nova.virt.libvirt.host [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.227 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.228 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.228 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.229 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.229 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.229 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.229 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.229 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.230 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.230 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.230 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.230 252257 DEBUG nova.virt.hardware [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.233 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 835 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Nov 29 03:22:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325680114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.711 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.746 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:40 np0005539563 nova_compute[252253]: 2025-11-29 08:22:40.752 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125967921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.204 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.205 252257 DEBUG nova.virt.libvirt.vif [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1800912635',display_name='tempest-AttachVolumeNegativeTest-server-1800912635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1800912635',id=151,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQ52WxsQJ5eoReeIIKj0v0+u3fz+tEXZrd3CiTnWBojHO7l362bVK7dMHcSW3OAzN918Q7bczaHDE1n0Dcc1GdtFpM6SBH2x2daFTDP5jd/WLB9A/+7WVVlaMS23JThHg==',key_name='tempest-keypair-1223795523',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-43zklu7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=c6849280-963f-4661-bac1-c3655d2dad57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.206 252257 DEBUG nova.network.os_vif_util [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.207 252257 DEBUG nova.network.os_vif_util [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.208 252257 DEBUG nova.objects.instance [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6849280-963f-4661-bac1-c3655d2dad57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.231 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <uuid>c6849280-963f-4661-bac1-c3655d2dad57</uuid>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <name>instance-00000097</name>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1800912635</nova:name>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:22:40</nova:creationTime>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:user uuid="09f1f8a0998948b7b96830d8559609f6">tempest-AttachVolumeNegativeTest-1426807399-project-member</nova:user>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:project uuid="61d8d3b6b31f4b36b5749db9c550c696">tempest-AttachVolumeNegativeTest-1426807399</nova:project>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <nova:port uuid="9427a372-ceaa-418b-9f5e-699c618df26b">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <entry name="serial">c6849280-963f-4661-bac1-c3655d2dad57</entry>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <entry name="uuid">c6849280-963f-4661-bac1-c3655d2dad57</entry>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c6849280-963f-4661-bac1-c3655d2dad57_disk">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c6849280-963f-4661-bac1-c3655d2dad57_disk.config">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:2e:98:4d"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <target dev="tap9427a372-ce"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/console.log" append="off"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:22:41 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:22:41 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:22:41 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:22:41 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.233 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Preparing to wait for external event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.234 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.234 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.234 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.236 252257 DEBUG nova.virt.libvirt.vif [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1800912635',display_name='tempest-AttachVolumeNegativeTest-server-1800912635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1800912635',id=151,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQ52WxsQJ5eoReeIIKj0v0+u3fz+tEXZrd3CiTnWBojHO7l362bVK7dMHcSW3OAzN918Q7bczaHDE1n0Dcc1GdtFpM6SBH2x2daFTDP5jd/WLB9A/+7WVVlaMS23JThHg==',key_name='tempest-keypair-1223795523',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-43zklu7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=c6849280-963f-4661-bac1-c3655d2dad57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.236 252257 DEBUG nova.network.os_vif_util [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.238 252257 DEBUG nova.network.os_vif_util [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.239 252257 DEBUG os_vif [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.239 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.240 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.240 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.242 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.243 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9427a372-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.243 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9427a372-ce, col_values=(('external_ids', {'iface-id': '9427a372-ceaa-418b-9f5e-699c618df26b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:98:4d', 'vm-uuid': 'c6849280-963f-4661-bac1-c3655d2dad57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:41 np0005539563 NetworkManager[48981]: <info>  [1764404561.2457] manager: (tap9427a372-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.248 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.252 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.254 252257 INFO os_vif [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce')#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.354 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.354 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.354 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:2e:98:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.355 252257 INFO nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Using config drive#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.380 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:41.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:41.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.814 252257 INFO nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Creating config drive at /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/disk.config#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.826 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgy2lw8_g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.932 252257 DEBUG nova.network.neutron [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updated VIF entry in instance network info cache for port 9427a372-ceaa-418b-9f5e-699c618df26b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.933 252257 DEBUG nova.network.neutron [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updating instance_info_cache with network_info: [{"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.950 252257 DEBUG oslo_concurrency.lockutils [req-6d3fd55a-08cf-4596-8b6d-bddcc3fabde6 req-900b2d92-bab0-40da-926a-12744aa5b451 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:41 np0005539563 nova_compute[252253]: 2025-11-29 08:22:41.969 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgy2lw8_g" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.000 252257 DEBUG nova.storage.rbd_utils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] rbd image c6849280-963f-4661-bac1-c3655d2dad57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.003 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/disk.config c6849280-963f-4661-bac1-c3655d2dad57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.162 252257 DEBUG oslo_concurrency.processutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/disk.config c6849280-963f-4661-bac1-c3655d2dad57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.162 252257 INFO nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Deleting local config drive /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57/disk.config because it was imported into RBD.#033[00m
Nov 29 03:22:42 np0005539563 NetworkManager[48981]: <info>  [1764404562.2192] manager: (tap9427a372-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Nov 29 03:22:42 np0005539563 kernel: tap9427a372-ce: entered promiscuous mode
Nov 29 03:22:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:42Z|00601|binding|INFO|Claiming lport 9427a372-ceaa-418b-9f5e-699c618df26b for this chassis.
Nov 29 03:22:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:42Z|00602|binding|INFO|9427a372-ceaa-418b-9f5e-699c618df26b: Claiming fa:16:3e:2e:98:4d 10.100.0.9
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.231 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:98:4d 10.100.0.9'], port_security=['fa:16:3e:2e:98:4d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6849280-963f-4661-bac1-c3655d2dad57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '51f8f3df-202b-4d8a-8e5e-534a0e5fbff8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=9427a372-ceaa-418b-9f5e-699c618df26b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.232 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 9427a372-ceaa-418b-9f5e-699c618df26b in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 bound to our chassis#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.233 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8#033[00m
Nov 29 03:22:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:42Z|00603|binding|INFO|Setting lport 9427a372-ceaa-418b-9f5e-699c618df26b ovn-installed in OVS
Nov 29 03:22:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:42Z|00604|binding|INFO|Setting lport 9427a372-ceaa-418b-9f5e-699c618df26b up in Southbound
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.240 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.248 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e66bb60-dfcf-49ea-adc9-2413e64e4dd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.249 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d6ff1b5-e1 in ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.251 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d6ff1b5-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.251 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8bed45fc-1763-4f03-80b5-918a9a7ce860]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.253 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc91e58-ea3b-40c0-aedf-37755559e498]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 systemd-udevd[342862]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:42 np0005539563 systemd-machined[213024]: New machine qemu-72-instance-00000097.
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.269 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[bb47db48-e9bb-4f26-8d3a-5082a024dd1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 NetworkManager[48981]: <info>  [1764404562.2724] device (tap9427a372-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:42 np0005539563 NetworkManager[48981]: <info>  [1764404562.2741] device (tap9427a372-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.288 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c9aa67-b5b0-42cc-8f62-6fa6df637b74]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 systemd[1]: Started Virtual Machine qemu-72-instance-00000097.
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.317 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[38c4210b-0b4e-4283-9ddc-592089514d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.323 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c7afac-a188-40fb-afa8-5c2576d77a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 NetworkManager[48981]: <info>  [1764404562.3250] manager: (tap3d6ff1b5-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.357 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[778177fc-ae03-4a75-96ee-37a695ca6c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.361 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[df01c17c-6225-4853-aad8-81fa1f5db376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 NetworkManager[48981]: <info>  [1764404562.3827] device (tap3d6ff1b5-e0): carrier: link connected
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.388 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[256f8eb0-2fe9-4a8d-a158-758018639531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.405 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f721e2-e6eb-45ac-a89d-42f0dc2bce17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753015, 'reachable_time': 24150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342894, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.420 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ed1603-a762-4bab-84da-21b42350428d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:6a1d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 753015, 'tstamp': 753015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342895, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.436 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b9292750-2389-457f-b563-a91f6d45fe10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d6ff1b5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:6a:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753015, 'reachable_time': 24150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342896, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.463 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2d79e0-31f3-4f9a-9434-0695083501e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 835 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 690 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.511 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdf7093-ad46-406c-b8d3-abcd82ac6058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.512 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.512 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.513 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d6ff1b5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:42 np0005539563 NetworkManager[48981]: <info>  [1764404562.5151] manager: (tap3d6ff1b5-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Nov 29 03:22:42 np0005539563 kernel: tap3d6ff1b5-e0: entered promiscuous mode
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.517 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d6ff1b5-e0, col_values=(('external_ids', {'iface-id': '54675c6b-d3a2-417c-b976-28c1e010fd1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:42Z|00605|binding|INFO|Releasing lport 54675c6b-d3a2-417c-b976-28c1e010fd1e from this chassis (sb_readonly=0)
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.534 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.534 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[753f6e7e-3460-4e58-b0da-af1971626691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.535 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.pid.haproxy
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 3d6ff1b5-e67b-4a23-9145-8139b35e63e8
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:22:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:42.536 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'env', 'PROCESS_TAG=haproxy-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d6ff1b5-e67b-4a23-9145-8139b35e63e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:22:42 np0005539563 nova_compute[252253]: 2025-11-29 08:22:42.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539563 podman[342925]: 2025-11-29 08:22:42.951644701 +0000 UTC m=+0.055855816 container create ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:42 np0005539563 systemd[1]: Started libpod-conmon-ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616.scope.
Nov 29 03:22:43 np0005539563 podman[342925]: 2025-11-29 08:22:42.923124328 +0000 UTC m=+0.027335453 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:22:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:22:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4bf98443066f24527bbdfa653d89152cdef6e1a3cc12ca707cc4ca69bfe9d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:43 np0005539563 podman[342925]: 2025-11-29 08:22:43.049262228 +0000 UTC m=+0.153473343 container init ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:22:43 np0005539563 podman[342925]: 2025-11-29 08:22:43.054979343 +0000 UTC m=+0.159190458 container start ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:22:43 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [NOTICE]   (342945) : New worker (342947) forked
Nov 29 03:22:43 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [NOTICE]   (342945) : Loading success.
Nov 29 03:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.400 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404563.400045, c6849280-963f-4661-bac1-c3655d2dad57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.401 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:43.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.638 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.643 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404563.400269, c6849280-963f-4661-bac1-c3655d2dad57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.644 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.670 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.672 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:43.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.689 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.762 252257 DEBUG nova.compute.manager [req-95ccfbd6-7857-4bbf-a619-01b09c982e33 req-ea443302-2421-4956-b00d-035961784346 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.762 252257 DEBUG oslo_concurrency.lockutils [req-95ccfbd6-7857-4bbf-a619-01b09c982e33 req-ea443302-2421-4956-b00d-035961784346 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.762 252257 DEBUG oslo_concurrency.lockutils [req-95ccfbd6-7857-4bbf-a619-01b09c982e33 req-ea443302-2421-4956-b00d-035961784346 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.763 252257 DEBUG oslo_concurrency.lockutils [req-95ccfbd6-7857-4bbf-a619-01b09c982e33 req-ea443302-2421-4956-b00d-035961784346 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.763 252257 DEBUG nova.compute.manager [req-95ccfbd6-7857-4bbf-a619-01b09c982e33 req-ea443302-2421-4956-b00d-035961784346 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Processing event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.763 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.767 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404563.7673292, c6849280-963f-4661-bac1-c3655d2dad57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.768 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] VM Resumed (Lifecycle Event)
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.769 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.772 252257 INFO nova.virt.libvirt.driver [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Instance spawned successfully.
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.773 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.795 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.799 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.799 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.800 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.800 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.800 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.801 252257 DEBUG nova.virt.libvirt.driver [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.805 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.857 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.879 252257 INFO nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Took 9.44 seconds to spawn the instance on the hypervisor.
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.880 252257 DEBUG nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.951 252257 INFO nova.compute.manager [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Took 10.82 seconds to build instance.
Nov 29 03:22:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:43 np0005539563 nova_compute[252253]: 2025-11-29 08:22:43.975 252257 DEBUG oslo_concurrency.lockutils [None req-d940a4ee-0b5a-4e8b-8e44-e768ae28b907 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 837 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Nov 29 03:22:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:45.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:22:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.866 252257 DEBUG nova.compute.manager [req-ada6980d-bb0f-4e64-a60f-552ccc3b40fb req-490ad69b-d126-4f3a-987c-0a6bbd8914a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.867 252257 DEBUG oslo_concurrency.lockutils [req-ada6980d-bb0f-4e64-a60f-552ccc3b40fb req-490ad69b-d126-4f3a-987c-0a6bbd8914a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.867 252257 DEBUG oslo_concurrency.lockutils [req-ada6980d-bb0f-4e64-a60f-552ccc3b40fb req-490ad69b-d126-4f3a-987c-0a6bbd8914a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.867 252257 DEBUG oslo_concurrency.lockutils [req-ada6980d-bb0f-4e64-a60f-552ccc3b40fb req-490ad69b-d126-4f3a-987c-0a6bbd8914a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.867 252257 DEBUG nova.compute.manager [req-ada6980d-bb0f-4e64-a60f-552ccc3b40fb req-490ad69b-d126-4f3a-987c-0a6bbd8914a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] No waiting events found dispatching network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:22:45 np0005539563 nova_compute[252253]: 2025-11-29 08:22:45.867 252257 WARNING nova.compute.manager [req-ada6980d-bb0f-4e64-a60f-552ccc3b40fb req-490ad69b-d126-4f3a-987c-0a6bbd8914a1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received unexpected event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b for instance with vm_state active and task_state None.
Nov 29 03:22:46 np0005539563 nova_compute[252253]: 2025-11-29 08:22:46.245 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Nov 29 03:22:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:47.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539563 nova_compute[252253]: 2025-11-29 08:22:47.935 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Nov 29 03:22:48 np0005539563 nova_compute[252253]: 2025-11-29 08:22:48.280 252257 DEBUG nova.compute.manager [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-changed-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:48 np0005539563 nova_compute[252253]: 2025-11-29 08:22:48.280 252257 DEBUG nova.compute.manager [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Refreshing instance network info cache due to event network-changed-9427a372-ceaa-418b-9f5e-699c618df26b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:22:48 np0005539563 nova_compute[252253]: 2025-11-29 08:22:48.280 252257 DEBUG oslo_concurrency.lockutils [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:22:48 np0005539563 nova_compute[252253]: 2025-11-29 08:22:48.281 252257 DEBUG oslo_concurrency.lockutils [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:22:48 np0005539563 nova_compute[252253]: 2025-11-29 08:22:48.281 252257 DEBUG nova.network.neutron [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Refreshing network info cache for port 9427a372-ceaa-418b-9f5e-699c618df26b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:22:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 884 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 116 op/s
Nov 29 03:22:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 03:22:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Nov 29 03:22:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:49.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:50 np0005539563 nova_compute[252253]: 2025-11-29 08:22:50.195 252257 DEBUG nova.network.neutron [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updated VIF entry in instance network info cache for port 9427a372-ceaa-418b-9f5e-699c618df26b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:22:50 np0005539563 nova_compute[252253]: 2025-11-29 08:22:50.196 252257 DEBUG nova.network.neutron [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updating instance_info_cache with network_info: [{"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:22:50 np0005539563 nova_compute[252253]: 2025-11-29 08:22:50.221 252257 DEBUG oslo_concurrency.lockutils [req-a4be9da9-8033-4cf3-a1b4-16d01224aac3 req-8315a888-e47b-4940-89a0-63f9326d18ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c6849280-963f-4661-bac1-c3655d2dad57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:22:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 866 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Nov 29 03:22:51 np0005539563 nova_compute[252253]: 2025-11-29 08:22:51.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:51.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 866 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Nov 29 03:22:52 np0005539563 nova_compute[252253]: 2025-11-29 08:22:52.937 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:53.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:53.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 350 op/s
Nov 29 03:22:54 np0005539563 nova_compute[252253]: 2025-11-29 08:22:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:22:54 np0005539563 nova_compute[252253]: 2025-11-29 08:22:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:22:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d6d6429b-851d-4214-9f9d-77eac1e78eef does not exist
Nov 29 03:22:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c8998fc1-c114-499e-b882-bd5b37cf6fb2 does not exist
Nov 29 03:22:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8795da53-7f02-41c5-8028-a61d37f328ed does not exist
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:22:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:22:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:22:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.4986192 +0000 UTC m=+0.044688043 container create d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:22:55 np0005539563 systemd[1]: Started libpod-conmon-d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04.scope.
Nov 29 03:22:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.478172836 +0000 UTC m=+0.024241719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.585609089 +0000 UTC m=+0.131677952 container init d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.592727342 +0000 UTC m=+0.138796185 container start d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.595510038 +0000 UTC m=+0.141578901 container attach d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:22:55 np0005539563 unruffled_driscoll[343289]: 167 167
Nov 29 03:22:55 np0005539563 systemd[1]: libpod-d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04.scope: Deactivated successfully.
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.598048017 +0000 UTC m=+0.144116860 container died d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:22:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:55.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-63e01200963001fc546543d7d671e01705fbff3aa01adb884b1aacf76c5fbb93-merged.mount: Deactivated successfully.
Nov 29 03:22:55 np0005539563 podman[343273]: 2025-11-29 08:22:55.634411913 +0000 UTC m=+0.180480756 container remove d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:22:55 np0005539563 systemd[1]: libpod-conmon-d74b44c94828a55a6e711c87f146d8187a169f7aa707d4e9dd193c978d57cf04.scope: Deactivated successfully.
Nov 29 03:22:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:55.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539563 podman[343313]: 2025-11-29 08:22:55.809935412 +0000 UTC m=+0.039417749 container create 792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:22:55 np0005539563 systemd[1]: Started libpod-conmon-792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724.scope.
Nov 29 03:22:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:22:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aef5ac77cc55ff31c125d9488d9f9a7a615310a527f183d10e7026543ad68b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aef5ac77cc55ff31c125d9488d9f9a7a615310a527f183d10e7026543ad68b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aef5ac77cc55ff31c125d9488d9f9a7a615310a527f183d10e7026543ad68b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aef5ac77cc55ff31c125d9488d9f9a7a615310a527f183d10e7026543ad68b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aef5ac77cc55ff31c125d9488d9f9a7a615310a527f183d10e7026543ad68b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:55 np0005539563 podman[343313]: 2025-11-29 08:22:55.792272984 +0000 UTC m=+0.021755341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:55 np0005539563 podman[343313]: 2025-11-29 08:22:55.896935952 +0000 UTC m=+0.126418299 container init 792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:22:55 np0005539563 podman[343313]: 2025-11-29 08:22:55.905781663 +0000 UTC m=+0.135264000 container start 792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chaplygin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:22:55 np0005539563 podman[343313]: 2025-11-29 08:22:55.910194072 +0000 UTC m=+0.139676439 container attach 792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:22:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:22:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1170531116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:22:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:22:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1170531116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:22:56 np0005539563 nova_compute[252253]: 2025-11-29 08:22:56.249 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 596 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 437 KiB/s wr, 415 op/s
Nov 29 03:22:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:56Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:98:4d 10.100.0.9
Nov 29 03:22:56 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:56Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:98:4d 10.100.0.9
Nov 29 03:22:56 np0005539563 unruffled_chaplygin[343330]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:22:56 np0005539563 unruffled_chaplygin[343330]: --> relative data size: 1.0
Nov 29 03:22:56 np0005539563 unruffled_chaplygin[343330]: --> All data devices are unavailable
Nov 29 03:22:56 np0005539563 systemd[1]: libpod-792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724.scope: Deactivated successfully.
Nov 29 03:22:56 np0005539563 podman[343313]: 2025-11-29 08:22:56.804905856 +0000 UTC m=+1.034388223 container died 792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:22:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-59aef5ac77cc55ff31c125d9488d9f9a7a615310a527f183d10e7026543ad68b-merged.mount: Deactivated successfully.
Nov 29 03:22:56 np0005539563 podman[343313]: 2025-11-29 08:22:56.876555409 +0000 UTC m=+1.106037746 container remove 792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:22:56 np0005539563 systemd[1]: libpod-conmon-792e1da46032cfd4c59f63e52801ee0ca2c5e4f5f4ed52db4116d119c7a3e724.scope: Deactivated successfully.
Nov 29 03:22:56 np0005539563 nova_compute[252253]: 2025-11-29 08:22:56.935 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:56.934 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:56.936 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:22:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:57Z|00606|binding|INFO|Releasing lport 6711ba96-49f0-431a-a4d5-64f9cee27708 from this chassis (sb_readonly=0)
Nov 29 03:22:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:57Z|00607|binding|INFO|Releasing lport 54675c6b-d3a2-417c-b976-28c1e010fd1e from this chassis (sb_readonly=0)
Nov 29 03:22:57 np0005539563 nova_compute[252253]: 2025-11-29 08:22:57.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:57.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.620528525 +0000 UTC m=+0.046158632 container create 54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:22:57 np0005539563 systemd[1]: Started libpod-conmon-54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae.scope.
Nov 29 03:22:57 np0005539563 nova_compute[252253]: 2025-11-29 08:22:57.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:22:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.601427487 +0000 UTC m=+0.027057624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.714233566 +0000 UTC m=+0.139863693 container init 54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.721195055 +0000 UTC m=+0.146825162 container start 54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.724786133 +0000 UTC m=+0.150416240 container attach 54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:22:57 np0005539563 youthful_galileo[343564]: 167 167
Nov 29 03:22:57 np0005539563 systemd[1]: libpod-54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae.scope: Deactivated successfully.
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.728132543 +0000 UTC m=+0.153762660 container died 54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:22:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3970712dbceb4097085ba4eb47f93d7eb55e9f92e7d4969c4e0a3b655d33e072-merged.mount: Deactivated successfully.
Nov 29 03:22:57 np0005539563 podman[343547]: 2025-11-29 08:22:57.766193596 +0000 UTC m=+0.191823713 container remove 54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:22:57 np0005539563 systemd[1]: libpod-conmon-54bb9e246da2b599d929767ce23186149000a01f6824988c9ccc339e81b4f1ae.scope: Deactivated successfully.
Nov 29 03:22:57 np0005539563 nova_compute[252253]: 2025-11-29 08:22:57.939 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:57 np0005539563 podman[343588]: 2025-11-29 08:22:57.96474367 +0000 UTC m=+0.048764844 container create 7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:22:58 np0005539563 systemd[1]: Started libpod-conmon-7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c.scope.
Nov 29 03:22:58 np0005539563 podman[343588]: 2025-11-29 08:22:57.938874448 +0000 UTC m=+0.022895652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:22:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3005c7786271da3283fc75839aeec08b3be2e7e114f20bf84577ce283a5d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3005c7786271da3283fc75839aeec08b3be2e7e114f20bf84577ce283a5d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3005c7786271da3283fc75839aeec08b3be2e7e114f20bf84577ce283a5d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d3005c7786271da3283fc75839aeec08b3be2e7e114f20bf84577ce283a5d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:58 np0005539563 podman[343588]: 2025-11-29 08:22:58.063027556 +0000 UTC m=+0.147048760 container init 7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:22:58 np0005539563 podman[343588]: 2025-11-29 08:22:58.072629936 +0000 UTC m=+0.156651110 container start 7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:22:58 np0005539563 podman[343588]: 2025-11-29 08:22:58.076445279 +0000 UTC m=+0.160466493 container attach 7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Nov 29 03:22:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 608 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 549 KiB/s wr, 498 op/s
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:58 np0005539563 gracious_carver[343605]: {
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:    "0": [
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:        {
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "devices": [
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "/dev/loop3"
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            ],
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "lv_name": "ceph_lv0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "lv_size": "7511998464",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "name": "ceph_lv0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "tags": {
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.cluster_name": "ceph",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.crush_device_class": "",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.encrypted": "0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.osd_id": "0",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.type": "block",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:                "ceph.vdo": "0"
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            },
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "type": "block",
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:            "vg_name": "ceph_vg0"
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:        }
Nov 29 03:22:58 np0005539563 gracious_carver[343605]:    ]
Nov 29 03:22:58 np0005539563 gracious_carver[343605]: }
Nov 29 03:22:58 np0005539563 systemd[1]: libpod-7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c.scope: Deactivated successfully.
Nov 29 03:22:58 np0005539563 podman[343588]: 2025-11-29 08:22:58.880215417 +0000 UTC m=+0.964236681 container died 7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:22:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f8d3005c7786271da3283fc75839aeec08b3be2e7e114f20bf84577ce283a5d5-merged.mount: Deactivated successfully.
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.945 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.945 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.946 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.946 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.946 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.947 252257 INFO nova.compute.manager [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Terminating instance#033[00m
Nov 29 03:22:58 np0005539563 nova_compute[252253]: 2025-11-29 08:22:58.948 252257 DEBUG nova.compute.manager [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:58 np0005539563 podman[343588]: 2025-11-29 08:22:58.949985419 +0000 UTC m=+1.034006603 container remove 7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:22:58 np0005539563 systemd[1]: libpod-conmon-7aa3678a4bd8b5d2a18ebc48e3c43e9a4c82f4d1673101339d6dd277b5ae927c.scope: Deactivated successfully.
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 03:22:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Nov 29 03:22:59 np0005539563 kernel: tap7d3e9f63-03 (unregistering): left promiscuous mode
Nov 29 03:22:59 np0005539563 NetworkManager[48981]: <info>  [1764404579.0170] device (tap7d3e9f63-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:59Z|00608|binding|INFO|Releasing lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 from this chassis (sb_readonly=0)
Nov 29 03:22:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:59Z|00609|binding|INFO|Setting lport 7d3e9f63-03fd-471c-8eeb-dba78634e033 down in Southbound
Nov 29 03:22:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:22:59Z|00610|binding|INFO|Removing iface tap7d3e9f63-03 ovn-installed in OVS
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.039 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:6f:ea 10.100.0.3'], port_security=['fa:16:3e:e1:6f:ea 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b6f3346-1230-472f-bd04-791d2367bebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '8', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=7d3e9f63-03fd-471c-8eeb-dba78634e033) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.040 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 7d3e9f63-03fd-471c-8eeb-dba78634e033 in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.042 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32485b0e-177b-4dfd-a55a-0249528f32e1#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.045 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.066 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ed7dcd-30b0-4a4d-8e99-5110c222115a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:59 np0005539563 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 29 03:22:59 np0005539563 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000008c.scope: Consumed 18.872s CPU time.
Nov 29 03:22:59 np0005539563 systemd-machined[213024]: Machine qemu-69-instance-0000008c terminated.
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.105 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7c8272-9e88-4ef7-840a-c94a2483abad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.111 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b44feb-386f-48e6-9104-053d0622a38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.144 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[345fbf29-b1cb-490a-858d-9fc149c5b1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.166 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d44ad88e-9e8a-4a2e-a216-1630a188d765]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32485b0e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:44:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731762, 'reachable_time': 39146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343692, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.185 252257 INFO nova.virt.libvirt.driver [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Instance destroyed successfully.#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.186 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3daaf9ca-f2a3-4bcf-808f-65a9f7c67b9d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731778, 'tstamp': 731778}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343710, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32485b0e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731782, 'tstamp': 731782}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343710, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.186 252257 DEBUG nova.objects.instance [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 9b6f3346-1230-472f-bd04-791d2367bebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.187 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.189 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.194 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32485b0e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.194 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.194 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32485b0e-10, col_values=(('external_ids', {'iface-id': '6711ba96-49f0-431a-a4d5-64f9cee27708'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:22:59.195 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.208 252257 DEBUG nova.virt.libvirt.vif [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1647701311',display_name='tempest-ServerStableDeviceRescueTest-server-1647701311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1647701311',id=140,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-dl1duw1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:46Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=9b6f3346-1230-472f-bd04-791d2367bebb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.208 252257 DEBUG nova.network.os_vif_util [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "address": "fa:16:3e:e1:6f:ea", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d3e9f63-03", "ovs_interfaceid": "7d3e9f63-03fd-471c-8eeb-dba78634e033", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.209 252257 DEBUG nova.network.os_vif_util [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.210 252257 DEBUG os_vif [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.212 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.212 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d3e9f63-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.216 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.219 252257 INFO os_vif [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:6f:ea,bridge_name='br-int',has_traffic_filtering=True,id=7d3e9f63-03fd-471c-8eeb-dba78634e033,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d3e9f63-03')#033[00m
Nov 29 03:22:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:59.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.636 252257 INFO nova.virt.libvirt.driver [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Deleting instance files /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb_del#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.637 252257 INFO nova.virt.libvirt.driver [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Deletion of /var/lib/nova/instances/9b6f3346-1230-472f-bd04-791d2367bebb_del complete#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.651 252257 DEBUG nova.compute.manager [req-40efda2b-58ee-4202-be81-ff92de4c3d0a req-c92820dc-1f71-4945-8b7f-182feaf1fd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.651 252257 DEBUG oslo_concurrency.lockutils [req-40efda2b-58ee-4202-be81-ff92de4c3d0a req-c92820dc-1f71-4945-8b7f-182feaf1fd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.653 252257 DEBUG oslo_concurrency.lockutils [req-40efda2b-58ee-4202-be81-ff92de4c3d0a req-c92820dc-1f71-4945-8b7f-182feaf1fd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.654 252257 DEBUG oslo_concurrency.lockutils [req-40efda2b-58ee-4202-be81-ff92de4c3d0a req-c92820dc-1f71-4945-8b7f-182feaf1fd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.654 252257 DEBUG nova.compute.manager [req-40efda2b-58ee-4202-be81-ff92de4c3d0a req-c92820dc-1f71-4945-8b7f-182feaf1fd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.654 252257 DEBUG nova.compute.manager [req-40efda2b-58ee-4202-be81-ff92de4c3d0a req-c92820dc-1f71-4945-8b7f-182feaf1fd84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-unplugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.664161337 +0000 UTC m=+0.051648001 container create c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_taussig, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:22:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:22:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:59 np0005539563 systemd[1]: Started libpod-conmon-c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55.scope.
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.709 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.710 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.710 252257 INFO nova.compute.manager [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.711 252257 DEBUG oslo.service.loopingcall [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.711 252257 DEBUG nova.compute.manager [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:59 np0005539563 nova_compute[252253]: 2025-11-29 08:22:59.711 252257 DEBUG nova.network.neutron [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.646386206 +0000 UTC m=+0.033872880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.764474808 +0000 UTC m=+0.151961512 container init c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_taussig, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.772482085 +0000 UTC m=+0.159968769 container start c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.776124244 +0000 UTC m=+0.163610928 container attach c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:22:59 np0005539563 distracted_taussig[343830]: 167 167
Nov 29 03:22:59 np0005539563 systemd[1]: libpod-c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55.scope: Deactivated successfully.
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.782393504 +0000 UTC m=+0.169880198 container died c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:22:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f67a9a5a20d7075ccb4f05edfc4dd958f52879c42171feaffa62fce40b92ca22-merged.mount: Deactivated successfully.
Nov 29 03:22:59 np0005539563 podman[343814]: 2025-11-29 08:22:59.832066921 +0000 UTC m=+0.219553585 container remove c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_taussig, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:22:59 np0005539563 systemd[1]: libpod-conmon-c318bdb7086ed7040ff080701c3c9f0e215c9aca1bf6d43822571e72daccbf55.scope: Deactivated successfully.
Nov 29 03:23:00 np0005539563 podman[343853]: 2025-11-29 08:23:00.045198611 +0000 UTC m=+0.065585869 container create 69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:23:00 np0005539563 systemd[1]: Started libpod-conmon-69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e.scope.
Nov 29 03:23:00 np0005539563 podman[343853]: 2025-11-29 08:23:00.019877084 +0000 UTC m=+0.040264432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:23:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:23:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac97b0db7cf2218a1beaac038183ba9bed8cbae3de863e392fde62b8fcfa16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac97b0db7cf2218a1beaac038183ba9bed8cbae3de863e392fde62b8fcfa16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac97b0db7cf2218a1beaac038183ba9bed8cbae3de863e392fde62b8fcfa16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfac97b0db7cf2218a1beaac038183ba9bed8cbae3de863e392fde62b8fcfa16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:00 np0005539563 podman[343853]: 2025-11-29 08:23:00.148684437 +0000 UTC m=+0.169071715 container init 69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_meitner, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:23:00 np0005539563 podman[343853]: 2025-11-29 08:23:00.155489363 +0000 UTC m=+0.175876621 container start 69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_meitner, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:23:00 np0005539563 podman[343853]: 2025-11-29 08:23:00.158656568 +0000 UTC m=+0.179043846 container attach 69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:23:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 616 op/s
Nov 29 03:23:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:00.939 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]: {
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:        "osd_id": 0,
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:        "type": "bluestore"
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]:    }
Nov 29 03:23:01 np0005539563 elastic_meitner[343870]: }
Nov 29 03:23:01 np0005539563 systemd[1]: libpod-69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e.scope: Deactivated successfully.
Nov 29 03:23:01 np0005539563 podman[343892]: 2025-11-29 08:23:01.079175032 +0000 UTC m=+0.030224921 container died 69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_meitner, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:23:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dfac97b0db7cf2218a1beaac038183ba9bed8cbae3de863e392fde62b8fcfa16-merged.mount: Deactivated successfully.
Nov 29 03:23:01 np0005539563 podman[343892]: 2025-11-29 08:23:01.144892955 +0000 UTC m=+0.095942824 container remove 69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:23:01 np0005539563 systemd[1]: libpod-conmon-69480ce219759393c71ca0f56befa742e82411cdcf3600686c360211fe90361e.scope: Deactivated successfully.
Nov 29 03:23:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:23:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:23:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:23:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:23:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8235dbc1-381c-4237-b3de-dd697150c5c7 does not exist
Nov 29 03:23:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1aaa974b-6eb4-48a5-bc6d-4b69567354e5 does not exist
Nov 29 03:23:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f281886d-9916-4e4b-8b21-3a7fd91dbefd does not exist
Nov 29 03:23:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:01.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:01.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.702 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.702 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.808 252257 DEBUG nova.compute.manager [req-8af4c45d-61a3-440f-ac2c-318c906066ac req-c92d8791-3445-4fb5-8ad9-32770945dcd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.808 252257 DEBUG oslo_concurrency.lockutils [req-8af4c45d-61a3-440f-ac2c-318c906066ac req-c92d8791-3445-4fb5-8ad9-32770945dcd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.808 252257 DEBUG oslo_concurrency.lockutils [req-8af4c45d-61a3-440f-ac2c-318c906066ac req-c92d8791-3445-4fb5-8ad9-32770945dcd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.809 252257 DEBUG oslo_concurrency.lockutils [req-8af4c45d-61a3-440f-ac2c-318c906066ac req-c92d8791-3445-4fb5-8ad9-32770945dcd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.809 252257 DEBUG nova.compute.manager [req-8af4c45d-61a3-440f-ac2c-318c906066ac req-c92d8791-3445-4fb5-8ad9-32770945dcd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] No waiting events found dispatching network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:01 np0005539563 nova_compute[252253]: 2025-11-29 08:23:01.809 252257 WARNING nova.compute.manager [req-8af4c45d-61a3-440f-ac2c-318c906066ac req-c92d8791-3445-4fb5-8ad9-32770945dcd3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received unexpected event network-vif-plugged-7d3e9f63-03fd-471c-8eeb-dba78634e033 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:23:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081328841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.134 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:23:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.232 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.233 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.236 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.236 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.280 252257 DEBUG nova.network.neutron [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.295 252257 INFO nova.compute.manager [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Took 2.58 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.360 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.360 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.416 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.418 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3882MB free_disk=20.817977905273438GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.418 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 401 op/s
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.544 252257 DEBUG nova.compute.manager [req-7f155301-d4d5-4554-875d-09ae803d88ff req-8733c9e3-e62a-4c53-927c-e99a4856bc79 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Received event network-vif-deleted-7d3e9f63-03fd-471c-8eeb-dba78634e033 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.581 252257 DEBUG nova.scheduler.client.report [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.622 252257 DEBUG nova.scheduler.client.report [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.623 252257 DEBUG nova.compute.provider_tree [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.637 252257 DEBUG nova.scheduler.client.report [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.666 252257 DEBUG nova.scheduler.client.report [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.752 252257 DEBUG oslo_concurrency.processutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:02 np0005539563 nova_compute[252253]: 2025-11-29 08:23:02.941 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916305998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.182 252257 DEBUG oslo_concurrency.processutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.188 252257 DEBUG nova.compute.provider_tree [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.224 252257 DEBUG nova.scheduler.client.report [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.260 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.263 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.294 252257 INFO nova.scheduler.client.report [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Deleted allocations for instance 9b6f3346-1230-472f-bd04-791d2367bebb#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.449 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 1fad2d6f-5a00-43ad-af43-00916509fc61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.450 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance c6849280-963f-4661-bac1-c3655d2dad57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.450 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.450 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.496 252257 DEBUG oslo_concurrency.lockutils [None req-9e141454-3882-4a33-b4ff-4454fb5e831f 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "9b6f3346-1230-472f-bd04-791d2367bebb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.517 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:03.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:03.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067972468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.957 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.963 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:03 np0005539563 nova_compute[252253]: 2025-11-29 08:23:03.986 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:04 np0005539563 nova_compute[252253]: 2025-11-29 08:23:04.015 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:23:04 np0005539563 nova_compute[252253]: 2025-11-29 08:23:04.015 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:04 np0005539563 nova_compute[252253]: 2025-11-29 08:23:04.215 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Nov 29 03:23:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 03:23:04 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Nov 29 03:23:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 525 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 876 KiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 29 03:23:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:04.928 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:04.928 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:04.929 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:05.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:05.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:05 np0005539563 nova_compute[252253]: 2025-11-29 08:23:05.764 252257 DEBUG oslo_concurrency.lockutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:05 np0005539563 nova_compute[252253]: 2025-11-29 08:23:05.764 252257 DEBUG oslo_concurrency.lockutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:05 np0005539563 nova_compute[252253]: 2025-11-29 08:23:05.784 252257 DEBUG nova.objects.instance [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid c6849280-963f-4661-bac1-c3655d2dad57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:05 np0005539563 nova_compute[252253]: 2025-11-29 08:23:05.829 252257 DEBUG oslo_concurrency.lockutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.114 252257 DEBUG oslo_concurrency.lockutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.115 252257 DEBUG oslo_concurrency.lockutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.115 252257 INFO nova.compute.manager [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Attaching volume e2fd498b-baaf-4a87-aa0b-3132c2eb32db to /dev/vdb#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.278 252257 DEBUG os_brick.utils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.279 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.297 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.298 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[6f79dd52-24fd-4e95-9dac-c2e7566e8c02]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.299 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.307 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.308 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[5449b70e-1b62-4bf8-9c04-6fc804ed884a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.309 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.318 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.318 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e7031d-c203-4934-8e22-cb39ceaa0ff2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.319 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[41a12041-4232-4322-b5df-96eb437fab52]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.320 252257 DEBUG oslo_concurrency.processutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.348 252257 DEBUG oslo_concurrency.processutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.350 252257 DEBUG os_brick.initiator.connectors.lightos [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.350 252257 DEBUG os_brick.initiator.connectors.lightos [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.351 252257 DEBUG os_brick.initiator.connectors.lightos [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.351 252257 DEBUG os_brick.utils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:23:06 np0005539563 nova_compute[252253]: 2025-11-29 08:23:06.351 252257 DEBUG nova.virt.block_device [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updating existing volume attachment record: b1e5438b-2408-4c69-b06b-5edd5f6c8ff6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:23:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 521 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 956 KiB/s rd, 5.8 MiB/s wr, 288 op/s
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.016 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.129 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.174 252257 DEBUG nova.objects.instance [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid c6849280-963f-4661-bac1-c3655d2dad57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.204 252257 DEBUG nova.virt.libvirt.driver [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Attempting to attach volume e2fd498b-baaf-4a87-aa0b-3132c2eb32db with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.206 252257 DEBUG nova.virt.libvirt.guest [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-e2fd498b-baaf-4a87-aa0b-3132c2eb32db">
Nov 29 03:23:07 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:23:07 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:23:07 np0005539563 nova_compute[252253]:  <serial>e2fd498b-baaf-4a87-aa0b-3132c2eb32db</serial>
Nov 29 03:23:07 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:23:07 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.320 252257 DEBUG nova.virt.libvirt.driver [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.321 252257 DEBUG nova.virt.libvirt.driver [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.321 252257 DEBUG nova.virt.libvirt.driver [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.322 252257 DEBUG nova.virt.libvirt.driver [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] No VIF found with MAC fa:16:3e:2e:98:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:23:07 np0005539563 podman[344055]: 2025-11-29 08:23:07.531513767 +0000 UTC m=+0.074361648 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:23:07 np0005539563 podman[344054]: 2025-11-29 08:23:07.534765435 +0000 UTC m=+0.078089828 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:23:07 np0005539563 podman[344056]: 2025-11-29 08:23:07.534825497 +0000 UTC m=+0.075497358 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.572 252257 DEBUG oslo_concurrency.lockutils [None req-5eadbeb6-a806-47d1-87a8-59a4ce99073e 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:07.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:07.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:07 np0005539563 nova_compute[252253]: 2025-11-29 08:23:07.945 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 825 KiB/s rd, 5.3 MiB/s wr, 275 op/s
Nov 29 03:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Nov 29 03:23:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 03:23:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Nov 29 03:23:09 np0005539563 nova_compute[252253]: 2025-11-29 08:23:09.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:09 np0005539563 nova_compute[252253]: 2025-11-29 08:23:09.455 252257 DEBUG oslo_concurrency.lockutils [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:09 np0005539563 nova_compute[252253]: 2025-11-29 08:23:09.456 252257 DEBUG oslo_concurrency.lockutils [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:09 np0005539563 nova_compute[252253]: 2025-11-29 08:23:09.469 252257 INFO nova.compute.manager [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Detaching volume e2fd498b-baaf-4a87-aa0b-3132c2eb32db#033[00m
Nov 29 03:23:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:09.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:09.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.117 252257 INFO nova.virt.block_device [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Attempting to driver detach volume e2fd498b-baaf-4a87-aa0b-3132c2eb32db from mountpoint /dev/vdb#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.125 252257 DEBUG nova.virt.libvirt.driver [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Attempting to detach device vdb from instance c6849280-963f-4661-bac1-c3655d2dad57 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.126 252257 DEBUG nova.virt.libvirt.guest [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-e2fd498b-baaf-4a87-aa0b-3132c2eb32db">
Nov 29 03:23:10 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <serial>e2fd498b-baaf-4a87-aa0b-3132c2eb32db</serial>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:23:10 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.133 252257 INFO nova.virt.libvirt.driver [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully detached device vdb from instance c6849280-963f-4661-bac1-c3655d2dad57 from the persistent domain config.#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.134 252257 DEBUG nova.virt.libvirt.driver [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c6849280-963f-4661-bac1-c3655d2dad57 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.134 252257 DEBUG nova.virt.libvirt.guest [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-e2fd498b-baaf-4a87-aa0b-3132c2eb32db">
Nov 29 03:23:10 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <serial>e2fd498b-baaf-4a87-aa0b-3132c2eb32db</serial>
Nov 29 03:23:10 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:23:10 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:23:10 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.191 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404590.1909006, c6849280-963f-4661-bac1-c3655d2dad57 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.193 252257 DEBUG nova.virt.libvirt.driver [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c6849280-963f-4661-bac1-c3655d2dad57 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.195 252257 INFO nova.virt.libvirt.driver [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully detached device vdb from instance c6849280-963f-4661-bac1-c3655d2dad57 from the live domain config.#033[00m
Nov 29 03:23:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 5.9 MiB/s wr, 304 op/s
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.519 252257 DEBUG nova.objects.instance [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'flavor' on Instance uuid c6849280-963f-4661-bac1-c3655d2dad57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:10 np0005539563 nova_compute[252253]: 2025-11-29 08:23:10.596 252257 DEBUG oslo_concurrency.lockutils [None req-1d6aae3e-2e14-41c5-bd5e-9d854c69d9b1 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.155 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Nov 29 03:23:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 03:23:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.592 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.593 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.593 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.593 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.593 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.594 252257 INFO nova.compute.manager [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Terminating instance#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.595 252257 DEBUG nova.compute.manager [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:23:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:11.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:11 np0005539563 kernel: tap9427a372-ce (unregistering): left promiscuous mode
Nov 29 03:23:11 np0005539563 NetworkManager[48981]: <info>  [1764404591.6473] device (tap9427a372-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:23:11Z|00611|binding|INFO|Releasing lport 9427a372-ceaa-418b-9f5e-699c618df26b from this chassis (sb_readonly=0)
Nov 29 03:23:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:23:11Z|00612|binding|INFO|Setting lport 9427a372-ceaa-418b-9f5e-699c618df26b down in Southbound
Nov 29 03:23:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:23:11Z|00613|binding|INFO|Removing iface tap9427a372-ce ovn-installed in OVS
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.691 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:11.699 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:98:4d 10.100.0.9'], port_security=['fa:16:3e:2e:98:4d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6849280-963f-4661-bac1-c3655d2dad57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61d8d3b6b31f4b36b5749db9c550c696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '51f8f3df-202b-4d8a-8e5e-534a0e5fbff8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d406c7c1-fafd-4f72-8c37-90a5a1b5d4e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=9427a372-ceaa-418b-9f5e-699c618df26b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:11.700 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 9427a372-ceaa-418b-9f5e-699c618df26b in datapath 3d6ff1b5-e67b-4a23-9145-8139b35e63e8 unbound from our chassis#033[00m
Nov 29 03:23:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:11.701 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d6ff1b5-e67b-4a23-9145-8139b35e63e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:11.702 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[256b3455-a9f5-4dbe-83a4-a8adcc4b9a4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:11.703 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 namespace which is not needed anymore#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.704 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:11 np0005539563 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 29 03:23:11 np0005539563 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000097.scope: Consumed 15.308s CPU time.
Nov 29 03:23:11 np0005539563 systemd-machined[213024]: Machine qemu-72-instance-00000097 terminated.
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.814 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.819 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.830 252257 INFO nova.virt.libvirt.driver [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Instance destroyed successfully.#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.831 252257 DEBUG nova.objects.instance [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lazy-loading 'resources' on Instance uuid c6849280-963f-4661-bac1-c3655d2dad57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.843 252257 DEBUG nova.virt.libvirt.vif [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1800912635',display_name='tempest-AttachVolumeNegativeTest-server-1800912635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1800912635',id=151,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJQ52WxsQJ5eoReeIIKj0v0+u3fz+tEXZrd3CiTnWBojHO7l362bVK7dMHcSW3OAzN918Q7bczaHDE1n0Dcc1GdtFpM6SBH2x2daFTDP5jd/WLB9A/+7WVVlaMS23JThHg==',key_name='tempest-keypair-1223795523',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61d8d3b6b31f4b36b5749db9c550c696',ramdisk_id='',reservation_id='r-43zklu7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1426807399',owner_user_name='tempest-AttachVolumeNegativeTest-1426807399-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f1f8a0998948b7b96830d8559609f6',uuid=c6849280-963f-4661-bac1-c3655d2dad57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.844 252257 DEBUG nova.network.os_vif_util [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converting VIF {"id": "9427a372-ceaa-418b-9f5e-699c618df26b", "address": "fa:16:3e:2e:98:4d", "network": {"id": "3d6ff1b5-e67b-4a23-9145-8139b35e63e8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-200311477-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61d8d3b6b31f4b36b5749db9c550c696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9427a372-ce", "ovs_interfaceid": "9427a372-ceaa-418b-9f5e-699c618df26b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.845 252257 DEBUG nova.network.os_vif_util [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.845 252257 DEBUG os_vif [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.847 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.848 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9427a372-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.852 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.855 252257 INFO os_vif [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:98:4d,bridge_name='br-int',has_traffic_filtering=True,id=9427a372-ceaa-418b-9f5e-699c618df26b,network=Network(3d6ff1b5-e67b-4a23-9145-8139b35e63e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9427a372-ce')#033[00m
Nov 29 03:23:11 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [NOTICE]   (342945) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:11 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [NOTICE]   (342945) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:11 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [WARNING]  (342945) : Exiting Master process...
Nov 29 03:23:11 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [WARNING]  (342945) : Exiting Master process...
Nov 29 03:23:11 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [ALERT]    (342945) : Current worker (342947) exited with code 143 (Terminated)
Nov 29 03:23:11 np0005539563 neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8[342941]: [WARNING]  (342945) : All workers exited. Exiting... (0)
Nov 29 03:23:11 np0005539563 systemd[1]: libpod-ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616.scope: Deactivated successfully.
Nov 29 03:23:11 np0005539563 podman[344145]: 2025-11-29 08:23:11.867550728 +0000 UTC m=+0.063911223 container died ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2d4bf98443066f24527bbdfa653d89152cdef6e1a3cc12ca707cc4ca69bfe9d0-merged.mount: Deactivated successfully.
Nov 29 03:23:11 np0005539563 podman[344145]: 2025-11-29 08:23:11.914174323 +0000 UTC m=+0.110534828 container cleanup ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:11 np0005539563 systemd[1]: libpod-conmon-ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616.scope: Deactivated successfully.
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.924 252257 DEBUG nova.compute.manager [req-cb454d92-6340-480f-8321-7e91a3e37472 req-a9fb18e1-4ad5-4941-88d6-1204c272cfb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-vif-unplugged-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.925 252257 DEBUG oslo_concurrency.lockutils [req-cb454d92-6340-480f-8321-7e91a3e37472 req-a9fb18e1-4ad5-4941-88d6-1204c272cfb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.925 252257 DEBUG oslo_concurrency.lockutils [req-cb454d92-6340-480f-8321-7e91a3e37472 req-a9fb18e1-4ad5-4941-88d6-1204c272cfb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.925 252257 DEBUG oslo_concurrency.lockutils [req-cb454d92-6340-480f-8321-7e91a3e37472 req-a9fb18e1-4ad5-4941-88d6-1204c272cfb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.926 252257 DEBUG nova.compute.manager [req-cb454d92-6340-480f-8321-7e91a3e37472 req-a9fb18e1-4ad5-4941-88d6-1204c272cfb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] No waiting events found dispatching network-vif-unplugged-9427a372-ceaa-418b-9f5e-699c618df26b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:11 np0005539563 nova_compute[252253]: 2025-11-29 08:23:11.926 252257 DEBUG nova.compute.manager [req-cb454d92-6340-480f-8321-7e91a3e37472 req-a9fb18e1-4ad5-4941-88d6-1204c272cfb0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-vif-unplugged-9427a372-ceaa-418b-9f5e-699c618df26b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:12 np0005539563 podman[344200]: 2025-11-29 08:23:12.001527412 +0000 UTC m=+0.060852142 container remove ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.008 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9fddf6-cdad-4a05-a837-f07542ab4e43]: (4, ('Sat Nov 29 08:23:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 (ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616)\nae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616\nSat Nov 29 08:23:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 (ae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616)\nae7c963324d8151510e448766c11230acaf87d71417b7d1557d255e9cc4e9616\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.010 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[99b6cbd6-b33f-4ff5-ada6-2f0fa22cf04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.012 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d6ff1b5-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.014 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 kernel: tap3d6ff1b5-e0: left promiscuous mode
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.030 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.034 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4522e6ae-72a6-420d-8da9-e04846c47f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.054 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d77c50c-96ef-4085-860c-75d61aff89f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.056 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2815927d-a781-4e30-ab42-04bc99266e18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.076 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a318a2f8-af79-45c1-92e2-81aef7c5e24b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 753008, 'reachable_time': 44337, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344215, 'error': None, 'target': 'ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 systemd[1]: run-netns-ovnmeta\x2d3d6ff1b5\x2de67b\x2d4a23\x2d9145\x2d8139b35e63e8.mount: Deactivated successfully.
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.084 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d6ff1b5-e67b-4a23-9145-8139b35e63e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.085 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[9121dce9-f487-47bb-939f-4ca516be07b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.260 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.261 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.261 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.262 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.262 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.263 252257 INFO nova.compute.manager [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Terminating instance#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.264 252257 DEBUG nova.compute.manager [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.305 252257 INFO nova.virt.libvirt.driver [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Deleting instance files /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57_del#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.307 252257 INFO nova.virt.libvirt.driver [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Deletion of /var/lib/nova/instances/c6849280-963f-4661-bac1-c3655d2dad57_del complete#033[00m
Nov 29 03:23:12 np0005539563 kernel: tap96eb3aec-07 (unregistering): left promiscuous mode
Nov 29 03:23:12 np0005539563 NetworkManager[48981]: <info>  [1764404592.3181] device (tap96eb3aec-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:23:12Z|00614|binding|INFO|Releasing lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc from this chassis (sb_readonly=0)
Nov 29 03:23:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:23:12Z|00615|binding|INFO|Setting lport 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc down in Southbound
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.324 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:23:12Z|00616|binding|INFO|Removing iface tap96eb3aec-07 ovn-installed in OVS
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.326 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.331 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:36:6b 10.100.0.6'], port_security=['fa:16:3e:5b:36:6b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1fad2d6f-5a00-43ad-af43-00916509fc61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32485b0e-177b-4dfd-a55a-0249528f32e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '358970eca7ad4b05b70f43e5507ac052', 'neutron:revision_number': '8', 'neutron:security_group_ids': '33616c4d-f137-4188-9923-071fd3df21bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83a2eb53-2a5d-447d-a36c-4b9c2b295f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.333 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 96eb3aec-07ea-42dc-8983-3d61e9f8b5fc in datapath 32485b0e-177b-4dfd-a55a-0249528f32e1 unbound from our chassis#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.334 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32485b0e-177b-4dfd-a55a-0249528f32e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.336 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[545acf69-d2be-49e5-a921-bdf959562273]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.337 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 namespace which is not needed anymore#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.363 252257 INFO nova.compute.manager [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.364 252257 DEBUG oslo.service.loopingcall [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:23:12 np0005539563 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.365 252257 DEBUG nova.compute.manager [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.365 252257 DEBUG nova.network.neutron [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:23:12 np0005539563 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000083.scope: Consumed 24.066s CPU time.
Nov 29 03:23:12 np0005539563 systemd-machined[213024]: Machine qemu-63-instance-00000083 terminated.
Nov 29 03:23:12 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [NOTICE]   (335106) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:12 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [NOTICE]   (335106) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:12 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [WARNING]  (335106) : Exiting Master process...
Nov 29 03:23:12 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [ALERT]    (335106) : Current worker (335108) exited with code 143 (Terminated)
Nov 29 03:23:12 np0005539563 neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1[335101]: [WARNING]  (335106) : All workers exited. Exiting... (0)
Nov 29 03:23:12 np0005539563 systemd[1]: libpod-08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c.scope: Deactivated successfully.
Nov 29 03:23:12 np0005539563 podman[344238]: 2025-11-29 08:23:12.460340345 +0000 UTC m=+0.042391131 container died 08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:23:12 np0005539563 NetworkManager[48981]: <info>  [1764404592.4796] manager: (tap96eb3aec-07): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.482 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 29 03:23:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bf9bf3f82f6d56245d299b13d38fd0940f3b33cd648a8d98518033fca0e3b4b0-merged.mount: Deactivated successfully.
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.496 252257 INFO nova.virt.libvirt.driver [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Instance destroyed successfully.#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.498 252257 DEBUG nova.objects.instance [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lazy-loading 'resources' on Instance uuid 1fad2d6f-5a00-43ad-af43-00916509fc61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:12 np0005539563 podman[344238]: 2025-11-29 08:23:12.499213789 +0000 UTC m=+0.081264575 container cleanup 08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.514 252257 DEBUG nova.virt.libvirt.vif [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1419637348',display_name='tempest-ServerStableDeviceRescueTest-server-1419637348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1419637348',id=131,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='358970eca7ad4b05b70f43e5507ac052',ramdisk_id='',reservation_id='r-z40kifsz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1105304301',owner_user_name='tempest-ServerStableDeviceRescueTest-1105304301-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:10Z,user_data=None,user_id='3b52040d601a4a56abcaf3f046f1e349',uuid=1fad2d6f-5a00-43ad-af43-00916509fc61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:12 np0005539563 systemd[1]: libpod-conmon-08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c.scope: Deactivated successfully.
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.515 252257 DEBUG nova.network.os_vif_util [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converting VIF {"id": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "address": "fa:16:3e:5b:36:6b", "network": {"id": "32485b0e-177b-4dfd-a55a-0249528f32e1", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-627892437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "358970eca7ad4b05b70f43e5507ac052", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96eb3aec-07", "ovs_interfaceid": "96eb3aec-07ea-42dc-8983-3d61e9f8b5fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.515 252257 DEBUG nova.network.os_vif_util [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.516 252257 DEBUG os_vif [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.518 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96eb3aec-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.520 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.522 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.524 252257 INFO os_vif [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:36:6b,bridge_name='br-int',has_traffic_filtering=True,id=96eb3aec-07ea-42dc-8983-3d61e9f8b5fc,network=Network(32485b0e-177b-4dfd-a55a-0249528f32e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96eb3aec-07')#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.543 252257 DEBUG nova.compute.manager [req-55c8852f-9e88-45e6-b1e3-3d31f78ad668 req-70e210ca-a741-4996-9113-cafd753b5306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.543 252257 DEBUG oslo_concurrency.lockutils [req-55c8852f-9e88-45e6-b1e3-3d31f78ad668 req-70e210ca-a741-4996-9113-cafd753b5306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.544 252257 DEBUG oslo_concurrency.lockutils [req-55c8852f-9e88-45e6-b1e3-3d31f78ad668 req-70e210ca-a741-4996-9113-cafd753b5306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.544 252257 DEBUG oslo_concurrency.lockutils [req-55c8852f-9e88-45e6-b1e3-3d31f78ad668 req-70e210ca-a741-4996-9113-cafd753b5306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.544 252257 DEBUG nova.compute.manager [req-55c8852f-9e88-45e6-b1e3-3d31f78ad668 req-70e210ca-a741-4996-9113-cafd753b5306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.544 252257 DEBUG nova.compute.manager [req-55c8852f-9e88-45e6-b1e3-3d31f78ad668 req-70e210ca-a741-4996-9113-cafd753b5306 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-unplugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:12 np0005539563 podman[344275]: 2025-11-29 08:23:12.563572224 +0000 UTC m=+0.039860991 container remove 08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.569 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[14bd92b3-8f04-4486-83dd-8e765a5be79d]: (4, ('Sat Nov 29 08:23:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c)\n08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c\nSat Nov 29 08:23:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 (08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c)\n08616805caaee0780764658127183c767b1446594874d159b344ebd9a928c18c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.571 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6d9baf-451a-4187-abfb-50a004d8b0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.572 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32485b0e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:12 np0005539563 kernel: tap32485b0e-10: left promiscuous mode
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.574 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.592 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7a76d3-ee26-4511-b342-1794fc67acb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.606 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[75383df8-da7c-49b6-856f-1d94237e64ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.607 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e86de3f5-38cc-457d-a3c0-92a861cdad85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.621 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f06750-af16-49fa-9a5e-a6776a4414bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731754, 'reachable_time': 42672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344308, 'error': None, 'target': 'ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.623 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32485b0e-177b-4dfd-a55a-0249528f32e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:12.623 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[8e940f65-a1a6-48c0-b6cc-5f4d1ac96252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:23:12
Nov 29 03:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta']
Nov 29 03:23:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:23:12 np0005539563 systemd[1]: run-netns-ovnmeta\x2d32485b0e\x2d177b\x2d4dfd\x2da55a\x2d0249528f32e1.mount: Deactivated successfully.
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.947 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.959 252257 INFO nova.virt.libvirt.driver [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Deleting instance files /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61_del#033[00m
Nov 29 03:23:12 np0005539563 nova_compute[252253]: 2025-11-29 08:23:12.960 252257 INFO nova.virt.libvirt.driver [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Deletion of /var/lib/nova/instances/1fad2d6f-5a00-43ad-af43-00916509fc61_del complete#033[00m
Nov 29 03:23:13 np0005539563 nova_compute[252253]: 2025-11-29 08:23:13.047 252257 INFO nova.compute.manager [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:23:13 np0005539563 nova_compute[252253]: 2025-11-29 08:23:13.048 252257 DEBUG oslo.service.loopingcall [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:23:13 np0005539563 nova_compute[252253]: 2025-11-29 08:23:13.048 252257 DEBUG nova.compute.manager [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:23:13 np0005539563 nova_compute[252253]: 2025-11-29 08:23:13.049 252257 DEBUG nova.network.neutron [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:13.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:13.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:23:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:23:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:23:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:23:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:23:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:23:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:23:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.169 252257 DEBUG nova.compute.manager [req-8fcf184b-0024-4638-b52c-1806663c242c req-25687416-f82a-4a10-be40-914e33c54483 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.170 252257 DEBUG oslo_concurrency.lockutils [req-8fcf184b-0024-4638-b52c-1806663c242c req-25687416-f82a-4a10-be40-914e33c54483 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c6849280-963f-4661-bac1-c3655d2dad57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.170 252257 DEBUG oslo_concurrency.lockutils [req-8fcf184b-0024-4638-b52c-1806663c242c req-25687416-f82a-4a10-be40-914e33c54483 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.171 252257 DEBUG oslo_concurrency.lockutils [req-8fcf184b-0024-4638-b52c-1806663c242c req-25687416-f82a-4a10-be40-914e33c54483 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.171 252257 DEBUG nova.compute.manager [req-8fcf184b-0024-4638-b52c-1806663c242c req-25687416-f82a-4a10-be40-914e33c54483 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] No waiting events found dispatching network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.171 252257 WARNING nova.compute.manager [req-8fcf184b-0024-4638-b52c-1806663c242c req-25687416-f82a-4a10-be40-914e33c54483 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received unexpected event network-vif-plugged-9427a372-ceaa-418b-9f5e-699c618df26b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.184 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404579.1824958, 9b6f3346-1230-472f-bd04-791d2367bebb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.184 252257 INFO nova.compute.manager [-] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 03:23:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.232 252257 DEBUG nova.compute.manager [None req-89e797eb-8991-4d10-ab9d-968eaefdd976 - - - - - -] [instance: 9b6f3346-1230-472f-bd04-791d2367bebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.420 252257 DEBUG nova.network.neutron [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.424 252257 DEBUG nova.network.neutron [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.441 252257 INFO nova.compute.manager [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Took 2.08 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.457 252257 INFO nova.compute.manager [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Took 1.41 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 216 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 366 op/s
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.504 252257 DEBUG nova.compute.manager [req-cfc6eddf-3b46-4903-af89-a86efd4ae358 req-352633bd-8f06-4b6c-9154-d6e667528c5e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Received event network-vif-deleted-9427a372-ceaa-418b-9f5e-699c618df26b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.508 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.509 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.512 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.604 252257 DEBUG oslo_concurrency.processutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.768 252257 DEBUG nova.compute.manager [req-db84e113-9880-4404-b71a-69726e198a1e req-97999053-a94f-4372-ab9b-76c26c868f9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.769 252257 DEBUG oslo_concurrency.lockutils [req-db84e113-9880-4404-b71a-69726e198a1e req-97999053-a94f-4372-ab9b-76c26c868f9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.769 252257 DEBUG oslo_concurrency.lockutils [req-db84e113-9880-4404-b71a-69726e198a1e req-97999053-a94f-4372-ab9b-76c26c868f9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.770 252257 DEBUG oslo_concurrency.lockutils [req-db84e113-9880-4404-b71a-69726e198a1e req-97999053-a94f-4372-ab9b-76c26c868f9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.770 252257 DEBUG nova.compute.manager [req-db84e113-9880-4404-b71a-69726e198a1e req-97999053-a94f-4372-ab9b-76c26c868f9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] No waiting events found dispatching network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:14 np0005539563 nova_compute[252253]: 2025-11-29 08:23:14.770 252257 WARNING nova.compute.manager [req-db84e113-9880-4404-b71a-69726e198a1e req-97999053-a94f-4372-ab9b-76c26c868f9c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received unexpected event network-vif-plugged-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:23:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233120286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.087 252257 DEBUG oslo_concurrency.processutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.094 252257 DEBUG nova.compute.provider_tree [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.113 252257 DEBUG nova.scheduler.client.report [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.133 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.136 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.177 252257 INFO nova.scheduler.client.report [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Deleted allocations for instance 1fad2d6f-5a00-43ad-af43-00916509fc61#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.196 252257 DEBUG oslo_concurrency.processutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.248 252257 DEBUG oslo_concurrency.lockutils [None req-a6a8a88a-9ea2-471a-ae81-8c04a05ddafa 3b52040d601a4a56abcaf3f046f1e349 358970eca7ad4b05b70f43e5507ac052 - - default default] Lock "1fad2d6f-5a00-43ad-af43-00916509fc61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:15.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042379002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.659 252257 DEBUG oslo_concurrency.processutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.665 252257 DEBUG nova.compute.provider_tree [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.688 252257 DEBUG nova.scheduler.client.report [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:23:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.715 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.746 252257 INFO nova.scheduler.client.report [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Deleted allocations for instance c6849280-963f-4661-bac1-c3655d2dad57#033[00m
Nov 29 03:23:15 np0005539563 nova_compute[252253]: 2025-11-29 08:23:15.815 252257 DEBUG oslo_concurrency.lockutils [None req-e741a9fb-d852-47ed-8c46-223638ea6ec7 09f1f8a0998948b7b96830d8559609f6 61d8d3b6b31f4b36b5749db9c550c696 - - default default] Lock "c6849280-963f-4661-bac1-c3655d2dad57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 120 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 405 op/s
Nov 29 03:23:16 np0005539563 nova_compute[252253]: 2025-11-29 08:23:16.494 252257 DEBUG nova.compute.manager [req-15c11b18-2b57-4f28-bdcd-ba79eed2092c req-989401a2-62cd-4a8e-a84f-a543154e76b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Received event network-vif-deleted-96eb3aec-07ea-42dc-8983-3d61e9f8b5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:17 np0005539563 nova_compute[252253]: 2025-11-29 08:23:17.521 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:17.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:17.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:17 np0005539563 nova_compute[252253]: 2025-11-29 08:23:17.950 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 127 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 363 KiB/s wr, 268 op/s
Nov 29 03:23:18 np0005539563 nova_compute[252253]: 2025-11-29 08:23:18.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Nov 29 03:23:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 03:23:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Nov 29 03:23:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:19.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:19.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 293 op/s
Nov 29 03:23:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:21.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 233 op/s
Nov 29 03:23:22 np0005539563 nova_compute[252253]: 2025-11-29 08:23:22.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:22 np0005539563 nova_compute[252253]: 2025-11-29 08:23:22.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6486970442490229 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:23:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:23.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:23.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 29 03:23:25 np0005539563 nova_compute[252253]: 2025-11-29 08:23:25.108 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:23:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117672145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:23:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:23:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117672145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:23:25 np0005539563 nova_compute[252253]: 2025-11-29 08:23:25.313 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:25.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:23:26 np0005539563 nova_compute[252253]: 2025-11-29 08:23:26.828 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404591.827289, c6849280-963f-4661-bac1-c3655d2dad57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:26 np0005539563 nova_compute[252253]: 2025-11-29 08:23:26.829 252257 INFO nova.compute.manager [-] [instance: c6849280-963f-4661-bac1-c3655d2dad57] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:27 np0005539563 nova_compute[252253]: 2025-11-29 08:23:27.495 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404592.49413, 1fad2d6f-5a00-43ad-af43-00916509fc61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:27 np0005539563 nova_compute[252253]: 2025-11-29 08:23:27.495 252257 INFO nova.compute.manager [-] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:27 np0005539563 nova_compute[252253]: 2025-11-29 08:23:27.529 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:27.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539563 nova_compute[252253]: 2025-11-29 08:23:27.953 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664543326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664543326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:23:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.9 MiB/s wr, 57 op/s
Nov 29 03:23:28 np0005539563 nova_compute[252253]: 2025-11-29 08:23:28.600 252257 DEBUG nova.compute.manager [None req-56394d6d-5c6e-4864-8314-3def59c957ad - - - - - -] [instance: c6849280-963f-4661-bac1-c3655d2dad57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:28 np0005539563 nova_compute[252253]: 2025-11-29 08:23:28.637 252257 DEBUG nova.compute.manager [None req-3117de77-c497-44ef-afc9-306d2096a932 - - - - - -] [instance: 1fad2d6f-5a00-43ad-af43-00916509fc61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722934986' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722934986' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:23:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:29.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.6 MiB/s wr, 59 op/s
Nov 29 03:23:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:31.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:31.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 40 op/s
Nov 29 03:23:32 np0005539563 nova_compute[252253]: 2025-11-29 08:23:32.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:32 np0005539563 nova_compute[252253]: 2025-11-29 08:23:32.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:33.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:33.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 13 KiB/s wr, 65 op/s
Nov 29 03:23:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:35.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:35.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Nov 29 03:23:37 np0005539563 nova_compute[252253]: 2025-11-29 08:23:37.602 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:37.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:37 np0005539563 nova_compute[252253]: 2025-11-29 08:23:37.960 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Nov 29 03:23:38 np0005539563 podman[344469]: 2025-11-29 08:23:38.505766738 +0000 UTC m=+0.049179574 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:23:38 np0005539563 podman[344470]: 2025-11-29 08:23:38.517549188 +0000 UTC m=+0.061475318 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:23:38 np0005539563 podman[344471]: 2025-11-29 08:23:38.54049611 +0000 UTC m=+0.084234365 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Nov 29 03:23:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:39.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:39.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Nov 29 03:23:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:23:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:41.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:23:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 73 op/s
Nov 29 03:23:42 np0005539563 nova_compute[252253]: 2025-11-29 08:23:42.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:42 np0005539563 nova_compute[252253]: 2025-11-29 08:23:42.961 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:43.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 177 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 586 KiB/s wr, 80 op/s
Nov 29 03:23:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:45.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:45 np0005539563 nova_compute[252253]: 2025-11-29 08:23:45.692 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 179 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 61 op/s
Nov 29 03:23:47 np0005539563 nova_compute[252253]: 2025-11-29 08:23:47.609 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:47.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:48 np0005539563 nova_compute[252253]: 2025-11-29 08:23:48.005 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 180 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.4 MiB/s wr, 21 op/s
Nov 29 03:23:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Nov 29 03:23:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 227 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Nov 29 03:23:52 np0005539563 nova_compute[252253]: 2025-11-29 08:23:52.655 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:53 np0005539563 nova_compute[252253]: 2025-11-29 08:23:53.007 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 03:23:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 3.3 MiB/s wr, 82 op/s
Nov 29 03:23:56 np0005539563 nova_compute[252253]: 2025-11-29 08:23:56.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:56 np0005539563 nova_compute[252253]: 2025-11-29 08:23:56.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:23:57 np0005539563 nova_compute[252253]: 2025-11-29 08:23:57.658 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:23:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:57.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:23:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:58 np0005539563 nova_compute[252253]: 2025-11-29 08:23:58.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:58.210 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:23:58 np0005539563 nova_compute[252253]: 2025-11-29 08:23:58.211 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:23:58.213 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:23:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 2.8 MiB/s wr, 76 op/s
Nov 29 03:23:58 np0005539563 nova_compute[252253]: 2025-11-29 08:23:58.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:59 np0005539563 nova_compute[252253]: 2025-11-29 08:23:59.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:59 np0005539563 nova_compute[252253]: 2025-11-29 08:23:59.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:23:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 75 op/s
Nov 29 03:24:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Nov 29 03:24:01 np0005539563 nova_compute[252253]: 2025-11-29 08:24:01.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:01 np0005539563 nova_compute[252253]: 2025-11-29 08:24:01.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:24:01 np0005539563 nova_compute[252253]: 2025-11-29 08:24:01.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:24:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:01.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 03:24:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.122 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 43 op/s
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.661 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.767 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.767 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.767 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.768 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:24:02 np0005539563 nova_compute[252253]: 2025-11-29 08:24:02.768 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:03 np0005539563 nova_compute[252253]: 2025-11-29 08:24:03.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531296144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:03 np0005539563 nova_compute[252253]: 2025-11-29 08:24:03.221 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:03 np0005539563 nova_compute[252253]: 2025-11-29 08:24:03.402 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:24:03 np0005539563 nova_compute[252253]: 2025-11-29 08:24:03.403 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4358MB free_disk=20.92190170288086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:24:03 np0005539563 nova_compute[252253]: 2025-11-29 08:24:03.403 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:03 np0005539563 nova_compute[252253]: 2025-11-29 08:24:03.404 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:03.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:04.215 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.262 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.262 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.284 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 31 KiB/s wr, 77 op/s
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007952135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.868 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.874 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:24:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.905 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:04.929 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:04.929 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:04.929 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.942 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:24:04 np0005539563 nova_compute[252253]: 2025-11-29 08:24:04.942 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:05.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b36fb929-a5ba-4234-8d5c-6dfcdaa6d9f0 does not exist
Nov 29 03:24:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6102edec-e7a0-4b1e-8eec-1decd05a61fa does not exist
Nov 29 03:24:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 703c01e3-3e6a-4eb0-80ca-3edb615bfbee does not exist
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 03:24:05 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Nov 29 03:24:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 217 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 22 KiB/s wr, 155 op/s
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.58694483 +0000 UTC m=+0.048609029 container create 0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:24:06 np0005539563 systemd[1]: Started libpod-conmon-0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c.scope.
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.564893922 +0000 UTC m=+0.026558131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.69163379 +0000 UTC m=+0.153297989 container init 0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.700915451 +0000 UTC m=+0.162579610 container start 0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_leakey, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.704531099 +0000 UTC m=+0.166195348 container attach 0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:06 np0005539563 reverent_leakey[345053]: 167 167
Nov 29 03:24:06 np0005539563 systemd[1]: libpod-0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c.scope: Deactivated successfully.
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.708821416 +0000 UTC m=+0.170485575 container died 0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:24:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ae9165194c3c4186ac9dab128fc4291e5f53dbc53b4fd0379c3d279a34b8a157-merged.mount: Deactivated successfully.
Nov 29 03:24:06 np0005539563 podman[345036]: 2025-11-29 08:24:06.746292182 +0000 UTC m=+0.207956341 container remove 0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:24:06 np0005539563 systemd[1]: libpod-conmon-0598c9b865022cc47ed74bed58b3708047733a6b6a65cddca450f2a84bec406c.scope: Deactivated successfully.
Nov 29 03:24:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:06 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:06 np0005539563 podman[345076]: 2025-11-29 08:24:06.919190901 +0000 UTC m=+0.044556440 container create 208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:06 np0005539563 systemd[1]: Started libpod-conmon-208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516.scope.
Nov 29 03:24:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:06 np0005539563 podman[345076]: 2025-11-29 08:24:06.901282105 +0000 UTC m=+0.026647664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40a93e11998bcf64d0afdc6d526bc97ccaedbdb6790f6f33c72cbbe424d0f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40a93e11998bcf64d0afdc6d526bc97ccaedbdb6790f6f33c72cbbe424d0f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40a93e11998bcf64d0afdc6d526bc97ccaedbdb6790f6f33c72cbbe424d0f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40a93e11998bcf64d0afdc6d526bc97ccaedbdb6790f6f33c72cbbe424d0f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40a93e11998bcf64d0afdc6d526bc97ccaedbdb6790f6f33c72cbbe424d0f28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539563 nova_compute[252253]: 2025-11-29 08:24:07.664 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:07 np0005539563 podman[345076]: 2025-11-29 08:24:07.895682413 +0000 UTC m=+1.021047992 container init 208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:24:07 np0005539563 podman[345076]: 2025-11-29 08:24:07.902985441 +0000 UTC m=+1.028350980 container start 208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:08 np0005539563 nova_compute[252253]: 2025-11-29 08:24:08.061 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:08 np0005539563 podman[345076]: 2025-11-29 08:24:08.297471819 +0000 UTC m=+1.422837358 container attach 208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 203 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 KiB/s wr, 156 op/s
Nov 29 03:24:08 np0005539563 agitated_satoshi[345093]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:24:08 np0005539563 agitated_satoshi[345093]: --> relative data size: 1.0
Nov 29 03:24:08 np0005539563 agitated_satoshi[345093]: --> All data devices are unavailable
Nov 29 03:24:08 np0005539563 systemd[1]: libpod-208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516.scope: Deactivated successfully.
Nov 29 03:24:08 np0005539563 podman[345076]: 2025-11-29 08:24:08.757254458 +0000 UTC m=+1.882620017 container died 208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:24:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f40a93e11998bcf64d0afdc6d526bc97ccaedbdb6790f6f33c72cbbe424d0f28-merged.mount: Deactivated successfully.
Nov 29 03:24:08 np0005539563 podman[345076]: 2025-11-29 08:24:08.873531862 +0000 UTC m=+1.998897401 container remove 208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:24:08 np0005539563 systemd[1]: libpod-conmon-208a02af7f4961d01cdf072c06e34dfd9f0c83cce4180091094555498286a516.scope: Deactivated successfully.
Nov 29 03:24:08 np0005539563 podman[345119]: 2025-11-29 08:24:08.888832136 +0000 UTC m=+0.089657512 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:24:08 np0005539563 podman[345108]: 2025-11-29 08:24:08.89634037 +0000 UTC m=+0.097105594 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:24:08 np0005539563 podman[345120]: 2025-11-29 08:24:08.939040178 +0000 UTC m=+0.137208742 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:24:08 np0005539563 nova_compute[252253]: 2025-11-29 08:24:08.944 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:08 np0005539563 nova_compute[252253]: 2025-11-29 08:24:08.944 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.514616297 +0000 UTC m=+0.055169047 container create a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:24:09 np0005539563 nova_compute[252253]: 2025-11-29 08:24:09.544 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:09 np0005539563 nova_compute[252253]: 2025-11-29 08:24:09.545 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:09 np0005539563 systemd[1]: Started libpod-conmon-a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e.scope.
Nov 29 03:24:09 np0005539563 nova_compute[252253]: 2025-11-29 08:24:09.563 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:24:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.486259408 +0000 UTC m=+0.026812128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.582096498 +0000 UTC m=+0.122649248 container init a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.588433709 +0000 UTC m=+0.128986439 container start a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:09 np0005539563 festive_payne[345344]: 167 167
Nov 29 03:24:09 np0005539563 systemd[1]: libpod-a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e.scope: Deactivated successfully.
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.592818328 +0000 UTC m=+0.133371078 container attach a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:24:09 np0005539563 conmon[345344]: conmon a8e80967ef4bbb5b67e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e.scope/container/memory.events
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.594287728 +0000 UTC m=+0.134840448 container died a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:24:09 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #49. Immutable memtables: 0.
Nov 29 03:24:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8acb7ca6d56b4cc2e19f4d0ec17095687abe499a9b3d1aa4924ed6f472258e86-merged.mount: Deactivated successfully.
Nov 29 03:24:09 np0005539563 podman[345328]: 2025-11-29 08:24:09.640922003 +0000 UTC m=+0.181474723 container remove a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:09 np0005539563 systemd[1]: libpod-conmon-a8e80967ef4bbb5b67e1ec6aa6ad64e973f93af35c41d3d12f0904d7789d386e.scope: Deactivated successfully.
Nov 29 03:24:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:09 np0005539563 podman[345367]: 2025-11-29 08:24:09.806348879 +0000 UTC m=+0.035782501 container create c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:09 np0005539563 systemd[1]: Started libpod-conmon-c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635.scope.
Nov 29 03:24:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b8499f8471e47b897c1929e6c3ff876e42e766e15e16f0f0c1053359ad5f49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b8499f8471e47b897c1929e6c3ff876e42e766e15e16f0f0c1053359ad5f49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b8499f8471e47b897c1929e6c3ff876e42e766e15e16f0f0c1053359ad5f49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b8499f8471e47b897c1929e6c3ff876e42e766e15e16f0f0c1053359ad5f49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539563 podman[345367]: 2025-11-29 08:24:09.7912611 +0000 UTC m=+0.020694752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:09 np0005539563 podman[345367]: 2025-11-29 08:24:09.891529389 +0000 UTC m=+0.120963111 container init c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:09 np0005539563 podman[345367]: 2025-11-29 08:24:09.900577024 +0000 UTC m=+0.130010716 container start c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:24:09 np0005539563 podman[345367]: 2025-11-29 08:24:09.905489198 +0000 UTC m=+0.134922860 container attach c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.099 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.101 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.110 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.110 252257 INFO nova.compute.claims [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.221 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.6 KiB/s wr, 176 op/s
Nov 29 03:24:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088208941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.653 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.659 252257 DEBUG nova.compute.provider_tree [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.707 252257 DEBUG nova.scheduler.client.report [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:10 np0005539563 silly_merkle[345383]: {
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:    "0": [
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:        {
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "devices": [
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "/dev/loop3"
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            ],
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "lv_name": "ceph_lv0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "lv_size": "7511998464",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "name": "ceph_lv0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "tags": {
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.cluster_name": "ceph",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.crush_device_class": "",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.encrypted": "0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.osd_id": "0",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.type": "block",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:                "ceph.vdo": "0"
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            },
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "type": "block",
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:            "vg_name": "ceph_vg0"
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:        }
Nov 29 03:24:10 np0005539563 silly_merkle[345383]:    ]
Nov 29 03:24:10 np0005539563 silly_merkle[345383]: }
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.767 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.768 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:24:10 np0005539563 systemd[1]: libpod-c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635.scope: Deactivated successfully.
Nov 29 03:24:10 np0005539563 podman[345367]: 2025-11-29 08:24:10.77393494 +0000 UTC m=+1.003368572 container died c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-70b8499f8471e47b897c1929e6c3ff876e42e766e15e16f0f0c1053359ad5f49-merged.mount: Deactivated successfully.
Nov 29 03:24:10 np0005539563 podman[345367]: 2025-11-29 08:24:10.841939024 +0000 UTC m=+1.071372666 container remove c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:24:10 np0005539563 systemd[1]: libpod-conmon-c8cdf9563ea8e36b1712f6bba7edd22f13791b15431960713189f66552b37635.scope: Deactivated successfully.
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.856 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.857 252257 DEBUG nova.network.neutron [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.892 252257 INFO nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:24:10 np0005539563 nova_compute[252253]: 2025-11-29 08:24:10.953 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.152 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.154 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.154 252257 INFO nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Creating image(s)#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.179 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.201 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.225 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.230 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.302 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.303 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.304 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.304 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.330 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.337 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 59a5747d-b29d-47f7-848c-62778e994c56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.550211732 +0000 UTC m=+0.085613573 container create 979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.490059631 +0000 UTC m=+0.025461472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:11 np0005539563 systemd[1]: Started libpod-conmon-979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a.scope.
Nov 29 03:24:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.652271219 +0000 UTC m=+0.187673050 container init 979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.660010089 +0000 UTC m=+0.195411900 container start 979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 03:24:11 np0005539563 intelligent_kare[345676]: 167 167
Nov 29 03:24:11 np0005539563 systemd[1]: libpod-979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a.scope: Deactivated successfully.
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.677243786 +0000 UTC m=+0.212645597 container attach 979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.677756391 +0000 UTC m=+0.213158202 container died 979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:24:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ee09aae225c2b995f8feb91eb948c3fc4a69b84699e5108425736d6f055e2991-merged.mount: Deactivated successfully.
Nov 29 03:24:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:11 np0005539563 podman[345657]: 2025-11-29 08:24:11.742676412 +0000 UTC m=+0.278078223 container remove 979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:24:11 np0005539563 ceph-osd[84724]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:24:11 np0005539563 systemd[1]: libpod-conmon-979c7e46911a0fb4f8d9c571712e97b1dc68fb543c1f29c9670bc4c6bdbd5a8a.scope: Deactivated successfully.
Nov 29 03:24:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:11.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.780 252257 DEBUG nova.policy [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dfcf2db50da745c09bffcf32ec016854', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09cc8c3182d845f597dda064f9013941', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.783 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 59a5747d-b29d-47f7-848c-62778e994c56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:11 np0005539563 nova_compute[252253]: 2025-11-29 08:24:11.865 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] resizing rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:24:11 np0005539563 podman[345734]: 2025-11-29 08:24:11.922034466 +0000 UTC m=+0.050542552 container create c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_montalcini, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:11 np0005539563 systemd[1]: Started libpod-conmon-c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019.scope.
Nov 29 03:24:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666e0a321a0416be1665de0b5d06e7d1a3588c71d0bc72607c95fe909608b236/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666e0a321a0416be1665de0b5d06e7d1a3588c71d0bc72607c95fe909608b236/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666e0a321a0416be1665de0b5d06e7d1a3588c71d0bc72607c95fe909608b236/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/666e0a321a0416be1665de0b5d06e7d1a3588c71d0bc72607c95fe909608b236/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:11 np0005539563 podman[345734]: 2025-11-29 08:24:11.903353219 +0000 UTC m=+0.031861355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.003 252257 DEBUG nova.objects.instance [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'migration_context' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:12 np0005539563 podman[345734]: 2025-11-29 08:24:12.014180925 +0000 UTC m=+0.142689111 container init c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.021 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.021 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Ensure instance console log exists: /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.021 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.022 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.022 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:12 np0005539563 podman[345734]: 2025-11-29 08:24:12.032186203 +0000 UTC m=+0.160694329 container start c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:24:12 np0005539563 podman[345734]: 2025-11-29 08:24:12.037792845 +0000 UTC m=+0.166300991 container attach c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.9 KiB/s wr, 154 op/s
Nov 29 03:24:12 np0005539563 nova_compute[252253]: 2025-11-29 08:24:12.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:24:12
Nov 29 03:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', 'backups', 'default.rgw.log']
Nov 29 03:24:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]: {
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:        "osd_id": 0,
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:        "type": "bluestore"
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]:    }
Nov 29 03:24:12 np0005539563 exciting_montalcini[345779]: }
Nov 29 03:24:12 np0005539563 systemd[1]: libpod-c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019.scope: Deactivated successfully.
Nov 29 03:24:12 np0005539563 podman[345734]: 2025-11-29 08:24:12.932309093 +0000 UTC m=+1.060817249 container died c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:24:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-666e0a321a0416be1665de0b5d06e7d1a3588c71d0bc72607c95fe909608b236-merged.mount: Deactivated successfully.
Nov 29 03:24:12 np0005539563 podman[345734]: 2025-11-29 08:24:12.990959074 +0000 UTC m=+1.119467160 container remove c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_montalcini, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:24:13 np0005539563 systemd[1]: libpod-conmon-c04b45a9c0786325379228158eb7d9f973f5400b6840863fda97e936487b7019.scope: Deactivated successfully.
Nov 29 03:24:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:24:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:24:13 np0005539563 nova_compute[252253]: 2025-11-29 08:24:13.063 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 16498101-abd5-4854-9d20-2b28c9357876 does not exist
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e0c961e2-dbd2-45ed-a961-7380bb55a2c6 does not exist
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9a6b69da-9f00-41ad-a814-72cb775909c9 does not exist
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:13.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:24:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:24:13 np0005539563 nova_compute[252253]: 2025-11-29 08:24:13.958 252257 DEBUG nova.network.neutron [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Successfully created port: 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:24:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:24:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:24:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:24:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:24:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:24:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:24:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Nov 29 03:24:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 03:24:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Nov 29 03:24:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 3.9 MiB/s wr, 120 op/s
Nov 29 03:24:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:15.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 292 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.066 252257 DEBUG nova.network.neutron [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Successfully updated port: 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.114 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.114 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.114 252257 DEBUG nova.network.neutron [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.521 252257 DEBUG nova.compute.manager [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-changed-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.522 252257 DEBUG nova.compute.manager [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Refreshing instance network info cache due to event network-changed-1a4ca7b6-25c7-44e8-9189-4d8759d2d061. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.522 252257 DEBUG oslo_concurrency.lockutils [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.618 252257 DEBUG nova.network.neutron [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:24:17 np0005539563 nova_compute[252253]: 2025-11-29 08:24:17.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:24:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:24:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.064 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 294 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.4 MiB/s wr, 110 op/s
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.929 252257 DEBUG nova.network.neutron [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.966 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.966 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance network_info: |[{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.967 252257 DEBUG oslo_concurrency.lockutils [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.967 252257 DEBUG nova.network.neutron [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Refreshing network info cache for port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.973 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Start _get_guest_xml network_info=[{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.983 252257 WARNING nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.993 252257 DEBUG nova.virt.libvirt.host [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:18 np0005539563 nova_compute[252253]: 2025-11-29 08:24:18.994 252257 DEBUG nova.virt.libvirt.host [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.003 252257 DEBUG nova.virt.libvirt.host [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.003 252257 DEBUG nova.virt.libvirt.host [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.005 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.006 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.007 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.007 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.008 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.008 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.009 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.009 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.010 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.010 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.011 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.011 252257 DEBUG nova.virt.hardware [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.017 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420874830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.588 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.619 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:19 np0005539563 nova_compute[252253]: 2025-11-29 08:24:19.623 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:19.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:19.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335840357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.092 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.095 252257 DEBUG nova.virt.libvirt.vif [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-92344656',display_name='tempest-ServerRescueNegativeTestJSON-server-92344656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-92344656',id=155,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-0zxpzi3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:11Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=59a5747d-b29d-47f7-848c-62778e994c56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.095 252257 DEBUG nova.network.os_vif_util [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.097 252257 DEBUG nova.network.os_vif_util [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.099 252257 DEBUG nova.objects.instance [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.214 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <uuid>59a5747d-b29d-47f7-848c-62778e994c56</uuid>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <name>instance-0000009b</name>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-92344656</nova:name>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:24:18</nova:creationTime>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:user uuid="dfcf2db50da745c09bffcf32ec016854">tempest-ServerRescueNegativeTestJSON-754875869-project-member</nova:user>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:project uuid="09cc8c3182d845f597dda064f9013941">tempest-ServerRescueNegativeTestJSON-754875869</nova:project>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <nova:port uuid="1a4ca7b6-25c7-44e8-9189-4d8759d2d061">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <entry name="serial">59a5747d-b29d-47f7-848c-62778e994c56</entry>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <entry name="uuid">59a5747d-b29d-47f7-848c-62778e994c56</entry>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/59a5747d-b29d-47f7-848c-62778e994c56_disk">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/59a5747d-b29d-47f7-848c-62778e994c56_disk.config">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:85:96:ee"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <target dev="tap1a4ca7b6-25"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/console.log" append="off"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:24:20 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:24:20 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:24:20 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:24:20 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.215 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Preparing to wait for external event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.216 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.216 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.217 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.218 252257 DEBUG nova.virt.libvirt.vif [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-92344656',display_name='tempest-ServerRescueNegativeTestJSON-server-92344656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-92344656',id=155,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-0zxpzi3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:11Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=59a5747d-b29d-47f7-848c-62778e994c56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.219 252257 DEBUG nova.network.os_vif_util [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.220 252257 DEBUG nova.network.os_vif_util [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.220 252257 DEBUG os_vif [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.222 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.223 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.229 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.229 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a4ca7b6-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.230 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a4ca7b6-25, col_values=(('external_ids', {'iface-id': '1a4ca7b6-25c7-44e8-9189-4d8759d2d061', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:96:ee', 'vm-uuid': '59a5747d-b29d-47f7-848c-62778e994c56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:20 np0005539563 NetworkManager[48981]: <info>  [1764404660.2859] manager: (tap1a4ca7b6-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.293 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:20 np0005539563 nova_compute[252253]: 2025-11-29 08:24:20.294 252257 INFO os_vif [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25')#033[00m
Nov 29 03:24:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 6.4 MiB/s wr, 115 op/s
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.200 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.200 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.201 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:85:96:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.201 252257 INFO nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Using config drive#033[00m
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.233 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:21.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:21.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.957 252257 INFO nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Creating config drive at /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config#033[00m
Nov 29 03:24:21 np0005539563 nova_compute[252253]: 2025-11-29 08:24:21.966 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfhr_46xy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.064 252257 DEBUG nova.network.neutron [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updated VIF entry in instance network info cache for port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.065 252257 DEBUG nova.network.neutron [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.110 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfhr_46xy" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.177 252257 DEBUG nova.storage.rbd_utils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.181 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config 59a5747d-b29d-47f7-848c-62778e994c56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.222 252257 DEBUG oslo_concurrency.lockutils [req-a14d334c-c278-466e-ac74-bbb1a9be2a26 req-f237a79c-5a9f-411a-8803-ca2d8163270e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 6.4 MiB/s wr, 115 op/s
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.604 252257 DEBUG oslo_concurrency.processutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config 59a5747d-b29d-47f7-848c-62778e994c56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.605 252257 INFO nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Deleting local config drive /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config because it was imported into RBD.#033[00m
Nov 29 03:24:22 np0005539563 kernel: tap1a4ca7b6-25: entered promiscuous mode
Nov 29 03:24:22 np0005539563 NetworkManager[48981]: <info>  [1764404662.7416] manager: (tap1a4ca7b6-25): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.741 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:22Z|00617|binding|INFO|Claiming lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for this chassis.
Nov 29 03:24:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:22Z|00618|binding|INFO|1a4ca7b6-25c7-44e8-9189-4d8759d2d061: Claiming fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.746 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539563 systemd-udevd[346064]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:22 np0005539563 systemd-machined[213024]: New machine qemu-73-instance-0000009b.
Nov 29 03:24:22 np0005539563 NetworkManager[48981]: <info>  [1764404662.7822] device (tap1a4ca7b6-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:22 np0005539563 NetworkManager[48981]: <info>  [1764404662.7830] device (tap1a4ca7b6-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.816 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.818 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 bound to our chassis#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.820 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9#033[00m
Nov 29 03:24:22 np0005539563 systemd[1]: Started Virtual Machine qemu-73-instance-0000009b.
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:22Z|00619|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 ovn-installed in OVS
Nov 29 03:24:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:22Z|00620|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 up in Southbound
Nov 29 03:24:22 np0005539563 nova_compute[252253]: 2025-11-29 08:24:22.833 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.832 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[03b57058-673e-4ce2-9883-e69297351246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.833 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.835 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.836 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[749c9b81-5d6a-45d1-bb5c-2bdd346d106e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.837 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cf301c9b-7fc7-4aae-aff1-616fc95e9f54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.853 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3214b451-7f54-46e7-a953-8edad07c65dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.867 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b45dd794-1e50-4094-a4d1-da4f2174fdcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.899 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[74f81eff-5a2b-4f03-a098-6ac27eccbd57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.905 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[94a047f6-aa54-46ca-b7b1-4ccff25de621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 systemd-udevd[346067]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:22 np0005539563 NetworkManager[48981]: <info>  [1764404662.9070] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.948 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c64889d8-63b8-47f8-882e-61427bbb7212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.951 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f9532eb9-2e95-4337-9225-66e58ae2dbc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539563 NetworkManager[48981]: <info>  [1764404662.9756] device (tap7008b597-80): carrier: link connected
Nov 29 03:24:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:22.984 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[19c409cf-1f45-42b5-a722-d08792b5167a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.018 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2806999e-6803-4205-a3f5-e74631a77e51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763074, 'reachable_time': 19625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346099, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.040 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[038d8a49-64d4-44fd-b1ec-704e2fc73c48]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763074, 'tstamp': 763074}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346115, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.055 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3016d4e9-8f3f-4752-a8ba-bbf188e54122]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763074, 'reachable_time': 19625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346119, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.066 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.087 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[96200304-1c89-4d64-adae-309c8c75efb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.134 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[effcb243-43a5-4568-8993-ff410dd51259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.135 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.136 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.136 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:23 np0005539563 NetworkManager[48981]: <info>  [1764404663.1384] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Nov 29 03:24:23 np0005539563 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.138 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.141 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.142 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:23 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:23Z|00621|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.172 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.173 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.174 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd616e0-5cec-40e6-aae9-6626a4985894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.175 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:23.176 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.256 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404663.2558362, 59a5747d-b29d-47f7-848c-62778e994c56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.256 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.283 252257 DEBUG nova.compute.manager [req-81560567-9317-46ed-9290-c2ce8ed33e6a req-17421b03-6357-4451-884b-a0036e3e167a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.283 252257 DEBUG oslo_concurrency.lockutils [req-81560567-9317-46ed-9290-c2ce8ed33e6a req-17421b03-6357-4451-884b-a0036e3e167a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.284 252257 DEBUG oslo_concurrency.lockutils [req-81560567-9317-46ed-9290-c2ce8ed33e6a req-17421b03-6357-4451-884b-a0036e3e167a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.284 252257 DEBUG oslo_concurrency.lockutils [req-81560567-9317-46ed-9290-c2ce8ed33e6a req-17421b03-6357-4451-884b-a0036e3e167a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.284 252257 DEBUG nova.compute.manager [req-81560567-9317-46ed-9290-c2ce8ed33e6a req-17421b03-6357-4451-884b-a0036e3e167a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Processing event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.285 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.289 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.290 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.295 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.299 252257 INFO nova.virt.libvirt.driver [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance spawned successfully.#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.299 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.346 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.347 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404663.2565565, 59a5747d-b29d-47f7-848c-62778e994c56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.347 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.352 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.353 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.353 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.354 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.354 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.355 252257 DEBUG nova.virt.libvirt.driver [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.383 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.386 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404663.2882109, 59a5747d-b29d-47f7-848c-62778e994c56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.386 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.424 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.429 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.486 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.501 252257 INFO nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Took 12.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.501 252257 DEBUG nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.583 252257 INFO nova.compute.manager [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Took 13.54 seconds to build instance.#033[00m
Nov 29 03:24:23 np0005539563 nova_compute[252253]: 2025-11-29 08:24:23.615 252257 DEBUG oslo_concurrency.lockutils [None req-3b15c798-6779-4616-8dec-894ea3bf7f37 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004145437837334822 of space, bias 1.0, pg target 1.2436313512004467 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003149972955053043 of space, bias 1.0, pg target 0.9418419135608599 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 podman[346175]: 2025-11-29 08:24:23.551052261 +0000 UTC m=+0.026152570 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:24:23 np0005539563 podman[346175]: 2025-11-29 08:24:23.678388624 +0000 UTC m=+0.153488893 container create d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:24:23 np0005539563 systemd[1]: Started libpod-conmon-d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872.scope.
Nov 29 03:24:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:23.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:24:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f01a40cb8b43943ad597c8d250238d9b64cfdf955e8ef0997d2c7053373bdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:23.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:23 np0005539563 podman[346175]: 2025-11-29 08:24:23.811255487 +0000 UTC m=+0.286355786 container init d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:24:23 np0005539563 podman[346175]: 2025-11-29 08:24:23.823602072 +0000 UTC m=+0.298702331 container start d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:24:23 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [NOTICE]   (346194) : New worker (346196) forked
Nov 29 03:24:23 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [NOTICE]   (346194) : Loading success.
Nov 29 03:24:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 776 KiB/s rd, 5.2 MiB/s wr, 115 op/s
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.287 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.498 252257 DEBUG nova.compute.manager [req-18f30a50-405e-4833-b062-6cb189066de1 req-9f4fbc9b-a9f9-4e40-9d78-d2e3eaeca941 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.499 252257 DEBUG oslo_concurrency.lockutils [req-18f30a50-405e-4833-b062-6cb189066de1 req-9f4fbc9b-a9f9-4e40-9d78-d2e3eaeca941 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.499 252257 DEBUG oslo_concurrency.lockutils [req-18f30a50-405e-4833-b062-6cb189066de1 req-9f4fbc9b-a9f9-4e40-9d78-d2e3eaeca941 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.499 252257 DEBUG oslo_concurrency.lockutils [req-18f30a50-405e-4833-b062-6cb189066de1 req-9f4fbc9b-a9f9-4e40-9d78-d2e3eaeca941 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.500 252257 DEBUG nova.compute.manager [req-18f30a50-405e-4833-b062-6cb189066de1 req-9f4fbc9b-a9f9-4e40-9d78-d2e3eaeca941 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:25 np0005539563 nova_compute[252253]: 2025-11-29 08:24:25.500 252257 WARNING nova.compute.manager [req-18f30a50-405e-4833-b062-6cb189066de1 req-9f4fbc9b-a9f9-4e40-9d78-d2e3eaeca941 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:24:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 29 03:24:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:27.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:27.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:28 np0005539563 nova_compute[252253]: 2025-11-29 08:24:28.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 29 03:24:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:29.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:29.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:30 np0005539563 nova_compute[252253]: 2025-11-29 08:24:30.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.7 MiB/s wr, 79 op/s
Nov 29 03:24:31 np0005539563 ceph-mds[93638]: mds.beacon.cephfs.compute-0.msknqt missed beacon ack from the monitors
Nov 29 03:24:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:31.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:31.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 52 op/s
Nov 29 03:24:33 np0005539563 nova_compute[252253]: 2025-11-29 08:24:33.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 03:24:33 np0005539563 ceph-mon[74338]: paxos.0).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Nov 29 03:24:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 03:24:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 03:24:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:33.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:33.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 03:24:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 03:24:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.fwjrvc=up:active} 2 up:standby
Nov 29 03:24:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Nov 29 03:24:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.rotard(active, since 75m), standbys: compute-2.vyxqrz, compute-1.jjnjed
Nov 29 03:24:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:24:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 77 op/s
Nov 29 03:24:35 np0005539563 nova_compute[252253]: 2025-11-29 08:24:35.322 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539563 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 29 03:24:35 np0005539563 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 29 03:24:35 np0005539563 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 29 03:24:35 np0005539563 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 03:24:35 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:24:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:35.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:35.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 KiB/s wr, 51 op/s
Nov 29 03:24:36 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 6.
Nov 29 03:24:37 np0005539563 nova_compute[252253]: 2025-11-29 08:24:37.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:37.726 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:37 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:37.728 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:24:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:37.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:37.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:37Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:24:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:37Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:24:38 np0005539563 nova_compute[252253]: 2025-11-29 08:24:38.073 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 98 op/s
Nov 29 03:24:39 np0005539563 podman[346264]: 2025-11-29 08:24:39.53021015 +0000 UTC m=+0.071690336 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:24:39 np0005539563 podman[346265]: 2025-11-29 08:24:39.535287698 +0000 UTC m=+0.076565539 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 03:24:39 np0005539563 podman[346266]: 2025-11-29 08:24:39.606853818 +0000 UTC m=+0.142256478 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 03:24:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:39.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:40 np0005539563 nova_compute[252253]: 2025-11-29 08:24:40.364 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 188 op/s
Nov 29 03:24:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:41.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:41.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 188 op/s
Nov 29 03:24:43 np0005539563 nova_compute[252253]: 2025-11-29 08:24:43.077 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:43.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:43.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Nov 29 03:24:45 np0005539563 nova_compute[252253]: 2025-11-29 08:24:45.366 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:45.730 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:45.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:45.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.150 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.151 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.170 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.248 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.249 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.256 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.257 252257 INFO nova.compute.claims [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.417 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:47.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:47.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357165522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.846 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.853 252257 DEBUG nova.compute.provider_tree [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.871 252257 DEBUG nova.scheduler.client.report [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.902 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.902 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.983 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:24:47 np0005539563 nova_compute[252253]: 2025-11-29 08:24:47.983 252257 DEBUG nova.network.neutron [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.006 252257 INFO nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.024 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.151 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.152 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.152 252257 INFO nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Creating image(s)#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.174 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.209 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.243 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.249 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.313 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.314 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.315 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.315 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.346 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.351 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 376 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.669528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688669644, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1778, "num_deletes": 264, "total_data_size": 2844038, "memory_usage": 2897344, "flush_reason": "Manual Compaction"}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688691258, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2784462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49911, "largest_seqno": 51688, "table_properties": {"data_size": 2776232, "index_size": 4916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18303, "raw_average_key_size": 20, "raw_value_size": 2759359, "raw_average_value_size": 3146, "num_data_blocks": 213, "num_entries": 877, "num_filter_entries": 877, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404535, "oldest_key_time": 1764404535, "file_creation_time": 1764404688, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 21785 microseconds, and 7124 cpu microseconds.
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.691357) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2784462 bytes OK
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.691388) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.702184) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.702240) EVENT_LOG_v1 {"time_micros": 1764404688702230, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.702276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2836428, prev total WAL file size 2836428, number of live WAL files 2.
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.703271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373633' seq:72057594037927935, type:22 .. '6C6F676D0032303136' seq:0, type:0; will stop at (end)
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2719KB)], [107(11MB)]
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688703390, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 14376872, "oldest_snapshot_seqno": -1}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 8680 keys, 14207223 bytes, temperature: kUnknown
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688823870, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 14207223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14147059, "index_size": 37332, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 224997, "raw_average_key_size": 25, "raw_value_size": 13990334, "raw_average_value_size": 1611, "num_data_blocks": 1470, "num_entries": 8680, "num_filter_entries": 8680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404688, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.824177) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14207223 bytes
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.826165) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.2 rd, 117.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.1 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 9229, records dropped: 549 output_compression: NoCompression
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.826189) EVENT_LOG_v1 {"time_micros": 1764404688826178, "job": 64, "event": "compaction_finished", "compaction_time_micros": 120566, "compaction_time_cpu_micros": 43397, "output_level": 6, "num_output_files": 1, "total_output_size": 14207223, "num_input_records": 9229, "num_output_records": 8680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688826765, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404688828660, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.703095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.828824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.828839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.828843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.828848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:24:48.828853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.925 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:48 np0005539563 nova_compute[252253]: 2025-11-29 08:24:48.961 252257 DEBUG nova.policy [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4f4d28745dd46e586642c84c051db39', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23450c2eaf4442459dec94c6d29f0412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.003 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] resizing rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.109 252257 DEBUG nova.objects.instance [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.225 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.226 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Ensure instance console log exists: /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.227 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.227 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:49 np0005539563 nova_compute[252253]: 2025-11-29 08:24:49.227 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:49.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:49.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:50 np0005539563 nova_compute[252253]: 2025-11-29 08:24:50.371 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Nov 29 03:24:50 np0005539563 nova_compute[252253]: 2025-11-29 08:24:50.722 252257 DEBUG nova.network.neutron [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Successfully created port: 53d86447-39c2-4624-8083-b6dc36b78b15 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:24:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:51.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:51.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:51 np0005539563 nova_compute[252253]: 2025-11-29 08:24:51.950 252257 DEBUG nova.network.neutron [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Successfully updated port: 53d86447-39c2-4624-8083-b6dc36b78b15 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:24:51 np0005539563 nova_compute[252253]: 2025-11-29 08:24:51.967 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:51 np0005539563 nova_compute[252253]: 2025-11-29 08:24:51.968 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:51 np0005539563 nova_compute[252253]: 2025-11-29 08:24:51.968 252257 DEBUG nova.network.neutron [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:52 np0005539563 nova_compute[252253]: 2025-11-29 08:24:52.057 252257 DEBUG nova.compute.manager [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-changed-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:52 np0005539563 nova_compute[252253]: 2025-11-29 08:24:52.057 252257 DEBUG nova.compute.manager [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Refreshing instance network info cache due to event network-changed-53d86447-39c2-4624-8083-b6dc36b78b15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:24:52 np0005539563 nova_compute[252253]: 2025-11-29 08:24:52.057 252257 DEBUG oslo_concurrency.lockutils [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:52 np0005539563 nova_compute[252253]: 2025-11-29 08:24:52.116 252257 DEBUG nova.network.neutron [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:24:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.8 MiB/s wr, 120 op/s
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.107 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.379 252257 DEBUG nova.network.neutron [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.414 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.414 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Instance network_info: |[{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.414 252257 DEBUG oslo_concurrency.lockutils [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.415 252257 DEBUG nova.network.neutron [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Refreshing network info cache for port 53d86447-39c2-4624-8083-b6dc36b78b15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.417 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Start _get_guest_xml network_info=[{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.421 252257 WARNING nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.426 252257 DEBUG nova.virt.libvirt.host [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.427 252257 DEBUG nova.virt.libvirt.host [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.431 252257 DEBUG nova.virt.libvirt.host [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.432 252257 DEBUG nova.virt.libvirt.host [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.433 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.433 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.434 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.434 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.434 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.434 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.434 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.435 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.435 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.435 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.435 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.436 252257 DEBUG nova.virt.hardware [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.438 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:53.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:53.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97075944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.892 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.922 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:53 np0005539563 nova_compute[252253]: 2025-11-29 08:24:53.925 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524950244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.358 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.360 252257 DEBUG nova.virt.libvirt.vif [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=158,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-1bf2j4th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=5a603f26-2b4a-4025-8cc2-a31c8c89e652,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.360 252257 DEBUG nova.network.os_vif_util [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.361 252257 DEBUG nova.network.os_vif_util [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.362 252257 DEBUG nova.objects.instance [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.390 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <uuid>5a603f26-2b4a-4025-8cc2-a31c8c89e652</uuid>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <name>instance-0000009e</name>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:name>multiattach-server-0</nova:name>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:24:53</nova:creationTime>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <nova:port uuid="53d86447-39c2-4624-8083-b6dc36b78b15">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <entry name="serial">5a603f26-2b4a-4025-8cc2-a31c8c89e652</entry>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <entry name="uuid">5a603f26-2b4a-4025-8cc2-a31c8c89e652</entry>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk.config">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e5:ee:77"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <target dev="tap53d86447-39"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/console.log" append="off"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:24:54 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:24:54 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:24:54 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:24:54 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.391 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Preparing to wait for external event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.392 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.392 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.392 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.393 252257 DEBUG nova.virt.libvirt.vif [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=158,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-1bf2j4th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=5a603f26-2b4a-4025-8cc2-a31c8c89e652,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.393 252257 DEBUG nova.network.os_vif_util [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.394 252257 DEBUG nova.network.os_vif_util [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.394 252257 DEBUG os_vif [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.395 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.395 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.396 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.400 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53d86447-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.401 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap53d86447-39, col_values=(('external_ids', {'iface-id': '53d86447-39c2-4624-8083-b6dc36b78b15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:ee:77', 'vm-uuid': '5a603f26-2b4a-4025-8cc2-a31c8c89e652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.402 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539563 NetworkManager[48981]: <info>  [1764404694.4046] manager: (tap53d86447-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.406 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.411 252257 INFO os_vif [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39')
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.463 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.464 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.464 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:e5:ee:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.465 252257 INFO nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Using config drive
Nov 29 03:24:54 np0005539563 nova_compute[252253]: 2025-11-29 08:24:54.488 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 479 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Nov 29 03:24:55 np0005539563 nova_compute[252253]: 2025-11-29 08:24:55.186 252257 INFO nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Creating config drive at /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/disk.config
Nov 29 03:24:55 np0005539563 nova_compute[252253]: 2025-11-29 08:24:55.192 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_l_0t5b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:55 np0005539563 nova_compute[252253]: 2025-11-29 08:24:55.324 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy_l_0t5b" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:55 np0005539563 nova_compute[252253]: 2025-11-29 08:24:55.355 252257 DEBUG nova.storage.rbd_utils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:55 np0005539563 nova_compute[252253]: 2025-11-29 08:24:55.359 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/disk.config 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:55.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:24:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:55.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:24:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 6.1 MiB/s wr, 159 op/s
Nov 29 03:24:57 np0005539563 nova_compute[252253]: 2025-11-29 08:24:57.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:57 np0005539563 nova_compute[252253]: 2025-11-29 08:24:57.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:24:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:57.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:58 np0005539563 nova_compute[252253]: 2025-11-29 08:24:58.109 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 681 KiB/s rd, 6.1 MiB/s wr, 163 op/s
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.274 252257 DEBUG oslo_concurrency.processutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/disk.config 5a603f26-2b4a-4025-8cc2-a31c8c89e652_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.275 252257 INFO nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Deleting local config drive /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652/disk.config because it was imported into RBD.
Nov 29 03:24:59 np0005539563 kernel: tap53d86447-39: entered promiscuous mode
Nov 29 03:24:59 np0005539563 NetworkManager[48981]: <info>  [1764404699.3292] manager: (tap53d86447-39): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.329 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:59Z|00622|binding|INFO|Claiming lport 53d86447-39c2-4624-8083-b6dc36b78b15 for this chassis.
Nov 29 03:24:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:59Z|00623|binding|INFO|53d86447-39c2-4624-8083-b6dc36b78b15: Claiming fa:16:3e:e5:ee:77 10.100.0.4
Nov 29 03:24:59 np0005539563 systemd-udevd[346708]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:59 np0005539563 NetworkManager[48981]: <info>  [1764404699.3653] device (tap53d86447-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:59 np0005539563 NetworkManager[48981]: <info>  [1764404699.3659] device (tap53d86447-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:59 np0005539563 systemd-machined[213024]: New machine qemu-74-instance-0000009e.
Nov 29 03:24:59 np0005539563 systemd[1]: Started Virtual Machine qemu-74-instance-0000009e.
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.397 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:59Z|00624|binding|INFO|Setting lport 53d86447-39c2-4624-8083-b6dc36b78b15 ovn-installed in OVS
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.402 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:59 np0005539563 nova_compute[252253]: 2025-11-29 08:24:59.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:59.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:24:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:24:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:24:59Z|00625|binding|INFO|Setting lport 53d86447-39c2-4624-8083-b6dc36b78b15 up in Southbound
Nov 29 03:24:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:59.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.836 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:ee:77 10.100.0.4'], port_security=['fa:16:3e:e5:ee:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5a603f26-2b4a-4025-8cc2-a31c8c89e652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=53d86447-39c2-4624-8083-b6dc36b78b15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.838 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 53d86447-39c2-4624-8083-b6dc36b78b15 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.841 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.861 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[502a5a95-cf76-42c9-9295-558a4dad3abc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.862 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabbc8daa-d1 in ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.865 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabbc8daa-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.865 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[413cd7f6-6262-43a4-9a44-66c9e4432b5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.867 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[73678a44-fa8d-41c7-95b8-d1e53396d199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.893 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[002adcdc-d139-4b32-ab8a-5a38be36ab65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.930 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81ae30b7-7cb6-451f-8772-93ca933e4be8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.977 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[dac90205-b3b6-4675-a683-229b68d5cbaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:24:59.983 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[deedcf31-dee5-4dfd-86bc-c277eb3fa8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:24:59 np0005539563 NetworkManager[48981]: <info>  [1764404699.9849] manager: (tapabbc8daa-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.020 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9e5d7b-7616-4b7d-acc0-4a7db5f59d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.025 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a50b8cd1-9777-427b-8b04-944b7f9dca51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 NetworkManager[48981]: <info>  [1764404700.0574] device (tapabbc8daa-d0): carrier: link connected
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.061 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe9ab59-a61a-48f8-8e6f-366516d04ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.071 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404700.0708299, 5a603f26-2b4a-4025-8cc2-a31c8c89e652 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.072 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] VM Started (Lifecycle Event)
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.083 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4d81bb4a-f227-4bb3-a260-b8d474092123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 24294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346784, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.097 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6139dec0-0d9c-4f48-9634-63f8331fda3b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:892d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766783, 'tstamp': 766783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346785, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.114 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4725b8-1190-47a5-b506-122c36c67ef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 24294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346786, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.140 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6fdc94-ac9b-4554-8d72-1fa71566b8b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.193 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[49d9d29f-8f9f-4fff-8ace-7eca24c5bb12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.195 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.196 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.196 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:25:00 np0005539563 NetworkManager[48981]: <info>  [1764404700.1988] manager: (tapabbc8daa-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.198 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:00 np0005539563 kernel: tapabbc8daa-d0: entered promiscuous mode
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.200 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.203 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:00Z|00626|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.224 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.225 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[45a826d8-d3a1-43cf-8932-fa3e4a20be06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.226 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/abbc8daa-d665-4e2f-bf74-9e57db481441.pid.haproxy
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.226 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:00.227 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'env', 'PROCESS_TAG=haproxy-abbc8daa-d665-4e2f-bf74-9e57db481441', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abbc8daa-d665-4e2f-bf74-9e57db481441.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.231 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404700.0742815, 5a603f26-2b4a-4025-8cc2-a31c8c89e652 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.231 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] VM Paused (Lifecycle Event)
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.461 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.466 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.501 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:25:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 5.6 MiB/s wr, 140 op/s
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.567 252257 DEBUG nova.network.neutron [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updated VIF entry in instance network info cache for port 53d86447-39c2-4624-8083-b6dc36b78b15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.568 252257 DEBUG nova.network.neutron [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.596 252257 DEBUG oslo_concurrency.lockutils [req-2f5ac911-2974-483c-8c30-cb7b202f0536 req-07a47ccb-eec7-47f0-97fb-1c978b53532f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:25:00 np0005539563 podman[346818]: 2025-11-29 08:25:00.646606959 +0000 UTC m=+0.061022846 container create b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:25:00 np0005539563 nova_compute[252253]: 2025-11-29 08:25:00.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:00 np0005539563 systemd[1]: Started libpod-conmon-b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61.scope.
Nov 29 03:25:00 np0005539563 podman[346818]: 2025-11-29 08:25:00.623175893 +0000 UTC m=+0.037591800 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82f6089bcbb3562f9805a2602f6b7d55ce55ced36b47cd2190e416d9a522e54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:00 np0005539563 podman[346818]: 2025-11-29 08:25:00.7498846 +0000 UTC m=+0.164300507 container init b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:25:00 np0005539563 podman[346818]: 2025-11-29 08:25:00.760404975 +0000 UTC m=+0.174820862 container start b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:00 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [NOTICE]   (346837) : New worker (346839) forked
Nov 29 03:25:00 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [NOTICE]   (346837) : Loading success.
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.073 252257 DEBUG nova.compute.manager [req-e35d815d-fa4e-4e40-b965-4b2cf9dc840e req-6467e4a4-e7b8-463c-9d64-1ae5a8c51987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.073 252257 DEBUG oslo_concurrency.lockutils [req-e35d815d-fa4e-4e40-b965-4b2cf9dc840e req-6467e4a4-e7b8-463c-9d64-1ae5a8c51987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.073 252257 DEBUG oslo_concurrency.lockutils [req-e35d815d-fa4e-4e40-b965-4b2cf9dc840e req-6467e4a4-e7b8-463c-9d64-1ae5a8c51987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.074 252257 DEBUG oslo_concurrency.lockutils [req-e35d815d-fa4e-4e40-b965-4b2cf9dc840e req-6467e4a4-e7b8-463c-9d64-1ae5a8c51987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.074 252257 DEBUG nova.compute.manager [req-e35d815d-fa4e-4e40-b965-4b2cf9dc840e req-6467e4a4-e7b8-463c-9d64-1ae5a8c51987 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Processing event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.074 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.079 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404701.079325, 5a603f26-2b4a-4025-8cc2-a31c8c89e652 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.080 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] VM Resumed (Lifecycle Event)
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.082 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.087 252257 INFO nova.virt.libvirt.driver [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Instance spawned successfully.
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.088 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.153 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.159 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.160 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.160 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.160 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.161 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.161 252257 DEBUG nova.virt.libvirt.driver [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.166 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.316 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.362 252257 INFO nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Took 13.21 seconds to spawn the instance on the hypervisor.
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.362 252257 DEBUG nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.595 252257 INFO nova.compute.manager [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Took 14.38 seconds to build instance.
Nov 29 03:25:01 np0005539563 nova_compute[252253]: 2025-11-29 08:25:01.628 252257 DEBUG oslo_concurrency.lockutils [None req-38097858-4d9f-4afb-8f03-cdcd259c96e4 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 3.3 MiB/s wr, 120 op/s
Nov 29 03:25:02 np0005539563 nova_compute[252253]: 2025-11-29 08:25:02.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:25:02 np0005539563 nova_compute[252253]: 2025-11-29 08:25:02.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:25:02 np0005539563 nova_compute[252253]: 2025-11-29 08:25:02.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.015 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.015 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.015 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.016 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.278 252257 DEBUG nova.compute.manager [req-42f9e84a-4022-43e2-b3d4-364f57cd6008 req-b34bc392-5d33-4928-9798-7bd212ef2623 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.280 252257 DEBUG oslo_concurrency.lockutils [req-42f9e84a-4022-43e2-b3d4-364f57cd6008 req-b34bc392-5d33-4928-9798-7bd212ef2623 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.280 252257 DEBUG oslo_concurrency.lockutils [req-42f9e84a-4022-43e2-b3d4-364f57cd6008 req-b34bc392-5d33-4928-9798-7bd212ef2623 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.281 252257 DEBUG oslo_concurrency.lockutils [req-42f9e84a-4022-43e2-b3d4-364f57cd6008 req-b34bc392-5d33-4928-9798-7bd212ef2623 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.281 252257 DEBUG nova.compute.manager [req-42f9e84a-4022-43e2-b3d4-364f57cd6008 req-b34bc392-5d33-4928-9798-7bd212ef2623 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] No waiting events found dispatching network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:03 np0005539563 nova_compute[252253]: 2025-11-29 08:25:03.282 252257 WARNING nova.compute.manager [req-42f9e84a-4022-43e2-b3d4-364f57cd6008 req-b34bc392-5d33-4928-9798-7bd212ef2623 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received unexpected event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 for instance with vm_state active and task_state None.
Nov 29 03:25:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:03.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:03.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:04 np0005539563 nova_compute[252253]: 2025-11-29 08:25:04.403 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Nov 29 03:25:04 np0005539563 NetworkManager[48981]: <info>  [1764404704.7119] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Nov 29 03:25:04 np0005539563 nova_compute[252253]: 2025-11-29 08:25:04.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:04 np0005539563 NetworkManager[48981]: <info>  [1764404704.7131] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Nov 29 03:25:04 np0005539563 nova_compute[252253]: 2025-11-29 08:25:04.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:04Z|00627|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:25:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:04Z|00628|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:25:04 np0005539563 nova_compute[252253]: 2025-11-29 08:25:04.834 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:04.930 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:04.931 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:04.931 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.044 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.127 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.148 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.149 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.149 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.197 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.197 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.198 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.198 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.199 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.285 252257 DEBUG nova.compute.manager [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-changed-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.286 252257 DEBUG nova.compute.manager [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Refreshing instance network info cache due to event network-changed-53d86447-39c2-4624-8083-b6dc36b78b15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.286 252257 DEBUG oslo_concurrency.lockutils [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.287 252257 DEBUG oslo_concurrency.lockutils [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.287 252257 DEBUG nova.network.neutron [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Refreshing network info cache for port 53d86447-39c2-4624-8083-b6dc36b78b15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346172956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.630 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.710 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.711 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.715 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.715 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.904 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.905 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3946MB free_disk=20.823829650878906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.905 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.906 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.990 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 59a5747d-b29d-47f7-848c-62778e994c56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.990 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.990 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:25:05 np0005539563 nova_compute[252253]: 2025-11-29 08:25:05.990 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:25:06 np0005539563 nova_compute[252253]: 2025-11-29 08:25:06.046 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048243618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 108 op/s
Nov 29 03:25:06 np0005539563 nova_compute[252253]: 2025-11-29 08:25:06.546 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:06 np0005539563 nova_compute[252253]: 2025-11-29 08:25:06.553 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:06 np0005539563 nova_compute[252253]: 2025-11-29 08:25:06.569 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:06 np0005539563 nova_compute[252253]: 2025-11-29 08:25:06.593 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:25:06 np0005539563 nova_compute[252253]: 2025-11-29 08:25:06.593 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:07 np0005539563 nova_compute[252253]: 2025-11-29 08:25:07.840 252257 DEBUG nova.network.neutron [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updated VIF entry in instance network info cache for port 53d86447-39c2-4624-8083-b6dc36b78b15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:07 np0005539563 nova_compute[252253]: 2025-11-29 08:25:07.840 252257 DEBUG nova.network.neutron [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:07 np0005539563 nova_compute[252253]: 2025-11-29 08:25:07.861 252257 DEBUG oslo_concurrency.lockutils [req-7bba7ca7-1c5f-4ece-8e33-3c026cfc1508 req-ca695155-0e0a-470f-84e3-8152ddd3c4bf 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.491 252257 INFO nova.compute.manager [None req-438baba9-d11b-4ab5-8cea-f7d0f32c00f2 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Pausing#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.492 252257 DEBUG nova.objects.instance [None req-438baba9-d11b-4ab5-8cea-f7d0f32c00f2 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.547 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404708.5473495, 59a5747d-b29d-47f7-848c-62778e994c56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.548 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.549 252257 DEBUG nova.compute.manager [None req-438baba9-d11b-4ab5-8cea-f7d0f32c00f2 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.598 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.603 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:08 np0005539563 nova_compute[252253]: 2025-11-29 08:25:08.635 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:25:09 np0005539563 nova_compute[252253]: 2025-11-29 08:25:09.121 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:09 np0005539563 nova_compute[252253]: 2025-11-29 08:25:09.121 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:09 np0005539563 nova_compute[252253]: 2025-11-29 08:25:09.405 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:09.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:10 np0005539563 podman[346900]: 2025-11-29 08:25:10.520795592 +0000 UTC m=+0.065919079 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:25:10 np0005539563 podman[346899]: 2025-11-29 08:25:10.536412775 +0000 UTC m=+0.084890723 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:25:10 np0005539563 podman[346901]: 2025-11-29 08:25:10.538675467 +0000 UTC m=+0.080637928 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:25:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.803 252257 INFO nova.compute.manager [None req-74d3590c-9e2e-4f9b-a40a-20d05e1f638a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Unpausing
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.804 252257 DEBUG nova.objects.instance [None req-74d3590c-9e2e-4f9b-a40a-20d05e1f638a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.840 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404710.8400805, 59a5747d-b29d-47f7-848c-62778e994c56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.840 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Resumed (Lifecycle Event)
Nov 29 03:25:10 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.845 252257 DEBUG nova.virt.libvirt.guest [None req-74d3590c-9e2e-4f9b-a40a-20d05e1f638a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.845 252257 DEBUG nova.compute.manager [None req-74d3590c-9e2e-4f9b-a40a-20d05e1f638a dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.917 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.923 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.990 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:10 np0005539563 nova_compute[252253]: 2025-11-29 08:25:10.991 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:11 np0005539563 nova_compute[252253]: 2025-11-29 08:25:11.010 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:25:11 np0005539563 nova_compute[252253]: 2025-11-29 08:25:11.238 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:11 np0005539563 nova_compute[252253]: 2025-11-29 08:25:11.238 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:11 np0005539563 nova_compute[252253]: 2025-11-29 08:25:11.244 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:25:11 np0005539563 nova_compute[252253]: 2025-11-29 08:25:11.244 252257 INFO nova.compute.claims [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:25:11 np0005539563 nova_compute[252253]: 2025-11-29 08:25:11.577 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:11.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2934959806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.014 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.022 252257 DEBUG nova.compute.provider_tree [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.045 252257 DEBUG nova.scheduler.client.report [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.070 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.071 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.141 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.142 252257 DEBUG nova.network.neutron [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.164 252257 INFO nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.184 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.270 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.273 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.274 252257 INFO nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Creating image(s)
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.314 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.361 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.402 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.407 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.465 252257 DEBUG nova.policy [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4f4d28745dd46e586642c84c051db39', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23450c2eaf4442459dec94c6d29f0412', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.516 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.517 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.518 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.518 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.553 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.557 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.789 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:25:12
Nov 29 03:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data']
Nov 29 03:25:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:25:12 np0005539563 nova_compute[252253]: 2025-11-29 08:25:12.913 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.024 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] resizing rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.167 252257 DEBUG nova.objects.instance [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'migration_context' on Instance uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.172 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.344 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.344 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Ensure instance console log exists: /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.344 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.345 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:13 np0005539563 nova_compute[252253]: 2025-11-29 08:25:13.345 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:13.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:25:13 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:25:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:13.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:25:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:25:14 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:25:14 np0005539563 nova_compute[252253]: 2025-11-29 08:25:14.407 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 190 op/s
Nov 29 03:25:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:25:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:25:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:25:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:25:15 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:25:15 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:25:15 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:25:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:15.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:15.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b893e4dd-9bdb-49d7-a768-d34b89863257 does not exist
Nov 29 03:25:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6cc66208-690c-4ae1-9cac-e187d927ae9b does not exist
Nov 29 03:25:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cd276de9-0f41-477c-8bf1-539c3e3ad168 does not exist
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:25:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:25:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:25:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 584 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.78331017 +0000 UTC m=+0.058942950 container create e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:25:16 np0005539563 systemd[1]: Started libpod-conmon-e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7.scope.
Nov 29 03:25:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.762357962 +0000 UTC m=+0.037990782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.873377922 +0000 UTC m=+0.149010742 container init e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:25:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.886788076 +0000 UTC m=+0.162420856 container start e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.890809735 +0000 UTC m=+0.166442625 container attach e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:25:16 np0005539563 gracious_jemison[347439]: 167 167
Nov 29 03:25:16 np0005539563 systemd[1]: libpod-e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.894119955 +0000 UTC m=+0.169752735 container died e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:25:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5b65c421b7d74243486a44bb092257e3db338030f540832e04ca35b01d49aa84-merged.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539563 podman[347423]: 2025-11-29 08:25:16.938548879 +0000 UTC m=+0.214181659 container remove e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:25:16 np0005539563 systemd[1]: libpod-conmon-e6f54295c6d2c09c3107802e7c8f7a579b5e6ac06983c49a2126ab92683f21a7.scope: Deactivated successfully.
Nov 29 03:25:17 np0005539563 podman[347464]: 2025-11-29 08:25:17.179577297 +0000 UTC m=+0.050618465 container create 3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:25:17 np0005539563 systemd[1]: Started libpod-conmon-3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b.scope.
Nov 29 03:25:17 np0005539563 podman[347464]: 2025-11-29 08:25:17.156704276 +0000 UTC m=+0.027745474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9621435c66ca0be257ebcc0ee90f0bad904e25e3c855811334deb614cb36c260/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9621435c66ca0be257ebcc0ee90f0bad904e25e3c855811334deb614cb36c260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9621435c66ca0be257ebcc0ee90f0bad904e25e3c855811334deb614cb36c260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9621435c66ca0be257ebcc0ee90f0bad904e25e3c855811334deb614cb36c260/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9621435c66ca0be257ebcc0ee90f0bad904e25e3c855811334deb614cb36c260/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:17 np0005539563 podman[347464]: 2025-11-29 08:25:17.283998708 +0000 UTC m=+0.155039876 container init 3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:25:17 np0005539563 podman[347464]: 2025-11-29 08:25:17.300997649 +0000 UTC m=+0.172038847 container start 3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:25:17 np0005539563 podman[347464]: 2025-11-29 08:25:17.307789083 +0000 UTC m=+0.178830241 container attach 3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:25:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:17.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:18 np0005539563 determined_ellis[347480]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:25:18 np0005539563 determined_ellis[347480]: --> relative data size: 1.0
Nov 29 03:25:18 np0005539563 determined_ellis[347480]: --> All data devices are unavailable
Nov 29 03:25:18 np0005539563 nova_compute[252253]: 2025-11-29 08:25:18.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:18 np0005539563 systemd[1]: libpod-3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b.scope: Deactivated successfully.
Nov 29 03:25:18 np0005539563 podman[347464]: 2025-11-29 08:25:18.191720795 +0000 UTC m=+1.062761963 container died 3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:25:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9621435c66ca0be257ebcc0ee90f0bad904e25e3c855811334deb614cb36c260-merged.mount: Deactivated successfully.
Nov 29 03:25:18 np0005539563 podman[347464]: 2025-11-29 08:25:18.245638847 +0000 UTC m=+1.116680005 container remove 3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ellis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:25:18 np0005539563 systemd[1]: libpod-conmon-3e503877977403c030d2896175bbcb8b739b9b42e3e01c0500816af0bf915d6b.scope: Deactivated successfully.
Nov 29 03:25:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 595 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Nov 29 03:25:18 np0005539563 podman[347695]: 2025-11-29 08:25:18.975400629 +0000 UTC m=+0.042139754 container create ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sammet, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:25:19 np0005539563 systemd[1]: Started libpod-conmon-ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed.scope.
Nov 29 03:25:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:19 np0005539563 podman[347695]: 2025-11-29 08:25:18.957430941 +0000 UTC m=+0.024170076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:19 np0005539563 podman[347695]: 2025-11-29 08:25:19.059978492 +0000 UTC m=+0.126717637 container init ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sammet, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:25:19 np0005539563 podman[347695]: 2025-11-29 08:25:19.068877323 +0000 UTC m=+0.135616448 container start ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:25:19 np0005539563 podman[347695]: 2025-11-29 08:25:19.073832148 +0000 UTC m=+0.140571263 container attach ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:25:19 np0005539563 confident_sammet[347713]: 167 167
Nov 29 03:25:19 np0005539563 systemd[1]: libpod-ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed.scope: Deactivated successfully.
Nov 29 03:25:19 np0005539563 podman[347695]: 2025-11-29 08:25:19.075571774 +0000 UTC m=+0.142310899 container died ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sammet, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:25:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6b042d66be51bebb09175a05dfd27f04cc9c9c98fb1b3aac9d4d6f0b611deaf2-merged.mount: Deactivated successfully.
Nov 29 03:25:19 np0005539563 podman[347695]: 2025-11-29 08:25:19.188398545 +0000 UTC m=+0.255137660 container remove ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:25:19 np0005539563 systemd[1]: libpod-conmon-ccc9626dce234a5dceea2b1b15371b8978861be0e58fa2fb601e35b06aa8abed.scope: Deactivated successfully.
Nov 29 03:25:19 np0005539563 podman[347737]: 2025-11-29 08:25:19.407680441 +0000 UTC m=+0.047484388 container create 26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:25:19 np0005539563 nova_compute[252253]: 2025-11-29 08:25:19.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:19 np0005539563 systemd[1]: Started libpod-conmon-26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216.scope.
Nov 29 03:25:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c869dc07470dc0b98813412e49b32dd14a58ecccd37c8d13dc9a68337a430938/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:19 np0005539563 podman[347737]: 2025-11-29 08:25:19.38990713 +0000 UTC m=+0.029711087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c869dc07470dc0b98813412e49b32dd14a58ecccd37c8d13dc9a68337a430938/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c869dc07470dc0b98813412e49b32dd14a58ecccd37c8d13dc9a68337a430938/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c869dc07470dc0b98813412e49b32dd14a58ecccd37c8d13dc9a68337a430938/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:19 np0005539563 podman[347737]: 2025-11-29 08:25:19.50721603 +0000 UTC m=+0.147020007 container init 26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heyrovsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:25:19 np0005539563 podman[347737]: 2025-11-29 08:25:19.519970387 +0000 UTC m=+0.159774334 container start 26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:19 np0005539563 podman[347737]: 2025-11-29 08:25:19.523575574 +0000 UTC m=+0.163379511 container attach 26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:25:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:19.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:19.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]: {
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:    "0": [
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:        {
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "devices": [
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "/dev/loop3"
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            ],
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "lv_name": "ceph_lv0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "lv_size": "7511998464",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "name": "ceph_lv0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "tags": {
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.cluster_name": "ceph",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.crush_device_class": "",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.encrypted": "0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.osd_id": "0",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.type": "block",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:                "ceph.vdo": "0"
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            },
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "type": "block",
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:            "vg_name": "ceph_vg0"
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:        }
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]:    ]
Nov 29 03:25:20 np0005539563 nervous_heyrovsky[347753]: }
Nov 29 03:25:20 np0005539563 systemd[1]: libpod-26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216.scope: Deactivated successfully.
Nov 29 03:25:20 np0005539563 podman[347737]: 2025-11-29 08:25:20.343729336 +0000 UTC m=+0.983533313 container died 26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:25:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c869dc07470dc0b98813412e49b32dd14a58ecccd37c8d13dc9a68337a430938-merged.mount: Deactivated successfully.
Nov 29 03:25:20 np0005539563 podman[347737]: 2025-11-29 08:25:20.41318791 +0000 UTC m=+1.052991857 container remove 26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:20 np0005539563 systemd[1]: libpod-conmon-26c7d922ca4cec430e3a4bf561febb785344dcf2f148bb4e779f424bf3a58216.scope: Deactivated successfully.
Nov 29 03:25:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 108 op/s
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.178819944 +0000 UTC m=+0.079276861 container create b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cohen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:25:21 np0005539563 systemd[1]: Started libpod-conmon-b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc.scope.
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.145313965 +0000 UTC m=+0.045770962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.280415139 +0000 UTC m=+0.180872066 container init b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cohen, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.287969994 +0000 UTC m=+0.188426911 container start b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.29152008 +0000 UTC m=+0.191977007 container attach b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cohen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:21 np0005539563 quizzical_cohen[347934]: 167 167
Nov 29 03:25:21 np0005539563 systemd[1]: libpod-b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc.scope: Deactivated successfully.
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.297198784 +0000 UTC m=+0.197655701 container died b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:25:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ba7b8109e490b5fe362044107e13d9fa813dc9cd0fbd7c4afd3ab297172b01ad-merged.mount: Deactivated successfully.
Nov 29 03:25:21 np0005539563 podman[347918]: 2025-11-29 08:25:21.346877631 +0000 UTC m=+0.247334548 container remove b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:25:21 np0005539563 systemd[1]: libpod-conmon-b67a7fe9f696b016837a7d936c0a78edcf2f2727c7cdd2e993b394fbd35cfabc.scope: Deactivated successfully.
Nov 29 03:25:21 np0005539563 podman[347957]: 2025-11-29 08:25:21.623671858 +0000 UTC m=+0.088404179 container create d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:25:21 np0005539563 podman[347957]: 2025-11-29 08:25:21.585028 +0000 UTC m=+0.049760411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:21 np0005539563 systemd[1]: Started libpod-conmon-d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c.scope.
Nov 29 03:25:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2fdb39c6f409bee6db9eafc86de84fb707c2dd717d071a4f7e49598e2ea48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2fdb39c6f409bee6db9eafc86de84fb707c2dd717d071a4f7e49598e2ea48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2fdb39c6f409bee6db9eafc86de84fb707c2dd717d071a4f7e49598e2ea48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2fdb39c6f409bee6db9eafc86de84fb707c2dd717d071a4f7e49598e2ea48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:21 np0005539563 podman[347957]: 2025-11-29 08:25:21.743322513 +0000 UTC m=+0.208054834 container init d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:25:21 np0005539563 podman[347957]: 2025-11-29 08:25:21.75204752 +0000 UTC m=+0.216779831 container start d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:21 np0005539563 podman[347957]: 2025-11-29 08:25:21.756328925 +0000 UTC m=+0.221061256 container attach d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:25:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:21.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]: {
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:        "osd_id": 0,
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:        "type": "bluestore"
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]:    }
Nov 29 03:25:22 np0005539563 hopeful_payne[347973]: }
Nov 29 03:25:22 np0005539563 systemd[1]: libpod-d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c.scope: Deactivated successfully.
Nov 29 03:25:22 np0005539563 nova_compute[252253]: 2025-11-29 08:25:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:22 np0005539563 podman[347995]: 2025-11-29 08:25:22.698181598 +0000 UTC m=+0.026209312 container died d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f5e2fdb39c6f409bee6db9eafc86de84fb707c2dd717d071a4f7e49598e2ea48-merged.mount: Deactivated successfully.
Nov 29 03:25:22 np0005539563 podman[347995]: 2025-11-29 08:25:22.758561616 +0000 UTC m=+0.086589320 container remove d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:25:22 np0005539563 systemd[1]: libpod-conmon-d792a8a94d4b21f2a1af2bf4ec4879cf9bfccbed5b6ce084318f8ec0a2fb249c.scope: Deactivated successfully.
Nov 29 03:25:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:25:23 np0005539563 nova_compute[252253]: 2025-11-29 08:25:23.177 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010563160129350456 of space, bias 1.0, pg target 3.1689480388051368 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432083013446864 of space, bias 1.0, pg target 1.283286549937186 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Nov 29 03:25:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:23.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:24 np0005539563 nova_compute[252253]: 2025-11-29 08:25:24.413 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.7 MiB/s wr, 68 op/s
Nov 29 03:25:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b336e8af-be2b-4f24-8039-dfdbd2e177c9 does not exist
Nov 29 03:25:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 54dcc0ed-7fae-4ada-9f0a-7e20d9eaa919 does not exist
Nov 29 03:25:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev deb3778b-d12d-4aa7-9877-045a2ca9ed56 does not exist
Nov 29 03:25:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:25:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:26Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:ee:77 10.100.0.4
Nov 29 03:25:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:26Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:ee:77 10.100.0.4
Nov 29 03:25:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.3 MiB/s wr, 58 op/s
Nov 29 03:25:26 np0005539563 nova_compute[252253]: 2025-11-29 08:25:26.923 252257 DEBUG nova.network.neutron [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Successfully created port: 74a0b6a5-7ae5-44ef-a159-4a87de6da113 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:25:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:27.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:28 np0005539563 nova_compute[252253]: 2025-11-29 08:25:28.180 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 606 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 115 op/s
Nov 29 03:25:28 np0005539563 nova_compute[252253]: 2025-11-29 08:25:28.690 252257 INFO nova.compute.manager [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Rescuing#033[00m
Nov 29 03:25:28 np0005539563 nova_compute[252253]: 2025-11-29 08:25:28.691 252257 DEBUG oslo_concurrency.lockutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:28 np0005539563 nova_compute[252253]: 2025-11-29 08:25:28.691 252257 DEBUG oslo_concurrency.lockutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:28 np0005539563 nova_compute[252253]: 2025-11-29 08:25:28.691 252257 DEBUG nova.network.neutron [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:29 np0005539563 nova_compute[252253]: 2025-11-29 08:25:29.416 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 610 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1017 KiB/s rd, 459 KiB/s wr, 102 op/s
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.145 252257 DEBUG nova.network.neutron [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Successfully updated port: 74a0b6a5-7ae5-44ef-a159-4a87de6da113 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.159 252257 DEBUG nova.network.neutron [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.164 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.164 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.164 252257 DEBUG nova.network.neutron [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.192 252257 DEBUG oslo_concurrency.lockutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.254 252257 DEBUG nova.compute.manager [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-changed-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.254 252257 DEBUG nova.compute.manager [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Refreshing instance network info cache due to event network-changed-74a0b6a5-7ae5-44ef-a159-4a87de6da113. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.255 252257 DEBUG oslo_concurrency.lockutils [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.493 252257 DEBUG nova.network.neutron [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:25:31 np0005539563 nova_compute[252253]: 2025-11-29 08:25:31.675 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:25:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:31.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 610 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 292 KiB/s wr, 96 op/s
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.941 252257 DEBUG nova.network.neutron [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating instance_info_cache with network_info: [{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.981 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.981 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Instance network_info: |[{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.982 252257 DEBUG oslo_concurrency.lockutils [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.983 252257 DEBUG nova.network.neutron [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Refreshing network info cache for port 74a0b6a5-7ae5-44ef-a159-4a87de6da113 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.988 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Start _get_guest_xml network_info=[{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:32 np0005539563 nova_compute[252253]: 2025-11-29 08:25:32.993 252257 WARNING nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.000 252257 DEBUG nova.virt.libvirt.host [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.001 252257 DEBUG nova.virt.libvirt.host [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.006 252257 DEBUG nova.virt.libvirt.host [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.007 252257 DEBUG nova.virt.libvirt.host [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.009 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.010 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.011 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.011 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.012 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.012 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.013 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.013 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.014 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.014 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.015 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.015 252257 DEBUG nova.virt.hardware [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.020 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.182 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835030847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.426 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.453 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.458 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:33.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754352276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.952 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.955 252257 DEBUG nova.virt.libvirt.vif [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=159,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-0pvktber',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',net
work_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=df3ef43d-e67b-4d7f-8603-5cf61569ae1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.956 252257 DEBUG nova.network.os_vif_util [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.958 252257 DEBUG nova.network.os_vif_util [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:33 np0005539563 nova_compute[252253]: 2025-11-29 08:25:33.960 252257 DEBUG nova.objects.instance [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.279 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <uuid>df3ef43d-e67b-4d7f-8603-5cf61569ae1f</uuid>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <name>instance-0000009f</name>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:name>multiattach-server-1</nova:name>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:25:32</nova:creationTime>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <nova:port uuid="74a0b6a5-7ae5-44ef-a159-4a87de6da113">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <entry name="serial">df3ef43d-e67b-4d7f-8603-5cf61569ae1f</entry>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <entry name="uuid">df3ef43d-e67b-4d7f-8603-5cf61569ae1f</entry>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk.config">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:e6:bf:db"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <target dev="tap74a0b6a5-7a"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/console.log" append="off"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:25:34 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:25:34 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:25:34 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:25:34 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.281 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Preparing to wait for external event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.281 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.281 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.282 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.283 252257 DEBUG nova.virt.libvirt.vif [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=159,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-0pvktber',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=df3ef43d-e67b-4d7f-8603-5cf61569ae1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.283 252257 DEBUG nova.network.os_vif_util [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.284 252257 DEBUG nova.network.os_vif_util [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.285 252257 DEBUG os_vif [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.285 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.286 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.287 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.291 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.291 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a0b6a5-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.292 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74a0b6a5-7a, col_values=(('external_ids', {'iface-id': '74a0b6a5-7ae5-44ef-a159-4a87de6da113', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:bf:db', 'vm-uuid': 'df3ef43d-e67b-4d7f-8603-5cf61569ae1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.339 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 NetworkManager[48981]: <info>  [1764404734.3406] manager: (tap74a0b6a5-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 kernel: tap1a4ca7b6-25 (unregistering): left promiscuous mode
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.349 252257 INFO os_vif [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a')#033[00m
Nov 29 03:25:34 np0005539563 NetworkManager[48981]: <info>  [1764404734.3548] device (tap1a4ca7b6-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.365 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00629|binding|INFO|Releasing lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 from this chassis (sb_readonly=0)
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00630|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 down in Southbound
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00631|binding|INFO|Removing iface tap1a4ca7b6-25 ovn-installed in OVS
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.367 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.380 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.407 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.408 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.410 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.412 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf60352-a612-436d-a7fe-27ba65c6b9a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.412 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore#033[00m
Nov 29 03:25:34 np0005539563 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Nov 29 03:25:34 np0005539563 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009b.scope: Consumed 16.571s CPU time.
Nov 29 03:25:34 np0005539563 systemd-machined[213024]: Machine qemu-73-instance-0000009b terminated.
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.455 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.456 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.456 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:e6:bf:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.456 252257 INFO nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Using config drive#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.489 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [NOTICE]   (346194) : haproxy version is 2.8.14-c23fe91
Nov 29 03:25:34 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [NOTICE]   (346194) : path to executable is /usr/sbin/haproxy
Nov 29 03:25:34 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [WARNING]  (346194) : Exiting Master process...
Nov 29 03:25:34 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [ALERT]    (346194) : Current worker (346196) exited with code 143 (Terminated)
Nov 29 03:25:34 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[346190]: [WARNING]  (346194) : All workers exited. Exiting... (0)
Nov 29 03:25:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 995 KiB/s rd, 309 KiB/s wr, 98 op/s
Nov 29 03:25:34 np0005539563 systemd[1]: libpod-d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872.scope: Deactivated successfully.
Nov 29 03:25:34 np0005539563 podman[348173]: 2025-11-29 08:25:34.558401762 +0000 UTC m=+0.042846663 container died d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:34 np0005539563 kernel: tap1a4ca7b6-25: entered promiscuous mode
Nov 29 03:25:34 np0005539563 NetworkManager[48981]: <info>  [1764404734.5784] manager: (tap1a4ca7b6-25): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Nov 29 03:25:34 np0005539563 kernel: tap1a4ca7b6-25 (unregistering): left promiscuous mode
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00632|binding|INFO|Claiming lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for this chassis.
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00633|binding|INFO|1a4ca7b6-25c7-44e8-9189-4d8759d2d061: Claiming fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.593 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-64f01a40cb8b43943ad597c8d250238d9b64cfdf955e8ef0997d2c7053373bdf-merged.mount: Deactivated successfully.
Nov 29 03:25:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872-userdata-shm.mount: Deactivated successfully.
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.598 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 podman[348173]: 2025-11-29 08:25:34.601234513 +0000 UTC m=+0.085679414 container cleanup d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:34 np0005539563 systemd[1]: libpod-conmon-d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872.scope: Deactivated successfully.
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00634|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 ovn-installed in OVS
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00635|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 up in Southbound
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00636|binding|INFO|Releasing lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 from this chassis (sb_readonly=1)
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00637|if_status|INFO|Dropped 2 log messages in last 578 seconds (most recently, 578 seconds ago) due to excessive rate
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00638|if_status|INFO|Not setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 down as sb is readonly
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00639|binding|INFO|Removing iface tap1a4ca7b6-25 ovn-installed in OVS
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.619 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00640|binding|INFO|Releasing lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 from this chassis (sb_readonly=0)
Nov 29 03:25:34 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:34Z|00641|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 down in Southbound
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.631 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.634 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 podman[348206]: 2025-11-29 08:25:34.67150999 +0000 UTC m=+0.046173104 container remove d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.677 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b268a8-92db-4e5b-b40f-cdcc93e94d0d]: (4, ('Sat Nov 29 08:25:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872)\nd99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872\nSat Nov 29 08:25:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (d99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872)\nd99c9e07d58da8d170eeb333766666abb80620fa30dbf9b39d12da8a800f9872\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.679 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f56be2-f0cb-4a80-9d16-27d40e8f4bda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.680 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:34 np0005539563 kernel: tap7008b597-80: left promiscuous mode
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.695 252257 INFO nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.701 252257 INFO nova.virt.libvirt.driver [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance destroyed successfully.#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.701 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'numa_topology' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.705 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f64e1e0c-0c62-42f5-8ea4-80dd214cff51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.719 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e06a77-85c7-4616-aaae-9d868d13bc9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.720 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8f1490-2ac5-4b90-8535-6868e5d68443]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.722 252257 INFO nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Attempting rescue#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.723 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.726 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.726 252257 INFO nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Creating image(s)#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.736 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[462d719e-68c2-4bce-90f8-34f8039706bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763066, 'reachable_time': 43709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348229, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.743 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.744 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bb7635-190f-4f6b-8b38-3af98d2e8ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.745 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.747 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.747 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a0033f4e-f732-45d6-994c-20afd70dfe65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.748 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.749 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:34.750 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc14770-328e-4946-a8ad-55fb4c17ea86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.752 252257 DEBUG nova.storage.rbd_utils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.756 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.803 252257 DEBUG nova.storage.rbd_utils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.830 252257 DEBUG nova.storage.rbd_utils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.834 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.902 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.903 252257 DEBUG oslo_concurrency.lockutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.903 252257 DEBUG oslo_concurrency.lockutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.904 252257 DEBUG oslo_concurrency.lockutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.930 252257 DEBUG nova.storage.rbd_utils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:34 np0005539563 nova_compute[252253]: 2025-11-29 08:25:34.933 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539563 nova_compute[252253]: 2025-11-29 08:25:35.604 252257 INFO nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Creating config drive at /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/disk.config#033[00m
Nov 29 03:25:35 np0005539563 nova_compute[252253]: 2025-11-29 08:25:35.613 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7e6wq9r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539563 nova_compute[252253]: 2025-11-29 08:25:35.755 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7e6wq9r" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:35 np0005539563 nova_compute[252253]: 2025-11-29 08:25:35.785 252257 DEBUG nova.storage.rbd_utils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] rbd image df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:35 np0005539563 nova_compute[252253]: 2025-11-29 08:25:35.790 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/disk.config df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:35.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:35.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.335 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.336 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'migration_context' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.349 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.350 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Start _get_guest_xml network_info=[{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:85:96:ee"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.350 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'resources' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.376 252257 DEBUG oslo_concurrency.processutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/disk.config df3ef43d-e67b-4d7f-8603-5cf61569ae1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.376 252257 INFO nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Deleting local config drive /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.377 252257 WARNING nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.383 252257 DEBUG nova.virt.libvirt.host [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.384 252257 DEBUG nova.virt.libvirt.host [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.391 252257 DEBUG nova.virt.libvirt.host [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.391 252257 DEBUG nova.virt.libvirt.host [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.393 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.393 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.394 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.394 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.394 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.394 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.395 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.395 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.395 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.396 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.396 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.396 252257 DEBUG nova.virt.hardware [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.396 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.415 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:36 np0005539563 systemd-udevd[348133]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:36 np0005539563 NetworkManager[48981]: <info>  [1764404736.4344] manager: (tap74a0b6a5-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 29 03:25:36 np0005539563 kernel: tap74a0b6a5-7a: entered promiscuous mode
Nov 29 03:25:36 np0005539563 NetworkManager[48981]: <info>  [1764404736.4460] device (tap74a0b6a5-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:36 np0005539563 NetworkManager[48981]: <info>  [1764404736.4475] device (tap74a0b6a5-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:36Z|00642|binding|INFO|Claiming lport 74a0b6a5-7ae5-44ef-a159-4a87de6da113 for this chassis.
Nov 29 03:25:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:36Z|00643|binding|INFO|74a0b6a5-7ae5-44ef-a159-4a87de6da113: Claiming fa:16:3e:e6:bf:db 10.100.0.9
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.478 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.484 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:bf:db 10.100.0.9'], port_security=['fa:16:3e:e6:bf:db 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'df3ef43d-e67b-4d7f-8603-5cf61569ae1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=74a0b6a5-7ae5-44ef-a159-4a87de6da113) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.486 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 74a0b6a5-7ae5-44ef-a159-4a87de6da113 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.488 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.506 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:36Z|00644|binding|INFO|Setting lport 74a0b6a5-7ae5-44ef-a159-4a87de6da113 ovn-installed in OVS
Nov 29 03:25:36 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:36Z|00645|binding|INFO|Setting lport 74a0b6a5-7ae5-44ef-a159-4a87de6da113 up in Southbound
Nov 29 03:25:36 np0005539563 systemd-machined[213024]: New machine qemu-75-instance-0000009f.
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.513 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[188695f7-1333-4abb-9d3e-5139fef6cd56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:36 np0005539563 systemd[1]: Started Virtual Machine qemu-75-instance-0000009f.
Nov 29 03:25:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 995 KiB/s rd, 215 KiB/s wr, 96 op/s
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.552 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[21f0991d-3d4a-4e9b-b76a-93cec1d6b1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.555 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a243f6e7-5a35-4846-8056-6252083bca91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.582 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf6abbb-98b6-48e2-b96d-a7171cba6c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.602 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6dadcf50-6c47-4de0-838b-169f71e904ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 24294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348413, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.617 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[141a2310-dc5a-4a74-b51d-a5d00a7f3a2a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766793, 'tstamp': 766793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348414, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766796, 'tstamp': 766796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348414, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.619 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.620 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.624 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.625 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.625 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:36.625 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.853 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:36 np0005539563 nova_compute[252253]: 2025-11-29 08:25:36.854 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.147 252257 DEBUG nova.network.neutron [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updated VIF entry in instance network info cache for port 74a0b6a5-7ae5-44ef-a159-4a87de6da113. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.148 252257 DEBUG nova.network.neutron [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating instance_info_cache with network_info: [{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.183 252257 DEBUG oslo_concurrency.lockutils [req-39aa7cd4-63c1-436e-b4e6-9e4d646569ac req-75836187-4ebe-4650-88f8-388e38c10b94 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.243 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404737.2435205, df3ef43d-e67b-4d7f-8603-5cf61569ae1f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.244 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] VM Started (Lifecycle Event)
Nov 29 03:25:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116594488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.330 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.331 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.369 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.373 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404737.2444081, df3ef43d-e67b-4d7f-8603-5cf61569ae1f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.374 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] VM Paused (Lifecycle Event)
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.418 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.422 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.510 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.570 252257 DEBUG nova.compute.manager [req-ff8739f6-77d6-4f00-be74-d0805dbf29be req-369b4369-ba34-42b2-820d-dd5a439e18c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.570 252257 DEBUG oslo_concurrency.lockutils [req-ff8739f6-77d6-4f00-be74-d0805dbf29be req-369b4369-ba34-42b2-820d-dd5a439e18c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.570 252257 DEBUG oslo_concurrency.lockutils [req-ff8739f6-77d6-4f00-be74-d0805dbf29be req-369b4369-ba34-42b2-820d-dd5a439e18c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.571 252257 DEBUG oslo_concurrency.lockutils [req-ff8739f6-77d6-4f00-be74-d0805dbf29be req-369b4369-ba34-42b2-820d-dd5a439e18c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.571 252257 DEBUG nova.compute.manager [req-ff8739f6-77d6-4f00-be74-d0805dbf29be req-369b4369-ba34-42b2-820d-dd5a439e18c8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Processing event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.572 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.576 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.580 252257 INFO nova.virt.libvirt.driver [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Instance spawned successfully.#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.581 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.584 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404737.5838544, df3ef43d-e67b-4d7f-8603-5cf61569ae1f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.584 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158111379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.802 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.804 252257 DEBUG nova.virt.libvirt.vif [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-92344656',display_name='tempest-ServerRescueNegativeTestJSON-server-92344656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-92344656',id=155,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-0zxpzi3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:10Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=59a5747d-b29d-47f7-848c-62778e994c56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:85:96:ee"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.804 252257 DEBUG nova.network.os_vif_util [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "vif_mac": "fa:16:3e:85:96:ee"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.805 252257 DEBUG nova.network.os_vif_util [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.807 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.825 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.831 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.832 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.832 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.832 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.832 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.833 252257 DEBUG nova.virt.libvirt.driver [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.836 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:37.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.935 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <uuid>59a5747d-b29d-47f7-848c-62778e994c56</uuid>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <name>instance-0000009b</name>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-92344656</nova:name>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:25:36</nova:creationTime>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:user uuid="dfcf2db50da745c09bffcf32ec016854">tempest-ServerRescueNegativeTestJSON-754875869-project-member</nova:user>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:project uuid="09cc8c3182d845f597dda064f9013941">tempest-ServerRescueNegativeTestJSON-754875869</nova:project>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <nova:port uuid="1a4ca7b6-25c7-44e8-9189-4d8759d2d061">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <entry name="serial">59a5747d-b29d-47f7-848c-62778e994c56</entry>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <entry name="uuid">59a5747d-b29d-47f7-848c-62778e994c56</entry>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/59a5747d-b29d-47f7-848c-62778e994c56_disk.rescue">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/59a5747d-b29d-47f7-848c-62778e994c56_disk">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/59a5747d-b29d-47f7-848c-62778e994c56_disk.config.rescue">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:85:96:ee"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <target dev="tap1a4ca7b6-25"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/console.log" append="off"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:25:37 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:25:37 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:25:37 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:25:37 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.939 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.979 252257 INFO nova.virt.libvirt.driver [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance destroyed successfully.#033[00m
Nov 29 03:25:37 np0005539563 nova_compute[252253]: 2025-11-29 08:25:37.982 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.119 252257 INFO nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Took 25.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.120 252257 DEBUG nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.469 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.470 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.470 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.470 252257 DEBUG nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] No VIF found with MAC fa:16:3e:85:96:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.471 252257 INFO nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Using config drive#033[00m
Nov 29 03:25:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.499 252257 DEBUG nova.storage.rbd_utils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 633 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 977 KiB/s rd, 1.3 MiB/s wr, 94 op/s
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.569 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:38.571 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:38.573 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.654 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.701 252257 INFO nova.compute.manager [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Took 27.49 seconds to build instance.#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.797 252257 DEBUG nova.objects.instance [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'keypairs' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:38 np0005539563 nova_compute[252253]: 2025-11-29 08:25:38.800 252257 DEBUG oslo_concurrency.lockutils [None req-6b5fe731-ca3f-4862-b9fe-8d380109b3ae b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.104 252257 INFO nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Creating config drive at /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config.rescue#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.109 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw0_1yykh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.241 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw0_1yykh" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.272 252257 DEBUG nova.storage.rbd_utils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] rbd image 59a5747d-b29d-47f7-848c-62778e994c56_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.276 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config.rescue 59a5747d-b29d-47f7-848c-62778e994c56_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.340 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.471 252257 DEBUG oslo_concurrency.processutils [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config.rescue 59a5747d-b29d-47f7-848c-62778e994c56_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.471 252257 INFO nova.virt.libvirt.driver [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Deleting local config drive /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56/disk.config.rescue because it was imported into RBD.#033[00m
Nov 29 03:25:39 np0005539563 kernel: tap1a4ca7b6-25: entered promiscuous mode
Nov 29 03:25:39 np0005539563 NetworkManager[48981]: <info>  [1764404739.5177] manager: (tap1a4ca7b6-25): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.519 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:39Z|00646|binding|INFO|Claiming lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for this chassis.
Nov 29 03:25:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:39Z|00647|binding|INFO|1a4ca7b6-25c7-44e8-9189-4d8759d2d061: Claiming fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.524 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.531 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.533 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 bound to our chassis#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.535 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9#033[00m
Nov 29 03:25:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:39Z|00648|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 ovn-installed in OVS
Nov 29 03:25:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:39Z|00649|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 up in Southbound
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.546 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.550 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[348c8a19-1bfd-4ab0-952a-e13db6aea893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.552 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.553 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.554 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d2546e8f-d4a1-47bc-9644-9471d068635d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.555 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[39f89b15-7c60-4b6b-88ff-b3c3b5e2e2bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 systemd-udevd[348632]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.570 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[01f0953f-edea-473c-b89f-9db1eff05d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 NetworkManager[48981]: <info>  [1764404739.5819] device (tap1a4ca7b6-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:39 np0005539563 systemd-machined[213024]: New machine qemu-76-instance-0000009b.
Nov 29 03:25:39 np0005539563 NetworkManager[48981]: <info>  [1764404739.5835] device (tap1a4ca7b6-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:39 np0005539563 systemd[1]: Started Virtual Machine qemu-76-instance-0000009b.
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.596 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c8368374-bad5-4b0d-a082-c6149e3e76f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.622 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[402d5816-e658-4912-a77f-217f36d052f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.628 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e4dcc12a-f10e-49bf-81a2-097df20b0dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 systemd-udevd[348635]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:39 np0005539563 NetworkManager[48981]: <info>  [1764404739.6293] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/283)
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.662 252257 DEBUG nova.compute.manager [req-2bf98e6b-6166-4d1b-81df-55558e7e77cf req-b973a3a7-7475-422e-8de4-664a3746483b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.663 252257 DEBUG oslo_concurrency.lockutils [req-2bf98e6b-6166-4d1b-81df-55558e7e77cf req-b973a3a7-7475-422e-8de4-664a3746483b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.663 252257 DEBUG oslo_concurrency.lockutils [req-2bf98e6b-6166-4d1b-81df-55558e7e77cf req-b973a3a7-7475-422e-8de4-664a3746483b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.663 252257 DEBUG oslo_concurrency.lockutils [req-2bf98e6b-6166-4d1b-81df-55558e7e77cf req-b973a3a7-7475-422e-8de4-664a3746483b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.663 252257 DEBUG nova.compute.manager [req-2bf98e6b-6166-4d1b-81df-55558e7e77cf req-b973a3a7-7475-422e-8de4-664a3746483b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] No waiting events found dispatching network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.664 252257 WARNING nova.compute.manager [req-2bf98e6b-6166-4d1b-81df-55558e7e77cf req-b973a3a7-7475-422e-8de4-664a3746483b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received unexpected event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.665 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6a08febd-1f0c-4b63-987a-37165a93da80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.668 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4e9096-e81f-445a-9766-f71c780c8b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 NetworkManager[48981]: <info>  [1764404739.7022] device (tap7008b597-80): carrier: link connected
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.709 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f02c57e8-740e-4935-bb99-cd3de09370cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.722 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9f842d6b-c83d-4805-b02b-390d0982b70e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770747, 'reachable_time': 16715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348663, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.741 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e53ddecf-9921-4b1e-9107-7893a31556b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770747, 'tstamp': 770747}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348664, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.755 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d345c7da-2e0d-42fc-a330-21ceccac3717]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770747, 'reachable_time': 16715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348665, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.788 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[336e853d-4ec3-41ba-9f30-5fbcb877f3fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.834 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8163ce66-344b-434f-ad51-85288d247010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.835 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.836 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.836 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 NetworkManager[48981]: <info>  [1764404739.8385] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Nov 29 03:25:39 np0005539563 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.840 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.841 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:39Z|00650|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:25:39 np0005539563 nova_compute[252253]: 2025-11-29 08:25:39.858 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.859 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.859 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[865ec101-b168-4985-9eff-dafcb7081aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.860 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:25:39 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:39.860 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:25:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:39.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:39.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.113 252257 DEBUG nova.compute.manager [req-ea9ef50f-8b58-46a5-8c96-7206bb96b2c8 req-1cc96c28-9361-4c5b-889b-2f252de67af1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.113 252257 DEBUG oslo_concurrency.lockutils [req-ea9ef50f-8b58-46a5-8c96-7206bb96b2c8 req-1cc96c28-9361-4c5b-889b-2f252de67af1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.114 252257 DEBUG oslo_concurrency.lockutils [req-ea9ef50f-8b58-46a5-8c96-7206bb96b2c8 req-1cc96c28-9361-4c5b-889b-2f252de67af1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.114 252257 DEBUG oslo_concurrency.lockutils [req-ea9ef50f-8b58-46a5-8c96-7206bb96b2c8 req-1cc96c28-9361-4c5b-889b-2f252de67af1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.114 252257 DEBUG nova.compute.manager [req-ea9ef50f-8b58-46a5-8c96-7206bb96b2c8 req-1cc96c28-9361-4c5b-889b-2f252de67af1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.114 252257 WARNING nova.compute.manager [req-ea9ef50f-8b58-46a5-8c96-7206bb96b2c8 req-1cc96c28-9361-4c5b-889b-2f252de67af1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.128 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 59a5747d-b29d-47f7-848c-62778e994c56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.129 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404740.1285338, 59a5747d-b29d-47f7-848c-62778e994c56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.129 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.133 252257 DEBUG nova.compute.manager [None req-4140585a-d98d-47d1-aed2-21791be52b6b dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:40 np0005539563 podman[348757]: 2025-11-29 08:25:40.25489252 +0000 UTC m=+0.055956349 container create 758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:25:40 np0005539563 systemd[1]: Started libpod-conmon-758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082.scope.
Nov 29 03:25:40 np0005539563 podman[348757]: 2025-11-29 08:25:40.221888814 +0000 UTC m=+0.022952663 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d9d0beae53b9c84677e777c91cf7835d2ed243d88d584403d926eee5eff747/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:40 np0005539563 podman[348757]: 2025-11-29 08:25:40.349563836 +0000 UTC m=+0.150627685 container init 758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:25:40 np0005539563 podman[348757]: 2025-11-29 08:25:40.357223704 +0000 UTC m=+0.158287533 container start 758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:25:40 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [NOTICE]   (348776) : New worker (348778) forked
Nov 29 03:25:40 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [NOTICE]   (348776) : Loading success.
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.465 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.471 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.495 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404740.1302905, 59a5747d-b29d-47f7-848c-62778e994c56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.496 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.515 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:40 np0005539563 nova_compute[252253]: 2025-11-29 08:25:40.521 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 662 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 664 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Nov 29 03:25:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:40.575 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Nov 29 03:25:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 03:25:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Nov 29 03:25:41 np0005539563 podman[348788]: 2025-11-29 08:25:41.515691892 +0000 UTC m=+0.065938350 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:41 np0005539563 podman[348789]: 2025-11-29 08:25:41.521007796 +0000 UTC m=+0.066816193 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:25:41 np0005539563 podman[348790]: 2025-11-29 08:25:41.559498939 +0000 UTC m=+0.097452103 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 03:25:41 np0005539563 nova_compute[252253]: 2025-11-29 08:25:41.784 252257 DEBUG nova.compute.manager [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-changed-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:41 np0005539563 nova_compute[252253]: 2025-11-29 08:25:41.785 252257 DEBUG nova.compute.manager [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Refreshing instance network info cache due to event network-changed-53d86447-39c2-4624-8083-b6dc36b78b15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:41 np0005539563 nova_compute[252253]: 2025-11-29 08:25:41.785 252257 DEBUG oslo_concurrency.lockutils [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:41 np0005539563 nova_compute[252253]: 2025-11-29 08:25:41.785 252257 DEBUG oslo_concurrency.lockutils [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:41 np0005539563 nova_compute[252253]: 2025-11-29 08:25:41.785 252257 DEBUG nova.network.neutron [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Refreshing network info cache for port 53d86447-39c2-4624-8083-b6dc36b78b15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:41.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:41.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.303 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.305 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.305 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.305 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.306 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.306 252257 WARNING nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.306 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.307 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.307 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.307 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.308 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.308 252257 WARNING nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.308 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.308 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.309 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.309 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.309 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:42 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.310 252257 WARNING nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:42 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.310 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.310 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.310 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.310 252257 DEBUG oslo_concurrency.lockutils [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.311 252257 DEBUG nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:42 np0005539563 nova_compute[252253]: 2025-11-29 08:25:42.312 252257 WARNING nova.compute.manager [req-7e30a049-2edb-45b6-8e82-ac13520e2166 req-e32e2968-0bf3-4d54-9f9a-f590826eba9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 662 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 740 KiB/s rd, 2.6 MiB/s wr, 64 op/s
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.632250) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742632289, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 728, "num_deletes": 251, "total_data_size": 979328, "memory_usage": 994040, "flush_reason": "Manual Compaction"}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742639367, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 968521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51689, "largest_seqno": 52416, "table_properties": {"data_size": 964736, "index_size": 1565, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8869, "raw_average_key_size": 19, "raw_value_size": 957031, "raw_average_value_size": 2131, "num_data_blocks": 68, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404688, "oldest_key_time": 1764404688, "file_creation_time": 1764404742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 7147 microseconds, and 2868 cpu microseconds.
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.639398) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 968521 bytes OK
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.639414) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.641068) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.641080) EVENT_LOG_v1 {"time_micros": 1764404742641076, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.641093) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 975613, prev total WAL file size 975613, number of live WAL files 2.
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.641514) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(945KB)], [110(13MB)]
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742641552, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 15175744, "oldest_snapshot_seqno": -1}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 8611 keys, 13314310 bytes, temperature: kUnknown
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742793213, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 13314310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13255436, "index_size": 36234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 224320, "raw_average_key_size": 26, "raw_value_size": 13100755, "raw_average_value_size": 1521, "num_data_blocks": 1417, "num_entries": 8611, "num_filter_entries": 8611, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.793692) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 13314310 bytes
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.797109) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.0 rd, 87.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.5 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(29.4) write-amplify(13.7) OK, records in: 9129, records dropped: 518 output_compression: NoCompression
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.797148) EVENT_LOG_v1 {"time_micros": 1764404742797131, "job": 66, "event": "compaction_finished", "compaction_time_micros": 151788, "compaction_time_cpu_micros": 33523, "output_level": 6, "num_output_files": 1, "total_output_size": 13314310, "num_input_records": 9129, "num_output_records": 8611, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742797783, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404742802518, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.641422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.802586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.802590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.802591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.802593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:25:42.802594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.186 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.614 252257 DEBUG nova.network.neutron [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updated VIF entry in instance network info cache for port 53d86447-39c2-4624-8083-b6dc36b78b15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.615 252257 DEBUG nova.network.neutron [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.657 252257 DEBUG oslo_concurrency.lockutils [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.657 252257 DEBUG nova.compute.manager [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-changed-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.658 252257 DEBUG nova.compute.manager [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Refreshing instance network info cache due to event network-changed-74a0b6a5-7ae5-44ef-a159-4a87de6da113. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.658 252257 DEBUG oslo_concurrency.lockutils [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.658 252257 DEBUG oslo_concurrency.lockutils [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.658 252257 DEBUG nova.network.neutron [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Refreshing network info cache for port 74a0b6a5-7ae5-44ef-a159-4a87de6da113 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.770 252257 INFO nova.compute.manager [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Unrescuing#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.771 252257 DEBUG oslo_concurrency.lockutils [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.771 252257 DEBUG oslo_concurrency.lockutils [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquired lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:43 np0005539563 nova_compute[252253]: 2025-11-29 08:25:43.771 252257 DEBUG nova.network.neutron [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:25:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:25:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:43.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.342 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.458 252257 DEBUG nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.458 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.459 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.459 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.459 252257 DEBUG nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.460 252257 WARNING nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.460 252257 DEBUG nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.460 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.461 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.461 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.461 252257 DEBUG nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.462 252257 WARNING nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.462 252257 DEBUG nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.462 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.463 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.463 252257 DEBUG oslo_concurrency.lockutils [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.463 252257 DEBUG nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.463 252257 WARNING nova.compute.manager [req-e200e82a-8d3f-4b26-9bf7-7f700e8ac43a req-52c77fcc-df09-4588-8911-5fcc02c92ef4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 679 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.2 MiB/s wr, 186 op/s
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.991 252257 DEBUG nova.network.neutron [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updated VIF entry in instance network info cache for port 74a0b6a5-7ae5-44ef-a159-4a87de6da113. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:44 np0005539563 nova_compute[252253]: 2025-11-29 08:25:44.991 252257 DEBUG nova.network.neutron [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating instance_info_cache with network_info: [{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.008 252257 DEBUG oslo_concurrency.lockutils [req-0d304efa-4e19-4a5a-8df3-22a95f191329 req-4030e2a8-0270-4718-a086-c9567dd77c4a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.054 252257 DEBUG oslo_concurrency.lockutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.054 252257 DEBUG oslo_concurrency.lockutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.081 252257 DEBUG nova.objects.instance [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.119 252257 DEBUG oslo_concurrency.lockutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.157 252257 DEBUG nova.network.neutron [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.171 252257 DEBUG oslo_concurrency.lockutils [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Releasing lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.172 252257 DEBUG nova.objects.instance [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'flavor' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:45 np0005539563 kernel: tap1a4ca7b6-25 (unregistering): left promiscuous mode
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.2406] device (tap1a4ca7b6-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00651|binding|INFO|Releasing lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 from this chassis (sb_readonly=0)
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00652|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 down in Southbound
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.250 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00653|binding|INFO|Removing iface tap1a4ca7b6-25 ovn-installed in OVS
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.253 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.257 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.258 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.260 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.261 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[67dc611e-3589-40de-88dc-98fe3ea08df3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.265 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.278 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Nov 29 03:25:45 np0005539563 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009b.scope: Consumed 5.686s CPU time.
Nov 29 03:25:45 np0005539563 systemd-machined[213024]: Machine qemu-76-instance-0000009b terminated.
Nov 29 03:25:45 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [NOTICE]   (348776) : haproxy version is 2.8.14-c23fe91
Nov 29 03:25:45 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [NOTICE]   (348776) : path to executable is /usr/sbin/haproxy
Nov 29 03:25:45 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [WARNING]  (348776) : Exiting Master process...
Nov 29 03:25:45 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [WARNING]  (348776) : Exiting Master process...
Nov 29 03:25:45 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [ALERT]    (348776) : Current worker (348778) exited with code 143 (Terminated)
Nov 29 03:25:45 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[348772]: [WARNING]  (348776) : All workers exited. Exiting... (0)
Nov 29 03:25:45 np0005539563 systemd[1]: libpod-758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082.scope: Deactivated successfully.
Nov 29 03:25:45 np0005539563 podman[348876]: 2025-11-29 08:25:45.395302104 +0000 UTC m=+0.040102389 container died 758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.393 252257 DEBUG oslo_concurrency.lockutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.396 252257 DEBUG oslo_concurrency.lockutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.396 252257 INFO nova.compute.manager [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Attaching volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec to /dev/vdb#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.418 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.423 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082-userdata-shm.mount: Deactivated successfully.
Nov 29 03:25:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-92d9d0beae53b9c84677e777c91cf7835d2ed243d88d584403d926eee5eff747-merged.mount: Deactivated successfully.
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.444 252257 INFO nova.virt.libvirt.driver [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance destroyed successfully.#033[00m
Nov 29 03:25:45 np0005539563 podman[348876]: 2025-11-29 08:25:45.446101192 +0000 UTC m=+0.090901477 container cleanup 758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.446 252257 DEBUG nova.objects.instance [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'numa_topology' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:45 np0005539563 systemd[1]: libpod-conmon-758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082.scope: Deactivated successfully.
Nov 29 03:25:45 np0005539563 podman[348912]: 2025-11-29 08:25:45.520747516 +0000 UTC m=+0.055310931 container remove 758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.527 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[075fcc26-aba2-4b00-ae84-a1189d0c7cfc]: (4, ('Sat Nov 29 08:25:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082)\n758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082\nSat Nov 29 08:25:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082)\n758330f8339c26925bf4a2934a2efec1b3e183aef3e02ab46ba50e6b078c9082\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 kernel: tap1a4ca7b6-25: entered promiscuous mode
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.5323] manager: (tap1a4ca7b6-25): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.530 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e314f061-3965-4bbb-9aec-37d6418aa3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.533 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539563 systemd-udevd[348857]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00654|binding|INFO|Claiming lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for this chassis.
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00655|binding|INFO|1a4ca7b6-25c7-44e8-9189-4d8759d2d061: Claiming fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.542 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.5476] device (tap1a4ca7b6-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.5486] device (tap1a4ca7b6-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:45 np0005539563 kernel: tap7008b597-80: left promiscuous mode
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00656|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 ovn-installed in OVS
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00657|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 up in Southbound
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.567 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.570 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.570 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5b378017-b53d-410c-bf1a-675a18042b40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 systemd-machined[213024]: New machine qemu-77-instance-0000009b.
Nov 29 03:25:45 np0005539563 systemd[1]: Started Virtual Machine qemu-77-instance-0000009b.
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.588 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6644b2-3152-4345-b9a0-2f9f67e2a793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.589 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1e6d4a-fb47-47f7-92e6-12b6d7a2f8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.598 252257 DEBUG os_brick.utils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.600 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:45 np0005539563 auditd[703]: Audit daemon rotating log files
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.607 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd144a3-c491-46f7-bfd1-5570091e4ed7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770739, 'reachable_time': 24499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348939, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.618 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.618 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e5964f-6874-403e-b0a1-4396e29d8bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.622 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.624 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7008b597-8de2-4973-801f-fcc733e4f6c9#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.637 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.638 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[b075fafd-3b5b-4481-8d8b-a5038c132797]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.639 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c764f0e3-5816-495d-9eb8-7e4933881049]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.640 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7008b597-81 in ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.639 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.642 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7008b597-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.642 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[08aadd59-eb4c-426e-bdd4-92e211576ef9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.644 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[32741025-1258-4d01-83d5-9c2f46042a22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.648 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.648 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[ab55f32f-d5f6-4103-9282-9805d188a409]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.654 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.663 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.663 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a175d1-b448-4c07-9d3b-9f1d51985bf9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.655 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[2f45f435-3d06-46cc-8cb6-eee8807399aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.664 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7a91d5-f79d-4e76-8755-95ac2b861887]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.666 252257 DEBUG oslo_concurrency.processutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.693 252257 DEBUG oslo_concurrency.processutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.695 252257 DEBUG os_brick.initiator.connectors.lightos [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.696 252257 DEBUG os_brick.initiator.connectors.lightos [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.696 252257 DEBUG os_brick.initiator.connectors.lightos [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.696 252257 DEBUG os_brick.utils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.697 252257 DEBUG nova.virt.block_device [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating existing volume attachment record: 1ac4d988-ef82-47a9-8e98-5d87ed245216 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.697 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fd996eb7-6c1a-490b-9f43-88408ea516ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.725 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[07bf50a0-209c-49df-88ea-d4da719ebafd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.731 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d0e182-86be-4ead-9994-9c19aea7c974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.7334] manager: (tap7008b597-80): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.762 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0edb97-9a8b-49b1-b1a6-11533d23ed53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.764 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b6796f8c-4c58-41ca-9c99-19021e37b8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.7860] device (tap7008b597-80): carrier: link connected
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.797 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd06722-c958-48b8-8486-28aea7c4b27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.811 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcc5224-df4e-45d6-ab30-8b3f1f170dbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771355, 'reachable_time': 40440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348976, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.824 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd858f1-0c91-4ae8-89d2-bc7f9adbe4f3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2c65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 771355, 'tstamp': 771355}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348977, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.844 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81892437-927f-489f-bbf2-bc2139f2e5d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7008b597-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2c:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771355, 'reachable_time': 40440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348978, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.875 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebef8ca-6565-4b7d-a163-a0253c23cd0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:45.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.928 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd9cfe9-9662-47d6-afa2-893e3b30285c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.929 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.930 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.930 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7008b597-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539563 NetworkManager[48981]: <info>  [1764404745.9326] manager: (tap7008b597-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 29 03:25:45 np0005539563 kernel: tap7008b597-80: entered promiscuous mode
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.931 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.935 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7008b597-80, col_values=(('external_ids', {'iface-id': '42a41b42-1527-4cfa-9dcf-4b7f34b092b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:45Z|00658|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:25:45 np0005539563 nova_compute[252253]: 2025-11-29 08:25:45.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.953 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.954 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[70cc1bf4-3fe6-421f-99ad-cef7a8721e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.954 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7008b597-8de2-4973-801f-fcc733e4f6c9.pid.haproxy
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7008b597-8de2-4973-801f-fcc733e4f6c9
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:25:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:25:45.955 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'env', 'PROCESS_TAG=haproxy-7008b597-8de2-4973-801f-fcc733e4f6c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7008b597-8de2-4973-801f-fcc733e4f6c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.175 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 59a5747d-b29d-47f7-848c-62778e994c56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.176 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404746.1743248, 59a5747d-b29d-47f7-848c-62778e994c56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.176 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.208 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.229 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.249 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.249 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404746.176398, 59a5747d-b29d-47f7-848c-62778e994c56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.250 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.273 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.279 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.304 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:25:46 np0005539563 podman[349068]: 2025-11-29 08:25:46.334889165 +0000 UTC m=+0.052666829 container create 34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:25:46 np0005539563 systemd[1]: Started libpod-conmon-34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c.scope.
Nov 29 03:25:46 np0005539563 podman[349068]: 2025-11-29 08:25:46.308213721 +0000 UTC m=+0.025991415 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:25:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f333987072d7680e6059494cf08dabe134384b6a5df62e84257a24789c13b0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:46 np0005539563 podman[349068]: 2025-11-29 08:25:46.432831921 +0000 UTC m=+0.150609605 container init 34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:25:46 np0005539563 podman[349068]: 2025-11-29 08:25:46.439395459 +0000 UTC m=+0.157173123 container start 34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [NOTICE]   (349088) : New worker (349090) forked
Nov 29 03:25:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [NOTICE]   (349088) : Loading success.
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.483 252257 DEBUG nova.objects.instance [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.510 252257 DEBUG nova.virt.libvirt.driver [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Attempting to attach volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.513 252257 DEBUG nova.virt.libvirt.guest [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-32b198ad-3a42-4de3-9995-b7e93d51e7ec">
Nov 29 03:25:46 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:25:46 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  <serial>32b198ad-3a42-4de3-9995-b7e93d51e7ec</serial>
Nov 29 03:25:46 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:25:46 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:25:46 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:25:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 679 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.567 252257 DEBUG nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.568 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.569 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.569 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.569 252257 DEBUG nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.569 252257 WARNING nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.570 252257 DEBUG nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.570 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.570 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.570 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.570 252257 DEBUG nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.571 252257 WARNING nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.571 252257 DEBUG nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.571 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.571 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.571 252257 DEBUG oslo_concurrency.lockutils [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.572 252257 DEBUG nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.572 252257 WARNING nova.compute.manager [req-f4375201-b507-4287-9373-34c0ecc1c8ee req-0511d5b3-77b0-42f8-92cf-3e4a4295be40 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.599 252257 DEBUG nova.compute.manager [None req-e4dd42c8-dbd6-49b6-a743-4d5545ba79ac dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.676 252257 DEBUG nova.virt.libvirt.driver [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.676 252257 DEBUG nova.virt.libvirt.driver [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.676 252257 DEBUG nova.virt.libvirt.driver [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.677 252257 DEBUG nova.virt.libvirt.driver [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:e5:ee:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:46 np0005539563 nova_compute[252253]: 2025-11-29 08:25:46.928 252257 DEBUG oslo_concurrency.lockutils [None req-7c4bbde0-ef8e-4d76-833a-98cf76519847 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:47.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.189 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.208 252257 DEBUG oslo_concurrency.lockutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.209 252257 DEBUG oslo_concurrency.lockutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.237 252257 DEBUG nova.objects.instance [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.289 252257 DEBUG oslo_concurrency.lockutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 672 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.772 252257 DEBUG oslo_concurrency.lockutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.772 252257 DEBUG oslo_concurrency.lockutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.773 252257 INFO nova.compute.manager [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Attaching volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec to /dev/vdb#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.986 252257 DEBUG nova.compute.manager [req-d5b4339b-4393-4ac6-b2ef-b8d0b340556f req-18ecd15a-11f8-42b4-a031-e5ddce56e1fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.987 252257 DEBUG oslo_concurrency.lockutils [req-d5b4339b-4393-4ac6-b2ef-b8d0b340556f req-18ecd15a-11f8-42b4-a031-e5ddce56e1fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.987 252257 DEBUG oslo_concurrency.lockutils [req-d5b4339b-4393-4ac6-b2ef-b8d0b340556f req-18ecd15a-11f8-42b4-a031-e5ddce56e1fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.987 252257 DEBUG oslo_concurrency.lockutils [req-d5b4339b-4393-4ac6-b2ef-b8d0b340556f req-18ecd15a-11f8-42b4-a031-e5ddce56e1fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.988 252257 DEBUG nova.compute.manager [req-d5b4339b-4393-4ac6-b2ef-b8d0b340556f req-18ecd15a-11f8-42b4-a031-e5ddce56e1fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:48 np0005539563 nova_compute[252253]: 2025-11-29 08:25:48.988 252257 WARNING nova.compute.manager [req-d5b4339b-4393-4ac6-b2ef-b8d0b340556f req-18ecd15a-11f8-42b4-a031-e5ddce56e1fb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.012 252257 DEBUG os_brick.utils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.014 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.027 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.027 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[af45fc07-785b-48b0-9822-da8fc01a606a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.030 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.041 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.041 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac9cd06-d26e-4d35-b2dc-8ece061ce546]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.045 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.054 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.055 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[36b7a654-1583-41b5-ab96-d18d2c8b9354]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.057 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[5645bc9d-a79c-479c-be4c-f36d0066a5e8]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.058 252257 DEBUG oslo_concurrency.processutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.101 252257 DEBUG oslo_concurrency.processutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.105 252257 DEBUG os_brick.initiator.connectors.lightos [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.106 252257 DEBUG os_brick.initiator.connectors.lightos [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.107 252257 DEBUG os_brick.initiator.connectors.lightos [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.108 252257 DEBUG os_brick.utils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.109 252257 DEBUG nova.virt.block_device [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating existing volume attachment record: 843f3731-2267-4517-b4a5-9d21bf265b1a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:49 np0005539563 nova_compute[252253]: 2025-11-29 08:25:49.638 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:49.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:50 np0005539563 nova_compute[252253]: 2025-11-29 08:25:50.162 252257 DEBUG nova.objects.instance [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:50 np0005539563 nova_compute[252253]: 2025-11-29 08:25:50.269 252257 DEBUG nova.virt.libvirt.driver [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Attempting to attach volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:25:50 np0005539563 nova_compute[252253]: 2025-11-29 08:25:50.275 252257 DEBUG nova.virt.libvirt.guest [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-32b198ad-3a42-4de3-9995-b7e93d51e7ec">
Nov 29 03:25:50 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:25:50 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  <serial>32b198ad-3a42-4de3-9995-b7e93d51e7ec</serial>
Nov 29 03:25:50 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:25:50 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:25:50 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:25:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.9 MiB/s wr, 295 op/s
Nov 29 03:25:51 np0005539563 nova_compute[252253]: 2025-11-29 08:25:51.116 252257 DEBUG nova.virt.libvirt.driver [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:51 np0005539563 nova_compute[252253]: 2025-11-29 08:25:51.118 252257 DEBUG nova.virt.libvirt.driver [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:51 np0005539563 nova_compute[252253]: 2025-11-29 08:25:51.119 252257 DEBUG nova.virt.libvirt.driver [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:51 np0005539563 nova_compute[252253]: 2025-11-29 08:25:51.119 252257 DEBUG nova.virt.libvirt.driver [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:e6:bf:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:51 np0005539563 nova_compute[252253]: 2025-11-29 08:25:51.420 252257 DEBUG oslo_concurrency.lockutils [None req-816e585b-3fda-4b5b-8d50-b8ea6a08d4f2 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.5 MiB/s wr, 246 op/s
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.474 252257 DEBUG oslo_concurrency.lockutils [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.475 252257 DEBUG oslo_concurrency.lockutils [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.498 252257 INFO nova.compute.manager [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Detaching volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec#033[00m
Nov 29 03:25:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.698 252257 INFO nova.virt.block_device [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Attempting to driver detach volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec from mountpoint /dev/vdb#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.708 252257 DEBUG nova.virt.libvirt.driver [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Attempting to detach device vdb from instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.708 252257 DEBUG nova.virt.libvirt.guest [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-32b198ad-3a42-4de3-9995-b7e93d51e7ec">
Nov 29 03:25:53 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <serial>32b198ad-3a42-4de3-9995-b7e93d51e7ec</serial>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:25:53 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.714 252257 INFO nova.virt.libvirt.driver [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 from the persistent domain config.#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.715 252257 DEBUG nova.virt.libvirt.driver [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.715 252257 DEBUG nova.virt.libvirt.guest [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-32b198ad-3a42-4de3-9995-b7e93d51e7ec">
Nov 29 03:25:53 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <serial>32b198ad-3a42-4de3-9995-b7e93d51e7ec</serial>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:25:53 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:25:53 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:25:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:53Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:bf:db 10.100.0.9
Nov 29 03:25:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:53Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:bf:db 10.100.0.9
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.810 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404753.8090916, 5a603f26-2b4a-4025-8cc2-a31c8c89e652 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.811 252257 DEBUG nova.virt.libvirt.driver [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:25:53 np0005539563 nova_compute[252253]: 2025-11-29 08:25:53.812 252257 INFO nova.virt.libvirt.driver [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 from the live domain config.#033[00m
Nov 29 03:25:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:53.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:53.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:54 np0005539563 nova_compute[252253]: 2025-11-29 08:25:54.016 252257 INFO nova.virt.libvirt.driver [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Detected multiple connections on this host for volume: 32b198ad-3a42-4de3-9995-b7e93d51e7ec, skipping target disconnect.#033[00m
Nov 29 03:25:54 np0005539563 nova_compute[252253]: 2025-11-29 08:25:54.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:54 np0005539563 nova_compute[252253]: 2025-11-29 08:25:54.390 252257 DEBUG nova.objects.instance [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:54 np0005539563 nova_compute[252253]: 2025-11-29 08:25:54.436 252257 DEBUG oslo_concurrency.lockutils [None req-fa90d08c-acd2-4171-89eb-880cb97a5d21 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 690 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.0 MiB/s wr, 318 op/s
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.684 252257 DEBUG oslo_concurrency.lockutils [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.685 252257 DEBUG oslo_concurrency.lockutils [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.708 252257 INFO nova.compute.manager [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Detaching volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec#033[00m
Nov 29 03:25:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:55.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:25:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:55.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.974 252257 INFO nova.virt.block_device [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Attempting to driver detach volume 32b198ad-3a42-4de3-9995-b7e93d51e7ec from mountpoint /dev/vdb#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.987 252257 DEBUG nova.virt.libvirt.driver [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Attempting to detach device vdb from instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.988 252257 DEBUG nova.virt.libvirt.guest [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-32b198ad-3a42-4de3-9995-b7e93d51e7ec">
Nov 29 03:25:55 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <serial>32b198ad-3a42-4de3-9995-b7e93d51e7ec</serial>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:25:55 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.997 252257 INFO nova.virt.libvirt.driver [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f from the persistent domain config.#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.998 252257 DEBUG nova.virt.libvirt.driver [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:25:55 np0005539563 nova_compute[252253]: 2025-11-29 08:25:55.998 252257 DEBUG nova.virt.libvirt.guest [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-32b198ad-3a42-4de3-9995-b7e93d51e7ec">
Nov 29 03:25:55 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <serial>32b198ad-3a42-4de3-9995-b7e93d51e7ec</serial>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:25:55 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:25:56 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:25:56 np0005539563 nova_compute[252253]: 2025-11-29 08:25:56.128 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404756.1280797, df3ef43d-e67b-4d7f-8603-5cf61569ae1f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:25:56 np0005539563 nova_compute[252253]: 2025-11-29 08:25:56.130 252257 DEBUG nova.virt.libvirt.driver [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:25:56 np0005539563 nova_compute[252253]: 2025-11-29 08:25:56.134 252257 INFO nova.virt.libvirt.driver [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f from the live domain config.#033[00m
Nov 29 03:25:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 712 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.9 MiB/s wr, 254 op/s
Nov 29 03:25:56 np0005539563 nova_compute[252253]: 2025-11-29 08:25:56.642 252257 DEBUG nova.objects.instance [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:56 np0005539563 nova_compute[252253]: 2025-11-29 08:25:56.726 252257 DEBUG oslo_concurrency.lockutils [None req-59633211-c50b-4bc9-89f1-79826f24ec7e b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:58 np0005539563 nova_compute[252253]: 2025-11-29 08:25:58.230 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 712 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 MiB/s wr, 286 op/s
Nov 29 03:25:59 np0005539563 nova_compute[252253]: 2025-11-29 08:25:59.350 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:59 np0005539563 nova_compute[252253]: 2025-11-29 08:25:59.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:59 np0005539563 nova_compute[252253]: 2025-11-29 08:25:59.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:25:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:25:59Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:96:ee 10.100.0.9
Nov 29 03:25:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:25:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:59.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 734 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Nov 29 03:26:00 np0005539563 nova_compute[252253]: 2025-11-29 08:26:00.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:01 np0005539563 nova_compute[252253]: 2025-11-29 08:26:01.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:01 np0005539563 nova_compute[252253]: 2025-11-29 08:26:01.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:01.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 734 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Nov 29 03:26:02 np0005539563 nova_compute[252253]: 2025-11-29 08:26:02.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:02 np0005539563 nova_compute[252253]: 2025-11-29 08:26:02.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:26:03 np0005539563 nova_compute[252253]: 2025-11-29 08:26:03.199 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:03 np0005539563 nova_compute[252253]: 2025-11-29 08:26:03.199 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:03 np0005539563 nova_compute[252253]: 2025-11-29 08:26:03.200 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:26:03 np0005539563 nova_compute[252253]: 2025-11-29 08:26:03.232 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Nov 29 03:26:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 03:26:03 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Nov 29 03:26:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:03.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:03.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:04 np0005539563 nova_compute[252253]: 2025-11-29 08:26:04.353 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 758 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 29 03:26:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:04.930 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:04.931 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:04.932 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.318 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.345 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.345 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.346 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.390 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.391 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.392 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.392 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.393 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 778 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 155 op/s
Nov 29 03:26:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668102352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.870 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.978 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.978 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.982 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.982 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.985 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:06 np0005539563 nova_compute[252253]: 2025-11-29 08:26:06.986 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.144 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.145 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3744MB free_disk=20.690399169921875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.145 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.146 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.250 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 59a5747d-b29d-47f7-848c-62778e994c56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.250 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.251 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.251 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.251 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.503 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Nov 29 03:26:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 03:26:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Nov 29 03:26:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922232462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.977 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:07 np0005539563 nova_compute[252253]: 2025-11-29 08:26:07.986 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:08 np0005539563 nova_compute[252253]: 2025-11-29 08:26:08.021 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:08 np0005539563 nova_compute[252253]: 2025-11-29 08:26:08.050 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:26:08 np0005539563 nova_compute[252253]: 2025-11-29 08:26:08.050 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:08 np0005539563 nova_compute[252253]: 2025-11-29 08:26:08.283 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 855 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.6 MiB/s wr, 201 op/s
Nov 29 03:26:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Nov 29 03:26:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 03:26:08 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Nov 29 03:26:09 np0005539563 nova_compute[252253]: 2025-11-29 08:26:09.394 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Nov 29 03:26:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 03:26:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Nov 29 03:26:10 np0005539563 nova_compute[252253]: 2025-11-29 08:26:10.381 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:10 np0005539563 nova_compute[252253]: 2025-11-29 08:26:10.382 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 893 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 13 MiB/s wr, 337 op/s
Nov 29 03:26:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:12 np0005539563 podman[349258]: 2025-11-29 08:26:12.542554855 +0000 UTC m=+0.085972302 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 893 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 12 MiB/s wr, 329 op/s
Nov 29 03:26:12 np0005539563 podman[349259]: 2025-11-29 08:26:12.578980193 +0000 UTC m=+0.115966626 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:26:12 np0005539563 podman[349260]: 2025-11-29 08:26:12.611766172 +0000 UTC m=+0.142045923 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 03:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:26:12
Nov 29 03:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'vms']
Nov 29 03:26:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:13 np0005539563 nova_compute[252253]: 2025-11-29 08:26:13.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:13.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:14 np0005539563 nova_compute[252253]: 2025-11-29 08:26:14.397 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 13 MiB/s wr, 382 op/s
Nov 29 03:26:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:15.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:26:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Nov 29 03:26:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:17.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:18 np0005539563 nova_compute[252253]: 2025-11-29 08:26:18.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Nov 29 03:26:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 03:26:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Nov 29 03:26:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 5.0 MiB/s wr, 276 op/s
Nov 29 03:26:19 np0005539563 nova_compute[252253]: 2025-11-29 08:26:19.400 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:19.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 197 op/s
Nov 29 03:26:21 np0005539563 nova_compute[252253]: 2025-11-29 08:26:21.340 252257 DEBUG nova.compute.manager [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-changed-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:21 np0005539563 nova_compute[252253]: 2025-11-29 08:26:21.341 252257 DEBUG nova.compute.manager [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Refreshing instance network info cache due to event network-changed-74a0b6a5-7ae5-44ef-a159-4a87de6da113. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:21 np0005539563 nova_compute[252253]: 2025-11-29 08:26:21.341 252257 DEBUG oslo_concurrency.lockutils [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:21 np0005539563 nova_compute[252253]: 2025-11-29 08:26:21.341 252257 DEBUG oslo_concurrency.lockutils [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:21 np0005539563 nova_compute[252253]: 2025-11-29 08:26:21.341 252257 DEBUG nova.network.neutron [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Refreshing network info cache for port 74a0b6a5-7ae5-44ef-a159-4a87de6da113 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:21.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 918 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 198 op/s
Nov 29 03:26:23 np0005539563 nova_compute[252253]: 2025-11-29 08:26:23.291 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.016020495998511075 of space, bias 1.0, pg target 4.8061487995533225 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432228416387493 of space, bias 1.0, pg target 1.2793961125069793 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005018218988460822 of space, bias 1.0, pg target 1.4803746015959427 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003206134840871022 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Nov 29 03:26:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:24 np0005539563 nova_compute[252253]: 2025-11-29 08:26:24.403 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 891 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.3 MiB/s wr, 182 op/s
Nov 29 03:26:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:25.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:26 np0005539563 nova_compute[252253]: 2025-11-29 08:26:26.184 252257 DEBUG nova.network.neutron [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updated VIF entry in instance network info cache for port 74a0b6a5-7ae5-44ef-a159-4a87de6da113. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:26 np0005539563 nova_compute[252253]: 2025-11-29 08:26:26.185 252257 DEBUG nova.network.neutron [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating instance_info_cache with network_info: [{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:26 np0005539563 nova_compute[252253]: 2025-11-29 08:26:26.215 252257 DEBUG oslo_concurrency.lockutils [req-81a3c52f-3b5b-45aa-9a12-d21771798fb7 req-ccca29f7-f85c-46b3-8585-229ba28b0f4b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:26:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 868 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 178 op/s
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0b4e1cd1-3ae3-4b37-abf8-8647cbbefaeb does not exist
Nov 29 03:26:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 989bdbd7-cc41-42a5-9e41-1916fdc17527 does not exist
Nov 29 03:26:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5b5bcc99-3329-41e6-8895-b6b9f3958e2d does not exist
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:26:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:27 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.525908536 +0000 UTC m=+0.039948194 container create 1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:26:27 np0005539563 systemd[1]: Started libpod-conmon-1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91.scope.
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.507950828 +0000 UTC m=+0.021990506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.619065582 +0000 UTC m=+0.133105260 container init 1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.626211646 +0000 UTC m=+0.140251304 container start 1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.63005865 +0000 UTC m=+0.144098328 container attach 1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:26:27 np0005539563 eloquent_nash[349670]: 167 167
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.633276747 +0000 UTC m=+0.147316395 container died 1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:26:27 np0005539563 systemd[1]: libpod-1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91.scope: Deactivated successfully.
Nov 29 03:26:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-730191d3f2d52cb23d1d2bbaa08751168d1cab77013e1b29a15cebb0800eae4b-merged.mount: Deactivated successfully.
Nov 29 03:26:27 np0005539563 podman[349654]: 2025-11-29 08:26:27.672882921 +0000 UTC m=+0.186922589 container remove 1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:26:27 np0005539563 systemd[1]: libpod-conmon-1d9eb07345965b54cbc87d266e200ec7a9884e697fed61589bf547ac672c5a91.scope: Deactivated successfully.
Nov 29 03:26:27 np0005539563 podman[349694]: 2025-11-29 08:26:27.867623323 +0000 UTC m=+0.052639869 container create d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:26:27 np0005539563 systemd[1]: Started libpod-conmon-d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd.scope.
Nov 29 03:26:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:27.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfaab6025265c493c672228f257dd88fcf3c3e2f4601aebe94bbc500b32814d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:27 np0005539563 podman[349694]: 2025-11-29 08:26:27.851014992 +0000 UTC m=+0.036031558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfaab6025265c493c672228f257dd88fcf3c3e2f4601aebe94bbc500b32814d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfaab6025265c493c672228f257dd88fcf3c3e2f4601aebe94bbc500b32814d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfaab6025265c493c672228f257dd88fcf3c3e2f4601aebe94bbc500b32814d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfaab6025265c493c672228f257dd88fcf3c3e2f4601aebe94bbc500b32814d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:27.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:27 np0005539563 podman[349694]: 2025-11-29 08:26:27.957528281 +0000 UTC m=+0.142544847 container init d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:26:27 np0005539563 podman[349694]: 2025-11-29 08:26:27.967536212 +0000 UTC m=+0.152552758 container start d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gagarin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:26:27 np0005539563 podman[349694]: 2025-11-29 08:26:27.971989833 +0000 UTC m=+0.157006379 container attach d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:26:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:26:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4256123452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:26:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:26:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4256123452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:26:28 np0005539563 nova_compute[252253]: 2025-11-29 08:26:28.294 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 890 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 137 op/s
Nov 29 03:26:28 np0005539563 optimistic_gagarin[349710]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:26:28 np0005539563 optimistic_gagarin[349710]: --> relative data size: 1.0
Nov 29 03:26:28 np0005539563 optimistic_gagarin[349710]: --> All data devices are unavailable
Nov 29 03:26:28 np0005539563 systemd[1]: libpod-d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd.scope: Deactivated successfully.
Nov 29 03:26:28 np0005539563 podman[349694]: 2025-11-29 08:26:28.804468229 +0000 UTC m=+0.989484845 container died d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gagarin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:26:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4bfaab6025265c493c672228f257dd88fcf3c3e2f4601aebe94bbc500b32814d-merged.mount: Deactivated successfully.
Nov 29 03:26:28 np0005539563 podman[349694]: 2025-11-29 08:26:28.960135004 +0000 UTC m=+1.145151550 container remove d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gagarin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:26:28 np0005539563 systemd[1]: libpod-conmon-d87aa7e5495b1d6cd9d721c245cd891aae60dc1459ec2f50762e8591fd9e8fdd.scope: Deactivated successfully.
Nov 29 03:26:29 np0005539563 nova_compute[252253]: 2025-11-29 08:26:29.406 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:29 np0005539563 podman[349879]: 2025-11-29 08:26:29.632820284 +0000 UTC m=+0.053722368 container create c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:26:29 np0005539563 systemd[1]: Started libpod-conmon-c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1.scope.
Nov 29 03:26:29 np0005539563 podman[349879]: 2025-11-29 08:26:29.607469936 +0000 UTC m=+0.028372120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:29 np0005539563 podman[349879]: 2025-11-29 08:26:29.722981859 +0000 UTC m=+0.143884053 container init c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:26:29 np0005539563 podman[349879]: 2025-11-29 08:26:29.734361497 +0000 UTC m=+0.155263621 container start c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:26:29 np0005539563 podman[349879]: 2025-11-29 08:26:29.738128519 +0000 UTC m=+0.159030603 container attach c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:26:29 np0005539563 relaxed_cerf[349895]: 167 167
Nov 29 03:26:29 np0005539563 systemd[1]: libpod-c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1.scope: Deactivated successfully.
Nov 29 03:26:29 np0005539563 podman[349900]: 2025-11-29 08:26:29.784645621 +0000 UTC m=+0.029244604 container died c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:26:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-443f466c2634dacb341a4611d0563fd283a15bd07b00762b74d5c1cec2a219fe-merged.mount: Deactivated successfully.
Nov 29 03:26:29 np0005539563 podman[349900]: 2025-11-29 08:26:29.826573088 +0000 UTC m=+0.071171981 container remove c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:26:29 np0005539563 systemd[1]: libpod-conmon-c4a365b1786385900549736ccd688cef37ae35a4ff565c729f26d6b92de778d1.scope: Deactivated successfully.
Nov 29 03:26:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:29.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Nov 29 03:26:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 03:26:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Nov 29 03:26:30 np0005539563 podman[349922]: 2025-11-29 08:26:30.115143484 +0000 UTC m=+0.083544227 container create 20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:26:30 np0005539563 systemd[1]: Started libpod-conmon-20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42.scope.
Nov 29 03:26:30 np0005539563 podman[349922]: 2025-11-29 08:26:30.074347028 +0000 UTC m=+0.042747851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49c8c6bcefa2f4b5be960c059e3f4fef98be325430d49ad6f46db48384284d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49c8c6bcefa2f4b5be960c059e3f4fef98be325430d49ad6f46db48384284d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49c8c6bcefa2f4b5be960c059e3f4fef98be325430d49ad6f46db48384284d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49c8c6bcefa2f4b5be960c059e3f4fef98be325430d49ad6f46db48384284d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:30 np0005539563 podman[349922]: 2025-11-29 08:26:30.213156762 +0000 UTC m=+0.181557535 container init 20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:26:30 np0005539563 podman[349922]: 2025-11-29 08:26:30.224678644 +0000 UTC m=+0.193079387 container start 20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:26:30 np0005539563 podman[349922]: 2025-11-29 08:26:30.228432817 +0000 UTC m=+0.196833580 container attach 20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:26:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.764 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.764 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.765 252257 INFO nova.compute.manager [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Unshelving#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.921 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.921 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.926 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.949 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.967 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:26:30 np0005539563 nova_compute[252253]: 2025-11-29 08:26:30.968 252257 INFO nova.compute.claims [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]: {
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:    "0": [
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:        {
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "devices": [
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "/dev/loop3"
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            ],
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "lv_name": "ceph_lv0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "lv_size": "7511998464",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "name": "ceph_lv0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "tags": {
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.cluster_name": "ceph",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.crush_device_class": "",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.encrypted": "0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.osd_id": "0",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.type": "block",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:                "ceph.vdo": "0"
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            },
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "type": "block",
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:            "vg_name": "ceph_vg0"
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:        }
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]:    ]
Nov 29 03:26:31 np0005539563 sweet_goldberg[349938]: }
Nov 29 03:26:31 np0005539563 systemd[1]: libpod-20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42.scope: Deactivated successfully.
Nov 29 03:26:31 np0005539563 podman[349922]: 2025-11-29 08:26:31.113693135 +0000 UTC m=+1.082093888 container died 20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:26:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f49c8c6bcefa2f4b5be960c059e3f4fef98be325430d49ad6f46db48384284d2-merged.mount: Deactivated successfully.
Nov 29 03:26:31 np0005539563 podman[349922]: 2025-11-29 08:26:31.177906975 +0000 UTC m=+1.146307738 container remove 20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:26:31 np0005539563 systemd[1]: libpod-conmon-20e2bfa61e2897b8023f79a76f7bacac5b50d7c8d2d7165af1f39857dd1f8d42.scope: Deactivated successfully.
Nov 29 03:26:31 np0005539563 nova_compute[252253]: 2025-11-29 08:26:31.330 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819305652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:31 np0005539563 nova_compute[252253]: 2025-11-29 08:26:31.794 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:31 np0005539563 nova_compute[252253]: 2025-11-29 08:26:31.803 252257 DEBUG nova.compute.provider_tree [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:31 np0005539563 nova_compute[252253]: 2025-11-29 08:26:31.856 252257 DEBUG nova.scheduler.client.report [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:31 np0005539563 podman[350122]: 2025-11-29 08:26:31.93692501 +0000 UTC m=+0.044170038 container create 845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_banzai, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:26:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:31.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:31 np0005539563 systemd[1]: Started libpod-conmon-845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7.scope.
Nov 29 03:26:32 np0005539563 podman[350122]: 2025-11-29 08:26:31.918444129 +0000 UTC m=+0.025689197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:32 np0005539563 podman[350122]: 2025-11-29 08:26:32.036336136 +0000 UTC m=+0.143581274 container init 845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_banzai, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:26:32 np0005539563 podman[350122]: 2025-11-29 08:26:32.042842722 +0000 UTC m=+0.150087750 container start 845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:26:32 np0005539563 podman[350122]: 2025-11-29 08:26:32.046531743 +0000 UTC m=+0.153776781 container attach 845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:26:32 np0005539563 awesome_banzai[350138]: 167 167
Nov 29 03:26:32 np0005539563 systemd[1]: libpod-845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7.scope: Deactivated successfully.
Nov 29 03:26:32 np0005539563 podman[350122]: 2025-11-29 08:26:32.051875527 +0000 UTC m=+0.159120645 container died 845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:26:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0b80982d28d0b5195fa05df4fc8c9f64ad7e2068234e2192d036f247d467780f-merged.mount: Deactivated successfully.
Nov 29 03:26:32 np0005539563 podman[350122]: 2025-11-29 08:26:32.084271946 +0000 UTC m=+0.191516974 container remove 845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:26:32 np0005539563 nova_compute[252253]: 2025-11-29 08:26:32.085 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:32 np0005539563 systemd[1]: libpod-conmon-845bef4b5642e3b27ebc1b3d44b06780223beac6ca7bfdb12ca36ced4e6e84f7.scope: Deactivated successfully.
Nov 29 03:26:32 np0005539563 podman[350162]: 2025-11-29 08:26:32.350375583 +0000 UTC m=+0.061752496 container create a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:26:32 np0005539563 systemd[1]: Started libpod-conmon-a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546.scope.
Nov 29 03:26:32 np0005539563 podman[350162]: 2025-11-29 08:26:32.327878733 +0000 UTC m=+0.039255696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e01ab4d422202f5e0f07dc4fc3045c2d723b2e45e91eb48fdb29d4473386564/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e01ab4d422202f5e0f07dc4fc3045c2d723b2e45e91eb48fdb29d4473386564/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e01ab4d422202f5e0f07dc4fc3045c2d723b2e45e91eb48fdb29d4473386564/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e01ab4d422202f5e0f07dc4fc3045c2d723b2e45e91eb48fdb29d4473386564/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:32 np0005539563 nova_compute[252253]: 2025-11-29 08:26:32.437 252257 INFO nova.network.neutron [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating port 654e5561-248d-48f1-9b25-da86880e3041 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 03:26:32 np0005539563 podman[350162]: 2025-11-29 08:26:32.451491115 +0000 UTC m=+0.162868018 container init a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:26:32 np0005539563 podman[350162]: 2025-11-29 08:26:32.459451031 +0000 UTC m=+0.170827924 container start a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:26:32 np0005539563 podman[350162]: 2025-11-29 08:26:32.463821169 +0000 UTC m=+0.175198062 container attach a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:26:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 5.2 MiB/s wr, 157 op/s
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]: {
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:        "osd_id": 0,
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:        "type": "bluestore"
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]:    }
Nov 29 03:26:33 np0005539563 dazzling_kirch[350178]: }
Nov 29 03:26:33 np0005539563 nova_compute[252253]: 2025-11-29 08:26:33.297 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:33 np0005539563 systemd[1]: libpod-a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546.scope: Deactivated successfully.
Nov 29 03:26:33 np0005539563 podman[350162]: 2025-11-29 08:26:33.316181465 +0000 UTC m=+1.027558348 container died a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9e01ab4d422202f5e0f07dc4fc3045c2d723b2e45e91eb48fdb29d4473386564-merged.mount: Deactivated successfully.
Nov 29 03:26:33 np0005539563 podman[350162]: 2025-11-29 08:26:33.370118118 +0000 UTC m=+1.081494991 container remove a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:26:33 np0005539563 systemd[1]: libpod-conmon-a03cf6264aa65c1533f3df6f468f07a79b20180ef7efeb0d0578e220fd4b4546.scope: Deactivated successfully.
Nov 29 03:26:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:26:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:26:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9d5efca3-13d8-4cd4-b1a5-45803cd119d3 does not exist
Nov 29 03:26:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f414b951-e835-42d1-819b-e938356a3ddf does not exist
Nov 29 03:26:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f0587b80-5706-4dbf-b182-13c1bfdbbc54 does not exist
Nov 29 03:26:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:33.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:26:34 np0005539563 nova_compute[252253]: 2025-11-29 08:26:34.409 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 950 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 6.0 MiB/s wr, 183 op/s
Nov 29 03:26:34 np0005539563 nova_compute[252253]: 2025-11-29 08:26:34.789 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:26:34 np0005539563 nova_compute[252253]: 2025-11-29 08:26:34.789 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:26:34 np0005539563 nova_compute[252253]: 2025-11-29 08:26:34.789 252257 DEBUG nova.network.neutron [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:26:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:35.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:35.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:36 np0005539563 nova_compute[252253]: 2025-11-29 08:26:36.100 252257 DEBUG nova.compute.manager [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-changed-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:36 np0005539563 nova_compute[252253]: 2025-11-29 08:26:36.101 252257 DEBUG nova.compute.manager [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Refreshing instance network info cache due to event network-changed-654e5561-248d-48f1-9b25-da86880e3041. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:26:36 np0005539563 nova_compute[252253]: 2025-11-29 08:26:36.101 252257 DEBUG oslo_concurrency.lockutils [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:26:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 951 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 501 KiB/s rd, 4.8 MiB/s wr, 139 op/s
Nov 29 03:26:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:37.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:37.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.299 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.366 252257 DEBUG nova.network.neutron [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.488 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.491 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.491 252257 INFO nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Creating image(s)
Nov 29 03:26:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.526 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:26:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 03:26:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.531 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.533 252257 DEBUG oslo_concurrency.lockutils [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.533 252257 DEBUG nova.network.neutron [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Refreshing network info cache for port 654e5561-248d-48f1-9b25-da86880e3041 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:26:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 951 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 213 op/s
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.596 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.624 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.628 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "3e42a232ffb80e643a7e3e704e61d7c578e4a967" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.628 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "3e42a232ffb80e643a7e3e704e61d7c578e4a967" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:38.738 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:26:38 np0005539563 nova_compute[252253]: 2025-11-29 08:26:38.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:38.739 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.273 252257 DEBUG nova.virt.libvirt.imagebackend [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5945a148-7986-4fa0-8052-c380ea11f788/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5945a148-7986-4fa0-8052-c380ea11f788/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.324 252257 DEBUG nova.virt.libvirt.imagebackend [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/5945a148-7986-4fa0-8052-c380ea11f788/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.325 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] cloning images/5945a148-7986-4fa0-8052-c380ea11f788@snap to None/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.520 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "3e42a232ffb80e643a7e3e704e61d7c578e4a967" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.677 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:26:39 np0005539563 nova_compute[252253]: 2025-11-29 08:26:39.753 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] flattening vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:26:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.126 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Image rbd:vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.128 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.129 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Ensure instance console log exists: /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.130 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.131 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.131 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.134 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Start _get_guest_xml network_info=[{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:26:02Z,direct_url=<?>,disk_format='raw',id=5945a148-7986-4fa0-8052-c380ea11f788,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1933381778-shelved',owner='d9406fbc6fef486fa5b0e79549e78d00',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:26:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.144 252257 WARNING nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.151 252257 DEBUG nova.virt.libvirt.host [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.152 252257 DEBUG nova.virt.libvirt.host [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.158 252257 DEBUG nova.virt.libvirt.host [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.159 252257 DEBUG nova.virt.libvirt.host [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.160 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.160 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:26:02Z,direct_url=<?>,disk_format='raw',id=5945a148-7986-4fa0-8052-c380ea11f788,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1933381778-shelved',owner='d9406fbc6fef486fa5b0e79549e78d00',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:26:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.161 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.161 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.161 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.161 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.162 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.162 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.162 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.162 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.163 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.163 252257 DEBUG nova.virt.hardware [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.163 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.179 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 984 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 202 op/s
Nov 29 03:26:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795562614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.612 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.640 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:40 np0005539563 nova_compute[252253]: 2025-11-29 08:26:40.646 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:26:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501057398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.130 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.131 252257 DEBUG nova.virt.libvirt.vif [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='5945a148-7986-4fa0-8052-c380ea11f788',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='
virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member',shelved_at='2025-11-29T08:26:12.813664',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='5945a148-7986-4fa0-8052-c380ea11f788'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:30Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.132 252257 DEBUG nova.network.os_vif_util [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.132 252257 DEBUG nova.network.os_vif_util [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.134 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.214 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <uuid>9c6c5334-4e97-46b8-9013-cc5269d8c1c1</uuid>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <name>instance-00000099</name>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:name>tempest-ServersNegativeTestJSON-server-1933381778</nova:name>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:26:40</nova:creationTime>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:user uuid="3a37c720b9bb4273b66cd2dce30fbf48">tempest-ServersNegativeTestJSON-213437080-project-member</nova:user>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:project uuid="d9406fbc6fef486fa5b0e79549e78d00">tempest-ServersNegativeTestJSON-213437080</nova:project>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="5945a148-7986-4fa0-8052-c380ea11f788"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <nova:port uuid="654e5561-248d-48f1-9b25-da86880e3041">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <entry name="serial">9c6c5334-4e97-46b8-9013-cc5269d8c1c1</entry>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <entry name="uuid">9c6c5334-4e97-46b8-9013-cc5269d8c1c1</entry>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:65:8b:2b"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <target dev="tap654e5561-24"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/console.log" append="off"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:26:41 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:26:41 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:26:41 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:26:41 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.215 252257 DEBUG nova.compute.manager [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Preparing to wait for external event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.215 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.215 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.215 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.216 252257 DEBUG nova.virt.libvirt.vif [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='5945a148-7986-4fa0-8052-c380ea11f788',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vid
eo_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member',shelved_at='2025-11-29T08:26:12.813664',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='5945a148-7986-4fa0-8052-c380ea11f788'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:26:30Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.216 252257 DEBUG nova.network.os_vif_util [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.217 252257 DEBUG nova.network.os_vif_util [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.217 252257 DEBUG os_vif [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.218 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.219 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.223 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap654e5561-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.223 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap654e5561-24, col_values=(('external_ids', {'iface-id': '654e5561-248d-48f1-9b25-da86880e3041', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:8b:2b', 'vm-uuid': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:41 np0005539563 NetworkManager[48981]: <info>  [1764404801.2261] manager: (tap654e5561-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.228 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.233 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.233 252257 INFO os_vif [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24')#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.284 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.284 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.284 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] No VIF found with MAC fa:16:3e:65:8b:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.285 252257 INFO nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Using config drive#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.311 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.336 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:41 np0005539563 nova_compute[252253]: 2025-11-29 08:26:41.438 252257 DEBUG nova.objects.instance [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'keypairs' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:41.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:41.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 984 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 202 op/s
Nov 29 03:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:43 np0005539563 nova_compute[252253]: 2025-11-29 08:26:43.349 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539563 podman[350613]: 2025-11-29 08:26:43.512058416 +0000 UTC m=+0.058088606 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:26:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:43 np0005539563 podman[350612]: 2025-11-29 08:26:43.532747277 +0000 UTC m=+0.078796697 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:26:43 np0005539563 podman[350614]: 2025-11-29 08:26:43.534416652 +0000 UTC m=+0.076482055 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:43.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.036 252257 INFO nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Creating config drive at /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.042 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy6ef7ge3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.179 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy6ef7ge3" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.211 252257 DEBUG nova.storage.rbd_utils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] rbd image 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.214 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.382 252257 DEBUG oslo_concurrency.processutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config 9c6c5334-4e97-46b8-9013-cc5269d8c1c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.383 252257 INFO nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deleting local config drive /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:26:44 np0005539563 kernel: tap654e5561-24: entered promiscuous mode
Nov 29 03:26:44 np0005539563 NetworkManager[48981]: <info>  [1764404804.4292] manager: (tap654e5561-24): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Nov 29 03:26:44 np0005539563 systemd-udevd[350725]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:26:44Z|00659|binding|INFO|Claiming lport 654e5561-248d-48f1-9b25-da86880e3041 for this chassis.
Nov 29 03:26:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:26:44Z|00660|binding|INFO|654e5561-248d-48f1-9b25-da86880e3041: Claiming fa:16:3e:65:8b:2b 10.100.0.3
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.470 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.479 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:8b:2b 10.100.0.3'], port_security=['fa:16:3e:65:8b:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '7', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=654e5561-248d-48f1-9b25-da86880e3041) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.480 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 654e5561-248d-48f1-9b25-da86880e3041 in datapath 258f6232-6798-4075-adab-c07c4559ef67 bound to our chassis#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.482 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 258f6232-6798-4075-adab-c07c4559ef67#033[00m
Nov 29 03:26:44 np0005539563 NetworkManager[48981]: <info>  [1764404804.4851] device (tap654e5561-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:44 np0005539563 NetworkManager[48981]: <info>  [1764404804.4870] device (tap654e5561-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:26:44Z|00661|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 ovn-installed in OVS
Nov 29 03:26:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:26:44Z|00662|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 up in Southbound
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.491 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.496 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8b503e-a403-4d21-9a47-ede1159e6fe8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.496 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap258f6232-61 in ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:44 np0005539563 systemd-machined[213024]: New machine qemu-78-instance-00000099.
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.499 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap258f6232-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.499 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c28a00ce-12a4-4ceb-9a92-62ce58d8ae2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.500 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[77a55dc6-31e1-452d-8ab1-9c1637f382d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.510 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4967af-762b-4a5f-95e9-a9070d2f9711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 systemd[1]: Started Virtual Machine qemu-78-instance-00000099.
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.523 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d290f540-9700-4fdb-9756-1dff448658b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.553 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[93a997c0-c188-464d-97c4-a7a04a6baecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.559 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a22042dd-9057-4848-a346-153f7bb0f2c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 NetworkManager[48981]: <info>  [1764404804.5609] manager: (tap258f6232-60): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Nov 29 03:26:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 1.0 GiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 4.7 MiB/s wr, 231 op/s
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.597 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe3ddbb-93f4-4915-a5b6-d66646b53978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.601 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[580a3e1c-b65e-4387-91ae-c79cabb8cbaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 NetworkManager[48981]: <info>  [1764404804.6268] device (tap258f6232-60): carrier: link connected
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.632 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[82c87cd6-52cd-4f2b-92f6-0c5ab89f81a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.658 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b11fb73a-0a92-4cff-ab23-c0af098a5de9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777240, 'reachable_time': 31843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350761, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.683 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5946e5-189b-45e3-bf63-8e1d5de40057]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:63e2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777240, 'tstamp': 777240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350762, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.700 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9855a76c-8295-450e-b366-1362903edaa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777240, 'reachable_time': 31843, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350763, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.701 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.741 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.743 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e1868f-bee7-4c08-b485-672a25a4e000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.805 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[04bcd94d-a39a-4126-a2f9-f6d0cc10deb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.807 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.808 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.808 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap258f6232-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.810 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 NetworkManager[48981]: <info>  [1764404804.8117] manager: (tap258f6232-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Nov 29 03:26:44 np0005539563 kernel: tap258f6232-60: entered promiscuous mode
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.814 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.815 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap258f6232-60, col_values=(('external_ids', {'iface-id': 'c87f2e10-0d06-412e-bd89-4b9ab0d16c96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 ovn_controller[148841]: 2025-11-29T08:26:44Z|00663|binding|INFO|Releasing lport c87f2e10-0d06-412e-bd89-4b9ab0d16c96 from this chassis (sb_readonly=0)
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.834 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 nova_compute[252253]: 2025-11-29 08:26:44.836 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.837 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.838 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[29f20139-21e5-4a7f-bb42-8a6f5c12fd00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.839 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:26:44.839 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'env', 'PROCESS_TAG=haproxy-258f6232-6798-4075-adab-c07c4559ef67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/258f6232-6798-4075-adab-c07c4559ef67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.123 252257 DEBUG nova.network.neutron [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updated VIF entry in instance network info cache for port 654e5561-248d-48f1-9b25-da86880e3041. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.124 252257 DEBUG nova.network.neutron [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.210 252257 DEBUG oslo_concurrency.lockutils [req-348c3da8-fc25-4f3b-bdbf-ae6bbb2b3e52 req-4c3540eb-f3aa-42a6-82c3-d33d9944fda3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:45 np0005539563 podman[350796]: 2025-11-29 08:26:45.224105466 +0000 UTC m=+0.051606460 container create 3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:45 np0005539563 systemd[1]: Started libpod-conmon-3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6.scope.
Nov 29 03:26:45 np0005539563 podman[350796]: 2025-11-29 08:26:45.19694243 +0000 UTC m=+0.024443444 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:26:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fe144b62bec006f89a5d0ea828aa3ddb6b7a0338b4ca75024aaa66c498155a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:45 np0005539563 podman[350796]: 2025-11-29 08:26:45.338080417 +0000 UTC m=+0.165581441 container init 3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:26:45 np0005539563 podman[350796]: 2025-11-29 08:26:45.345320464 +0000 UTC m=+0.172821498 container start 3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:45 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [NOTICE]   (350856) : New worker (350858) forked
Nov 29 03:26:45 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [NOTICE]   (350856) : Loading success.
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.428 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404805.427849, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.428 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Started (Lifecycle Event)
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.448 252257 DEBUG nova.compute.manager [req-81e01ef9-ae44-4124-b424-f96d219c12ec req-9e1f4f6c-312f-45b6-afa3-a736548a0125 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.449 252257 DEBUG oslo_concurrency.lockutils [req-81e01ef9-ae44-4124-b424-f96d219c12ec req-9e1f4f6c-312f-45b6-afa3-a736548a0125 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.449 252257 DEBUG oslo_concurrency.lockutils [req-81e01ef9-ae44-4124-b424-f96d219c12ec req-9e1f4f6c-312f-45b6-afa3-a736548a0125 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.449 252257 DEBUG oslo_concurrency.lockutils [req-81e01ef9-ae44-4124-b424-f96d219c12ec req-9e1f4f6c-312f-45b6-afa3-a736548a0125 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.450 252257 DEBUG nova.compute.manager [req-81e01ef9-ae44-4124-b424-f96d219c12ec req-9e1f4f6c-312f-45b6-afa3-a736548a0125 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Processing event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.450 252257 DEBUG nova.compute.manager [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.456 252257 DEBUG nova.virt.libvirt.driver [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.459 252257 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance spawned successfully.
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.477 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.481 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.503 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.504 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404805.4299734, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.504 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Paused (Lifecycle Event)
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.523 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.526 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404805.454714, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.526 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Resumed (Lifecycle Event)
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.547 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.552 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:26:45 np0005539563 nova_compute[252253]: 2025-11-29 08:26:45.584 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:26:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:45.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:45.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:46 np0005539563 nova_compute[252253]: 2025-11-29 08:26:46.224 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.5 MiB/s rd, 4.7 MiB/s wr, 256 op/s
Nov 29 03:26:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Nov 29 03:26:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 03:26:46 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.627 252257 DEBUG nova.compute.manager [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.766 252257 DEBUG nova.compute.manager [req-a039e6fb-58e1-4fdc-8079-0cb7b5510065 req-d806c3d9-f6e0-4b39-9211-313271bc631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.767 252257 DEBUG oslo_concurrency.lockutils [req-a039e6fb-58e1-4fdc-8079-0cb7b5510065 req-d806c3d9-f6e0-4b39-9211-313271bc631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.767 252257 DEBUG oslo_concurrency.lockutils [req-a039e6fb-58e1-4fdc-8079-0cb7b5510065 req-d806c3d9-f6e0-4b39-9211-313271bc631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.768 252257 DEBUG oslo_concurrency.lockutils [req-a039e6fb-58e1-4fdc-8079-0cb7b5510065 req-d806c3d9-f6e0-4b39-9211-313271bc631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.769 252257 DEBUG nova.compute.manager [req-a039e6fb-58e1-4fdc-8079-0cb7b5510065 req-d806c3d9-f6e0-4b39-9211-313271bc631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.769 252257 WARNING nova.compute.manager [req-a039e6fb-58e1-4fdc-8079-0cb7b5510065 req-d806c3d9-f6e0-4b39-9211-313271bc631a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state shelved_offloaded and task_state spawning.
Nov 29 03:26:47 np0005539563 nova_compute[252253]: 2025-11-29 08:26:47.783 252257 DEBUG oslo_concurrency.lockutils [None req-c29b26ba-5e2a-45eb-be22-c433967c324e 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 17.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:47.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:47.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:48 np0005539563 nova_compute[252253]: 2025-11-29 08:26:48.353 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.534587) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808534686, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 967, "num_deletes": 252, "total_data_size": 1322142, "memory_usage": 1339160, "flush_reason": "Manual Compaction"}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808556476, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 907975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52417, "largest_seqno": 53383, "table_properties": {"data_size": 903802, "index_size": 1761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11203, "raw_average_key_size": 21, "raw_value_size": 894787, "raw_average_value_size": 1714, "num_data_blocks": 76, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404743, "oldest_key_time": 1764404743, "file_creation_time": 1764404808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 21919 microseconds, and 4276 cpu microseconds.
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.556545) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 907975 bytes OK
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.556575) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.563829) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.563889) EVENT_LOG_v1 {"time_micros": 1764404808563878, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.563913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1317533, prev total WAL file size 1317533, number of live WAL files 2.
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.564822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373634' seq:72057594037927935, type:22 .. '6D6772737461740032303135' seq:0, type:0; will stop at (end)
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(886KB)], [113(12MB)]
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808564941, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 14222285, "oldest_snapshot_seqno": -1}
Nov 29 03:26:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 967 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 MiB/s rd, 4.7 MiB/s wr, 284 op/s
Nov 29 03:26:48 np0005539563 nova_compute[252253]: 2025-11-29 08:26:48.697 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 8636 keys, 10767896 bytes, temperature: kUnknown
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808712719, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10767896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10712596, "index_size": 32596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225211, "raw_average_key_size": 26, "raw_value_size": 10561188, "raw_average_value_size": 1222, "num_data_blocks": 1266, "num_entries": 8636, "num_filter_entries": 8636, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.713058) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10767896 bytes
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.718491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.2 rd, 72.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(27.5) write-amplify(11.9) OK, records in: 9133, records dropped: 497 output_compression: NoCompression
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.718536) EVENT_LOG_v1 {"time_micros": 1764404808718519, "job": 68, "event": "compaction_finished", "compaction_time_micros": 147880, "compaction_time_cpu_micros": 30204, "output_level": 6, "num_output_files": 1, "total_output_size": 10767896, "num_input_records": 9133, "num_output_records": 8636, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808718972, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404808721841, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.564606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:26:48.721959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:26:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:49.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 971 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.2 MiB/s wr, 301 op/s
Nov 29 03:26:51 np0005539563 nova_compute[252253]: 2025-11-29 08:26:51.227 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:51.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:51.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 971 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.2 MiB/s wr, 301 op/s
Nov 29 03:26:53 np0005539563 nova_compute[252253]: 2025-11-29 08:26:53.355 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Nov 29 03:26:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 03:26:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Nov 29 03:26:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 998 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 267 op/s
Nov 29 03:26:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:55.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:26:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:26:56 np0005539563 nova_compute[252253]: 2025-11-29 08:26:56.229 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 998 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 232 op/s
Nov 29 03:26:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:57.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:58 np0005539563 nova_compute[252253]: 2025-11-29 08:26:58.359 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 1020 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Nov 29 03:26:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:26:59Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:8b:2b 10.100.0.3
Nov 29 03:26:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:26:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:59.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.401 252257 DEBUG nova.objects.instance [None req-de2c63d7-0648-4193-b634-10d4fb3a80b0 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.439 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404820.4386683, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.440 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.458 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.462 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.480 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:27:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 253 op/s
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:00 np0005539563 nova_compute[252253]: 2025-11-29 08:27:00.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.232 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 kernel: tap654e5561-24 (unregistering): left promiscuous mode
Nov 29 03:27:01 np0005539563 NetworkManager[48981]: <info>  [1764404821.4617] device (tap654e5561-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:01 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:01Z|00664|binding|INFO|Releasing lport 654e5561-248d-48f1-9b25-da86880e3041 from this chassis (sb_readonly=0)
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:01Z|00665|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 down in Southbound
Nov 29 03:27:01 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:01Z|00666|binding|INFO|Removing iface tap654e5561-24 ovn-installed in OVS
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.496 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.520 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:8b:2b 10.100.0.3'], port_security=['fa:16:3e:65:8b:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '9', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=654e5561-248d-48f1-9b25-da86880e3041) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.523 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 654e5561-248d-48f1-9b25-da86880e3041 in datapath 258f6232-6798-4075-adab-c07c4559ef67 unbound from our chassis#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.527 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 258f6232-6798-4075-adab-c07c4559ef67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.528 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ef32eed5-bb75-4f7f-8995-e19ca38884d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.529 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace which is not needed anymore#033[00m
Nov 29 03:27:01 np0005539563 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 29 03:27:01 np0005539563 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000099.scope: Consumed 15.417s CPU time.
Nov 29 03:27:01 np0005539563 systemd-machined[213024]: Machine qemu-78-instance-00000099 terminated.
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.630 252257 DEBUG nova.compute.manager [None req-de2c63d7-0648-4193-b634-10d4fb3a80b0 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:01 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [NOTICE]   (350856) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:01 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [NOTICE]   (350856) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:01 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [WARNING]  (350856) : Exiting Master process...
Nov 29 03:27:01 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [ALERT]    (350856) : Current worker (350858) exited with code 143 (Terminated)
Nov 29 03:27:01 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[350836]: [WARNING]  (350856) : All workers exited. Exiting... (0)
Nov 29 03:27:01 np0005539563 systemd[1]: libpod-3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6.scope: Deactivated successfully.
Nov 29 03:27:01 np0005539563 podman[350964]: 2025-11-29 08:27:01.723762898 +0000 UTC m=+0.058512118 container died 3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:27:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-58fe144b62bec006f89a5d0ea828aa3ddb6b7a0338b4ca75024aaa66c498155a-merged.mount: Deactivated successfully.
Nov 29 03:27:01 np0005539563 podman[350964]: 2025-11-29 08:27:01.773608389 +0000 UTC m=+0.108357599 container cleanup 3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:27:01 np0005539563 systemd[1]: libpod-conmon-3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6.scope: Deactivated successfully.
Nov 29 03:27:01 np0005539563 podman[350994]: 2025-11-29 08:27:01.839386154 +0000 UTC m=+0.043752158 container remove 3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.845 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9043f6-3212-4248-ab5a-cbb1d8c708f3]: (4, ('Sat Nov 29 08:27:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6)\n3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6\nSat Nov 29 08:27:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6)\n3a1ee47d3bd6fb535f49d081a7f7b9cd0faaaa851b431476b7b1e0f5b45078d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.847 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5f95cfdb-ec57-4651-b1e2-01961e7a1b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.848 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 kernel: tap258f6232-60: left promiscuous mode
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.865 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 nova_compute[252253]: 2025-11-29 08:27:01.869 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.873 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b8016dbb-2ff3-4986-b9f5-ff49c59606ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.889 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[63b7655a-0e50-4cea-8a55-4bd50b214857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.890 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bdf8d5-cc08-4063-8fcd-ee56032e9d19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.906 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2acd8a66-a98e-42e3-b17f-a32520b0c9af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777232, 'reachable_time': 28664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351012, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.910 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:01.910 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[f398beaa-912e-4856-a589-beae922c5c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:01 np0005539563 systemd[1]: run-netns-ovnmeta\x2d258f6232\x2d6798\x2d4075\x2dadab\x2dc07c4559ef67.mount: Deactivated successfully.
Nov 29 03:27:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:01.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.496 252257 DEBUG nova.compute.manager [req-4b00941f-fc71-4da7-bc7a-5cf33209d9e1 req-3b6a320c-1551-4fb4-9d5c-e84d95f31332 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.497 252257 DEBUG oslo_concurrency.lockutils [req-4b00941f-fc71-4da7-bc7a-5cf33209d9e1 req-3b6a320c-1551-4fb4-9d5c-e84d95f31332 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.497 252257 DEBUG oslo_concurrency.lockutils [req-4b00941f-fc71-4da7-bc7a-5cf33209d9e1 req-3b6a320c-1551-4fb4-9d5c-e84d95f31332 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.497 252257 DEBUG oslo_concurrency.lockutils [req-4b00941f-fc71-4da7-bc7a-5cf33209d9e1 req-3b6a320c-1551-4fb4-9d5c-e84d95f31332 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.498 252257 DEBUG nova.compute.manager [req-4b00941f-fc71-4da7-bc7a-5cf33209d9e1 req-3b6a320c-1551-4fb4-9d5c-e84d95f31332 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.498 252257 WARNING nova.compute.manager [req-4b00941f-fc71-4da7-bc7a-5cf33209d9e1 req-3b6a320c-1551-4fb4-9d5c-e84d95f31332 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:27:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 253 op/s
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.820 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.821 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.821 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.821 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:27:02 np0005539563 nova_compute[252253]: 2025-11-29 08:27:02.822 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293910375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.297 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.362 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.410 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.411 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.414 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.414 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.417 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.417 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.420 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.421 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.637 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.638 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3684MB free_disk=20.53622817993164GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.638 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.639 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2318822720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.819 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 59a5747d-b29d-47f7-848c-62778e994c56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.820 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.820 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.820 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.820 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.821 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:27:03 np0005539563 nova_compute[252253]: 2025-11-29 08:27:03.957 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:03.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686904777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.427 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.433 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.460 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.494 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.494 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 1.0 GiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 206 op/s
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.654 252257 DEBUG nova.compute.manager [req-ae41fe9e-25fe-4726-9319-3c97baa20ddb req-f33f9ba1-6d25-4cb9-b951-b45613701fbc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.655 252257 DEBUG oslo_concurrency.lockutils [req-ae41fe9e-25fe-4726-9319-3c97baa20ddb req-f33f9ba1-6d25-4cb9-b951-b45613701fbc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.655 252257 DEBUG oslo_concurrency.lockutils [req-ae41fe9e-25fe-4726-9319-3c97baa20ddb req-f33f9ba1-6d25-4cb9-b951-b45613701fbc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.656 252257 DEBUG oslo_concurrency.lockutils [req-ae41fe9e-25fe-4726-9319-3c97baa20ddb req-f33f9ba1-6d25-4cb9-b951-b45613701fbc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.656 252257 DEBUG nova.compute.manager [req-ae41fe9e-25fe-4726-9319-3c97baa20ddb req-f33f9ba1-6d25-4cb9-b951-b45613701fbc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.656 252257 WARNING nova.compute.manager [req-ae41fe9e-25fe-4726-9319-3c97baa20ddb req-f33f9ba1-6d25-4cb9-b951-b45613701fbc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state suspended and task_state None.
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.797 252257 INFO nova.compute.manager [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Resuming
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.799 252257 DEBUG nova.objects.instance [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'flavor' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.845 252257 DEBUG oslo_concurrency.lockutils [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.846 252257 DEBUG oslo_concurrency.lockutils [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquired lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:27:04 np0005539563 nova_compute[252253]: 2025-11-29 08:27:04.847 252257 DEBUG nova.network.neutron [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:27:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:04.932 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:04.932 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:04.933 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:05.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:06.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:06 np0005539563 nova_compute[252253]: 2025-11-29 08:27:06.234 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:06 np0005539563 nova_compute[252253]: 2025-11-29 08:27:06.495 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:27:06 np0005539563 nova_compute[252253]: 2025-11-29 08:27:06.496 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:27:06 np0005539563 nova_compute[252253]: 2025-11-29 08:27:06.496 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:27:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 1013 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 219 op/s
Nov 29 03:27:07 np0005539563 nova_compute[252253]: 2025-11-29 08:27:07.361 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:27:07 np0005539563 nova_compute[252253]: 2025-11-29 08:27:07.362 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:27:07 np0005539563 nova_compute[252253]: 2025-11-29 08:27:07.362 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:27:07 np0005539563 nova_compute[252253]: 2025-11-29 08:27:07.362 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:07.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:08.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.365 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 986 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 255 op/s
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.887 252257 DEBUG nova.network.neutron [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [{"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.918 252257 DEBUG oslo_concurrency.lockutils [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Releasing lock "refresh_cache-9c6c5334-4e97-46b8-9013-cc5269d8c1c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.926 252257 DEBUG nova.virt.libvirt.vif [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:01Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.927 252257 DEBUG nova.network.os_vif_util [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.928 252257 DEBUG nova.network.os_vif_util [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.928 252257 DEBUG os_vif [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.929 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.930 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.930 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.934 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.934 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap654e5561-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.935 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap654e5561-24, col_values=(('external_ids', {'iface-id': '654e5561-248d-48f1-9b25-da86880e3041', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:8b:2b', 'vm-uuid': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.936 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.936 252257 INFO os_vif [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24')
Nov 29 03:27:08 np0005539563 nova_compute[252253]: 2025-11-29 08:27:08.951 252257 DEBUG nova.objects.instance [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:09 np0005539563 kernel: tap654e5561-24: entered promiscuous mode
Nov 29 03:27:09 np0005539563 NetworkManager[48981]: <info>  [1764404829.0221] manager: (tap654e5561-24): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Nov 29 03:27:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:09Z|00667|binding|INFO|Claiming lport 654e5561-248d-48f1-9b25-da86880e3041 for this chassis.
Nov 29 03:27:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:09Z|00668|binding|INFO|654e5561-248d-48f1-9b25-da86880e3041: Claiming fa:16:3e:65:8b:2b 10.100.0.3
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.021 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.033 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:8b:2b 10.100.0.3'], port_security=['fa:16:3e:65:8b:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '10', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=654e5561-248d-48f1-9b25-da86880e3041) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.035 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 654e5561-248d-48f1-9b25-da86880e3041 in datapath 258f6232-6798-4075-adab-c07c4559ef67 bound to our chassis#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.036 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 258f6232-6798-4075-adab-c07c4559ef67#033[00m
Nov 29 03:27:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:09Z|00669|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 ovn-installed in OVS
Nov 29 03:27:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:09Z|00670|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 up in Southbound
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.039 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.042 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539563 systemd-udevd[351075]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.051 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[15bd7a97-e98a-409e-984a-47b86ae4b43b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.052 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap258f6232-61 in ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.055 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap258f6232-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:09 np0005539563 systemd-machined[213024]: New machine qemu-79-instance-00000099.
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.055 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7e15ef-4540-4b2e-85b9-758b1ef9e4c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.056 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[00030866-65f8-4017-9aa4-0fdc6de28ad4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 NetworkManager[48981]: <info>  [1764404829.0631] device (tap654e5561-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:09 np0005539563 NetworkManager[48981]: <info>  [1764404829.0643] device (tap654e5561-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.067 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8ef857-62b4-41a4-90af-6b1bf49e2968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 systemd[1]: Started Virtual Machine qemu-79-instance-00000099.
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.084 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1279de8c-d409-4658-8f65-78b1b72d62b1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.114 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[35c3556a-cf91-43a8-bffb-4b949b96a52c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.119 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a03429-e06d-40c1-bcf1-82439ba5925f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 NetworkManager[48981]: <info>  [1764404829.1207] manager: (tap258f6232-60): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.154 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f6527d53-f0cc-46d5-ade3-67d21c654c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.157 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a622fabf-acbf-41d5-ba43-2d54d6c48771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 NetworkManager[48981]: <info>  [1764404829.1838] device (tap258f6232-60): carrier: link connected
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.184 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3fde392b-1058-4d7b-a921-64c51e7d060f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.202 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[85251710-d97d-462a-a5e4-f3dde203296e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779695, 'reachable_time': 26571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351108, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.221 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d61944-3d1b-4a6f-a3c3-5153f1605a35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:63e2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779695, 'tstamp': 779695}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351109, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.239 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3fe2e5-c1e0-41c6-b7a8-3bcea976a0c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap258f6232-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:63:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779695, 'reachable_time': 26571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351110, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.272 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a637f4-23a8-4d3a-b63b-b6c390b904cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.335 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd6119d-605d-46af-9c00-8a51f76f32d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.336 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.337 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.337 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap258f6232-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.339 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539563 NetworkManager[48981]: <info>  [1764404829.3406] manager: (tap258f6232-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 29 03:27:09 np0005539563 kernel: tap258f6232-60: entered promiscuous mode
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.345 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap258f6232-60, col_values=(('external_ids', {'iface-id': 'c87f2e10-0d06-412e-bd89-4b9ab0d16c96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:09Z|00671|binding|INFO|Releasing lport c87f2e10-0d06-412e-bd89-4b9ab0d16c96 from this chassis (sb_readonly=0)
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.380 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.381 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.382 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2336c4f3-987e-4e33-a2d2-4cb3cdde8e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.384 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/258f6232-6798-4075-adab-c07c4559ef67.pid.haproxy
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 258f6232-6798-4075-adab-c07c4559ef67
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:09.385 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'env', 'PROCESS_TAG=haproxy-258f6232-6798-4075-adab-c07c4559ef67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/258f6232-6798-4075-adab-c07c4559ef67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.643 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.644 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404829.6426785, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.644 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.678 252257 DEBUG nova.compute.manager [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.679 252257 DEBUG nova.objects.instance [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.697 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.704 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.708 252257 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance running successfully.#033[00m
Nov 29 03:27:09 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.712 252257 DEBUG nova.virt.libvirt.guest [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.712 252257 DEBUG nova.compute.manager [None req-abdf8315-46cf-446b-b0ad-4406b11120eb 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.739 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.740 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404829.6465008, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.740 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.766 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:09 np0005539563 nova_compute[252253]: 2025-11-29 08:27:09.769 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:09 np0005539563 podman[351184]: 2025-11-29 08:27:09.778370575 +0000 UTC m=+0.049738849 container create 19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:27:09 np0005539563 systemd[1]: Started libpod-conmon-19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe.scope.
Nov 29 03:27:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d42e65c3b4ed77337a553c8ffc9c401072fb9ac002d3dd4e2650328a4314cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:09 np0005539563 podman[351184]: 2025-11-29 08:27:09.751689872 +0000 UTC m=+0.023058176 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:09 np0005539563 podman[351184]: 2025-11-29 08:27:09.866599978 +0000 UTC m=+0.137968272 container init 19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:09 np0005539563 podman[351184]: 2025-11-29 08:27:09.873435243 +0000 UTC m=+0.144803517 container start 19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:27:09 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [NOTICE]   (351203) : New worker (351205) forked
Nov 29 03:27:09 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [NOTICE]   (351203) : Loading success.
Nov 29 03:27:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 986 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1020 KiB/s wr, 218 op/s
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.779 252257 DEBUG nova.compute.manager [req-ad22c83d-818a-417f-97fd-555a5799fecc req-528a3f92-589f-4e8e-93ee-85bc506ec35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.780 252257 DEBUG oslo_concurrency.lockutils [req-ad22c83d-818a-417f-97fd-555a5799fecc req-528a3f92-589f-4e8e-93ee-85bc506ec35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.781 252257 DEBUG oslo_concurrency.lockutils [req-ad22c83d-818a-417f-97fd-555a5799fecc req-528a3f92-589f-4e8e-93ee-85bc506ec35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.782 252257 DEBUG oslo_concurrency.lockutils [req-ad22c83d-818a-417f-97fd-555a5799fecc req-528a3f92-589f-4e8e-93ee-85bc506ec35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.782 252257 DEBUG nova.compute.manager [req-ad22c83d-818a-417f-97fd-555a5799fecc req-528a3f92-589f-4e8e-93ee-85bc506ec35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.783 252257 WARNING nova.compute.manager [req-ad22c83d-818a-417f-97fd-555a5799fecc req-528a3f92-589f-4e8e-93ee-85bc506ec35c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.854 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [{"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.861 252257 DEBUG nova.compute.manager [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.868 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-59a5747d-b29d-47f7-848c-62778e994c56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.869 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.869 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:10 np0005539563 nova_compute[252253]: 2025-11-29 08:27:10.870 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.036 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.036 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.077 252257 DEBUG nova.objects.instance [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_requests' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.095 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.096 252257 INFO nova.compute.claims [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.096 252257 DEBUG nova.objects.instance [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.107 252257 DEBUG nova.objects.instance [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.170 252257 INFO nova.compute.resource_tracker [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating resource usage from migration eb7ab834-06eb-430d-b4a7-c662625ee1a3#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.171 252257 DEBUG nova.compute.resource_tracker [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Starting to track incoming migration eb7ab834-06eb-430d-b4a7-c662625ee1a3 with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.240 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.369 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472573862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.844 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.851 252257 DEBUG nova.compute.provider_tree [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.876 252257 DEBUG nova.scheduler.client.report [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.898 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:11 np0005539563 nova_compute[252253]: 2025-11-29 08:27:11.898 252257 INFO nova.compute.manager [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Migrating#033[00m
Nov 29 03:27:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:11.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:12.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 986 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 40 KiB/s wr, 121 op/s
Nov 29 03:27:12 np0005539563 nova_compute[252253]: 2025-11-29 08:27:12.881 252257 DEBUG nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:12 np0005539563 nova_compute[252253]: 2025-11-29 08:27:12.882 252257 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:12 np0005539563 nova_compute[252253]: 2025-11-29 08:27:12.882 252257 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:12 np0005539563 nova_compute[252253]: 2025-11-29 08:27:12.883 252257 DEBUG oslo_concurrency.lockutils [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:12 np0005539563 nova_compute[252253]: 2025-11-29 08:27:12.883 252257 DEBUG nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:12 np0005539563 nova_compute[252253]: 2025-11-29 08:27:12.883 252257 WARNING nova.compute.manager [req-2b981c04-8a05-462d-9dce-3575305ebf3b req-36117d4f-c546-434b-84bf-742310fca929 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:27:12
Nov 29 03:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'volumes', 'images', 'cephfs.cephfs.data', 'vms']
Nov 29 03:27:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:13 np0005539563 nova_compute[252253]: 2025-11-29 08:27:13.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:13 np0005539563 nova_compute[252253]: 2025-11-29 08:27:13.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:14 np0005539563 podman[351238]: 2025-11-29 08:27:14.525909858 +0000 UTC m=+0.069565018 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:27:14 np0005539563 podman[351239]: 2025-11-29 08:27:14.566344755 +0000 UTC m=+0.107966769 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:27:14 np0005539563 podman[351240]: 2025-11-29 08:27:14.579281835 +0000 UTC m=+0.123330825 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:27:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 50 KiB/s wr, 127 op/s
Nov 29 03:27:15 np0005539563 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:27:15 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:27:15 np0005539563 systemd-logind[785]: New session 58 of user nova.
Nov 29 03:27:15 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:27:15 np0005539563 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:27:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:16 np0005539563 systemd[351310]: Queued start job for default target Main User Target.
Nov 29 03:27:16 np0005539563 systemd[351310]: Created slice User Application Slice.
Nov 29 03:27:16 np0005539563 systemd[351310]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:27:16 np0005539563 systemd[351310]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:27:16 np0005539563 systemd[351310]: Reached target Paths.
Nov 29 03:27:16 np0005539563 systemd[351310]: Reached target Timers.
Nov 29 03:27:16 np0005539563 systemd[351310]: Starting D-Bus User Message Bus Socket...
Nov 29 03:27:16 np0005539563 systemd[351310]: Starting Create User's Volatile Files and Directories...
Nov 29 03:27:16 np0005539563 systemd[351310]: Finished Create User's Volatile Files and Directories.
Nov 29 03:27:16 np0005539563 systemd[351310]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:27:16 np0005539563 systemd[351310]: Reached target Sockets.
Nov 29 03:27:16 np0005539563 systemd[351310]: Reached target Basic System.
Nov 29 03:27:16 np0005539563 systemd[351310]: Reached target Main User Target.
Nov 29 03:27:16 np0005539563 systemd[351310]: Startup finished in 145ms.
Nov 29 03:27:16 np0005539563 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:27:16 np0005539563 systemd[1]: Started Session 58 of User nova.
Nov 29 03:27:16 np0005539563 systemd[1]: session-58.scope: Deactivated successfully.
Nov 29 03:27:16 np0005539563 systemd-logind[785]: Session 58 logged out. Waiting for processes to exit.
Nov 29 03:27:16 np0005539563 systemd-logind[785]: Removed session 58.
Nov 29 03:27:16 np0005539563 nova_compute[252253]: 2025-11-29 08:27:16.241 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:16 np0005539563 systemd-logind[785]: New session 60 of user nova.
Nov 29 03:27:16 np0005539563 systemd[1]: Started Session 60 of User nova.
Nov 29 03:27:16 np0005539563 systemd[1]: session-60.scope: Deactivated successfully.
Nov 29 03:27:16 np0005539563 systemd-logind[785]: Session 60 logged out. Waiting for processes to exit.
Nov 29 03:27:16 np0005539563 systemd-logind[785]: Removed session 60.
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:27:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 43 KiB/s wr, 115 op/s
Nov 29 03:27:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:18 np0005539563 nova_compute[252253]: 2025-11-29 08:27:18.406 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 110 op/s
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.691 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.692 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.748 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:19.748 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:19.751 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.818 252257 DEBUG nova.compute.manager [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-unplugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.819 252257 DEBUG oslo_concurrency.lockutils [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.819 252257 DEBUG oslo_concurrency.lockutils [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.819 252257 DEBUG oslo_concurrency.lockutils [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.819 252257 DEBUG nova.compute.manager [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-unplugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:19 np0005539563 nova_compute[252253]: 2025-11-29 08:27:19.820 252257 WARNING nova.compute.manager [req-0eec042c-dd59-464b-ba14-68926d47cc61 req-ecd8a12c-08e7-450c-b920-f759570536ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-unplugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:27:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:20.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:20.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 43 KiB/s wr, 82 op/s
Nov 29 03:27:21 np0005539563 nova_compute[252253]: 2025-11-29 08:27:21.244 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:21 np0005539563 nova_compute[252253]: 2025-11-29 08:27:21.306 252257 INFO nova.network.neutron [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating port f095bbfd-d901-4dd4-8831-72dab1104494 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:27:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:22.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:22.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.124 252257 DEBUG nova.compute.manager [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.124 252257 DEBUG oslo_concurrency.lockutils [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.125 252257 DEBUG oslo_concurrency.lockutils [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.125 252257 DEBUG oslo_concurrency.lockutils [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.125 252257 DEBUG nova.compute.manager [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.125 252257 WARNING nova.compute.manager [req-0a262ee5-1244-4b48-a6b1-34edecaef47c req-4f828b4c-d745-406a-97c1-4d8949d6a479 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:27:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 988 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 746 KiB/s rd, 42 KiB/s wr, 56 op/s
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.722 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.722 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.722 252257 DEBUG nova.network.neutron [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.851 252257 DEBUG nova.compute.manager [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.852 252257 DEBUG nova.compute.manager [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:22 np0005539563 nova_compute[252253]: 2025-11-29 08:27:22.852 252257 DEBUG oslo_concurrency.lockutils [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:23 np0005539563 nova_compute[252253]: 2025-11-29 08:27:23.409 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.02055597722408338 of space, bias 1.0, pg target 6.166793167225014 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432228416387493 of space, bias 1.0, pg target 1.2707515441792294 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8364046069467709 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017041224641727154 quantized to 16 (current 16)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00031952296203238413 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018106301181835102 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042603061604317886 quantized to 32 (current 32)
Nov 29 03:27:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:23.753 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:24.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:24 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:24.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 747 KiB/s rd, 43 KiB/s wr, 57 op/s
Nov 29 03:27:24 np0005539563 nova_compute[252253]: 2025-11-29 08:27:24.689 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.030 252257 DEBUG nova.network.neutron [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.069 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.072 252257 DEBUG oslo_concurrency.lockutils [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.072 252257 DEBUG nova.network.neutron [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.198 252257 DEBUG os_brick.utils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.199 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.214 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.215 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ce4aed-0f9e-4744-83fb-a98001d4bca4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.216 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.227 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.227 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[22483388-ba3b-4719-a82b-7f3c067f7ce8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.229 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.238 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.239 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[f81793a9-1660-4d15-bd44-f7552fd6840f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.240 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[39ac3683-403e-464d-b651-7578ed771631]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.241 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.274 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.277 252257 DEBUG os_brick.initiator.connectors.lightos [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.277 252257 DEBUG os_brick.initiator.connectors.lightos [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.277 252257 DEBUG os_brick.initiator.connectors.lightos [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:27:25 np0005539563 nova_compute[252253]: 2025-11-29 08:27:25.278 252257 DEBUG os_brick.utils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:27:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:26.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:26 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:26.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:26 np0005539563 nova_compute[252253]: 2025-11-29 08:27:26.246 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 42 KiB/s wr, 52 op/s
Nov 29 03:27:26 np0005539563 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:27:26 np0005539563 systemd[351310]: Activating special unit Exit the Session...
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped target Main User Target.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped target Basic System.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped target Paths.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped target Sockets.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped target Timers.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:27:26 np0005539563 systemd[351310]: Closed D-Bus User Message Bus Socket.
Nov 29 03:27:26 np0005539563 systemd[351310]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:27:26 np0005539563 systemd[351310]: Removed slice User Application Slice.
Nov 29 03:27:26 np0005539563 systemd[351310]: Reached target Shutdown.
Nov 29 03:27:26 np0005539563 systemd[351310]: Finished Exit the Session.
Nov 29 03:27:26 np0005539563 systemd[351310]: Reached target Exit the Session.
Nov 29 03:27:26 np0005539563 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:27:26 np0005539563 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:27:26 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:27:26 np0005539563 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:27:26 np0005539563 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:27:26 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:27:26 np0005539563 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:27:27 np0005539563 nova_compute[252253]: 2025-11-29 08:27:27.262 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:27:27 np0005539563 nova_compute[252253]: 2025-11-29 08:27:27.264 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:27:27 np0005539563 nova_compute[252253]: 2025-11-29 08:27:27.264 252257 INFO nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Creating image(s)#033[00m
Nov 29 03:27:27 np0005539563 nova_compute[252253]: 2025-11-29 08:27:27.300 252257 DEBUG nova.storage.rbd_utils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] creating snapshot(nova-resize) on rbd image(554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:27:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Nov 29 03:27:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 03:27:27 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Nov 29 03:27:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:28 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.041 252257 DEBUG nova.objects.instance [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.204 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.205 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Ensure instance console log exists: /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.205 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.205 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.206 252257 DEBUG oslo_concurrency.lockutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.208 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Start _get_guest_xml network_info=[{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ff1e082f-e768-4c5f-850b-5e8ce6b839d1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '554ea6a4-8de1-41bf-8772-b15e95a7fd05', 'attached_at': '2025-11-29T08:27:26.000000', 'detached_at': '', 'volume_id': 'ff1e082f-e768-4c5f-850b-5e8ce6b839d1', 'multiattach': True, 'serial': 'ff1e082f-e768-4c5f-850b-5e8ce6b839d1'}, 'attachment_id': '8381dd59-919d-420e-91e1-715e31b5424c', 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.212 252257 WARNING nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.218 252257 DEBUG nova.virt.libvirt.host [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.219 252257 DEBUG nova.virt.libvirt.host [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.231 252257 DEBUG nova.virt.libvirt.host [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.232 252257 DEBUG nova.virt.libvirt.host [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.233 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.233 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.233 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.233 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.234 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.234 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.234 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.234 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.234 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.234 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.235 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.235 252257 DEBUG nova.virt.hardware [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.235 252257 DEBUG nova.objects.instance [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.263 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 29 KiB/s wr, 56 op/s
Nov 29 03:27:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873914067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.700 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.739 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.816 252257 DEBUG nova.network.neutron [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.817 252257 DEBUG nova.network.neutron [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:28 np0005539563 nova_compute[252253]: 2025-11-29 08:27:28.841 252257 DEBUG oslo_concurrency.lockutils [req-26edc2cc-58c4-4aa8-8dbd-ec8daccef9a1 req-ac8472db-bcb1-46ee-857f-ed530e51e9c9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590103841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.169 252257 DEBUG oslo_concurrency.processutils [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.299 252257 DEBUG nova.virt.libvirt.vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='vir
tio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.299 252257 DEBUG nova.network.os_vif_util [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.300 252257 DEBUG nova.network.os_vif_util [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.303 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <uuid>554ea6a4-8de1-41bf-8772-b15e95a7fd05</uuid>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <name>instance-000000a2</name>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <memory>196608</memory>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:name>multiattach-server-0</nova:name>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:27:28</nova:creationTime>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.micro">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:memory>192</nova:memory>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <nova:port uuid="f095bbfd-d901-4dd4-8831-72dab1104494">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <entry name="serial">554ea6a4-8de1-41bf-8772-b15e95a7fd05</entry>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <entry name="uuid">554ea6a4-8de1-41bf-8772-b15e95a7fd05</entry>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/554ea6a4-8de1-41bf-8772-b15e95a7fd05_disk.config">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <shareable/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7b:13:85"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <target dev="tapf095bbfd-d9"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05/console.log" append="off"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:27:29 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:27:29 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:27:29 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:27:29 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.303 252257 DEBUG nova.virt.libvirt.vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.304 252257 DEBUG nova.network.os_vif_util [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:7b:13:85"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.304 252257 DEBUG nova.network.os_vif_util [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.304 252257 DEBUG os_vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.305 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.305 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.306 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.309 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf095bbfd-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.309 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf095bbfd-d9, col_values=(('external_ids', {'iface-id': 'f095bbfd-d901-4dd4-8831-72dab1104494', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:13:85', 'vm-uuid': '554ea6a4-8de1-41bf-8772-b15e95a7fd05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.311 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 NetworkManager[48981]: <info>  [1764404849.3122] manager: (tapf095bbfd-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.314 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.317 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.319 252257 INFO os_vif [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9')#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.392 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.392 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.393 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.393 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:7b:13:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.393 252257 INFO nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Using config drive#033[00m
Nov 29 03:27:29 np0005539563 kernel: tapf095bbfd-d9: entered promiscuous mode
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.516 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:29Z|00672|binding|INFO|Claiming lport f095bbfd-d901-4dd4-8831-72dab1104494 for this chassis.
Nov 29 03:27:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:29Z|00673|binding|INFO|f095bbfd-d901-4dd4-8831-72dab1104494: Claiming fa:16:3e:7b:13:85 10.100.0.12
Nov 29 03:27:29 np0005539563 NetworkManager[48981]: <info>  [1764404849.5173] manager: (tapf095bbfd-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Nov 29 03:27:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:29Z|00674|binding|INFO|Setting lport f095bbfd-d901-4dd4-8831-72dab1104494 ovn-installed in OVS
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.534 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:29Z|00675|binding|INFO|Setting lport f095bbfd-d901-4dd4-8831-72dab1104494 up in Southbound
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.539 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:13:85 10.100.0.12'], port_security=['fa:16:3e:7b:13:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '554ea6a4-8de1-41bf-8772-b15e95a7fd05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f095bbfd-d901-4dd4-8831-72dab1104494) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.539 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.542 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f095bbfd-d901-4dd4-8831-72dab1104494 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.544 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441#033[00m
Nov 29 03:27:29 np0005539563 systemd-machined[213024]: New machine qemu-80-instance-000000a2.
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.564 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f478be19-f519-425c-9791-9064f65b13b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:29 np0005539563 systemd-udevd[351567]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:29 np0005539563 systemd[1]: Started Virtual Machine qemu-80-instance-000000a2.
Nov 29 03:27:29 np0005539563 NetworkManager[48981]: <info>  [1764404849.5840] device (tapf095bbfd-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:29 np0005539563 NetworkManager[48981]: <info>  [1764404849.5848] device (tapf095bbfd-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.602 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[48b7e444-6670-4d81-84bc-191014da8789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.605 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[684eea20-d110-47f2-b2de-a2cdfcfa506a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.635 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[52c36493-5609-47cf-854b-f68bb5e6d72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.655 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[65303c84-9d49-4675-aed5-ab6b93493505]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 19799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351579, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.673 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9f107b-dfdf-47da-976e-8587de2c748b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766793, 'tstamp': 766793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351580, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766796, 'tstamp': 766796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351580, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.676 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:29 np0005539563 nova_compute[252253]: 2025-11-29 08:27:29.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.739 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.739 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.740 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:29.740 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.008 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404850.0084016, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.009 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.012 252257 DEBUG nova.compute.manager [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.015 252257 INFO nova.virt.libvirt.driver [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance running successfully.#033[00m
Nov 29 03:27:30 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.017 252257 DEBUG nova.virt.libvirt.guest [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.017 252257 DEBUG nova.virt.libvirt.driver [None req-1a5bf77a-60ba-470d-8deb-85e897aba576 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:27:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:30 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.319 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.324 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 19 KiB/s wr, 42 op/s
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.603 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.603 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404850.0085847, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.603 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.605 252257 DEBUG nova.compute.manager [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.606 252257 DEBUG oslo_concurrency.lockutils [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.606 252257 DEBUG oslo_concurrency.lockutils [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.606 252257 DEBUG oslo_concurrency.lockutils [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.606 252257 DEBUG nova.compute.manager [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.607 252257 WARNING nova.compute.manager [req-3c8981b9-a814-4ae2-87bd-ad2057cc8ff4 req-82764bc9-ab5d-448d-8688-06c6fb9b286a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.679 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.686 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:27:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/47885940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:27:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:27:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/47885940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:27:30 np0005539563 nova_compute[252253]: 2025-11-29 08:27:30.786 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:27:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:32 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 990 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 19 KiB/s wr, 42 op/s
Nov 29 03:27:32 np0005539563 nova_compute[252253]: 2025-11-29 08:27:32.791 252257 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:32 np0005539563 nova_compute[252253]: 2025-11-29 08:27:32.791 252257 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:32 np0005539563 nova_compute[252253]: 2025-11-29 08:27:32.792 252257 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:32 np0005539563 nova_compute[252253]: 2025-11-29 08:27:32.792 252257 DEBUG oslo_concurrency.lockutils [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:32 np0005539563 nova_compute[252253]: 2025-11-29 08:27:32.793 252257 DEBUG nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] No waiting events found dispatching network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:32 np0005539563 nova_compute[252253]: 2025-11-29 08:27:32.793 252257 WARNING nova.compute.manager [req-f6ee70c1-4e31-4a33-b4f0-0714ba937d7b req-ba503ef7-5863-4919-b66c-f5c1dd307397 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received unexpected event network-vif-plugged-f095bbfd-d901-4dd4-8831-72dab1104494 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.086 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.086 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.087 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.087 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.087 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.088 252257 INFO nova.compute.manager [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Terminating instance#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.089 252257 DEBUG nova.compute.manager [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:27:33 np0005539563 kernel: tap654e5561-24 (unregistering): left promiscuous mode
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.157 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:33Z|00676|binding|INFO|Releasing lport 654e5561-248d-48f1-9b25-da86880e3041 from this chassis (sb_readonly=0)
Nov 29 03:27:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:33Z|00677|binding|INFO|Setting lport 654e5561-248d-48f1-9b25-da86880e3041 down in Southbound
Nov 29 03:27:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:33Z|00678|binding|INFO|Removing iface tap654e5561-24 ovn-installed in OVS
Nov 29 03:27:33 np0005539563 NetworkManager[48981]: <info>  [1764404853.1614] device (tap654e5561-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.163 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.173 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:8b:2b 10.100.0.3'], port_security=['fa:16:3e:65:8b:2b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9c6c5334-4e97-46b8-9013-cc5269d8c1c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-258f6232-6798-4075-adab-c07c4559ef67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9406fbc6fef486fa5b0e79549e78d00', 'neutron:revision_number': '11', 'neutron:security_group_ids': '43e688c9-ebb1-4f07-b4e2-f54248247a71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aac86bc6-5ac8-43c8-9a9b-f058a154968b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=654e5561-248d-48f1-9b25-da86880e3041) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.174 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 654e5561-248d-48f1-9b25-da86880e3041 in datapath 258f6232-6798-4075-adab-c07c4559ef67 unbound from our chassis#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.176 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 258f6232-6798-4075-adab-c07c4559ef67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.177 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8b5475-3fe1-4d05-b1cc-be8678349eae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.178 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 namespace which is not needed anymore#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.198 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 29 03:27:33 np0005539563 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000099.scope: Consumed 1.618s CPU time.
Nov 29 03:27:33 np0005539563 systemd-machined[213024]: Machine qemu-79-instance-00000099 terminated.
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.323 252257 INFO nova.virt.libvirt.driver [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Instance destroyed successfully.#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.323 252257 DEBUG nova.objects.instance [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lazy-loading 'resources' on Instance uuid 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:33 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [NOTICE]   (351203) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:33 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [NOTICE]   (351203) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:33 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [WARNING]  (351203) : Exiting Master process...
Nov 29 03:27:33 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [WARNING]  (351203) : Exiting Master process...
Nov 29 03:27:33 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [ALERT]    (351203) : Current worker (351205) exited with code 143 (Terminated)
Nov 29 03:27:33 np0005539563 neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67[351199]: [WARNING]  (351203) : All workers exited. Exiting... (0)
Nov 29 03:27:33 np0005539563 systemd[1]: libpod-19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe.scope: Deactivated successfully.
Nov 29 03:27:33 np0005539563 podman[351668]: 2025-11-29 08:27:33.340747515 +0000 UTC m=+0.052502712 container died 19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:27:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-12d42e65c3b4ed77337a553c8ffc9c401072fb9ac002d3dd4e2650328a4314cb-merged.mount: Deactivated successfully.
Nov 29 03:27:33 np0005539563 podman[351668]: 2025-11-29 08:27:33.382694242 +0000 UTC m=+0.094449439 container cleanup 19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:27:33 np0005539563 systemd[1]: libpod-conmon-19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe.scope: Deactivated successfully.
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 podman[351706]: 2025-11-29 08:27:33.458030843 +0000 UTC m=+0.048033202 container remove 19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.464 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1042889f-2d41-42e3-a4da-eb5ce238b203]: (4, ('Sat Nov 29 08:27:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe)\n19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe\nSat Nov 29 08:27:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 (19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe)\n19254ffc4b9cb741f5b74128654543d3f1bb1ecafe7c6f20f3a965d3e17c06fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.468 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[484a6952-4421-402e-8bb8-b5633a565887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.469 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap258f6232-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 kernel: tap258f6232-60: left promiscuous mode
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.488 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.494 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a449e8-fdf1-44b8-bee0-f5d1b58d6092]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.506 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[23c66860-3c35-4c4d-ae63-9c5d7a6e9de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.510 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7990a597-02a2-4200-b75f-180b55db3425]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.529 252257 DEBUG nova.virt.libvirt.vif [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:23:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1933381778',display_name='tempest-ServersNegativeTestJSON-server-1933381778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1933381778',id=153,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9406fbc6fef486fa5b0e79549e78d00',ramdisk_id='',reservation_id='r-dbgir4tj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-213437080',owner_user_name='tempest-ServersNegativeTestJSON-213437080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:09Z,user_data=None,user_id='3a37c720b9bb4273b66cd2dce30fbf48',uuid=9c6c5334-4e97-46b8-9013-cc5269d8c1c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.530 252257 DEBUG nova.network.os_vif_util [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converting VIF {"id": "654e5561-248d-48f1-9b25-da86880e3041", "address": "fa:16:3e:65:8b:2b", "network": {"id": "258f6232-6798-4075-adab-c07c4559ef67", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1452555004-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9406fbc6fef486fa5b0e79549e78d00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap654e5561-24", "ovs_interfaceid": "654e5561-248d-48f1-9b25-da86880e3041", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.530 252257 DEBUG nova.network.os_vif_util [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.531 252257 DEBUG os_vif [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.532 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.532 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap654e5561-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:33 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.538 252257 INFO os_vif [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:8b:2b,bridge_name='br-int',has_traffic_filtering=True,id=654e5561-248d-48f1-9b25-da86880e3041,network=Network(258f6232-6798-4075-adab-c07c4559ef67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap654e5561-24')#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.535 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[70440fcc-084d-4f01-97b7-607d13d443c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779688, 'reachable_time': 36957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351724, 'error': None, 'target': 'ovnmeta-258f6232-6798-4075-adab-c07c4559ef67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.545 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-258f6232-6798-4075-adab-c07c4559ef67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:33.545 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[2a42e7b9-8231-455e-af60-19ec0b7ddddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539563 systemd[1]: run-netns-ovnmeta\x2d258f6232\x2d6798\x2d4075\x2dadab\x2dc07c4559ef67.mount: Deactivated successfully.
Nov 29 03:27:34 np0005539563 nova_compute[252253]: 2025-11-29 08:27:33.999 252257 INFO nova.virt.libvirt.driver [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deleting instance files /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_del#033[00m
Nov 29 03:27:34 np0005539563 nova_compute[252253]: 2025-11-29 08:27:34.000 252257 INFO nova.virt.libvirt.driver [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deletion of /var/lib/nova/instances/9c6c5334-4e97-46b8-9013-cc5269d8c1c1_del complete#033[00m
Nov 29 03:27:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:34.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:34 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:34.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:34 np0005539563 nova_compute[252253]: 2025-11-29 08:27:34.321 252257 INFO nova.compute.manager [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:27:34 np0005539563 nova_compute[252253]: 2025-11-29 08:27:34.322 252257 DEBUG oslo.service.loopingcall [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:27:34 np0005539563 nova_compute[252253]: 2025-11-29 08:27:34.323 252257 DEBUG nova.compute.manager [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:27:34 np0005539563 nova_compute[252253]: 2025-11-29 08:27:34.323 252257 DEBUG nova.network.neutron [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:27:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 907 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 170 op/s
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c38b661e-eeaa-491f-9c3d-dcfcea2ab05c does not exist
Nov 29 03:27:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9daaf045-c0b5-4c7e-aefa-8618b4ff6659 does not exist
Nov 29 03:27:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3f52e97c-e5eb-444f-9f0f-8e296f40a861 does not exist
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:27:35 np0005539563 nova_compute[252253]: 2025-11-29 08:27:35.300 252257 DEBUG nova.network.neutron [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:35 np0005539563 nova_compute[252253]: 2025-11-29 08:27:35.324 252257 INFO nova.compute.manager [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Took 1.00 seconds to deallocate network for instance.#033[00m
Nov 29 03:27:35 np0005539563 nova_compute[252253]: 2025-11-29 08:27:35.407 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:35 np0005539563 nova_compute[252253]: 2025-11-29 08:27:35.407 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:35 np0005539563 nova_compute[252253]: 2025-11-29 08:27:35.594 252257 DEBUG oslo_concurrency.processutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:35 np0005539563 podman[352017]: 2025-11-29 08:27:35.740371478 +0000 UTC m=+0.077646165 container create de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:27:35 np0005539563 podman[352017]: 2025-11-29 08:27:35.689465049 +0000 UTC m=+0.026739816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696924827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.025 252257 DEBUG oslo_concurrency.processutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.032 252257 DEBUG nova.compute.provider_tree [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:36.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:36 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:36.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.075 252257 DEBUG nova.scheduler.client.report [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.125 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.158 252257 DEBUG nova.compute.manager [req-2efd1af0-dcba-4824-980f-154cad2d8092 req-9ba2901b-44f9-43a8-9684-f78c2285589c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-deleted-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.161 252257 INFO nova.scheduler.client.report [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Deleted allocations for instance 9c6c5334-4e97-46b8-9013-cc5269d8c1c1#033[00m
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.248 252257 DEBUG oslo_concurrency.lockutils [None req-587ed33e-fd5d-4eff-bf3f-698927c1be7f 3a37c720b9bb4273b66cd2dce30fbf48 d9406fbc6fef486fa5b0e79549e78d00 - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.282 252257 DEBUG nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.282 252257 DEBUG oslo_concurrency.lockutils [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.283 252257 DEBUG oslo_concurrency.lockutils [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.283 252257 DEBUG oslo_concurrency.lockutils [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.283 252257 DEBUG nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.284 252257 WARNING nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-unplugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state deleted and task_state None.
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.284 252257 DEBUG nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.284 252257 DEBUG oslo_concurrency.lockutils [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.285 252257 DEBUG oslo_concurrency.lockutils [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.285 252257 DEBUG oslo_concurrency.lockutils [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9c6c5334-4e97-46b8-9013-cc5269d8c1c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:36 np0005539563 systemd[1]: Started libpod-conmon-de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2.scope.
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.286 252257 DEBUG nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] No waiting events found dispatching network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:27:36 np0005539563 nova_compute[252253]: 2025-11-29 08:27:36.286 252257 WARNING nova.compute.manager [req-8c0528ae-c64d-4cb9-a582-350130e71335 req-22d83c8a-4c1b-4bfb-9b0d-a4f1020a7084 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Received unexpected event network-vif-plugged-654e5561-248d-48f1-9b25-da86880e3041 for instance with vm_state deleted and task_state None.
Nov 29 03:27:36 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 795 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 240 op/s
Nov 29 03:27:36 np0005539563 podman[352017]: 2025-11-29 08:27:36.948426196 +0000 UTC m=+1.285700983 container init de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:27:36 np0005539563 podman[352017]: 2025-11-29 08:27:36.960618016 +0000 UTC m=+1.297892703 container start de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 03:27:36 np0005539563 clever_murdock[352054]: 167 167
Nov 29 03:27:36 np0005539563 systemd[1]: libpod-de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2.scope: Deactivated successfully.
Nov 29 03:27:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 03:27:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Nov 29 03:27:37 np0005539563 podman[352017]: 2025-11-29 08:27:37.420443534 +0000 UTC m=+1.757718241 container attach de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:27:37 np0005539563 podman[352017]: 2025-11-29 08:27:37.420984118 +0000 UTC m=+1.758258815 container died de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:27:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5514bffb419dbebc89c8f3619eadaa7d31d246db5cd2402e9dd3852ea18929e7-merged.mount: Deactivated successfully.
Nov 29 03:27:37 np0005539563 podman[352017]: 2025-11-29 08:27:37.53144873 +0000 UTC m=+1.868723427 container remove de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:27:37 np0005539563 systemd[1]: libpod-conmon-de021ac723d704364879b7e7e43288aea3279812070e2eab7cef2393c5ba14a2.scope: Deactivated successfully.
Nov 29 03:27:37 np0005539563 podman[352080]: 2025-11-29 08:27:37.783583571 +0000 UTC m=+0.105673213 container create 2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:27:37 np0005539563 podman[352080]: 2025-11-29 08:27:37.701930229 +0000 UTC m=+0.024019851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:37 np0005539563 systemd[1]: Started libpod-conmon-2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a.scope.
Nov 29 03:27:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a450286d96b374dfa264ef329119711cb537045d01789dcf0dfd0da7367e145d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a450286d96b374dfa264ef329119711cb537045d01789dcf0dfd0da7367e145d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a450286d96b374dfa264ef329119711cb537045d01789dcf0dfd0da7367e145d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a450286d96b374dfa264ef329119711cb537045d01789dcf0dfd0da7367e145d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a450286d96b374dfa264ef329119711cb537045d01789dcf0dfd0da7367e145d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:37 np0005539563 podman[352080]: 2025-11-29 08:27:37.973540768 +0000 UTC m=+0.295630390 container init 2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:27:37 np0005539563 podman[352080]: 2025-11-29 08:27:37.981272377 +0000 UTC m=+0.303361979 container start 2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:27:37 np0005539563 podman[352080]: 2025-11-29 08:27:37.984571357 +0000 UTC m=+0.306660979 container attach 2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:38.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:38 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:38.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:38 np0005539563 nova_compute[252253]: 2025-11-29 08:27:38.414 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Nov 29 03:27:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 03:27:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Nov 29 03:27:38 np0005539563 nova_compute[252253]: 2025-11-29 08:27:38.534 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 738 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 9.4 KiB/s wr, 313 op/s
Nov 29 03:27:38 np0005539563 nervous_kare[352094]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:27:38 np0005539563 nervous_kare[352094]: --> relative data size: 1.0
Nov 29 03:27:38 np0005539563 nervous_kare[352094]: --> All data devices are unavailable
Nov 29 03:27:38 np0005539563 systemd[1]: libpod-2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a.scope: Deactivated successfully.
Nov 29 03:27:38 np0005539563 podman[352080]: 2025-11-29 08:27:38.832064737 +0000 UTC m=+1.154154339 container died 2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:27:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a450286d96b374dfa264ef329119711cb537045d01789dcf0dfd0da7367e145d-merged.mount: Deactivated successfully.
Nov 29 03:27:38 np0005539563 podman[352080]: 2025-11-29 08:27:38.900328646 +0000 UTC m=+1.222418238 container remove 2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:27:38 np0005539563 systemd[1]: libpod-conmon-2457f029919a22e673cc03f3d023ae9eeafb2f160495733f3f8bd2080427798a.scope: Deactivated successfully.
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.602316655 +0000 UTC m=+0.041200498 container create 69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:39 np0005539563 systemd[1]: Started libpod-conmon-69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63.scope.
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.583437863 +0000 UTC m=+0.022321726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.704986086 +0000 UTC m=+0.143869929 container init 69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.712073108 +0000 UTC m=+0.150956941 container start 69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.717058113 +0000 UTC m=+0.155941976 container attach 69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:39 np0005539563 laughing_bell[352285]: 167 167
Nov 29 03:27:39 np0005539563 systemd[1]: libpod-69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63.scope: Deactivated successfully.
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.723144098 +0000 UTC m=+0.162027971 container died 69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:27:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a977dee171a88bfacde762e6c779783de95950d9f240a10d89b09fcbe7a3e738-merged.mount: Deactivated successfully.
Nov 29 03:27:39 np0005539563 podman[352269]: 2025-11-29 08:27:39.773699188 +0000 UTC m=+0.212583071 container remove 69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:27:39 np0005539563 systemd[1]: libpod-conmon-69a245ab8d439bf5429182de8494d6b1a982d9f7ece479209d695021263fad63.scope: Deactivated successfully.
Nov 29 03:27:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:40.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:40 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:40.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:40 np0005539563 podman[352307]: 2025-11-29 08:27:40.004310795 +0000 UTC m=+0.023317942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:40 np0005539563 podman[352307]: 2025-11-29 08:27:40.368124781 +0000 UTC m=+0.387131908 container create 42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:27:40 np0005539563 systemd[1]: Started libpod-conmon-42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804.scope.
Nov 29 03:27:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3ccbf39a62519bcf5df4ecc22edb87957c5ed076dafc5dc12f8763fb3aa39b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3ccbf39a62519bcf5df4ecc22edb87957c5ed076dafc5dc12f8763fb3aa39b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3ccbf39a62519bcf5df4ecc22edb87957c5ed076dafc5dc12f8763fb3aa39b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3ccbf39a62519bcf5df4ecc22edb87957c5ed076dafc5dc12f8763fb3aa39b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:40 np0005539563 podman[352307]: 2025-11-29 08:27:40.475632804 +0000 UTC m=+0.494639961 container init 42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:27:40 np0005539563 podman[352307]: 2025-11-29 08:27:40.483321602 +0000 UTC m=+0.502328729 container start 42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:27:40 np0005539563 podman[352307]: 2025-11-29 08:27:40.487031202 +0000 UTC m=+0.506038359 container attach 42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Nov 29 03:27:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 03:27:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Nov 29 03:27:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 237 op/s
Nov 29 03:27:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:41Z|00679|binding|INFO|Releasing lport 42a41b42-1527-4cfa-9dcf-4b7f34b092b7 from this chassis (sb_readonly=0)
Nov 29 03:27:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:41Z|00680|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:27:41 np0005539563 nova_compute[252253]: 2025-11-29 08:27:41.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]: {
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:    "0": [
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:        {
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "devices": [
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "/dev/loop3"
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            ],
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "lv_name": "ceph_lv0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "lv_size": "7511998464",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "name": "ceph_lv0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "tags": {
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.cluster_name": "ceph",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.crush_device_class": "",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.encrypted": "0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.osd_id": "0",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.type": "block",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:                "ceph.vdo": "0"
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            },
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "type": "block",
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:            "vg_name": "ceph_vg0"
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:        }
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]:    ]
Nov 29 03:27:41 np0005539563 exciting_fermi[352373]: }
Nov 29 03:27:41 np0005539563 systemd[1]: libpod-42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804.scope: Deactivated successfully.
Nov 29 03:27:41 np0005539563 podman[352307]: 2025-11-29 08:27:41.277912519 +0000 UTC m=+1.296919676 container died 42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:27:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5c3ccbf39a62519bcf5df4ecc22edb87957c5ed076dafc5dc12f8763fb3aa39b-merged.mount: Deactivated successfully.
Nov 29 03:27:41 np0005539563 podman[352307]: 2025-11-29 08:27:41.343992059 +0000 UTC m=+1.362999186 container remove 42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:27:41 np0005539563 systemd[1]: libpod-conmon-42bdda67135c7fb8954f05fcc57e892af030e28630d79d99a74c79cbf09f1804.scope: Deactivated successfully.
Nov 29 03:27:41 np0005539563 podman[352536]: 2025-11-29 08:27:41.985351815 +0000 UTC m=+0.040281822 container create 06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_borg, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:42 np0005539563 systemd[1]: Started libpod-conmon-06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae.scope.
Nov 29 03:27:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:42.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:42 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:42.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:42 np0005539563 podman[352536]: 2025-11-29 08:27:41.967400559 +0000 UTC m=+0.022330576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:42 np0005539563 podman[352536]: 2025-11-29 08:27:42.070388469 +0000 UTC m=+0.125318486 container init 06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:27:42 np0005539563 podman[352536]: 2025-11-29 08:27:42.077787879 +0000 UTC m=+0.132717876 container start 06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_borg, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:27:42 np0005539563 podman[352536]: 2025-11-29 08:27:42.082444455 +0000 UTC m=+0.137374532 container attach 06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_borg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:42 np0005539563 funny_borg[352553]: 167 167
Nov 29 03:27:42 np0005539563 systemd[1]: libpod-06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae.scope: Deactivated successfully.
Nov 29 03:27:42 np0005539563 conmon[352553]: conmon 06dc1ab5ad933fd91031 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae.scope/container/memory.events
Nov 29 03:27:42 np0005539563 podman[352536]: 2025-11-29 08:27:42.084337377 +0000 UTC m=+0.139267374 container died 06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_borg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:27:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-46537bd0f5286e525b55179170b11ef378b993da3f0dec33f55de3921f16e9c1-merged.mount: Deactivated successfully.
Nov 29 03:27:42 np0005539563 podman[352536]: 2025-11-29 08:27:42.117279049 +0000 UTC m=+0.172209046 container remove 06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:27:42 np0005539563 systemd[1]: libpod-conmon-06dc1ab5ad933fd91031c4a1fed61bb38daf76d5edf36f72a14a6ebc1e8318ae.scope: Deactivated successfully.
Nov 29 03:27:42 np0005539563 podman[352579]: 2025-11-29 08:27:42.297508582 +0000 UTC m=+0.037717203 container create 3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:27:42 np0005539563 systemd[1]: Started libpod-conmon-3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a.scope.
Nov 29 03:27:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:27:42 np0005539563 podman[352579]: 2025-11-29 08:27:42.282339771 +0000 UTC m=+0.022548412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2491bb65c0331f0957d2a51cb0da670c7fd1af1d8a6644ae39cc23e2855f6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2491bb65c0331f0957d2a51cb0da670c7fd1af1d8a6644ae39cc23e2855f6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2491bb65c0331f0957d2a51cb0da670c7fd1af1d8a6644ae39cc23e2855f6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2491bb65c0331f0957d2a51cb0da670c7fd1af1d8a6644ae39cc23e2855f6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:42 np0005539563 podman[352579]: 2025-11-29 08:27:42.40522294 +0000 UTC m=+0.145431581 container init 3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:27:42 np0005539563 podman[352579]: 2025-11-29 08:27:42.413078313 +0000 UTC m=+0.153286934 container start 3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:27:42 np0005539563 podman[352579]: 2025-11-29 08:27:42.417970256 +0000 UTC m=+0.158178877 container attach 3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:27:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s rd, 1.3 MiB/s wr, 119 op/s
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:43 np0005539563 pensive_ride[352596]: {
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:        "osd_id": 0,
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:        "type": "bluestore"
Nov 29 03:27:43 np0005539563 pensive_ride[352596]:    }
Nov 29 03:27:43 np0005539563 pensive_ride[352596]: }
Nov 29 03:27:43 np0005539563 systemd[1]: libpod-3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a.scope: Deactivated successfully.
Nov 29 03:27:43 np0005539563 podman[352579]: 2025-11-29 08:27:43.272700752 +0000 UTC m=+1.012909373 container died 3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6b2491bb65c0331f0957d2a51cb0da670c7fd1af1d8a6644ae39cc23e2855f6b-merged.mount: Deactivated successfully.
Nov 29 03:27:43 np0005539563 podman[352579]: 2025-11-29 08:27:43.339155022 +0000 UTC m=+1.079363633 container remove 3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:27:43 np0005539563 systemd[1]: libpod-conmon-3e300810694456a5a45044349736acbd713b3d16b9a49661b225495fe02c1e0a.scope: Deactivated successfully.
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 102a628d-4907-4b99-b7be-c521003b44f1 does not exist
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8a56850a-c7e5-416a-ac0b-99b12ca34830 does not exist
Nov 29 03:27:43 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 27544b10-dbd0-4ea2-bab7-f357c6852eac does not exist
Nov 29 03:27:43 np0005539563 nova_compute[252253]: 2025-11-29 08:27:43.416 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:43 np0005539563 nova_compute[252253]: 2025-11-29 08:27:43.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Nov 29 03:27:43 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:43Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:13:85 10.100.0.12
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:27:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:44.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:44 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:44.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 685 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 3.4 MiB/s wr, 187 op/s
Nov 29 03:27:45 np0005539563 podman[352683]: 2025-11-29 08:27:45.512677345 +0000 UTC m=+0.056945004 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:27:45 np0005539563 podman[352684]: 2025-11-29 08:27:45.524884396 +0000 UTC m=+0.062873475 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:27:45 np0005539563 podman[352685]: 2025-11-29 08:27:45.542332369 +0000 UTC m=+0.083484623 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.980 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.981 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.981 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.981 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.981 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.982 252257 INFO nova.compute.manager [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Terminating instance#033[00m
Nov 29 03:27:45 np0005539563 nova_compute[252253]: 2025-11-29 08:27:45.983 252257 DEBUG nova.compute.manager [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:27:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:46.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68a2e6f0 =====
Nov 29 03:27:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68a2e6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:46 np0005539563 radosgw[93236]: beast: 0x7efd68a2e6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:46.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:46 np0005539563 kernel: tap1a4ca7b6-25 (unregistering): left promiscuous mode
Nov 29 03:27:46 np0005539563 NetworkManager[48981]: <info>  [1764404866.0642] device (tap1a4ca7b6-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:46Z|00681|binding|INFO|Releasing lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 from this chassis (sb_readonly=0)
Nov 29 03:27:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:46Z|00682|binding|INFO|Setting lport 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 down in Southbound
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.074 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:46Z|00683|binding|INFO|Removing iface tap1a4ca7b6-25 ovn-installed in OVS
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.076 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.090 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.110 252257 DEBUG nova.compute.manager [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.111 252257 DEBUG nova.compute.manager [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.111 252257 DEBUG oslo_concurrency.lockutils [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.111 252257 DEBUG oslo_concurrency.lockutils [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.111 252257 DEBUG nova.network.neutron [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.113 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:96:ee 10.100.0.9'], port_security=['fa:16:3e:85:96:ee 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '59a5747d-b29d-47f7-848c-62778e994c56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7008b597-8de2-4973-801f-fcc733e4f6c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09cc8c3182d845f597dda064f9013941', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'dbe43642-7b06-4c12-a982-e7ee16790d67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f1261764-1af6-4456-be86-7981c6d9ba2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1a4ca7b6-25c7-44e8-9189-4d8759d2d061) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.114 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1a4ca7b6-25c7-44e8-9189-4d8759d2d061 in datapath 7008b597-8de2-4973-801f-fcc733e4f6c9 unbound from our chassis#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.116 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7008b597-8de2-4973-801f-fcc733e4f6c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.117 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2692d346-9124-4e17-9c0e-e1ae9cded3ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.117 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 namespace which is not needed anymore#033[00m
Nov 29 03:27:46 np0005539563 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Nov 29 03:27:46 np0005539563 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009b.scope: Consumed 18.455s CPU time.
Nov 29 03:27:46 np0005539563 systemd-machined[213024]: Machine qemu-77-instance-0000009b terminated.
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.219 252257 INFO nova.virt.libvirt.driver [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Instance destroyed successfully.#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.219 252257 DEBUG nova.objects.instance [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lazy-loading 'resources' on Instance uuid 59a5747d-b29d-47f7-848c-62778e994c56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.282 252257 DEBUG nova.virt.libvirt.vif [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-92344656',display_name='tempest-ServerRescueNegativeTestJSON-server-92344656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-92344656',id=155,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09cc8c3182d845f597dda064f9013941',ramdisk_id='',reservation_id='r-0zxpzi3w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-754875869',owner_user_name='tempest-ServerRescueNegativeTestJSON-754875869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:46Z,user_data=None,user_id='dfcf2db50da745c09bffcf32ec016854',uuid=59a5747d-b29d-47f7-848c-62778e994c56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.282 252257 DEBUG nova.network.os_vif_util [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converting VIF {"id": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "address": "fa:16:3e:85:96:ee", "network": {"id": "7008b597-8de2-4973-801f-fcc733e4f6c9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1620781527-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09cc8c3182d845f597dda064f9013941", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a4ca7b6-25", "ovs_interfaceid": "1a4ca7b6-25c7-44e8-9189-4d8759d2d061", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.283 252257 DEBUG nova.network.os_vif_util [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.283 252257 DEBUG os_vif [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.285 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a4ca7b6-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.291 252257 INFO os_vif [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:96:ee,bridge_name='br-int',has_traffic_filtering=True,id=1a4ca7b6-25c7-44e8-9189-4d8759d2d061,network=Network(7008b597-8de2-4973-801f-fcc733e4f6c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a4ca7b6-25')#033[00m
Nov 29 03:27:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [NOTICE]   (349088) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [NOTICE]   (349088) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [WARNING]  (349088) : Exiting Master process...
Nov 29 03:27:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [WARNING]  (349088) : Exiting Master process...
Nov 29 03:27:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [ALERT]    (349088) : Current worker (349090) exited with code 143 (Terminated)
Nov 29 03:27:46 np0005539563 neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9[349084]: [WARNING]  (349088) : All workers exited. Exiting... (0)
Nov 29 03:27:46 np0005539563 systemd[1]: libpod-34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c.scope: Deactivated successfully.
Nov 29 03:27:46 np0005539563 podman[352769]: 2025-11-29 08:27:46.369717403 +0000 UTC m=+0.163812928 container died 34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:27:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0f333987072d7680e6059494cf08dabe134384b6a5df62e84257a24789c13b0f-merged.mount: Deactivated successfully.
Nov 29 03:27:46 np0005539563 podman[352769]: 2025-11-29 08:27:46.431052515 +0000 UTC m=+0.225148060 container cleanup 34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:27:46 np0005539563 systemd[1]: libpod-conmon-34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c.scope: Deactivated successfully.
Nov 29 03:27:46 np0005539563 podman[352831]: 2025-11-29 08:27:46.502537292 +0000 UTC m=+0.046531992 container remove 34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.508 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a19e10c0-cc18-43fd-8640-e81a1826a5c5]: (4, ('Sat Nov 29 08:27:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c)\n34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c\nSat Nov 29 08:27:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 (34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c)\n34194f9e27422938564f8a91e6096f1f7d7a401d8b94fc49d376b0b299a8a71c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.510 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c890757b-2745-4584-922b-3aa3ccca5c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.510 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7008b597-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.512 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 kernel: tap7008b597-80: left promiscuous mode
Nov 29 03:27:46 np0005539563 nova_compute[252253]: 2025-11-29 08:27:46.572 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.575 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1a71a7a2-7038-498c-967d-58bdc9dbed2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.593 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8d23634b-17b9-4552-83ac-740770a57bc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.594 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[18bb36ab-6a5e-43cd-8d3d-632d38f0ad61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 789 KiB/s rd, 2.6 MiB/s wr, 207 op/s
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.611 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a5244588-d039-4223-a0d4-f9f4bd7df93c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771349, 'reachable_time': 33969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352846, 'error': None, 'target': 'ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:46 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7008b597\x2d8de2\x2d4973\x2d801f\x2dfcc733e4f6c9.mount: Deactivated successfully.
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.614 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7008b597-8de2-4973-801f-fcc733e4f6c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:27:46.614 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[410cc165-5dfa-471d-974e-d07327530357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539563 nova_compute[252253]: 2025-11-29 08:27:47.503 252257 DEBUG nova.compute.manager [req-b4dcf8bb-b869-4e7a-b839-fd3e4df62952 req-50f7ebee-1271-4ac0-b870-2be5f031aa4f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:47 np0005539563 nova_compute[252253]: 2025-11-29 08:27:47.503 252257 DEBUG oslo_concurrency.lockutils [req-b4dcf8bb-b869-4e7a-b839-fd3e4df62952 req-50f7ebee-1271-4ac0-b870-2be5f031aa4f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:47 np0005539563 nova_compute[252253]: 2025-11-29 08:27:47.504 252257 DEBUG oslo_concurrency.lockutils [req-b4dcf8bb-b869-4e7a-b839-fd3e4df62952 req-50f7ebee-1271-4ac0-b870-2be5f031aa4f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:47 np0005539563 nova_compute[252253]: 2025-11-29 08:27:47.504 252257 DEBUG oslo_concurrency.lockutils [req-b4dcf8bb-b869-4e7a-b839-fd3e4df62952 req-50f7ebee-1271-4ac0-b870-2be5f031aa4f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:47 np0005539563 nova_compute[252253]: 2025-11-29 08:27:47.504 252257 DEBUG nova.compute.manager [req-b4dcf8bb-b869-4e7a-b839-fd3e4df62952 req-50f7ebee-1271-4ac0-b870-2be5f031aa4f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:47 np0005539563 nova_compute[252253]: 2025-11-29 08:27:47.504 252257 DEBUG nova.compute.manager [req-b4dcf8bb-b869-4e7a-b839-fd3e4df62952 req-50f7ebee-1271-4ac0-b870-2be5f031aa4f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-unplugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:27:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:48.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:48.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.149 252257 INFO nova.virt.libvirt.driver [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Deleting instance files /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56_del#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.150 252257 INFO nova.virt.libvirt.driver [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Deletion of /var/lib/nova/instances/59a5747d-b29d-47f7-848c-62778e994c56_del complete#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.322 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404853.3209941, 9c6c5334-4e97-46b8-9013-cc5269d8c1c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.323 252257 INFO nova.compute.manager [-] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.366 252257 DEBUG nova.compute.manager [None req-94ae8eba-a7a9-405a-a04e-f0cdab836241 - - - - - -] [instance: 9c6c5334-4e97-46b8-9013-cc5269d8c1c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.411 252257 DEBUG nova.network.neutron [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.412 252257 DEBUG nova.network.neutron [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.418 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Nov 29 03:27:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.720 252257 INFO nova.compute.manager [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Took 2.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.723 252257 DEBUG oslo.service.loopingcall [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.723 252257 DEBUG nova.compute.manager [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:27:48 np0005539563 nova_compute[252253]: 2025-11-29 08:27:48.724 252257 DEBUG nova.network.neutron [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.354 252257 DEBUG oslo_concurrency.lockutils [req-45ac3d6b-1cb3-4954-8dad-5c29937df7d9 req-13ed9bfb-b67d-48a1-85dc-6042fdd77f01 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 03:27:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.604 252257 DEBUG nova.compute.manager [req-4cc50353-3550-4d83-ab50-0c38119754fa req-1fc55204-6fc8-4b7e-b7dd-59fcf1764633 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.605 252257 DEBUG oslo_concurrency.lockutils [req-4cc50353-3550-4d83-ab50-0c38119754fa req-1fc55204-6fc8-4b7e-b7dd-59fcf1764633 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "59a5747d-b29d-47f7-848c-62778e994c56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.605 252257 DEBUG oslo_concurrency.lockutils [req-4cc50353-3550-4d83-ab50-0c38119754fa req-1fc55204-6fc8-4b7e-b7dd-59fcf1764633 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.605 252257 DEBUG oslo_concurrency.lockutils [req-4cc50353-3550-4d83-ab50-0c38119754fa req-1fc55204-6fc8-4b7e-b7dd-59fcf1764633 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.606 252257 DEBUG nova.compute.manager [req-4cc50353-3550-4d83-ab50-0c38119754fa req-1fc55204-6fc8-4b7e-b7dd-59fcf1764633 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] No waiting events found dispatching network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:49 np0005539563 nova_compute[252253]: 2025-11-29 08:27:49.606 252257 WARNING nova.compute.manager [req-4cc50353-3550-4d83-ab50-0c38119754fa req-1fc55204-6fc8-4b7e-b7dd-59fcf1764633 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received unexpected event network-vif-plugged-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:27:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:50.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:50.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539563 nova_compute[252253]: 2025-11-29 08:27:50.460 252257 DEBUG nova.network.neutron [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:50 np0005539563 nova_compute[252253]: 2025-11-29 08:27:50.486 252257 INFO nova.compute.manager [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Took 1.76 seconds to deallocate network for instance.#033[00m
Nov 29 03:27:50 np0005539563 nova_compute[252253]: 2025-11-29 08:27:50.530 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:50 np0005539563 nova_compute[252253]: 2025-11-29 08:27:50.530 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 549 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Nov 29 03:27:50 np0005539563 nova_compute[252253]: 2025-11-29 08:27:50.656 252257 DEBUG oslo_concurrency.processutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:50 np0005539563 nova_compute[252253]: 2025-11-29 08:27:50.710 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.287 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459429466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.724 252257 DEBUG oslo_concurrency.processutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.729 252257 DEBUG nova.compute.provider_tree [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.762 252257 DEBUG nova.scheduler.client.report [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.802 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.827 252257 INFO nova.scheduler.client.report [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Deleted allocations for instance 59a5747d-b29d-47f7-848c-62778e994c56#033[00m
Nov 29 03:27:51 np0005539563 nova_compute[252253]: 2025-11-29 08:27:51.913 252257 DEBUG oslo_concurrency.lockutils [None req-f7c044d0-b604-433a-8945-6c5b86e3f781 dfcf2db50da745c09bffcf32ec016854 09cc8c3182d845f597dda064f9013941 - - default default] Lock "59a5747d-b29d-47f7-848c-62778e994c56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:52.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:52 np0005539563 nova_compute[252253]: 2025-11-29 08:27:52.256 252257 DEBUG nova.compute.manager [req-e1107b22-52be-4e53-91a4-873d818df930 req-4f34bded-023b-4744-9ff7-18a66e0c4d80 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Received event network-vif-deleted-1a4ca7b6-25c7-44e8-9189-4d8759d2d061 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 549 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 497 KiB/s wr, 122 op/s
Nov 29 03:27:53 np0005539563 nova_compute[252253]: 2025-11-29 08:27:53.420 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:54.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.124 252257 DEBUG nova.compute.manager [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.215 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.216 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.242 252257 DEBUG nova.objects.instance [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_requests' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.257 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.257 252257 INFO nova.compute.claims [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.257 252257 DEBUG nova.objects.instance [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.270 252257 DEBUG nova.objects.instance [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.327 252257 INFO nova.compute.resource_tracker [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating resource usage from migration f4512608-06e5-4fc1-8a5c-b2332184a36d#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.328 252257 DEBUG nova.compute.resource_tracker [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Starting to track incoming migration f4512608-06e5-4fc1-8a5c-b2332184a36d with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.432 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 29 03:27:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:27:54Z|00684|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.904 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/430301272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.991 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:54 np0005539563 nova_compute[252253]: 2025-11-29 08:27:54.998 252257 DEBUG nova.compute.provider_tree [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:55 np0005539563 nova_compute[252253]: 2025-11-29 08:27:55.180 252257 DEBUG nova.scheduler.client.report [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:27:55 np0005539563 nova_compute[252253]: 2025-11-29 08:27:55.200 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:55 np0005539563 nova_compute[252253]: 2025-11-29 08:27:55.201 252257 INFO nova.compute.manager [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Migrating#033[00m
Nov 29 03:27:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:56.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:56 np0005539563 nova_compute[252253]: 2025-11-29 08:27:56.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 03:27:57 np0005539563 systemd-logind[785]: New session 61 of user nova.
Nov 29 03:27:57 np0005539563 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:27:57 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:27:57 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:27:57 np0005539563 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:27:57 np0005539563 systemd[352903]: Queued start job for default target Main User Target.
Nov 29 03:27:57 np0005539563 systemd[352903]: Created slice User Application Slice.
Nov 29 03:27:57 np0005539563 systemd[352903]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:27:57 np0005539563 systemd[352903]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:27:57 np0005539563 systemd[352903]: Reached target Paths.
Nov 29 03:27:57 np0005539563 systemd[352903]: Reached target Timers.
Nov 29 03:27:57 np0005539563 systemd[352903]: Starting D-Bus User Message Bus Socket...
Nov 29 03:27:57 np0005539563 systemd[352903]: Starting Create User's Volatile Files and Directories...
Nov 29 03:27:57 np0005539563 systemd[352903]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:27:57 np0005539563 systemd[352903]: Reached target Sockets.
Nov 29 03:27:57 np0005539563 systemd[352903]: Finished Create User's Volatile Files and Directories.
Nov 29 03:27:57 np0005539563 systemd[352903]: Reached target Basic System.
Nov 29 03:27:57 np0005539563 systemd[352903]: Reached target Main User Target.
Nov 29 03:27:57 np0005539563 systemd[352903]: Startup finished in 158ms.
Nov 29 03:27:57 np0005539563 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:27:57 np0005539563 systemd[1]: Started Session 61 of User nova.
Nov 29 03:27:57 np0005539563 systemd[1]: session-61.scope: Deactivated successfully.
Nov 29 03:27:57 np0005539563 systemd-logind[785]: Session 61 logged out. Waiting for processes to exit.
Nov 29 03:27:57 np0005539563 systemd-logind[785]: Removed session 61.
Nov 29 03:27:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:58.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:27:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:27:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:58.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:27:58 np0005539563 systemd-logind[785]: New session 63 of user nova.
Nov 29 03:27:58 np0005539563 systemd[1]: Started Session 63 of User nova.
Nov 29 03:27:58 np0005539563 systemd[1]: session-63.scope: Deactivated successfully.
Nov 29 03:27:58 np0005539563 systemd-logind[785]: Session 63 logged out. Waiting for processes to exit.
Nov 29 03:27:58 np0005539563 systemd-logind[785]: Removed session 63.
Nov 29 03:27:58 np0005539563 nova_compute[252253]: 2025-11-29 08:27:58.423 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 29 03:27:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.004000107s ======
Nov 29 03:28:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:00.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Nov 29 03:28:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:00.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 29 03:28:01 np0005539563 nova_compute[252253]: 2025-11-29 08:28:01.217 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404866.2160666, 59a5747d-b29d-47f7-848c-62778e994c56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:01 np0005539563 nova_compute[252253]: 2025-11-29 08:28:01.218 252257 INFO nova.compute.manager [-] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:01 np0005539563 nova_compute[252253]: 2025-11-29 08:28:01.266 252257 DEBUG nova.compute.manager [None req-ee61ddc5-f0ca-46f7-86a9-bcc88f769477 - - - - - -] [instance: 59a5747d-b29d-47f7-848c-62778e994c56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:01 np0005539563 nova_compute[252253]: 2025-11-29 08:28:01.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:01 np0005539563 nova_compute[252253]: 2025-11-29 08:28:01.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:01 np0005539563 nova_compute[252253]: 2025-11-29 08:28:01.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:28:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 75 op/s
Nov 29 03:28:02 np0005539563 nova_compute[252253]: 2025-11-29 08:28:02.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:02 np0005539563 nova_compute[252253]: 2025-11-29 08:28:02.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.424 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.569 252257 DEBUG nova.compute.manager [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.569 252257 DEBUG oslo_concurrency.lockutils [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.570 252257 DEBUG oslo_concurrency.lockutils [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.570 252257 DEBUG oslo_concurrency.lockutils [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.570 252257 DEBUG nova.compute.manager [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.570 252257 WARNING nova.compute.manager [req-0dc88611-d116-4b1e-92a5-f1eb8ece826d req-d7549757-232d-4634-aeea-9dae93961a3e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:28:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.703 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.703 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:03 np0005539563 nova_compute[252253]: 2025-11-29 08:28:03.898 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:04.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065788622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.143 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.620 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.621 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.622 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.626 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.626 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.629 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.629 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.796 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.797 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3653MB free_disk=20.784832000732422GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.798 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:04 np0005539563 nova_compute[252253]: 2025-11-29 08:28:04.798 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:04.933 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:04.934 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:04.934 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.273 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Migration for instance 78a00526-9c03-4c52-93a4-2275348b883a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.338 252257 INFO nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating resource usage from migration f4512608-06e5-4fc1-8a5c-b2332184a36d#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.339 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Starting to track incoming migration f4512608-06e5-4fc1-8a5c-b2332184a36d with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.382 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.383 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.383 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.505 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance with task_state "resize_migrated" is not being actively managed by this compute host but has allocations referencing this compute node (190eff98-dce8-46c0-8a7d-870d6fa5cbbd): {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. Skipping heal of allocations during the task state transition. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1708#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.506 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.506 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.524 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.661 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.661 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.695 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.716 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.802 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.853 252257 INFO nova.network.neutron [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.942 252257 DEBUG nova.compute.manager [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.943 252257 DEBUG oslo_concurrency.lockutils [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.943 252257 DEBUG oslo_concurrency.lockutils [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.944 252257 DEBUG oslo_concurrency.lockutils [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.944 252257 DEBUG nova.compute.manager [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:05 np0005539563 nova_compute[252253]: 2025-11-29 08:28:05.944 252257 WARNING nova.compute.manager [req-08bdffe2-a320-4992-9d69-2f4e3b4f8966 req-e682c324-6473-4952-9db3-74624b31cc1a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:28:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.313 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.322 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.339 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.375 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.376 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.578 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.579 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.579 252257 DEBUG nova.network.neutron [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 79 op/s
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.755 252257 DEBUG nova.compute.manager [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.756 252257 DEBUG nova.compute.manager [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing instance network info cache due to event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:06 np0005539563 nova_compute[252253]: 2025-11-29 08:28:06.756 252257 DEBUG oslo_concurrency.lockutils [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:07 np0005539563 nova_compute[252253]: 2025-11-29 08:28:07.378 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:07 np0005539563 nova_compute[252253]: 2025-11-29 08:28:07.958 252257 DEBUG nova.network.neutron [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:07 np0005539563 nova_compute[252253]: 2025-11-29 08:28:07.982 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:07 np0005539563 nova_compute[252253]: 2025-11-29 08:28:07.986 252257 DEBUG oslo_concurrency.lockutils [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:07 np0005539563 nova_compute[252253]: 2025-11-29 08:28:07.986 252257 DEBUG nova.network.neutron [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.049 252257 DEBUG os_brick.utils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.051 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.062 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.063 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[be441fd4-beaf-4460-9724-a5a505bf363c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.063 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.072 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.072 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ca74ec-b9d4-4303-9fb7-1c46bf9b3256]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:08.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.074 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:08.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.083 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.084 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d8f3dc-06f2-4cb6-84ee-35876b483939]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.086 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[2629a593-8f37-4633-b612-6deeaf9e1db5]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.087 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.122 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.125 252257 DEBUG os_brick.initiator.connectors.lightos [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.126 252257 DEBUG os_brick.initiator.connectors.lightos [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.126 252257 DEBUG os_brick.initiator.connectors.lightos [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.127 252257 DEBUG os_brick.utils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.376 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.377 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:28:08 np0005539563 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:28:08 np0005539563 systemd[352903]: Activating special unit Exit the Session...
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped target Main User Target.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped target Basic System.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped target Paths.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped target Sockets.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped target Timers.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:28:08 np0005539563 systemd[352903]: Closed D-Bus User Message Bus Socket.
Nov 29 03:28:08 np0005539563 systemd[352903]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:28:08 np0005539563 systemd[352903]: Removed slice User Application Slice.
Nov 29 03:28:08 np0005539563 systemd[352903]: Reached target Shutdown.
Nov 29 03:28:08 np0005539563 systemd[352903]: Finished Exit the Session.
Nov 29 03:28:08 np0005539563 systemd[352903]: Reached target Exit the Session.
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.429 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:08 np0005539563 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:28:08 np0005539563 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:28:08 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:28:08 np0005539563 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:28:08 np0005539563 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:28:08 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:28:08 np0005539563 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:28:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 586 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 79 op/s
Nov 29 03:28:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.980 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.981 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:08 np0005539563 nova_compute[252253]: 2025-11-29 08:28:08.981 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.084 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.231 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.234 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.234 252257 INFO nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Creating image(s)#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.646 252257 DEBUG nova.storage.rbd_utils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] creating snapshot(nova-resize) on rbd image(78a00526-9c03-4c52-93a4-2275348b883a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.925 252257 DEBUG nova.network.neutron [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated VIF entry in instance network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.925 252257 DEBUG nova.network.neutron [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:09 np0005539563 nova_compute[252253]: 2025-11-29 08:28:09.942 252257 DEBUG oslo_concurrency.lockutils [req-9f7ce30b-c7ee-4286-80dc-05c86e9b51fa req-67c2e1ed-bfeb-4160-852b-103d2c79d355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:10.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:10.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 592 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 456 KiB/s wr, 48 op/s
Nov 29 03:28:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.211 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [{"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.231 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-5a603f26-2b4a-4025-8cc2-a31c8c89e652" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.232 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.234 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.234 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 03:28:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.272 252257 WARNING nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.272 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.272 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.273 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.273 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Triggering sync for uuid 78a00526-9c03-4c52-93a4-2275348b883a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.274 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.274 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.275 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.275 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.276 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.276 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.277 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.277 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.277 252257 INFO nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.278 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.293 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.297 252257 DEBUG nova.objects.instance [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.340 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.341 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.373 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.434 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.434 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Ensure instance console log exists: /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.435 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.435 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.435 252257 DEBUG oslo_concurrency.lockutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.438 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Start _get_guest_xml network_info=[{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ff1e082f-e768-4c5f-850b-5e8ce6b839d1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '78a00526-9c03-4c52-93a4-2275348b883a', 'attached_at': '2025-11-29T08:28:08.000000', 'detached_at': '', 'volume_id': 'ff1e082f-e768-4c5f-850b-5e8ce6b839d1', 'multiattach': True, 'serial': 'ff1e082f-e768-4c5f-850b-5e8ce6b839d1'}, 'attachment_id': 'b110837f-deba-41df-98b1-10fc2923f088', 'disk_bus': 'virtio', 'boot_index': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.442 252257 WARNING nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.447 252257 DEBUG nova.virt.libvirt.host [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.447 252257 DEBUG nova.virt.libvirt.host [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.450 252257 DEBUG nova.virt.libvirt.host [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.451 252257 DEBUG nova.virt.libvirt.host [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.452 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.452 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.452 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.452 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.453 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.454 252257 DEBUG nova.virt.hardware [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.454 252257 DEBUG nova.objects.instance [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.473 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:11.879 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.879 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:11.880 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:28:11 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:11.881 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680458643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.908 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:11 np0005539563 nova_compute[252253]: 2025-11-29 08:28:11.954 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:12.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2204410226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.443 252257 DEBUG oslo_concurrency.processutils [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.467 252257 DEBUG nova.virt.libvirt.vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='vir
tio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.467 252257 DEBUG nova.network.os_vif_util [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.468 252257 DEBUG nova.network.os_vif_util [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.471 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <uuid>78a00526-9c03-4c52-93a4-2275348b883a</uuid>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <name>instance-000000a3</name>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <memory>196608</memory>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:name>multiattach-server-1</nova:name>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:28:11</nova:creationTime>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.micro">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:memory>192</nova:memory>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:user uuid="b4f4d28745dd46e586642c84c051db39">tempest-AttachVolumeMultiAttachTest-1454477111-project-member</nova:user>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:project uuid="23450c2eaf4442459dec94c6d29f0412">tempest-AttachVolumeMultiAttachTest-1454477111</nova:project>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <nova:port uuid="e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <entry name="serial">78a00526-9c03-4c52-93a4-2275348b883a</entry>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <entry name="uuid">78a00526-9c03-4c52-93a4-2275348b883a</entry>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/78a00526-9c03-4c52-93a4-2275348b883a_disk">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/78a00526-9c03-4c52-93a4-2275348b883a_disk.config">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <shareable/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:76:cc:96"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <target dev="tape0c088b1-9b"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a/console.log" append="off"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:28:12 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:28:12 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:28:12 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:28:12 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.471 252257 DEBUG nova.virt.libvirt.vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:26:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='vir
tio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.472 252257 DEBUG nova.network.os_vif_util [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "vif_mac": "fa:16:3e:76:cc:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.473 252257 DEBUG nova.network.os_vif_util [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.473 252257 DEBUG os_vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.474 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.474 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.477 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.477 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0c088b1-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.478 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0c088b1-9b, col_values=(('external_ids', {'iface-id': 'e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:cc:96', 'vm-uuid': '78a00526-9c03-4c52-93a4-2275348b883a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.479 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 NetworkManager[48981]: <info>  [1764404892.4807] manager: (tape0c088b1-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.482 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.487 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.488 252257 INFO os_vif [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b')#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.529 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.530 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.530 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.530 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] No VIF found with MAC fa:16:3e:76:cc:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.531 252257 INFO nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Using config drive#033[00m
Nov 29 03:28:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 592 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 533 KiB/s wr, 39 op/s
Nov 29 03:28:12 np0005539563 kernel: tape0c088b1-9b: entered promiscuous mode
Nov 29 03:28:12 np0005539563 NetworkManager[48981]: <info>  [1764404892.6312] manager: (tape0c088b1-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:12Z|00685|binding|INFO|Claiming lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for this chassis.
Nov 29 03:28:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:12Z|00686|binding|INFO|e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e: Claiming fa:16:3e:76:cc:96 10.100.0.3
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.693 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:cc:96 10.100.0.3'], port_security=['fa:16:3e:76:cc:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '78a00526-9c03-4c52-93a4-2275348b883a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.695 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 bound to our chassis#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.697 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441#033[00m
Nov 29 03:28:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:12Z|00687|binding|INFO|Setting lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e ovn-installed in OVS
Nov 29 03:28:12 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:12Z|00688|binding|INFO|Setting lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e up in Southbound
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 systemd-machined[213024]: New machine qemu-81-instance-000000a3.
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.714 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[877f6f43-3321-4f64-bda1-fcb1f7ee2761]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:12 np0005539563 systemd-udevd[353203]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.721 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:12 np0005539563 systemd[1]: Started Virtual Machine qemu-81-instance-000000a3.
Nov 29 03:28:12 np0005539563 NetworkManager[48981]: <info>  [1764404892.7332] device (tape0c088b1-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:28:12 np0005539563 NetworkManager[48981]: <info>  [1764404892.7343] device (tape0c088b1-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.753 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d4605891-5660-4968-ad8a-9b0c0031542f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.758 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[388c8bbe-83ee-49e8-939e-6e9ed08f6e5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.787 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[40c6fc29-e06c-4ada-89b9-a1d4323d917f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.805 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e81c6ccc-edfb-4989-91f8-5e487409b3a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 19799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353215, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.818 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d1631ec4-71f2-4911-93d7-d81fe29bd5a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766793, 'tstamp': 766793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353217, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766796, 'tstamp': 766796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353217, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.819 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 nova_compute[252253]: 2025-11-29 08:28:12.822 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.822 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.822 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.823 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:12.823 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:28:12
Nov 29 03:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'vms', 'images']
Nov 29 03:28:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.214 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404893.2135887, 78a00526-9c03-4c52-93a4-2275348b883a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.214 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.216 252257 DEBUG nova.compute.manager [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.220 252257 INFO nova.virt.libvirt.driver [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance running successfully.#033[00m
Nov 29 03:28:13 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.222 252257 DEBUG nova.virt.libvirt.guest [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.222 252257 DEBUG nova.virt.libvirt.driver [None req-cd04adb8-1957-4752-aa59-f5e0d0e01e26 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.235 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.238 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.267 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.267 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404893.2162688, 78a00526-9c03-4c52-93a4-2275348b883a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.267 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.288 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.291 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.310 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:28:13 np0005539563 nova_compute[252253]: 2025-11-29 08:28:13.428 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:14.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.494 252257 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.495 252257 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.495 252257 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.496 252257 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.496 252257 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.496 252257 WARNING nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.496 252257 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.496 252257 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.497 252257 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.497 252257 DEBUG oslo_concurrency.lockutils [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.497 252257 DEBUG nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.497 252257 WARNING nova.compute.manager [req-8e121fcf-31aa-4d48-aef4-334dc8dbb296 req-1f95fec3-d4d1-4be1-9a56-0d6c3077ff70 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:28:14 np0005539563 nova_compute[252253]: 2025-11-29 08:28:14.555 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 615 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Nov 29 03:28:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:16.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:16.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:28:16 np0005539563 podman[353280]: 2025-11-29 08:28:16.544560482 +0000 UTC m=+0.088475488 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:28:16 np0005539563 podman[353281]: 2025-11-29 08:28:16.56441842 +0000 UTC m=+0.100570216 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 29 03:28:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Nov 29 03:28:16 np0005539563 podman[353282]: 2025-11-29 08:28:16.644017716 +0000 UTC m=+0.184969562 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:28:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:28:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 12K writes, 54K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1713 writes, 7639 keys, 1713 commit groups, 1.0 writes per commit group, ingest: 10.92 MB, 0.02 MB/s
Interval WAL: 1713 writes, 1713 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     12.5      5.54              0.24        34    0.163       0      0       0.0       0.0
  L6      1/0   10.27 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6     28.0     23.6     13.58              1.05        33    0.411    221K    18K       0.0       0.0
 Sum      1/0   10.27 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     19.9     20.4     19.12              1.29        67    0.285    221K    18K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6     93.8     93.8      0.81              0.22        12    0.068     53K   3134       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     28.0     23.6     13.58              1.05        33    0.411    221K    18K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     12.5      5.54              0.24        33    0.168       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.068, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.38 GB write, 0.08 MB/s write, 0.37 GB read, 0.08 MB/s read, 19.1 seconds
Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 44.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000521 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2476,43.08 MB,14.1703%) FilterBlock(68,628.73 KB,0.201973%) IndexBlock(68,1.01 MB,0.332652%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:28:17 np0005539563 nova_compute[252253]: 2025-11-29 08:28:17.480 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:18.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:18.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:18 np0005539563 nova_compute[252253]: 2025-11-29 08:28:18.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Nov 29 03:28:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 03:28:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Nov 29 03:28:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 227 op/s
Nov 29 03:28:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:19 np0005539563 nova_compute[252253]: 2025-11-29 08:28:19.678 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:20.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:20.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Nov 29 03:28:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:22.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:22 np0005539563 nova_compute[252253]: 2025-11-29 08:28:22.482 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 619 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Nov 29 03:28:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Nov 29 03:28:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 03:28:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Nov 29 03:28:23 np0005539563 nova_compute[252253]: 2025-11-29 08:28:23.480 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:23 np0005539563 nova_compute[252253]: 2025-11-29 08:28:23.544 252257 DEBUG nova.compute.manager [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:23 np0005539563 nova_compute[252253]: 2025-11-29 08:28:23.544 252257 DEBUG nova.compute.manager [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing instance network info cache due to event network-changed-f095bbfd-d901-4dd4-8831-72dab1104494. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:23 np0005539563 nova_compute[252253]: 2025-11-29 08:28:23.544 252257 DEBUG oslo_concurrency.lockutils [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:23 np0005539563 nova_compute[252253]: 2025-11-29 08:28:23.544 252257 DEBUG oslo_concurrency.lockutils [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:23 np0005539563 nova_compute[252253]: 2025-11-29 08:28:23.545 252257 DEBUG nova.network.neutron [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Refreshing network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010863962462776848 of space, bias 1.0, pg target 3.2591887388330543 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432083013446864 of space, bias 1.0, pg target 1.283286549937186 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Nov 29 03:28:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 672 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 130 op/s
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.448 252257 DEBUG nova.network.neutron [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updated VIF entry in instance network info cache for port f095bbfd-d901-4dd4-8831-72dab1104494. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.449 252257 DEBUG nova.network.neutron [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [{"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.525 252257 DEBUG oslo_concurrency.lockutils [req-e9b3089a-2394-4974-8583-5d642c2fcacb req-696ecd63-f383-4e98-8bce-1068b12fc5b6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-554ea6a4-8de1-41bf-8772-b15e95a7fd05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.639 252257 DEBUG nova.compute.manager [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.640 252257 DEBUG nova.compute.manager [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing instance network info cache due to event network-changed-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.640 252257 DEBUG oslo_concurrency.lockutils [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.641 252257 DEBUG oslo_concurrency.lockutils [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:25 np0005539563 nova_compute[252253]: 2025-11-29 08:28:25.641 252257 DEBUG nova.network.neutron [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Refreshing network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:26.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 5.2 MiB/s wr, 68 op/s
Nov 29 03:28:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:27Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:cc:96 10.100.0.3
Nov 29 03:28:27 np0005539563 nova_compute[252253]: 2025-11-29 08:28:27.433 252257 DEBUG nova.network.neutron [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updated VIF entry in instance network info cache for port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:27 np0005539563 nova_compute[252253]: 2025-11-29 08:28:27.434 252257 DEBUG nova.network.neutron [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [{"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:27 np0005539563 nova_compute[252253]: 2025-11-29 08:28:27.468 252257 DEBUG oslo_concurrency.lockutils [req-8ea00ef3-673c-4858-99ec-8b74e0986f62 req-60f8a51c-5acd-4031-8865-35d2815f3adc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-78a00526-9c03-4c52-93a4-2275348b883a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:27 np0005539563 nova_compute[252253]: 2025-11-29 08:28:27.485 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3051919099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3051919099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:28:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:28.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:28 np0005539563 nova_compute[252253]: 2025-11-29 08:28:28.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 03:28:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Nov 29 03:28:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:30.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.3 MiB/s wr, 224 op/s
Nov 29 03:28:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:32.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:32 np0005539563 nova_compute[252253]: 2025-11-29 08:28:32.486 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 686 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.3 MiB/s wr, 180 op/s
Nov 29 03:28:33 np0005539563 nova_compute[252253]: 2025-11-29 08:28:33.487 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:34.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:34.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 688 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 1.0 MiB/s wr, 284 op/s
Nov 29 03:28:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:36.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 702 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 930 KiB/s wr, 269 op/s
Nov 29 03:28:37 np0005539563 nova_compute[252253]: 2025-11-29 08:28:37.489 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:38.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:38.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:38 np0005539563 nova_compute[252253]: 2025-11-29 08:28:38.490 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 776 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.5 MiB/s rd, 4.2 MiB/s wr, 300 op/s
Nov 29 03:28:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.631 252257 DEBUG oslo_concurrency.lockutils [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.631 252257 DEBUG oslo_concurrency.lockutils [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.644 252257 INFO nova.compute.manager [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Detaching volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.782 252257 INFO nova.virt.block_device [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Attempting to driver detach volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1 from mountpoint /dev/vdb#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.795 252257 DEBUG nova.virt.libvirt.driver [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Attempting to detach device vdb from instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.796 252257 DEBUG nova.virt.libvirt.guest [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:28:39 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:28:39 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.814 252257 INFO nova.virt.libvirt.driver [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 from the persistent domain config.#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.814 252257 DEBUG nova.virt.libvirt.driver [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.815 252257 DEBUG nova.virt.libvirt.guest [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:28:39 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:28:39 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:28:39 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.926 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404919.9264681, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.928 252257 DEBUG nova.virt.libvirt.driver [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:28:39 np0005539563 nova_compute[252253]: 2025-11-29 08:28:39.931 252257 INFO nova.virt.libvirt.driver [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 from the live domain config.#033[00m
Nov 29 03:28:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:40 np0005539563 nova_compute[252253]: 2025-11-29 08:28:40.270 252257 INFO nova.virt.libvirt.driver [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Detected multiple connections on this host for volume: ff1e082f-e768-4c5f-850b-5e8ce6b839d1, skipping target disconnect.#033[00m
Nov 29 03:28:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 814 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.9 MiB/s wr, 266 op/s
Nov 29 03:28:41 np0005539563 nova_compute[252253]: 2025-11-29 08:28:41.049 252257 DEBUG nova.objects.instance [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:41 np0005539563 nova_compute[252253]: 2025-11-29 08:28:41.762 252257 DEBUG oslo_concurrency.lockutils [None req-48b8a57c-080d-46b1-ab10-5f5f629aabf6 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.031 252257 DEBUG oslo_concurrency.lockutils [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.031 252257 DEBUG oslo_concurrency.lockutils [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:42.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:42.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.233 252257 INFO nova.compute.manager [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Detaching volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.372 252257 INFO nova.virt.block_device [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Attempting to driver detach volume ff1e082f-e768-4c5f-850b-5e8ce6b839d1 from mountpoint /dev/vdb#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.381 252257 DEBUG nova.virt.libvirt.driver [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Attempting to detach device vdb from instance 78a00526-9c03-4c52-93a4-2275348b883a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.382 252257 DEBUG nova.virt.libvirt.guest [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:28:42 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:28:42 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.390 252257 INFO nova.virt.libvirt.driver [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance 78a00526-9c03-4c52-93a4-2275348b883a from the persistent domain config.#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.390 252257 DEBUG nova.virt.libvirt.driver [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 78a00526-9c03-4c52-93a4-2275348b883a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.391 252257 DEBUG nova.virt.libvirt.guest [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ff1e082f-e768-4c5f-850b-5e8ce6b839d1">
Nov 29 03:28:42 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <serial>ff1e082f-e768-4c5f-850b-5e8ce6b839d1</serial>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <shareable/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:28:42 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:28:42 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.465 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764404922.4650197, 78a00526-9c03-4c52-93a4-2275348b883a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.468 252257 DEBUG nova.virt.libvirt.driver [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 78a00526-9c03-4c52-93a4-2275348b883a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.471 252257 INFO nova.virt.libvirt.driver [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully detached device vdb from instance 78a00526-9c03-4c52-93a4-2275348b883a from the live domain config.#033[00m
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.493 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 814 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 207 op/s
Nov 29 03:28:42 np0005539563 nova_compute[252253]: 2025-11-29 08:28:42.923 252257 DEBUG nova.objects.instance [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'flavor' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.034 252257 DEBUG oslo_concurrency.lockutils [None req-5fa5ade3-d202-4fc8-98b3-084559e7bd6b b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.241 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.241 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.257 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.348 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.349 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.359 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.359 252257 INFO nova.compute.claims [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.493 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1539773764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.547 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065275847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:43 np0005539563 nova_compute[252253]: 2025-11-29 08:28:43.994 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.001 252257 DEBUG nova.compute.provider_tree [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.023 252257 DEBUG nova.scheduler.client.report [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.050 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.051 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:28:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:44.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.114 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.115 252257 DEBUG nova.network.neutron [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.134 252257 INFO nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:28:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:44.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.152 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.262 252257 INFO nova.virt.block_device [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Booting with volume cedd0225-5008-4e7b-a363-38f002cf49fe at /dev/vda#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.406 252257 DEBUG nova.policy [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd039e57f31de4717a235fc96ebd56559', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '527c6a274d1e478eadfe67139e121185', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.425 252257 DEBUG os_brick.utils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.427 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.438 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.438 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[2929296c-8cc5-47b5-9e40-5c4455e2dc66]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.440 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.448 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.448 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd1f543-519d-4071-b8dd-be5057d6c6ab]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.450 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.459 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.459 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[786f6cd9-b45b-4409-af1c-abfa3e77a10f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.461 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[1f068d4c-f5fc-4a79-84e7-d1c021e59e7e]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.461 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.493 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.496 252257 DEBUG os_brick.initiator.connectors.lightos [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.496 252257 DEBUG os_brick.initiator.connectors.lightos [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.496 252257 DEBUG os_brick.initiator.connectors.lightos [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.497 252257 DEBUG os_brick.utils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:28:44 np0005539563 nova_compute[252253]: 2025-11-29 08:28:44.497 252257 DEBUG nova.virt.block_device [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating existing volume attachment record: 78dc1b6b-1941-4dff-8ea0-d8c64bc50d26 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:28:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 854 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.4 MiB/s wr, 291 op/s
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8310ddfd-78e0-4a74-8309-2c323e13dddf does not exist
Nov 29 03:28:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6eb4506c-3b45-4ff0-9db7-c3150f560489 does not exist
Nov 29 03:28:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e849341d-5f09-4ef4-b2b2-e6f129490686 does not exist
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:28:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:28:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2674661049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.341050258 +0000 UTC m=+0.041330561 container create 0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:28:45 np0005539563 systemd[1]: Started libpod-conmon-0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4.scope.
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.322383242 +0000 UTC m=+0.022663545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.437018837 +0000 UTC m=+0.137299120 container init 0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.444175462 +0000 UTC m=+0.144455745 container start 0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.447581354 +0000 UTC m=+0.147861647 container attach 0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:28:45 np0005539563 magical_feistel[353775]: 167 167
Nov 29 03:28:45 np0005539563 systemd[1]: libpod-0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4.scope: Deactivated successfully.
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.450213955 +0000 UTC m=+0.150494238 container died 0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:28:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6023b4dfd63fa07378cfac566cc798c973ca51c266f3d35ee18bf682ff42c7b5-merged.mount: Deactivated successfully.
Nov 29 03:28:45 np0005539563 podman[353758]: 2025-11-29 08:28:45.496659654 +0000 UTC m=+0.196939937 container remove 0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feistel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 29 03:28:45 np0005539563 systemd[1]: libpod-conmon-0e618c851d80f93964d3694bf3133902e65d654238755839bcd925337c00a0e4.scope: Deactivated successfully.
Nov 29 03:28:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:28:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:45 np0005539563 nova_compute[252253]: 2025-11-29 08:28:45.669 252257 DEBUG nova.network.neutron [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Successfully created port: fe638793-a58c-45c7-af31-561a212a980a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:28:45 np0005539563 podman[353797]: 2025-11-29 08:28:45.703504348 +0000 UTC m=+0.042300308 container create 4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:28:45 np0005539563 systemd[1]: Started libpod-conmon-4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d.scope.
Nov 29 03:28:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:28:45 np0005539563 podman[353797]: 2025-11-29 08:28:45.684984126 +0000 UTC m=+0.023780116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f8bec0076e7645c8a81ea7b216c0f55146400a9be2bf913f36d9379c88c79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f8bec0076e7645c8a81ea7b216c0f55146400a9be2bf913f36d9379c88c79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f8bec0076e7645c8a81ea7b216c0f55146400a9be2bf913f36d9379c88c79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f8bec0076e7645c8a81ea7b216c0f55146400a9be2bf913f36d9379c88c79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5f8bec0076e7645c8a81ea7b216c0f55146400a9be2bf913f36d9379c88c79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:45 np0005539563 podman[353797]: 2025-11-29 08:28:45.798598343 +0000 UTC m=+0.137394363 container init 4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:28:45 np0005539563 podman[353797]: 2025-11-29 08:28:45.807353281 +0000 UTC m=+0.146149241 container start 4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:28:45 np0005539563 podman[353797]: 2025-11-29 08:28:45.811507613 +0000 UTC m=+0.150303573 container attach 4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.134 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.136 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.137 252257 INFO nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Creating image(s)#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.137 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.137 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Ensure instance console log exists: /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.138 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.138 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:46 np0005539563 nova_compute[252253]: 2025-11-29 08:28:46.138 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:46.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:46 np0005539563 trusting_bartik[353813]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:28:46 np0005539563 trusting_bartik[353813]: --> relative data size: 1.0
Nov 29 03:28:46 np0005539563 trusting_bartik[353813]: --> All data devices are unavailable
Nov 29 03:28:46 np0005539563 podman[353797]: 2025-11-29 08:28:46.612711509 +0000 UTC m=+0.951507469 container died 4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:28:46 np0005539563 systemd[1]: libpod-4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d.scope: Deactivated successfully.
Nov 29 03:28:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 860 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 7.5 MiB/s wr, 210 op/s
Nov 29 03:28:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fa5f8bec0076e7645c8a81ea7b216c0f55146400a9be2bf913f36d9379c88c79-merged.mount: Deactivated successfully.
Nov 29 03:28:46 np0005539563 podman[353797]: 2025-11-29 08:28:46.744666207 +0000 UTC m=+1.083462167 container remove 4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:46 np0005539563 systemd[1]: libpod-conmon-4350eb6961dd7c4fbf4b4110a3ca10f6f8ff5c22ddc7d34323359a5a607b559d.scope: Deactivated successfully.
Nov 29 03:28:46 np0005539563 podman[353829]: 2025-11-29 08:28:46.76363307 +0000 UTC m=+0.119296285 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:28:46 np0005539563 podman[353835]: 2025-11-29 08:28:46.79761594 +0000 UTC m=+0.153712547 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:28:46 np0005539563 podman[353873]: 2025-11-29 08:28:46.855497187 +0000 UTC m=+0.078632469 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.356999554 +0000 UTC m=+0.059906745 container create e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:28:47 np0005539563 systemd[1]: Started libpod-conmon-e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc.scope.
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.330635539 +0000 UTC m=+0.033542760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.450408654 +0000 UTC m=+0.153315915 container init e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.458132243 +0000 UTC m=+0.161039454 container start e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.463119549 +0000 UTC m=+0.166026850 container attach e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:28:47 np0005539563 adoring_taussig[354061]: 167 167
Nov 29 03:28:47 np0005539563 systemd[1]: libpod-e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc.scope: Deactivated successfully.
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.4668821 +0000 UTC m=+0.169789281 container died e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:28:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d73e3d05a2d481e48af089ff82e0588ef6d4fc973aa9b78bcc1233f5a655ba77-merged.mount: Deactivated successfully.
Nov 29 03:28:47 np0005539563 podman[354044]: 2025-11-29 08:28:47.511223232 +0000 UTC m=+0.214130413 container remove e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:28:47 np0005539563 nova_compute[252253]: 2025-11-29 08:28:47.548 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:47 np0005539563 systemd[1]: libpod-conmon-e66b5f65fa131792bd0d53ead8da11db7ce8031cc807acd236e582fa84af00dc.scope: Deactivated successfully.
Nov 29 03:28:47 np0005539563 podman[354085]: 2025-11-29 08:28:47.748768647 +0000 UTC m=+0.046723867 container create 0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:47 np0005539563 systemd[1]: Started libpod-conmon-0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5.scope.
Nov 29 03:28:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4792cc4cc520be2908db62583b57a43023cb6da1656a34a8199253ecee5bb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:47 np0005539563 podman[354085]: 2025-11-29 08:28:47.727951823 +0000 UTC m=+0.025907083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4792cc4cc520be2908db62583b57a43023cb6da1656a34a8199253ecee5bb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4792cc4cc520be2908db62583b57a43023cb6da1656a34a8199253ecee5bb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b4792cc4cc520be2908db62583b57a43023cb6da1656a34a8199253ecee5bb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:47 np0005539563 podman[354085]: 2025-11-29 08:28:47.839618528 +0000 UTC m=+0.137573768 container init 0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:28:47 np0005539563 podman[354085]: 2025-11-29 08:28:47.845269871 +0000 UTC m=+0.143225121 container start 0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:28:47 np0005539563 podman[354085]: 2025-11-29 08:28:47.850782111 +0000 UTC m=+0.148737321 container attach 0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_matsumoto, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:28:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:48.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:48.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:48 np0005539563 nova_compute[252253]: 2025-11-29 08:28:48.495 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]: {
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:    "0": [
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:        {
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "devices": [
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "/dev/loop3"
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            ],
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "lv_name": "ceph_lv0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "lv_size": "7511998464",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "name": "ceph_lv0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "tags": {
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.cluster_name": "ceph",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.crush_device_class": "",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.encrypted": "0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.osd_id": "0",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.type": "block",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:                "ceph.vdo": "0"
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            },
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "type": "block",
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:            "vg_name": "ceph_vg0"
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:        }
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]:    ]
Nov 29 03:28:48 np0005539563 clever_matsumoto[354101]: }
Nov 29 03:28:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 860 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 952 KiB/s rd, 6.8 MiB/s wr, 209 op/s
Nov 29 03:28:48 np0005539563 systemd[1]: libpod-0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5.scope: Deactivated successfully.
Nov 29 03:28:48 np0005539563 podman[354085]: 2025-11-29 08:28:48.664442884 +0000 UTC m=+0.962398094 container died 0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8b4792cc4cc520be2908db62583b57a43023cb6da1656a34a8199253ecee5bb9-merged.mount: Deactivated successfully.
Nov 29 03:28:48 np0005539563 podman[354085]: 2025-11-29 08:28:48.819931607 +0000 UTC m=+1.117886817 container remove 0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:28:48 np0005539563 systemd[1]: libpod-conmon-0e61f15d2bbc8a17417bd8ada5ae28ddb00b374f8ca3af5a2fb17026a3f309a5.scope: Deactivated successfully.
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.415122621 +0000 UTC m=+0.042060761 container create 3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:28:49 np0005539563 systemd[1]: Started libpod-conmon-3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49.scope.
Nov 29 03:28:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.399414066 +0000 UTC m=+0.026352226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.498592842 +0000 UTC m=+0.125531012 container init 3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.510887946 +0000 UTC m=+0.137826086 container start 3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:28:49 np0005539563 thirsty_goldstine[354281]: 167 167
Nov 29 03:28:49 np0005539563 systemd[1]: libpod-3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49.scope: Deactivated successfully.
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.51547915 +0000 UTC m=+0.142417310 container attach 3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.51693053 +0000 UTC m=+0.143868670 container died 3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:28:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-443aa6385deaf892614fcc3fa37c2bf77ac333193af832bc26b8f9d411ebcd1c-merged.mount: Deactivated successfully.
Nov 29 03:28:49 np0005539563 podman[354265]: 2025-11-29 08:28:49.55938611 +0000 UTC m=+0.186324300 container remove 3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:49 np0005539563 systemd[1]: libpod-conmon-3de9d94c0254b91733f5ffb807802007075ab21d13f4bc954f02cb39e54dbb49.scope: Deactivated successfully.
Nov 29 03:28:49 np0005539563 podman[354304]: 2025-11-29 08:28:49.751119764 +0000 UTC m=+0.048094884 container create 4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:28:49 np0005539563 systemd[1]: Started libpod-conmon-4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1.scope.
Nov 29 03:28:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:28:49 np0005539563 podman[354304]: 2025-11-29 08:28:49.73140826 +0000 UTC m=+0.028383410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375395a15ba6a9189ff965c71eed736d0391140781bce02e69d18058189585b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375395a15ba6a9189ff965c71eed736d0391140781bce02e69d18058189585b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375395a15ba6a9189ff965c71eed736d0391140781bce02e69d18058189585b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1375395a15ba6a9189ff965c71eed736d0391140781bce02e69d18058189585b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:49 np0005539563 podman[354304]: 2025-11-29 08:28:49.843228279 +0000 UTC m=+0.140203409 container init 4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:28:49 np0005539563 podman[354304]: 2025-11-29 08:28:49.854425743 +0000 UTC m=+0.151400863 container start 4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:28:49 np0005539563 podman[354304]: 2025-11-29 08:28:49.858043951 +0000 UTC m=+0.155019071 container attach 4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:28:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:50.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 861 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 137 op/s
Nov 29 03:28:50 np0005539563 competent_davinci[354320]: {
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:        "osd_id": 0,
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:        "type": "bluestore"
Nov 29 03:28:50 np0005539563 competent_davinci[354320]:    }
Nov 29 03:28:50 np0005539563 competent_davinci[354320]: }
Nov 29 03:28:50 np0005539563 systemd[1]: libpod-4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1.scope: Deactivated successfully.
Nov 29 03:28:50 np0005539563 podman[354304]: 2025-11-29 08:28:50.734149726 +0000 UTC m=+1.031124926 container died 4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:28:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1375395a15ba6a9189ff965c71eed736d0391140781bce02e69d18058189585b-merged.mount: Deactivated successfully.
Nov 29 03:28:50 np0005539563 podman[354304]: 2025-11-29 08:28:50.988842196 +0000 UTC m=+1.285817316 container remove 4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:28:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:28:51 np0005539563 systemd[1]: libpod-conmon-4e52970972b2c8922a1cdceb8fc0258b08ee64c4341021169660549be20993c1.scope: Deactivated successfully.
Nov 29 03:28:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.434 252257 DEBUG nova.network.neutron [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Successfully updated port: fe638793-a58c-45c7-af31-561a212a980a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.686 252257 DEBUG nova.compute.manager [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.686 252257 DEBUG nova.compute.manager [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.686 252257 DEBUG oslo_concurrency.lockutils [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.686 252257 DEBUG oslo_concurrency.lockutils [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.687 252257 DEBUG nova.network.neutron [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:51 np0005539563 nova_compute[252253]: 2025-11-29 08:28:51.767 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f5db574e-d545-49e4-b10a-2b08a63c92b5 does not exist
Nov 29 03:28:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4aa36d7c-ddfd-4593-9742-acd5af8704b0 does not exist
Nov 29 03:28:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 26da047f-8f72-44b4-93c0-08ad34ebb3b6 does not exist
Nov 29 03:28:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:52.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:52 np0005539563 nova_compute[252253]: 2025-11-29 08:28:52.215 252257 DEBUG nova.network.neutron [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:28:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:28:52 np0005539563 nova_compute[252253]: 2025-11-29 08:28:52.552 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 861 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Nov 29 03:28:53 np0005539563 nova_compute[252253]: 2025-11-29 08:28:53.410 252257 DEBUG nova.network.neutron [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:53 np0005539563 nova_compute[252253]: 2025-11-29 08:28:53.497 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:53 np0005539563 nova_compute[252253]: 2025-11-29 08:28:53.584 252257 DEBUG oslo_concurrency.lockutils [req-31adf1bd-274e-43bb-b142-9057d2c1aca7 req-5ec5d3b0-2a3c-4c6f-97a4-2af0c584a3dd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:53 np0005539563 nova_compute[252253]: 2025-11-29 08:28:53.585 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:53 np0005539563 nova_compute[252253]: 2025-11-29 08:28:53.585 252257 DEBUG nova.network.neutron [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:28:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:53Z|00689|memory|INFO|peak resident set size grew 51% in last 3964.2 seconds, from 16256 kB to 24584 kB
Nov 29 03:28:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:28:53Z|00690|memory|INFO|idl-cells-OVN_Southbound:10776 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:364 lflow-cache-entries-cache-matches:284 lflow-cache-size-KB:1489 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:639 ofctrl_installed_flow_usage-KB:468 ofctrl_sb_flow_ref_usage-KB:239
Nov 29 03:28:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:53.910 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:53.911 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:28:53 np0005539563 nova_compute[252253]: 2025-11-29 08:28:53.911 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:53 np0005539563 ceph-osd[84724]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:28:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:54.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:54.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:54 np0005539563 nova_compute[252253]: 2025-11-29 08:28:54.444 252257 DEBUG nova.network.neutron [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:28:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 861 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 132 op/s
Nov 29 03:28:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:28:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:28:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 102 KiB/s wr, 55 op/s
Nov 29 03:28:56 np0005539563 nova_compute[252253]: 2025-11-29 08:28:56.770 252257 DEBUG nova.network.neutron [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.108299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937108412, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1533, "num_deletes": 255, "total_data_size": 2471435, "memory_usage": 2510904, "flush_reason": "Manual Compaction"}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937124559, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2429582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53384, "largest_seqno": 54916, "table_properties": {"data_size": 2422494, "index_size": 4095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15891, "raw_average_key_size": 20, "raw_value_size": 2407941, "raw_average_value_size": 3139, "num_data_blocks": 178, "num_entries": 767, "num_filter_entries": 767, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404809, "oldest_key_time": 1764404809, "file_creation_time": 1764404937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 16267 microseconds, and 7081 cpu microseconds.
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.124611) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2429582 bytes OK
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.124642) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.126946) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.127006) EVENT_LOG_v1 {"time_micros": 1764404937126964, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.127027) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2464745, prev total WAL file size 2464745, number of live WAL files 2.
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.127973) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2372KB)], [116(10MB)]
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937128056, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 13197478, "oldest_snapshot_seqno": -1}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8877 keys, 11320976 bytes, temperature: kUnknown
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937255172, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11320976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11263587, "index_size": 34109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 231203, "raw_average_key_size": 26, "raw_value_size": 11107736, "raw_average_value_size": 1251, "num_data_blocks": 1322, "num_entries": 8877, "num_filter_entries": 8877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764404937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.255484) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11320976 bytes
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.257041) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.7 rd, 89.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.3 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 9403, records dropped: 526 output_compression: NoCompression
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.257057) EVENT_LOG_v1 {"time_micros": 1764404937257049, "job": 70, "event": "compaction_finished", "compaction_time_micros": 127222, "compaction_time_cpu_micros": 26197, "output_level": 6, "num_output_files": 1, "total_output_size": 11320976, "num_input_records": 9403, "num_output_records": 8877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937257644, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404937259513, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.127821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.259618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.259624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.259625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.259627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:28:57.259629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.555 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.697 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.698 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Instance network_info: |[{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.700 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Start _get_guest_xml network_info=[{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cedd0225-5008-4e7b-a363-38f002cf49fe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cedd0225-5008-4e7b-a363-38f002cf49fe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e412f1ba-217f-4c10-b176-528b2ef6ed0e', 'attached_at': '', 'detached_at': '', 'volume_id': 'cedd0225-5008-4e7b-a363-38f002cf49fe', 'serial': 'cedd0225-5008-4e7b-a363-38f002cf49fe'}, 'attachment_id': '78dc1b6b-1941-4dff-8ea0-d8c64bc50d26', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.706 252257 WARNING nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.713 252257 DEBUG nova.virt.libvirt.host [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.714 252257 DEBUG nova.virt.libvirt.host [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.718 252257 DEBUG nova.virt.libvirt.host [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.719 252257 DEBUG nova.virt.libvirt.host [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.720 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.720 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.720 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.721 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.721 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.721 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.721 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.721 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.721 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.722 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.722 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.722 252257 DEBUG nova.virt.hardware [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.755 252257 DEBUG nova.storage.rbd_utils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image e412f1ba-217f-4c10-b176-528b2ef6ed0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:57 np0005539563 nova_compute[252253]: 2025-11-29 08:28:57.759 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:28:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:58.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151542145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:58 np0005539563 nova_compute[252253]: 2025-11-29 08:28:58.246 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:58 np0005539563 nova_compute[252253]: 2025-11-29 08:28:58.499 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 27 op/s
Nov 29 03:28:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:58 np0005539563 nova_compute[252253]: 2025-11-29 08:28:58.799 252257 DEBUG nova.virt.libvirt.vif [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1385893524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1385893524',id=167,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-6dg4rukr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_p
roject_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:44Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=e412f1ba-217f-4c10-b176-528b2ef6ed0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:28:58 np0005539563 nova_compute[252253]: 2025-11-29 08:28:58.800 252257 DEBUG nova.network.os_vif_util [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:58 np0005539563 nova_compute[252253]: 2025-11-29 08:28:58.801 252257 DEBUG nova.network.os_vif_util [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:58 np0005539563 nova_compute[252253]: 2025-11-29 08:28:58.803 252257 DEBUG nova.objects.instance [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'pci_devices' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.324 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <uuid>e412f1ba-217f-4c10-b176-528b2ef6ed0e</uuid>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <name>instance-000000a7</name>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1385893524</nova:name>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:28:57</nova:creationTime>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:user uuid="d039e57f31de4717a235fc96ebd56559">tempest-TestInstancesWithCinderVolumes-663978016-project-member</nova:user>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:project uuid="527c6a274d1e478eadfe67139e121185">tempest-TestInstancesWithCinderVolumes-663978016</nova:project>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <nova:port uuid="fe638793-a58c-45c7-af31-561a212a980a">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <entry name="serial">e412f1ba-217f-4c10-b176-528b2ef6ed0e</entry>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <entry name="uuid">e412f1ba-217f-4c10-b176-528b2ef6ed0e</entry>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e412f1ba-217f-4c10-b176-528b2ef6ed0e_disk.config">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-cedd0225-5008-4e7b-a363-38f002cf49fe">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <serial>cedd0225-5008-4e7b-a363-38f002cf49fe</serial>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:80:14:58"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <target dev="tapfe638793-a5"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/console.log" append="off"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:28:59 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:28:59 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:28:59 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:28:59 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.325 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Preparing to wait for external event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.325 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.325 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.325 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.326 252257 DEBUG nova.virt.libvirt.vif [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:28:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1385893524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1385893524',id=167,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-6dg4rukr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='Tru
e',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:28:44Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=e412f1ba-217f-4c10-b176-528b2ef6ed0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.326 252257 DEBUG nova.network.os_vif_util [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.327 252257 DEBUG nova.network.os_vif_util [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.327 252257 DEBUG os_vif [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.328 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.328 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.328 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.331 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.331 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe638793-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.331 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe638793-a5, col_values=(('external_ids', {'iface-id': 'fe638793-a58c-45c7-af31-561a212a980a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:14:58', 'vm-uuid': 'e412f1ba-217f-4c10-b176-528b2ef6ed0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.333 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539563 NetworkManager[48981]: <info>  [1764404939.3341] manager: (tapfe638793-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.335 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.346 252257 INFO os_vif [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5')#033[00m
Nov 29 03:28:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:28:59.913 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.938 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.938 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.939 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:80:14:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.939 252257 INFO nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Using config drive#033[00m
Nov 29 03:28:59 np0005539563 nova_compute[252253]: 2025-11-29 08:28:59.969 252257 DEBUG nova.storage.rbd_utils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image e412f1ba-217f-4c10-b176-528b2ef6ed0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:00 np0005539563 nova_compute[252253]: 2025-11-29 08:29:00.524 252257 INFO nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Creating config drive at /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/disk.config#033[00m
Nov 29 03:29:00 np0005539563 nova_compute[252253]: 2025-11-29 08:29:00.530 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwlcucrih execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 47 KiB/s wr, 51 op/s
Nov 29 03:29:00 np0005539563 nova_compute[252253]: 2025-11-29 08:29:00.678 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwlcucrih" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:00 np0005539563 nova_compute[252253]: 2025-11-29 08:29:00.710 252257 DEBUG nova.storage.rbd_utils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] rbd image e412f1ba-217f-4c10-b176-528b2ef6ed0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:00 np0005539563 nova_compute[252253]: 2025-11-29 08:29:00.715 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/disk.config e412f1ba-217f-4c10-b176-528b2ef6ed0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:01 np0005539563 nova_compute[252253]: 2025-11-29 08:29:01.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:01 np0005539563 nova_compute[252253]: 2025-11-29 08:29:01.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.130 252257 DEBUG oslo_concurrency.processutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/disk.config e412f1ba-217f-4c10-b176-528b2ef6ed0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.131 252257 INFO nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Deleting local config drive /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:29:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:02.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:02 np0005539563 kernel: tapfe638793-a5: entered promiscuous mode
Nov 29 03:29:02 np0005539563 NetworkManager[48981]: <info>  [1764404942.1853] manager: (tapfe638793-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Nov 29 03:29:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:02Z|00691|binding|INFO|Claiming lport fe638793-a58c-45c7-af31-561a212a980a for this chassis.
Nov 29 03:29:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:02Z|00692|binding|INFO|fe638793-a58c-45c7-af31-561a212a980a: Claiming fa:16:3e:80:14:58 10.100.0.10
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:02Z|00693|binding|INFO|Setting lport fe638793-a58c-45c7-af31-561a212a980a ovn-installed in OVS
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.208 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.209 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 systemd-machined[213024]: New machine qemu-82-instance-000000a7.
Nov 29 03:29:02 np0005539563 systemd[1]: Started Virtual Machine qemu-82-instance-000000a7.
Nov 29 03:29:02 np0005539563 systemd-udevd[354573]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:02 np0005539563 NetworkManager[48981]: <info>  [1764404942.2602] device (tapfe638793-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:02 np0005539563 NetworkManager[48981]: <info>  [1764404942.2609] device (tapfe638793-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:02Z|00694|binding|INFO|Setting lport fe638793-a58c-45c7-af31-561a212a980a up in Southbound
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.500 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:14:58 10.100.0.10'], port_security=['fa:16:3e:80:14:58 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e412f1ba-217f-4c10-b176-528b2ef6ed0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-371b699e-06e1-407e-ac77-9768d9a0e76e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '527c6a274d1e478eadfe67139e121185', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e734722-bbf6-4c47-9bc6-bf8d5f52e07d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c0188f4-aa09-4b91-9f84-524ffee1218e, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=fe638793-a58c-45c7-af31-561a212a980a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.502 158990 INFO neutron.agent.ovn.metadata.agent [-] Port fe638793-a58c-45c7-af31-561a212a980a in datapath 371b699e-06e1-407e-ac77-9768d9a0e76e bound to our chassis#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.504 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 371b699e-06e1-407e-ac77-9768d9a0e76e#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.518 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13e15a39-fe0e-4adb-a686-1cdbc778dd4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.519 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap371b699e-01 in ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.520 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap371b699e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.520 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4647ed39-fd93-4e2e-8be0-9c3398a07926]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.521 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6d212e0a-3e5d-4577-81f8-5494a8b4dee3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.537 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa5172c-a78c-43dd-83d1-a3f4cacfd2b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.570 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8b1b16-cf0e-4b2b-8c14-581670b4e906]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.609 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cee6cbf3-47e3-4f22-88f2-502a7cd5c0a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.615 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[368beece-6f04-48cd-9d30-a3779d6182c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 NetworkManager[48981]: <info>  [1764404942.6160] manager: (tap371b699e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Nov 29 03:29:02 np0005539563 systemd-udevd[354575]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 35 KiB/s wr, 50 op/s
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.661 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bd33ec4b-ce21-4016-b258-3ed6b00ffade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.668 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[09ff6554-2819-4515-81b8-1d14781bdf83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:02 np0005539563 NetworkManager[48981]: <info>  [1764404942.6961] device (tap371b699e-00): carrier: link connected
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.702 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[eb357aea-3c9c-4f0d-be86-2c80f1da9ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.722 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[687bc2e9-10ce-43cb-adaa-09e051f62d5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap371b699e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:80:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791046, 'reachable_time': 24091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354624, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.739 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[41939ebf-2047-4f41-8b2c-8ffdcda3e8ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:80be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 791046, 'tstamp': 791046}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354640, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.760 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8ec644-e9ec-4edc-ab12-f11380bf0ca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap371b699e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:80:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791046, 'reachable_time': 24091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354642, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.794 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[de703a37-c36f-481b-bea0-7a870f595cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.876 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404942.8763735, e412f1ba-217f-4c10-b176-528b2ef6ed0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.876 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[50a63162-efca-44e5-a9a1-cc4b24c98c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.877 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap371b699e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.878 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.877 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.878 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap371b699e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.915 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 NetworkManager[48981]: <info>  [1764404942.9161] manager: (tap371b699e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Nov 29 03:29:02 np0005539563 kernel: tap371b699e-00: entered promiscuous mode
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.918 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.919 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap371b699e-00, col_values=(('external_ids', {'iface-id': 'bf759292-fede-4172-b0b8-efd6e3442b62'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.920 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:02Z|00695|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=1)
Nov 29 03:29:02 np0005539563 nova_compute[252253]: 2025-11-29 08:29:02.938 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.939 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.940 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[032cad91-8343-4e6d-9f5f-a046806b9350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.941 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/371b699e-06e1-407e-ac77-9768d9a0e76e.pid.haproxy
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 371b699e-06e1-407e-ac77-9768d9a0e76e
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:02.941 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'env', 'PROCESS_TAG=haproxy-371b699e-06e1-407e-ac77-9768d9a0e76e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/371b699e-06e1-407e-ac77-9768d9a0e76e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:29:03 np0005539563 podman[354683]: 2025-11-29 08:29:03.317634546 +0000 UTC m=+0.063854221 container create 6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:03 np0005539563 systemd[1]: Started libpod-conmon-6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b.scope.
Nov 29 03:29:03 np0005539563 podman[354683]: 2025-11-29 08:29:03.282488654 +0000 UTC m=+0.028708369 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bf8b9330459520f1206bdeb60bdfcd354324d1ef1d0c6d65c2a72213be0aab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:03 np0005539563 podman[354683]: 2025-11-29 08:29:03.408763705 +0000 UTC m=+0.154983370 container init 6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:29:03 np0005539563 podman[354683]: 2025-11-29 08:29:03.41595595 +0000 UTC m=+0.162175615 container start 6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:29:03 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [NOTICE]   (354703) : New worker (354705) forked
Nov 29 03:29:03 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [NOTICE]   (354703) : Loading success.
Nov 29 03:29:03 np0005539563 nova_compute[252253]: 2025-11-29 08:29:03.500 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:03 np0005539563 nova_compute[252253]: 2025-11-29 08:29:03.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:04.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:04.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:04 np0005539563 nova_compute[252253]: 2025-11-29 08:29:04.334 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:04 np0005539563 nova_compute[252253]: 2025-11-29 08:29:04.524 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:04 np0005539563 nova_compute[252253]: 2025-11-29 08:29:04.528 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404942.8764846, e412f1ba-217f-4c10-b176-528b2ef6ed0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:04 np0005539563 nova_compute[252253]: 2025-11-29 08:29:04.529 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 54 KiB/s wr, 102 op/s
Nov 29 03:29:04 np0005539563 nova_compute[252253]: 2025-11-29 08:29:04.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:04.934 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:04.935 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:04.936 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.827 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.827 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.828 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.828 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.828 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.880 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.889 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.939 252257 DEBUG nova.compute.manager [req-2dceb9b7-f23c-4b97-ac40-3a83457d8e1e req-62146d91-4e75-44ac-a038-277a52c463e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.939 252257 DEBUG oslo_concurrency.lockutils [req-2dceb9b7-f23c-4b97-ac40-3a83457d8e1e req-62146d91-4e75-44ac-a038-277a52c463e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.940 252257 DEBUG oslo_concurrency.lockutils [req-2dceb9b7-f23c-4b97-ac40-3a83457d8e1e req-62146d91-4e75-44ac-a038-277a52c463e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.940 252257 DEBUG oslo_concurrency.lockutils [req-2dceb9b7-f23c-4b97-ac40-3a83457d8e1e req-62146d91-4e75-44ac-a038-277a52c463e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.941 252257 DEBUG nova.compute.manager [req-2dceb9b7-f23c-4b97-ac40-3a83457d8e1e req-62146d91-4e75-44ac-a038-277a52c463e2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Processing event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.942 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.951 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.957 252257 INFO nova.virt.libvirt.driver [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Instance spawned successfully.#033[00m
Nov 29 03:29:05 np0005539563 nova_compute[252253]: 2025-11-29 08:29:05.957 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.106 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.106 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.107 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.108 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.108 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.109 252257 DEBUG nova.virt.libvirt.driver [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.115 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.116 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764404945.9500709, e412f1ba-217f-4c10-b176-528b2ef6ed0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.116 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651573233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.323 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 100 op/s
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.672 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539563 nova_compute[252253]: 2025-11-29 08:29:06.677 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.363 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.764 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.765 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.770 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.771 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.775 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.776 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.783 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.784 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.791 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.792 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.823 252257 INFO nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Took 21.69 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:07 np0005539563 nova_compute[252253]: 2025-11-29 08:29:07.824 252257 DEBUG nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.021 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.022 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3305MB free_disk=20.71410369873047GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.022 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.022 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:08.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.237 252257 INFO nova.compute.manager [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Took 24.94 seconds to build instance.#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.348 252257 DEBUG nova.compute.manager [req-0c729523-6809-4a9c-8baf-dfef373d7dd5 req-8ef7662e-d070-4dbf-9eb2-865446b8ac9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.349 252257 DEBUG oslo_concurrency.lockutils [req-0c729523-6809-4a9c-8baf-dfef373d7dd5 req-8ef7662e-d070-4dbf-9eb2-865446b8ac9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.349 252257 DEBUG oslo_concurrency.lockutils [req-0c729523-6809-4a9c-8baf-dfef373d7dd5 req-8ef7662e-d070-4dbf-9eb2-865446b8ac9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.349 252257 DEBUG oslo_concurrency.lockutils [req-0c729523-6809-4a9c-8baf-dfef373d7dd5 req-8ef7662e-d070-4dbf-9eb2-865446b8ac9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.350 252257 DEBUG nova.compute.manager [req-0c729523-6809-4a9c-8baf-dfef373d7dd5 req-8ef7662e-d070-4dbf-9eb2-865446b8ac9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] No waiting events found dispatching network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.350 252257 WARNING nova.compute.manager [req-0c729523-6809-4a9c-8baf-dfef373d7dd5 req-8ef7662e-d070-4dbf-9eb2-865446b8ac9d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received unexpected event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.502 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.506 252257 DEBUG oslo_concurrency.lockutils [None req-15f8381d-923c-4c8e-8cf3-b392c039db63 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.551 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.551 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.552 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.552 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 78a00526-9c03-4c52-93a4-2275348b883a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.552 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance e412f1ba-217f-4c10-b176-528b2ef6ed0e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.552 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.553 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:29:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 862 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 25 KiB/s wr, 155 op/s
Nov 29 03:29:08 np0005539563 nova_compute[252253]: 2025-11-29 08:29:08.692 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094632520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:09 np0005539563 nova_compute[252253]: 2025-11-29 08:29:09.123 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:09 np0005539563 nova_compute[252253]: 2025-11-29 08:29:09.128 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:09 np0005539563 nova_compute[252253]: 2025-11-29 08:29:09.239 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:09 np0005539563 nova_compute[252253]: 2025-11-29 08:29:09.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:09 np0005539563 nova_compute[252253]: 2025-11-29 08:29:09.340 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:29:09 np0005539563 nova_compute[252253]: 2025-11-29 08:29:09.340 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:10.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:10.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 873 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 337 KiB/s wr, 175 op/s
Nov 29 03:29:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:12.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:12 np0005539563 nova_compute[252253]: 2025-11-29 08:29:12.342 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:12 np0005539563 nova_compute[252253]: 2025-11-29 08:29:12.342 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:29:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 873 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 337 KiB/s wr, 148 op/s
Nov 29 03:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:29:12
Nov 29 03:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', '.mgr', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Nov 29 03:29:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:29:13 np0005539563 nova_compute[252253]: 2025-11-29 08:29:13.012 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:13 np0005539563 nova_compute[252253]: 2025-11-29 08:29:13.012 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:13 np0005539563 nova_compute[252253]: 2025-11-29 08:29:13.013 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:13 np0005539563 nova_compute[252253]: 2025-11-29 08:29:13.503 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:14.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:14 np0005539563 nova_compute[252253]: 2025-11-29 08:29:14.338 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 929 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Nov 29 03:29:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:16.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:29:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 936 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 226 op/s
Nov 29 03:29:16 np0005539563 nova_compute[252253]: 2025-11-29 08:29:16.850 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating instance_info_cache with network_info: [{"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:17 np0005539563 podman[354767]: 2025-11-29 08:29:17.262812675 +0000 UTC m=+0.067491889 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:29:17 np0005539563 podman[354766]: 2025-11-29 08:29:17.281608215 +0000 UTC m=+0.087142902 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:29:17 np0005539563 podman[354768]: 2025-11-29 08:29:17.293124216 +0000 UTC m=+0.094853500 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:29:17 np0005539563 nova_compute[252253]: 2025-11-29 08:29:17.541 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-df3ef43d-e67b-4d7f-8603-5cf61569ae1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:29:17 np0005539563 nova_compute[252253]: 2025-11-29 08:29:17.541 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:29:17 np0005539563 nova_compute[252253]: 2025-11-29 08:29:17.542 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:17 np0005539563 nova_compute[252253]: 2025-11-29 08:29:17.542 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:18 np0005539563 nova_compute[252253]: 2025-11-29 08:29:18.504 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 941 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 233 op/s
Nov 29 03:29:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:19 np0005539563 nova_compute[252253]: 2025-11-29 08:29:19.340 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:20.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 941 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Nov 29 03:29:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/358310699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:22.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:22.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:22Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:14:58 10.100.0.10
Nov 29 03:29:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:22Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:14:58 10.100.0.10
Nov 29 03:29:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 941 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 146 op/s
Nov 29 03:29:23 np0005539563 nova_compute[252253]: 2025-11-29 08:29:23.506 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01307263313093244 of space, bias 1.0, pg target 3.921789939279732 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.009765443246324213 of space, bias 1.0, pg target 2.9003366441582914 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003806103724641727 of space, bias 1.0, pg target 1.1228005987693095 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003206134840871022 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Nov 29 03:29:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:24.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:24.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:24 np0005539563 nova_compute[252253]: 2025-11-29 08:29:24.342 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 975 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 211 op/s
Nov 29 03:29:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:26.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:26.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 984 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 847 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 29 03:29:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:28 np0005539563 nova_compute[252253]: 2025-11-29 08:29:28.510 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 1005 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 621 KiB/s rd, 4.4 MiB/s wr, 151 op/s
Nov 29 03:29:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:29 np0005539563 nova_compute[252253]: 2025-11-29 08:29:29.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:30.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 985 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 601 KiB/s rd, 4.3 MiB/s wr, 156 op/s
Nov 29 03:29:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3126927634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:31 np0005539563 nova_compute[252253]: 2025-11-29 08:29:31.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:31.748 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:29:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:31.749 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:29:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:31.750 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:31 np0005539563 nova_compute[252253]: 2025-11-29 08:29:31.872 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:32.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 985 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 601 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Nov 29 03:29:33 np0005539563 nova_compute[252253]: 2025-11-29 08:29:33.513 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Nov 29 03:29:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 03:29:33 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Nov 29 03:29:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:34.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:34 np0005539563 nova_compute[252253]: 2025-11-29 08:29:34.346 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 926 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 29 03:29:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:36.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 918 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 182 op/s
Nov 29 03:29:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:38.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:38 np0005539563 nova_compute[252253]: 2025-11-29 08:29:38.515 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 905 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 39 KiB/s wr, 218 op/s
Nov 29 03:29:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:39 np0005539563 nova_compute[252253]: 2025-11-29 08:29:39.349 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:40.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 905 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 48 KiB/s wr, 214 op/s
Nov 29 03:29:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Nov 29 03:29:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 03:29:41 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Nov 29 03:29:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 905 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 20 KiB/s wr, 190 op/s
Nov 29 03:29:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:42Z|00696|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:29:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:42Z|00697|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:29:42 np0005539563 nova_compute[252253]: 2025-11-29 08:29:42.796 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:43 np0005539563 nova_compute[252253]: 2025-11-29 08:29:43.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Nov 29 03:29:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 03:29:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Nov 29 03:29:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:44.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:44.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:44 np0005539563 nova_compute[252253]: 2025-11-29 08:29:44.351 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 907 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 289 KiB/s wr, 173 op/s
Nov 29 03:29:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 907 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 724 KiB/s rd, 305 KiB/s wr, 64 op/s
Nov 29 03:29:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:46Z|00698|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:29:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:46Z|00699|binding|INFO|Releasing lport fb65e0fb-a778-4ace-a666-dfdbc516af09 from this chassis (sb_readonly=0)
Nov 29 03:29:46 np0005539563 nova_compute[252253]: 2025-11-29 08:29:46.912 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:47 np0005539563 podman[354946]: 2025-11-29 08:29:47.516909669 +0000 UTC m=+0.064511308 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:29:47 np0005539563 podman[354947]: 2025-11-29 08:29:47.519687715 +0000 UTC m=+0.062084983 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 29 03:29:47 np0005539563 podman[354953]: 2025-11-29 08:29:47.562853824 +0000 UTC m=+0.095442627 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:29:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:48.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:48.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:48 np0005539563 nova_compute[252253]: 2025-11-29 08:29:48.519 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 907 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 706 KiB/s rd, 316 KiB/s wr, 113 op/s
Nov 29 03:29:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:49 np0005539563 nova_compute[252253]: 2025-11-29 08:29:49.353 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:50.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 909 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 814 KiB/s rd, 498 KiB/s wr, 128 op/s
Nov 29 03:29:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:52.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:52.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Nov 29 03:29:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 909 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 751 KiB/s rd, 460 KiB/s wr, 118 op/s
Nov 29 03:29:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 03:29:52 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Nov 29 03:29:52 np0005539563 nova_compute[252253]: 2025-11-29 08:29:52.708 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:53 np0005539563 nova_compute[252253]: 2025-11-29 08:29:53.521 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:29:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6a27af4e-b22e-4f51-a24a-969ec6884a89 does not exist
Nov 29 03:29:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8e2645c9-2d2f-4d95-ae99-1ef6862d6ddc does not exist
Nov 29 03:29:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5f77f517-c589-4e88-b888-52ee04d3a480 does not exist
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3021398887' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:29:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3021398887' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:29:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:54.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:54.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.338109048 +0000 UTC m=+0.058625530 container create bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:29:54 np0005539563 nova_compute[252253]: 2025-11-29 08:29:54.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.320032148 +0000 UTC m=+0.040548600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:54 np0005539563 systemd[1]: Started libpod-conmon-bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d.scope.
Nov 29 03:29:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.488332757 +0000 UTC m=+0.208849289 container init bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.498792311 +0000 UTC m=+0.219308783 container start bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.502377948 +0000 UTC m=+0.222894490 container attach bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:29:54 np0005539563 gracious_thompson[355304]: 167 167
Nov 29 03:29:54 np0005539563 systemd[1]: libpod-bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d.scope: Deactivated successfully.
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.507348762 +0000 UTC m=+0.227865264 container died bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:29:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-85f16db6c4af2531c5ed2cb913a64ae1840d3b15ddf71de4f9fda038a65f2d0a-merged.mount: Deactivated successfully.
Nov 29 03:29:54 np0005539563 podman[355288]: 2025-11-29 08:29:54.558876599 +0000 UTC m=+0.279393071 container remove bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:29:54 np0005539563 systemd[1]: libpod-conmon-bdea3c4b26b7cbfa3ecd076d43e24e8ae691165da44bcb12528384ad8ea29d9d.scope: Deactivated successfully.
Nov 29 03:29:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 884 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 680 KiB/s rd, 251 KiB/s wr, 116 op/s
Nov 29 03:29:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Nov 29 03:29:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 03:29:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Nov 29 03:29:54 np0005539563 podman[355329]: 2025-11-29 08:29:54.782422264 +0000 UTC m=+0.051778743 container create ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:29:54 np0005539563 systemd[1]: Started libpod-conmon-ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7.scope.
Nov 29 03:29:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:54 np0005539563 podman[355329]: 2025-11-29 08:29:54.754233671 +0000 UTC m=+0.023590160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728cfd457a5dcf4154dea8e20cc8e6326f76d0a73c4d59900e1dc1eda0e25972/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728cfd457a5dcf4154dea8e20cc8e6326f76d0a73c4d59900e1dc1eda0e25972/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728cfd457a5dcf4154dea8e20cc8e6326f76d0a73c4d59900e1dc1eda0e25972/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728cfd457a5dcf4154dea8e20cc8e6326f76d0a73c4d59900e1dc1eda0e25972/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/728cfd457a5dcf4154dea8e20cc8e6326f76d0a73c4d59900e1dc1eda0e25972/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:54 np0005539563 podman[355329]: 2025-11-29 08:29:54.873283937 +0000 UTC m=+0.142640426 container init ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:29:54 np0005539563 podman[355329]: 2025-11-29 08:29:54.880129452 +0000 UTC m=+0.149485921 container start ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:29:54 np0005539563 podman[355329]: 2025-11-29 08:29:54.884033798 +0000 UTC m=+0.153390277 container attach ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elbakyan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:29:55 np0005539563 frosty_elbakyan[355345]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:29:55 np0005539563 frosty_elbakyan[355345]: --> relative data size: 1.0
Nov 29 03:29:55 np0005539563 frosty_elbakyan[355345]: --> All data devices are unavailable
Nov 29 03:29:55 np0005539563 systemd[1]: libpod-ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7.scope: Deactivated successfully.
Nov 29 03:29:55 np0005539563 podman[355329]: 2025-11-29 08:29:55.687910226 +0000 UTC m=+0.957266725 container died ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elbakyan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:29:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-728cfd457a5dcf4154dea8e20cc8e6326f76d0a73c4d59900e1dc1eda0e25972-merged.mount: Deactivated successfully.
Nov 29 03:29:55 np0005539563 podman[355329]: 2025-11-29 08:29:55.740230463 +0000 UTC m=+1.009586932 container remove ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:55 np0005539563 systemd[1]: libpod-conmon-ec6c6cff648751e60eda704d22c94ee05007fb4eb9b884fc6c9e4749fad0f9e7.scope: Deactivated successfully.
Nov 29 03:29:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:29:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:56.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:29:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:56.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.323829905 +0000 UTC m=+0.041956828 container create 5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:29:56 np0005539563 systemd[1]: Started libpod-conmon-5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629.scope.
Nov 29 03:29:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.307178263 +0000 UTC m=+0.025305216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.411839819 +0000 UTC m=+0.129966782 container init 5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.417128812 +0000 UTC m=+0.135255735 container start 5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gagarin, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.420043511 +0000 UTC m=+0.138170524 container attach 5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:29:56 np0005539563 jovial_gagarin[355529]: 167 167
Nov 29 03:29:56 np0005539563 systemd[1]: libpod-5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629.scope: Deactivated successfully.
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.421643195 +0000 UTC m=+0.139770118 container died 5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:29:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8e25600d861286f9b60f46b90dc174b917029d5f6f4d5ebc0e8709069b9801e2-merged.mount: Deactivated successfully.
Nov 29 03:29:56 np0005539563 podman[355513]: 2025-11-29 08:29:56.468998297 +0000 UTC m=+0.187125220 container remove 5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:29:56 np0005539563 systemd[1]: libpod-conmon-5e51abb21aded6d36696acfc1c4fab8bc50a21377201ad65ddc24ec8184f6629.scope: Deactivated successfully.
Nov 29 03:29:56 np0005539563 podman[355554]: 2025-11-29 08:29:56.63740168 +0000 UTC m=+0.035776851 container create ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:29:56 np0005539563 systemd[1]: Started libpod-conmon-ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a.scope.
Nov 29 03:29:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 873 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 285 KiB/s wr, 95 op/s
Nov 29 03:29:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9864b754844b4559a94c86e2cc90691efe1c68cdd3c75e92c2de00eeaac3dad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9864b754844b4559a94c86e2cc90691efe1c68cdd3c75e92c2de00eeaac3dad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9864b754844b4559a94c86e2cc90691efe1c68cdd3c75e92c2de00eeaac3dad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9864b754844b4559a94c86e2cc90691efe1c68cdd3c75e92c2de00eeaac3dad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:56 np0005539563 podman[355554]: 2025-11-29 08:29:56.7060776 +0000 UTC m=+0.104452801 container init ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:29:56 np0005539563 podman[355554]: 2025-11-29 08:29:56.713117031 +0000 UTC m=+0.111492202 container start ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:29:56 np0005539563 podman[355554]: 2025-11-29 08:29:56.716565624 +0000 UTC m=+0.114940815 container attach ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:29:56 np0005539563 podman[355554]: 2025-11-29 08:29:56.622729252 +0000 UTC m=+0.021104433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.056 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.057 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.057 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.057 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.057 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.058 252257 INFO nova.compute.manager [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Terminating instance#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.059 252257 DEBUG nova.compute.manager [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:29:57 np0005539563 kernel: tape0c088b1-9b (unregistering): left promiscuous mode
Nov 29 03:29:57 np0005539563 NetworkManager[48981]: <info>  [1764404997.1276] device (tape0c088b1-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.137 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:57Z|00700|binding|INFO|Releasing lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e from this chassis (sb_readonly=0)
Nov 29 03:29:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:57Z|00701|binding|INFO|Setting lport e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e down in Southbound
Nov 29 03:29:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:29:57Z|00702|binding|INFO|Removing iface tape0c088b1-9b ovn-installed in OVS
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.139 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.144 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:cc:96 10.100.0.3'], port_security=['fa:16:3e:76:cc:96 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '78a00526-9c03-4c52-93a4-2275348b883a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.145 158990 INFO neutron.agent.ovn.metadata.agent [-] Port e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.147 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.162 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.165 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d9826a0b-2a6f-4fc9-a280-aae8050be155]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:57 np0005539563 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Nov 29 03:29:57 np0005539563 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a3.scope: Consumed 17.921s CPU time.
Nov 29 03:29:57 np0005539563 systemd-machined[213024]: Machine qemu-81-instance-000000a3 terminated.
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.198 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3a009b33-45ae-49ff-9e23-14b63310660d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.202 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[96193797-35be-40c6-a650-5e435cd3d65f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.231 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cea1f0ec-0f1f-4ed2-8648-c2ab7b008f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.249 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ddf250-89ec-47d1-a64f-84e2bc8cbd0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 868, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 868, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 19799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355588, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.273 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c8098ed5-9eb3-4252-b5fb-1c71f2040bea]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766793, 'tstamp': 766793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355589, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766796, 'tstamp': 766796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355589, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.275 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.276 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.280 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.284 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.284 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.285 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:57 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:29:57.285 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.300 252257 INFO nova.virt.libvirt.driver [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Instance destroyed successfully.#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.301 252257 DEBUG nova.objects.instance [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 78a00526-9c03-4c52-93a4-2275348b883a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.320 252257 DEBUG nova.virt.libvirt.vif [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=163,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:13Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-pp6jso0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35'
,image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:28:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=78a00526-9c03-4c52-93a4-2275348b883a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.321 252257 DEBUG nova.network.os_vif_util [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "address": "fa:16:3e:76:cc:96", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0c088b1-9b", "ovs_interfaceid": "e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.322 252257 DEBUG nova.network.os_vif_util [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.322 252257 DEBUG os_vif [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.324 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.324 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0c088b1-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.326 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.328 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.331 252257 INFO os_vif [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:cc:96,bridge_name='br-int',has_traffic_filtering=True,id=e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0c088b1-9b')#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.429 252257 DEBUG nova.compute.manager [req-1d768fdb-5e7f-459d-b74c-be42bf9a9f75 req-7c44d479-97d8-40c6-8c7d-e978398da70d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.429 252257 DEBUG oslo_concurrency.lockutils [req-1d768fdb-5e7f-459d-b74c-be42bf9a9f75 req-7c44d479-97d8-40c6-8c7d-e978398da70d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.429 252257 DEBUG oslo_concurrency.lockutils [req-1d768fdb-5e7f-459d-b74c-be42bf9a9f75 req-7c44d479-97d8-40c6-8c7d-e978398da70d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.429 252257 DEBUG oslo_concurrency.lockutils [req-1d768fdb-5e7f-459d-b74c-be42bf9a9f75 req-7c44d479-97d8-40c6-8c7d-e978398da70d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.429 252257 DEBUG nova.compute.manager [req-1d768fdb-5e7f-459d-b74c-be42bf9a9f75 req-7c44d479-97d8-40c6-8c7d-e978398da70d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.430 252257 DEBUG nova.compute.manager [req-1d768fdb-5e7f-459d-b74c-be42bf9a9f75 req-7c44d479-97d8-40c6-8c7d-e978398da70d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-unplugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]: {
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:    "0": [
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:        {
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "devices": [
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "/dev/loop3"
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            ],
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "lv_name": "ceph_lv0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "lv_size": "7511998464",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "name": "ceph_lv0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "tags": {
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.cluster_name": "ceph",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.crush_device_class": "",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.encrypted": "0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.osd_id": "0",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.type": "block",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:                "ceph.vdo": "0"
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            },
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "type": "block",
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:            "vg_name": "ceph_vg0"
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:        }
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]:    ]
Nov 29 03:29:57 np0005539563 affectionate_joliot[355571]: }
Nov 29 03:29:57 np0005539563 systemd[1]: libpod-ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a.scope: Deactivated successfully.
Nov 29 03:29:57 np0005539563 podman[355554]: 2025-11-29 08:29:57.486521534 +0000 UTC m=+0.884896705 container died ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:29:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b9864b754844b4559a94c86e2cc90691efe1c68cdd3c75e92c2de00eeaac3dad-merged.mount: Deactivated successfully.
Nov 29 03:29:57 np0005539563 podman[355554]: 2025-11-29 08:29:57.537497875 +0000 UTC m=+0.935873046 container remove ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:29:57 np0005539563 systemd[1]: libpod-conmon-ce419ee30710e0cee4f72b20ea60ed4746db78120ca4ceadda06f163d468629a.scope: Deactivated successfully.
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.768 252257 INFO nova.virt.libvirt.driver [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Deleting instance files /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a_del#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.770 252257 INFO nova.virt.libvirt.driver [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Deletion of /var/lib/nova/instances/78a00526-9c03-4c52-93a4-2275348b883a_del complete#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.776 252257 DEBUG oslo_concurrency.lockutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.776 252257 DEBUG oslo_concurrency.lockutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.805 252257 DEBUG nova.objects.instance [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.832 252257 INFO nova.compute.manager [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.832 252257 DEBUG oslo.service.loopingcall [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.832 252257 DEBUG nova.compute.manager [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.833 252257 DEBUG nova.network.neutron [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.841 252257 DEBUG oslo_concurrency.lockutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:57 np0005539563 nova_compute[252253]: 2025-11-29 08:29:57.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.110151888 +0000 UTC m=+0.034932006 container create 281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcclintock, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:29:58 np0005539563 systemd[1]: Started libpod-conmon-281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30.scope.
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.142 252257 DEBUG oslo_concurrency.lockutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.143 252257 DEBUG oslo_concurrency.lockutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.143 252257 INFO nova.compute.manager [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attaching volume ab4cd0c8-adcb-470a-9b1a-e227da2d6280 to /dev/vdb#033[00m
Nov 29 03:29:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.178436569 +0000 UTC m=+0.103216707 container init 281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.184923374 +0000 UTC m=+0.109703492 container start 281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcclintock, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.188126651 +0000 UTC m=+0.112906779 container attach 281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:29:58 np0005539563 condescending_mcclintock[355799]: 167 167
Nov 29 03:29:58 np0005539563 systemd[1]: libpod-281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30.scope: Deactivated successfully.
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.190578948 +0000 UTC m=+0.115359066 container died 281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.094217657 +0000 UTC m=+0.018997795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e55d82d57a512e6a91593043048d0dd3334e99c6678772902f11b0b796d90b06-merged.mount: Deactivated successfully.
Nov 29 03:29:58 np0005539563 podman[355782]: 2025-11-29 08:29:58.22240486 +0000 UTC m=+0.147184978 container remove 281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:29:58 np0005539563 systemd[1]: libpod-conmon-281d9017112decbf84a6f8f85a3978fec3e2674f67dea7b0f6d087764b482e30.scope: Deactivated successfully.
Nov 29 03:29:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:29:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.301 252257 DEBUG os_brick.utils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.303 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.314 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.315 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[cb87c600-fbd3-48bc-8f48-0b8d67461f81]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.316 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.323 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.324 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[907df7d5-ffe5-4e6d-8b56-2804c952086c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.325 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.333 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.333 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee3f2ef-b424-46cf-a6c4-98f2e64fb7ec]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.334 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[771cc979-08ba-4f81-a373-901bcd0adcda]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.335 252257 DEBUG oslo_concurrency.processutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.369 252257 DEBUG oslo_concurrency.processutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.371 252257 DEBUG os_brick.initiator.connectors.lightos [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.371 252257 DEBUG os_brick.initiator.connectors.lightos [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.371 252257 DEBUG os_brick.initiator.connectors.lightos [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.372 252257 DEBUG os_brick.utils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.372 252257 DEBUG nova.virt.block_device [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating existing volume attachment record: 855aa72f-32e8-47ca-ba06-7cf6d25d0d65 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:29:58 np0005539563 podman[355830]: 2025-11-29 08:29:58.422971484 +0000 UTC m=+0.039642415 container create a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:29:58 np0005539563 systemd[1]: Started libpod-conmon-a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6.scope.
Nov 29 03:29:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:29:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2eecb062c746d44a4e47b3c1f2abb1eee372ef747a944cbf8a2269f8d27c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2eecb062c746d44a4e47b3c1f2abb1eee372ef747a944cbf8a2269f8d27c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2eecb062c746d44a4e47b3c1f2abb1eee372ef747a944cbf8a2269f8d27c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b2eecb062c746d44a4e47b3c1f2abb1eee372ef747a944cbf8a2269f8d27c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:58 np0005539563 podman[355830]: 2025-11-29 08:29:58.403716422 +0000 UTC m=+0.020387363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:58 np0005539563 podman[355830]: 2025-11-29 08:29:58.500371691 +0000 UTC m=+0.117042632 container init a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:29:58 np0005539563 podman[355830]: 2025-11-29 08:29:58.506590789 +0000 UTC m=+0.123261710 container start a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:29:58 np0005539563 podman[355830]: 2025-11-29 08:29:58.509620801 +0000 UTC m=+0.126291722 container attach a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:29:58 np0005539563 nova_compute[252253]: 2025-11-29 08:29:58.523 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 808 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 36 KiB/s wr, 129 op/s
Nov 29 03:29:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Nov 29 03:29:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 03:29:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.222 252257 DEBUG nova.network.neutron [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.276 252257 INFO nova.compute.manager [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Took 1.44 seconds to deallocate network for instance.#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.286 252257 DEBUG nova.objects.instance [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.330 252257 DEBUG nova.virt.libvirt.driver [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attempting to attach volume ab4cd0c8-adcb-470a-9b1a-e227da2d6280 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]: {
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:        "osd_id": 0,
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:        "type": "bluestore"
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]:    }
Nov 29 03:29:59 np0005539563 friendly_haslett[355846]: }
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.334 252257 DEBUG nova.virt.libvirt.guest [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ab4cd0c8-adcb-470a-9b1a-e227da2d6280">
Nov 29 03:29:59 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:29:59 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:29:59 np0005539563 nova_compute[252253]:  <serial>ab4cd0c8-adcb-470a-9b1a-e227da2d6280</serial>
Nov 29 03:29:59 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:29:59 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:29:59 np0005539563 systemd[1]: libpod-a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6.scope: Deactivated successfully.
Nov 29 03:29:59 np0005539563 podman[355830]: 2025-11-29 08:29:59.362420124 +0000 UTC m=+0.979091045 container died a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.371 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.372 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f3b2eecb062c746d44a4e47b3c1f2abb1eee372ef747a944cbf8a2269f8d27c7-merged.mount: Deactivated successfully.
Nov 29 03:29:59 np0005539563 podman[355830]: 2025-11-29 08:29:59.4109586 +0000 UTC m=+1.027629511 container remove a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:29:59 np0005539563 systemd[1]: libpod-conmon-a79c0d233b456914f7db3500d3a332c5b656f4f9b210751a0c79f8a5238cb0b6.scope: Deactivated successfully.
Nov 29 03:29:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.460 252257 DEBUG nova.virt.libvirt.driver [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.460 252257 DEBUG nova.virt.libvirt.driver [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.461 252257 DEBUG nova.virt.libvirt.driver [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.461 252257 DEBUG nova.virt.libvirt.driver [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:80:14:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.535 252257 DEBUG nova.compute.manager [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.536 252257 DEBUG oslo_concurrency.lockutils [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "78a00526-9c03-4c52-93a4-2275348b883a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.536 252257 DEBUG oslo_concurrency.lockutils [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.536 252257 DEBUG oslo_concurrency.lockutils [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.536 252257 DEBUG nova.compute.manager [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] No waiting events found dispatching network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.537 252257 WARNING nova.compute.manager [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received unexpected event network-vif-plugged-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.537 252257 DEBUG nova.compute.manager [req-2e57b82c-f05f-44a0-8762-98acc9eecd74 req-c4f6cb18-8efe-4b75-932a-97087d34861a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Received event network-vif-deleted-e0c088b1-9b3a-42b2-9a05-ef5c13d7b45e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.547 252257 DEBUG oslo_concurrency.processutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.642 252257 DEBUG oslo_concurrency.lockutils [None req-a8dc97d4-62f7-4358-94d3-64a48bafce87 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756510675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.990 252257 DEBUG oslo_concurrency.processutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:59 np0005539563 nova_compute[252253]: 2025-11-29 08:29:59.996 252257 DEBUG nova.compute.provider_tree [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.013 252257 DEBUG nova.scheduler.client.report [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.042 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.076 252257 INFO nova.scheduler.client.report [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocations for instance 78a00526-9c03-4c52-93a4-2275348b883a#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.177 252257 DEBUG oslo_concurrency.lockutils [None req-302b7135-47be-4519-8f97-3a203ade4810 b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "78a00526-9c03-4c52-93a4-2275348b883a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:00.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:30:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:30:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:30:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b99202b2-8700-43b6-bfca-446ec8d23a3d does not exist
Nov 29 03:30:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3da5c7de-17bc-4c89-a06d-e8206313fe98 does not exist
Nov 29 03:30:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7c4d1149-3254-4700-8467-a8ad19fa14f6 does not exist
Nov 29 03:30:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 42 KiB/s wr, 150 op/s
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.798 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.799 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.799 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.799 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.799 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.802 252257 INFO nova.compute.manager [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Terminating instance#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.803 252257 DEBUG nova.compute.manager [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:00 np0005539563 kernel: tapf095bbfd-d9 (unregistering): left promiscuous mode
Nov 29 03:30:00 np0005539563 NetworkManager[48981]: <info>  [1764405000.8629] device (tapf095bbfd-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.875 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:00Z|00703|binding|INFO|Releasing lport f095bbfd-d901-4dd4-8831-72dab1104494 from this chassis (sb_readonly=0)
Nov 29 03:30:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:00Z|00704|binding|INFO|Setting lport f095bbfd-d901-4dd4-8831-72dab1104494 down in Southbound
Nov 29 03:30:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:00Z|00705|binding|INFO|Removing iface tapf095bbfd-d9 ovn-installed in OVS
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.878 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.888 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:13:85 10.100.0.12'], port_security=['fa:16:3e:7b:13:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '554ea6a4-8de1-41bf-8772-b15e95a7fd05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f095bbfd-d901-4dd4-8831-72dab1104494) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.890 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f095bbfd-d901-4dd4-8831-72dab1104494 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:30:00 np0005539563 nova_compute[252253]: 2025-11-29 08:30:00.891 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.892 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.911 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c58b70aa-02bd-4a13-a3af-a3ca9de41378]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:00 np0005539563 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Nov 29 03:30:00 np0005539563 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a2.scope: Consumed 19.906s CPU time.
Nov 29 03:30:00 np0005539563 systemd-machined[213024]: Machine qemu-80-instance-000000a2 terminated.
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.943 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c0325d9a-5dcd-47d2-b80c-8f298969eec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.946 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f70f2712-35b3-460c-b9c1-5f1f8664b48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.975 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe3009e-f1bb-43ae-bc0a-b4e5d530ed4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:00.991 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc23c28-f10d-431f-8f37-c4dc06971f7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 868, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 13, 'rx_bytes': 868, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 19799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355986, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:01.005 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f9878b-5972-416c-9352-c4023e67dabe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766793, 'tstamp': 766793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355987, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766796, 'tstamp': 766796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355987, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:01.007 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.012 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:01.013 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:01.014 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:01.014 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:01.014 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.042 252257 INFO nova.virt.libvirt.driver [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Instance destroyed successfully.#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.043 252257 DEBUG nova.objects.instance [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 554ea6a4-8de1-41bf-8772-b15e95a7fd05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.065 252257 DEBUG nova.virt.libvirt.vif [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=162,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-6fx7zmqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35'
,image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=554ea6a4-8de1-41bf-8772-b15e95a7fd05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.066 252257 DEBUG nova.network.os_vif_util [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "f095bbfd-d901-4dd4-8831-72dab1104494", "address": "fa:16:3e:7b:13:85", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf095bbfd-d9", "ovs_interfaceid": "f095bbfd-d901-4dd4-8831-72dab1104494", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.066 252257 DEBUG nova.network.os_vif_util [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.067 252257 DEBUG os_vif [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.068 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.069 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf095bbfd-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.071 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.074 252257 INFO os_vif [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:13:85,bridge_name='br-int',has_traffic_filtering=True,id=f095bbfd-d901-4dd4-8831-72dab1104494,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf095bbfd-d9')#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.308 252257 DEBUG oslo_concurrency.lockutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.308 252257 DEBUG oslo_concurrency.lockutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.329 252257 DEBUG nova.objects.instance [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.405 252257 DEBUG oslo_concurrency.lockutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:01 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:30:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:30:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.482 252257 INFO nova.virt.libvirt.driver [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Deleting instance files /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05_del#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.483 252257 INFO nova.virt.libvirt.driver [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Deletion of /var/lib/nova/instances/554ea6a4-8de1-41bf-8772-b15e95a7fd05_del complete#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.531 252257 INFO nova.compute.manager [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.531 252257 DEBUG oslo.service.loopingcall [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.531 252257 DEBUG nova.compute.manager [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.532 252257 DEBUG nova.network.neutron [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.676 252257 DEBUG oslo_concurrency.lockutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.677 252257 DEBUG oslo_concurrency.lockutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.677 252257 INFO nova.compute.manager [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attaching volume 882d54a9-fd7e-41ac-9421-07a1ee6da4e8 to /dev/vdc#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.838 252257 DEBUG os_brick.utils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.839 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.852 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.853 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6d1262-d300-45da-9d50-8da8c5547a14]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.855 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.864 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.865 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ce9394-d296-4b65-8328-8e152fc3eb55]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.867 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.875 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.876 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[911c1ddf-4bac-48ba-bda1-46c751ca4571]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.878 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[5b958c74-1606-4214-bfaa-ec4ea393ae16]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.878 252257 DEBUG oslo_concurrency.processutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.917 252257 DEBUG oslo_concurrency.processutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.922 252257 DEBUG os_brick.initiator.connectors.lightos [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.922 252257 DEBUG os_brick.initiator.connectors.lightos [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.922 252257 DEBUG os_brick.initiator.connectors.lightos [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.923 252257 DEBUG os_brick.utils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:30:01 np0005539563 nova_compute[252253]: 2025-11-29 08:30:01.923 252257 DEBUG nova.virt.block_device [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating existing volume attachment record: 955e5743-c138-41ae-9be8-124a391ca9f3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.062 252257 DEBUG nova.network.neutron [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.113 252257 INFO nova.compute.manager [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Took 0.58 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.153 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.153 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.174 252257 DEBUG nova.compute.manager [req-1b03cac3-1c31-4f57-9583-8a3772ef10c4 req-6ca5cf9f-b006-42d5-95af-dd743f4b6706 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Received event network-vif-deleted-f095bbfd-d901-4dd4-8831-72dab1104494 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:02.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.252 252257 DEBUG oslo_concurrency.processutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.440 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904235100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.671 252257 DEBUG oslo_concurrency.processutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 34 KiB/s wr, 138 op/s
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.679 252257 DEBUG nova.compute.provider_tree [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.706 252257 DEBUG nova.scheduler.client.report [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.723 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.750 252257 INFO nova.scheduler.client.report [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocations for instance 554ea6a4-8de1-41bf-8772-b15e95a7fd05
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.766 252257 DEBUG nova.objects.instance [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.800 252257 DEBUG nova.virt.libvirt.driver [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attempting to attach volume 882d54a9-fd7e-41ac-9421-07a1ee6da4e8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
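Placement turns the inventory dict in the line above into schedulable capacity as `(total - reserved) * allocation_ratio` per resource class. A minimal sketch using the values logged for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd (the extra `min_unit`/`max_unit`/`step_size` fields are omitted here):

```python
# Inventory as reported by the resource tracker, reduced to the fields
# that determine capacity.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    """Schedulable capacity per resource class:
    (total - reserved) * allocation_ratio."""
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

caps = capacity(inventory)  # 8 physical VCPUs oversubscribed 4x -> 32
```

So this 8-vCPU, 7.5 GiB guest host advertises 32 vCPUs, 7168 MiB of RAM, and roughly 17 GiB of disk to the scheduler.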
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.802 252257 DEBUG nova.virt.libvirt.guest [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-882d54a9-fd7e-41ac-9421-07a1ee6da4e8">
Nov 29 03:30:02 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:30:02 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:30:02 np0005539563 nova_compute[252253]:  <serial>882d54a9-fd7e-41ac-9421-07a1ee6da4e8</serial>
Nov 29 03:30:02 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:30:02 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
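The multi-line `attach device xml` block above is a standard libvirt network-disk definition: an RBD source with three Ceph monitor endpoints, cephx auth via a libvirt secret, and a virtio target. An illustrative parse of the interesting fields (this helper is not part of nova):

```python
import xml.etree.ElementTree as ET

# The disk XML logged above, minus the <auth>/<serial> elements,
# reproduced as a literal for parsing.
disk_xml = """\
<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-882d54a9-fd7e-41ac-9421-07a1ee6da4e8">
    <host name="192.168.122.100" port="6789"/>
    <host name="192.168.122.102" port="6789"/>
    <host name="192.168.122.101" port="6789"/>
  </source>
  <target dev="vdc" bus="virtio"/>
</disk>
"""

root = ET.fromstring(disk_xml)
source = root.find("source")
image = source.get("name")                       # RBD pool/image
monitors = ["%s:%s" % (h.get("name"), h.get("port"))
            for h in source.findall("host")]     # Ceph monitor endpoints
target = root.find("target").get("dev")          # guest-visible device
```

Note `discard="unmap"` in the driver element alongside the earlier warning: with `target_bus = virtio` on this machine type, nova logs that trim commands will not actually reach the storage device.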
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.813 252257 DEBUG oslo_concurrency.lockutils [None req-5415eb8a-edf9-42e6-8e57-ae7e8721d38a b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "554ea6a4-8de1-41bf-8772-b15e95a7fd05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.946 252257 DEBUG nova.virt.libvirt.driver [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.947 252257 DEBUG nova.virt.libvirt.driver [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.948 252257 DEBUG nova.virt.libvirt.driver [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.948 252257 DEBUG nova.virt.libvirt.driver [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:30:02 np0005539563 nova_compute[252253]: 2025-11-29 08:30:02.949 252257 DEBUG nova.virt.libvirt.driver [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] No VIF found with MAC fa:16:3e:80:14:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:30:03 np0005539563 nova_compute[252253]: 2025-11-29 08:30:03.395 252257 DEBUG oslo_concurrency.lockutils [None req-444357a0-44eb-4998-b3fd-2bb937efd5bf d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:03 np0005539563 nova_compute[252253]: 2025-11-29 08:30:03.526 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:03 np0005539563 nova_compute[252253]: 2025-11-29 08:30:03.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:03 np0005539563 nova_compute[252253]: 2025-11-29 08:30:03.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:03 np0005539563 nova_compute[252253]: 2025-11-29 08:30:03.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:03 np0005539563 nova_compute[252253]: 2025-11-29 08:30:03.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:30:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:30:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851973504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:30:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:30:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851973504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:30:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.629 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.630 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.630 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.630 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.631 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.632 252257 INFO nova.compute.manager [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Terminating instance
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.633 252257 DEBUG nova.compute.manager [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:30:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 729 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 30 KiB/s wr, 146 op/s
Nov 29 03:30:04 np0005539563 kernel: tap74a0b6a5-7a (unregistering): left promiscuous mode
Nov 29 03:30:04 np0005539563 NetworkManager[48981]: <info>  [1764405004.6994] device (tap74a0b6a5-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:04Z|00706|binding|INFO|Releasing lport 74a0b6a5-7ae5-44ef-a159-4a87de6da113 from this chassis (sb_readonly=0)
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.711 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:04Z|00707|binding|INFO|Setting lport 74a0b6a5-7ae5-44ef-a159-4a87de6da113 down in Southbound
Nov 29 03:30:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:04Z|00708|binding|INFO|Removing iface tap74a0b6a5-7a ovn-installed in OVS
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.714 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.719 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:bf:db 10.100.0.9'], port_security=['fa:16:3e:e6:bf:db 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'df3ef43d-e67b-4d7f-8603-5cf61569ae1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=74a0b6a5-7ae5-44ef-a159-4a87de6da113) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.720 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 74a0b6a5-7ae5-44ef-a159-4a87de6da113 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.722 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abbc8daa-d665-4e2f-bf74-9e57db481441
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.729 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.739 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0076bf-ee35-4cd2-ae23-54977eea2aaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:04 np0005539563 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Nov 29 03:30:04 np0005539563 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009f.scope: Consumed 25.225s CPU time.
Nov 29 03:30:04 np0005539563 systemd-machined[213024]: Machine qemu-75-instance-0000009f terminated.
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.769 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3a9761-9b1c-4fc3-9525-edbf784d8550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.772 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb140a7-6cab-4ac6-be82-fd1845787bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.800 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[db634ac4-8b63-46d0-8996-ec009794d127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.815 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a23473-846e-41cd-ab06-f88e5a241ce5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabbc8daa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:89:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766783, 'reachable_time': 19799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356130, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.827 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[acb8e5c5-4995-4016-b2a1-278ac6b3b039]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766793, 'tstamp': 766793}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356131, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabbc8daa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766796, 'tstamp': 766796}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356131, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.828 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.830 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.833 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.834 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabbc8daa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.834 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.834 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabbc8daa-d0, col_values=(('external_ids', {'iface-id': 'fb65e0fb-a778-4ace-a666-dfdbc516af09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.835 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.848 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.853 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.863 252257 INFO nova.virt.libvirt.driver [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Instance destroyed successfully.
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.864 252257 DEBUG nova.objects.instance [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid df3ef43d-e67b-4d7f-8603-5cf61569ae1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.877 252257 DEBUG nova.virt.libvirt.vif [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=159,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-0pvktber',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35'
,image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=df3ef43d-e67b-4d7f-8603-5cf61569ae1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.878 252257 DEBUG nova.network.os_vif_util [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "address": "fa:16:3e:e6:bf:db", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a0b6a5-7a", "ovs_interfaceid": "74a0b6a5-7ae5-44ef-a159-4a87de6da113", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.878 252257 DEBUG nova.network.os_vif_util [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.879 252257 DEBUG os_vif [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.880 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.880 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a0b6a5-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.884 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:04 np0005539563 nova_compute[252253]: 2025-11-29 08:30:04.886 252257 INFO os_vif [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:bf:db,bridge_name='br-int',has_traffic_filtering=True,id=74a0b6a5-7ae5-44ef-a159-4a87de6da113,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a0b6a5-7a')#033[00m
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.934 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.935 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:04.935 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.020 252257 DEBUG nova.compute.manager [req-67f9e9f3-3310-4a3f-ad14-302f6a383ab8 req-55121882-2476-49c4-878b-93ce052e1349 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-vif-unplugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.020 252257 DEBUG oslo_concurrency.lockutils [req-67f9e9f3-3310-4a3f-ad14-302f6a383ab8 req-55121882-2476-49c4-878b-93ce052e1349 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.020 252257 DEBUG oslo_concurrency.lockutils [req-67f9e9f3-3310-4a3f-ad14-302f6a383ab8 req-55121882-2476-49c4-878b-93ce052e1349 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.020 252257 DEBUG oslo_concurrency.lockutils [req-67f9e9f3-3310-4a3f-ad14-302f6a383ab8 req-55121882-2476-49c4-878b-93ce052e1349 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.021 252257 DEBUG nova.compute.manager [req-67f9e9f3-3310-4a3f-ad14-302f6a383ab8 req-55121882-2476-49c4-878b-93ce052e1349 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] No waiting events found dispatching network-vif-unplugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.021 252257 DEBUG nova.compute.manager [req-67f9e9f3-3310-4a3f-ad14-302f6a383ab8 req-55121882-2476-49c4-878b-93ce052e1349 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-vif-unplugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.360 252257 INFO nova.virt.libvirt.driver [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Deleting instance files /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f_del#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.361 252257 INFO nova.virt.libvirt.driver [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Deletion of /var/lib/nova/instances/df3ef43d-e67b-4d7f-8603-5cf61569ae1f_del complete#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.440 252257 INFO nova.compute.manager [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.440 252257 DEBUG oslo.service.loopingcall [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.441 252257 DEBUG nova.compute.manager [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:05 np0005539563 nova_compute[252253]: 2025-11-29 08:30:05.441 252257 DEBUG nova.network.neutron [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.276 252257 DEBUG nova.compute.manager [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.277 252257 DEBUG nova.compute.manager [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.277 252257 DEBUG oslo_concurrency.lockutils [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.278 252257 DEBUG oslo_concurrency.lockutils [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.278 252257 DEBUG nova.network.neutron [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.527 252257 DEBUG nova.network.neutron [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.545 252257 INFO nova.compute.manager [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Took 1.10 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.594 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.594 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 668 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 17 KiB/s wr, 130 op/s
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.692 252257 DEBUG oslo_concurrency.processutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:06 np0005539563 nova_compute[252253]: 2025-11-29 08:30:06.733 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.148 252257 DEBUG nova.compute.manager [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.148 252257 DEBUG oslo_concurrency.lockutils [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.149 252257 DEBUG oslo_concurrency.lockutils [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.149 252257 DEBUG oslo_concurrency.lockutils [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.149 252257 DEBUG nova.compute.manager [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] No waiting events found dispatching network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.149 252257 WARNING nova.compute.manager [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received unexpected event network-vif-plugged-74a0b6a5-7ae5-44ef-a159-4a87de6da113 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.149 252257 DEBUG nova.compute.manager [req-3e40701f-39e1-4d88-974a-daed66a940cc req-564a3a38-0ba2-4816-aca5-04e5bd145e05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Received event network-vif-deleted-74a0b6a5-7ae5-44ef-a159-4a87de6da113 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915833778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.199 252257 DEBUG oslo_concurrency.processutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.206 252257 DEBUG nova.compute.provider_tree [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.232 252257 DEBUG nova.scheduler.client.report [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.255 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.257 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.257 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.257 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.258 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.326 252257 INFO nova.scheduler.client.report [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocations for instance df3ef43d-e67b-4d7f-8603-5cf61569ae1f#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.473 252257 DEBUG oslo_concurrency.lockutils [None req-67734ab2-e7e0-4b7f-b1a9-c098d06b30ef b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "df3ef43d-e67b-4d7f-8603-5cf61569ae1f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709335468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.737 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.839 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.839 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.843 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.843 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.843 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.844 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.871 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.872 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.872 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.872 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.872 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.874 252257 INFO nova.compute.manager [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Terminating instance#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.874 252257 DEBUG nova.compute.manager [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:07 np0005539563 kernel: tap53d86447-39 (unregistering): left promiscuous mode
Nov 29 03:30:07 np0005539563 NetworkManager[48981]: <info>  [1764405007.9360] device (tap53d86447-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:07Z|00709|binding|INFO|Releasing lport 53d86447-39c2-4624-8083-b6dc36b78b15 from this chassis (sb_readonly=0)
Nov 29 03:30:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:07Z|00710|binding|INFO|Setting lport 53d86447-39c2-4624-8083-b6dc36b78b15 down in Southbound
Nov 29 03:30:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:07Z|00711|binding|INFO|Removing iface tap53d86447-39 ovn-installed in OVS
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.950 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:07.954 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:ee:77 10.100.0.4'], port_security=['fa:16:3e:e5:ee:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5a603f26-2b4a-4025-8cc2-a31c8c89e652', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abbc8daa-d665-4e2f-bf74-9e57db481441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23450c2eaf4442459dec94c6d29f0412', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e9e03ca-34d5-466f-8e26-e073c35a802c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e85a088-d5fe-4b38-8043-a9acee66ccb5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=53d86447-39c2-4624-8083-b6dc36b78b15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:07.955 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 53d86447-39c2-4624-8083-b6dc36b78b15 in datapath abbc8daa-d665-4e2f-bf74-9e57db481441 unbound from our chassis#033[00m
Nov 29 03:30:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:07.960 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abbc8daa-d665-4e2f-bf74-9e57db481441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.960 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:07.962 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[903a6272-9c84-4403-a1f1-000f04e7800c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:07.963 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 namespace which is not needed anymore#033[00m
Nov 29 03:30:07 np0005539563 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.981 252257 DEBUG nova.compute.manager [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.981 252257 DEBUG nova.compute.manager [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:07 np0005539563 nova_compute[252253]: 2025-11-29 08:30:07.982 252257 DEBUG oslo_concurrency.lockutils [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:07 np0005539563 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009e.scope: Consumed 27.421s CPU time.
Nov 29 03:30:07 np0005539563 systemd-machined[213024]: Machine qemu-74-instance-0000009e terminated.
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.104 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:08 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [NOTICE]   (346837) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:08 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [NOTICE]   (346837) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:08 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [WARNING]  (346837) : Exiting Master process...
Nov 29 03:30:08 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [ALERT]    (346837) : Current worker (346839) exited with code 143 (Terminated)
Nov 29 03:30:08 np0005539563 neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441[346833]: [WARNING]  (346837) : All workers exited. Exiting... (0)
Nov 29 03:30:08 np0005539563 systemd[1]: libpod-b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61.scope: Deactivated successfully.
Nov 29 03:30:08 np0005539563 podman[356233]: 2025-11-29 08:30:08.123301404 +0000 UTC m=+0.052777821 container died b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.125 252257 INFO nova.virt.libvirt.driver [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Instance destroyed successfully.#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.125 252257 DEBUG nova.objects.instance [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lazy-loading 'resources' on Instance uuid 5a603f26-2b4a-4025-8cc2-a31c8c89e652 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.140 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.141 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3704MB free_disk=20.868736267089844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.142 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.142 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c82f6089bcbb3562f9805a2602f6b7d55ce55ced36b47cd2190e416d9a522e54-merged.mount: Deactivated successfully.
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.157 252257 DEBUG nova.virt.libvirt.vif [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=158,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC2U26ZMSkcfI5DQFnyWxH+S3YaQ8SAmf1n52XjS1tNMntu8AbhepbwcUWS7Z4/uA3A5Bve+j7ia9a5dnEqoCJZvLZo58KXp6UbvJn0ceeh5z06l1tL3ON8Wl2km+sS1vg==',key_name='tempest-keypair-1391434303',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23450c2eaf4442459dec94c6d29f0412',ramdisk_id='',reservation_id='r-1bf2j4th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35'
,image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1454477111',owner_user_name='tempest-AttachVolumeMultiAttachTest-1454477111-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4f4d28745dd46e586642c84c051db39',uuid=5a603f26-2b4a-4025-8cc2-a31c8c89e652,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.158 252257 DEBUG nova.network.os_vif_util [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converting VIF {"id": "53d86447-39c2-4624-8083-b6dc36b78b15", "address": "fa:16:3e:e5:ee:77", "network": {"id": "abbc8daa-d665-4e2f-bf74-9e57db481441", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1822769447-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23450c2eaf4442459dec94c6d29f0412", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap53d86447-39", "ovs_interfaceid": "53d86447-39c2-4624-8083-b6dc36b78b15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.159 252257 DEBUG nova.network.os_vif_util [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.160 252257 DEBUG os_vif [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:08 np0005539563 podman[356233]: 2025-11-29 08:30:08.162243139 +0000 UTC m=+0.091719536 container cleanup b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.162 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.163 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53d86447-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.173 252257 INFO os_vif [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:ee:77,bridge_name='br-int',has_traffic_filtering=True,id=53d86447-39c2-4624-8083-b6dc36b78b15,network=Network(abbc8daa-d665-4e2f-bf74-9e57db481441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap53d86447-39')#033[00m
Nov 29 03:30:08 np0005539563 systemd[1]: libpod-conmon-b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61.scope: Deactivated successfully.
Nov 29 03:30:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:08 np0005539563 podman[356271]: 2025-11-29 08:30:08.239711867 +0000 UTC m=+0.052066111 container remove b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.247 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a8673988-ffd0-497d-aa05-2030314e33e1]: (4, ('Sat Nov 29 08:30:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61)\nb4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61\nSat Nov 29 08:30:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 (b4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61)\nb4be91e873830f5aa28915cc2946a74a8fa18369c78fa07559de486156982d61\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.247 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.248 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance e412f1ba-217f-4c10-b176-528b2ef6ed0e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.249 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.249 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.250 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[96ba80fb-5b63-44c7-b0e6-e2b528ca1e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.251 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabbc8daa-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.254 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:08 np0005539563 kernel: tapabbc8daa-d0: left promiscuous mode
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.274 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.277 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[df1d7157-904b-4637-ab9a-d3ebc03b2f14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:08.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.295 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[847d4464-3d3a-447f-9055-d935e9e57749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.296 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3491874c-a133-44c0-9b17-f185a3c1f2cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.299 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.313 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c873cf0f-2993-40be-9646-08da0644870e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766774, 'reachable_time': 42142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356302, 'error': None, 'target': 'ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 systemd[1]: run-netns-ovnmeta\x2dabbc8daa\x2dd665\x2d4e2f\x2dbf74\x2d9e57db481441.mount: Deactivated successfully.
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.316 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abbc8daa-d665-4e2f-bf74-9e57db481441 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:08.316 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[83ae2031-4b2a-47f2-b905-90135d33b48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.400 252257 DEBUG nova.network.neutron [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated VIF entry in instance network info cache for port fe638793-a58c-45c7-af31-561a212a980a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.405 252257 DEBUG nova.network.neutron [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.441 252257 DEBUG oslo_concurrency.lockutils [req-ce299ac5-d7e5-4204-b9a6-7ace1a89504f req-34b971df-8a8d-4dd1-884b-27acb2ed9f2c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.444 252257 DEBUG oslo_concurrency.lockutils [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.448 252257 DEBUG nova.network.neutron [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.528 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.677 252257 INFO nova.virt.libvirt.driver [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Deleting instance files /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652_del
Nov 29 03:30:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 589 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 82 KiB/s rd, 26 KiB/s wr, 114 op/s
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.679 252257 INFO nova.virt.libvirt.driver [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Deletion of /var/lib/nova/instances/5a603f26-2b4a-4025-8cc2-a31c8c89e652_del complete
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.742 252257 INFO nova.compute.manager [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.743 252257 DEBUG oslo.service.loopingcall [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.744 252257 DEBUG nova.compute.manager [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.745 252257 DEBUG nova.network.neutron [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:30:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072380682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.769 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.779 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.809 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.835 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:30:08 np0005539563 nova_compute[252253]: 2025-11-29 08:30:08.835 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.661 252257 DEBUG nova.network.neutron [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.689 252257 INFO nova.compute.manager [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Took 0.94 seconds to deallocate network for instance.
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.736 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.737 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.742 252257 DEBUG nova.compute.manager [req-2068ebb6-804c-486f-a1e1-c1dc50d36b2a req-41ca7d97-73c2-460f-8b8e-8618a10eb06d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-vif-deleted-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.829 252257 DEBUG oslo_concurrency.processutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.976 252257 DEBUG nova.network.neutron [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated VIF entry in instance network info cache for port fe638793-a58c-45c7-af31-561a212a980a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:30:09 np0005539563 nova_compute[252253]: 2025-11-29 08:30:09.978 252257 DEBUG nova.network.neutron [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.002 252257 DEBUG oslo_concurrency.lockutils [req-220a4057-cc50-443e-8590-a20dddd6a745 req-d2c2be21-a7bb-4c76-9303-b1a8fe1a3fec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.107 252257 DEBUG nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.107 252257 DEBUG nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.107 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.108 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.108 252257 DEBUG nova.network.neutron [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:30:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:10.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737361689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.323 252257 DEBUG oslo_concurrency.processutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.333 252257 DEBUG nova.compute.provider_tree [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.374 252257 DEBUG nova.scheduler.client.report [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.420 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.455 252257 INFO nova.scheduler.client.report [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Deleted allocations for instance 5a603f26-2b4a-4025-8cc2-a31c8c89e652
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.532 252257 DEBUG oslo_concurrency.lockutils [None req-04a9de0d-e81c-4a78-8a8b-31b4a5aeeecd b4f4d28745dd46e586642c84c051db39 23450c2eaf4442459dec94c6d29f0412 - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:10 np0005539563 nova_compute[252253]: 2025-11-29 08:30:10.625 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 553 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 74 KiB/s rd, 31 KiB/s wr, 103 op/s
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.634 252257 DEBUG nova.network.neutron [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated VIF entry in instance network info cache for port fe638793-a58c-45c7-af31-561a212a980a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.635 252257 DEBUG nova.network.neutron [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.666 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.666 252257 DEBUG nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-vif-unplugged-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.667 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.667 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.667 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.668 252257 DEBUG nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] No waiting events found dispatching network-vif-unplugged-53d86447-39c2-4624-8083-b6dc36b78b15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.668 252257 WARNING nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received unexpected event network-vif-unplugged-53d86447-39c2-4624-8083-b6dc36b78b15 for instance with vm_state deleted and task_state None.
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.668 252257 DEBUG nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.669 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.669 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.669 252257 DEBUG oslo_concurrency.lockutils [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "5a603f26-2b4a-4025-8cc2-a31c8c89e652-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.669 252257 DEBUG nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] No waiting events found dispatching network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.670 252257 WARNING nova.compute.manager [req-7b2fed52-0939-4cba-897b-2ba675bae488 req-2ae8efe1-ee7a-446b-a50d-d94a264c08ae 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Received unexpected event network-vif-plugged-53d86447-39c2-4624-8083-b6dc36b78b15 for instance with vm_state deleted and task_state None.
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.837 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.838 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.866 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.867 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.931 252257 DEBUG nova.compute.manager [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.931 252257 DEBUG nova.compute.manager [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.932 252257 DEBUG oslo_concurrency.lockutils [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.932 252257 DEBUG oslo_concurrency.lockutils [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:11 np0005539563 nova_compute[252253]: 2025-11-29 08:30:11.933 252257 DEBUG nova.network.neutron [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:30:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:12.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:12.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:12 np0005539563 nova_compute[252253]: 2025-11-29 08:30:12.298 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404997.2975755, 78a00526-9c03-4c52-93a4-2275348b883a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:12 np0005539563 nova_compute[252253]: 2025-11-29 08:30:12.299 252257 INFO nova.compute.manager [-] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] VM Stopped (Lifecycle Event)
Nov 29 03:30:12 np0005539563 nova_compute[252253]: 2025-11-29 08:30:12.355 252257 DEBUG nova.compute.manager [None req-17c93d1d-c324-4610-bccd-54391da8e93f - - - - - -] [instance: 78a00526-9c03-4c52-93a4-2275348b883a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 553 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 27 KiB/s wr, 89 op/s
Nov 29 03:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:30:12
Nov 29 03:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Nov 29 03:30:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:30:13 np0005539563 nova_compute[252253]: 2025-11-29 08:30:13.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:13 np0005539563 nova_compute[252253]: 2025-11-29 08:30:13.530 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:14.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:14 np0005539563 nova_compute[252253]: 2025-11-29 08:30:14.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 541 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 199 KiB/s wr, 118 op/s
Nov 29 03:30:14 np0005539563 nova_compute[252253]: 2025-11-29 08:30:14.828 252257 DEBUG oslo_concurrency.lockutils [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:14 np0005539563 nova_compute[252253]: 2025-11-29 08:30:14.828 252257 DEBUG oslo_concurrency.lockutils [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:14 np0005539563 nova_compute[252253]: 2025-11-29 08:30:14.850 252257 INFO nova.compute.manager [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Detaching volume ab4cd0c8-adcb-470a-9b1a-e227da2d6280#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.068 252257 INFO nova.virt.block_device [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attempting to driver detach volume ab4cd0c8-adcb-470a-9b1a-e227da2d6280 from mountpoint /dev/vdb#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.080 252257 DEBUG nova.virt.libvirt.driver [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Attempting to detach device vdb from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.081 252257 DEBUG nova.virt.libvirt.guest [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ab4cd0c8-adcb-470a-9b1a-e227da2d6280">
Nov 29 03:30:15 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <serial>ab4cd0c8-adcb-470a-9b1a-e227da2d6280</serial>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:30:15 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.089 252257 INFO nova.virt.libvirt.driver [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdb from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the persistent domain config.#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.089 252257 DEBUG nova.virt.libvirt.driver [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.090 252257 DEBUG nova.virt.libvirt.guest [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-ab4cd0c8-adcb-470a-9b1a-e227da2d6280">
Nov 29 03:30:15 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <serial>ab4cd0c8-adcb-470a-9b1a-e227da2d6280</serial>
Nov 29 03:30:15 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:30:15 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:30:15 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.165 252257 DEBUG nova.network.neutron [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated VIF entry in instance network info cache for port fe638793-a58c-45c7-af31-561a212a980a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.165 252257 DEBUG nova.network.neutron [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.196 252257 DEBUG oslo_concurrency.lockutils [req-5375ae72-7841-4949-941f-4d32bdf5a684 req-386fd5b7-9ebd-4bd1-aba1-449c1f8c4141 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.218 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764405015.2179427, e412f1ba-217f-4c10-b176-528b2ef6ed0e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.220 252257 DEBUG nova.virt.libvirt.driver [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e412f1ba-217f-4c10-b176-528b2ef6ed0e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.224 252257 INFO nova.virt.libvirt.driver [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdb from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the live domain config.#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.466 252257 DEBUG nova.objects.instance [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:15 np0005539563 nova_compute[252253]: 2025-11-29 08:30:15.499 252257 DEBUG oslo_concurrency.lockutils [None req-e768bec5-f22a-4269-85c0-1b930947a735 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:16 np0005539563 nova_compute[252253]: 2025-11-29 08:30:16.041 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405001.0405302, 554ea6a4-8de1-41bf-8772-b15e95a7fd05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:16 np0005539563 nova_compute[252253]: 2025-11-29 08:30:16.042 252257 INFO nova.compute.manager [-] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:16 np0005539563 nova_compute[252253]: 2025-11-29 08:30:16.086 252257 DEBUG nova.compute.manager [None req-9fa4fda0-44b7-4f68-9e23-5e86d4ff46bc - - - - - -] [instance: 554ea6a4-8de1-41bf-8772-b15e95a7fd05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:16.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:16.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:30:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 541 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 196 KiB/s wr, 92 op/s
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:18.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:18.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.361 252257 DEBUG oslo_concurrency.lockutils [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.361 252257 DEBUG oslo_concurrency.lockutils [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.382 252257 INFO nova.compute.manager [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Detaching volume 882d54a9-fd7e-41ac-9421-07a1ee6da4e8#033[00m
Nov 29 03:30:18 np0005539563 podman[356355]: 2025-11-29 08:30:18.495896715 +0000 UTC m=+0.049623525 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:30:18 np0005539563 podman[356356]: 2025-11-29 08:30:18.507484999 +0000 UTC m=+0.060561121 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 03:30:18 np0005539563 podman[356357]: 2025-11-29 08:30:18.528342725 +0000 UTC m=+0.081348605 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.531 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.571 252257 INFO nova.virt.block_device [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Attempting to driver detach volume 882d54a9-fd7e-41ac-9421-07a1ee6da4e8 from mountpoint /dev/vdc#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.579 252257 DEBUG nova.virt.libvirt.driver [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Attempting to detach device vdc from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.579 252257 DEBUG nova.virt.libvirt.guest [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-882d54a9-fd7e-41ac-9421-07a1ee6da4e8">
Nov 29 03:30:18 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <serial>882d54a9-fd7e-41ac-9421-07a1ee6da4e8</serial>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:30:18 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.588 252257 INFO nova.virt.libvirt.driver [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdc from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the persistent domain config.#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.588 252257 DEBUG nova.virt.libvirt.driver [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.588 252257 DEBUG nova.virt.libvirt.guest [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-882d54a9-fd7e-41ac-9421-07a1ee6da4e8">
Nov 29 03:30:18 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <serial>882d54a9-fd7e-41ac-9421-07a1ee6da4e8</serial>
Nov 29 03:30:18 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:30:18 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:30:18 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.652 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764405018.6520116, e412f1ba-217f-4c10-b176-528b2ef6ed0e => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.654 252257 DEBUG nova.virt.libvirt.driver [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance e412f1ba-217f-4c10-b176-528b2ef6ed0e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.656 252257 INFO nova.virt.libvirt.driver [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully detached device vdc from instance e412f1ba-217f-4c10-b176-528b2ef6ed0e from the live domain config.
Nov 29 03:30:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:18.671 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.672 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:18.673 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:30:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 487 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 394 KiB/s wr, 164 op/s
Nov 29 03:30:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.887 252257 DEBUG nova.objects.instance [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'flavor' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:18 np0005539563 nova_compute[252253]: 2025-11-29 08:30:18.926 252257 DEBUG oslo_concurrency.lockutils [None req-255f5a7a-8e29-4611-bdd7-ee2921ee47cb d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:19 np0005539563 nova_compute[252253]: 2025-11-29 08:30:19.862 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405004.861624, df3ef43d-e67b-4d7f-8603-5cf61569ae1f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:19 np0005539563 nova_compute[252253]: 2025-11-29 08:30:19.863 252257 INFO nova.compute.manager [-] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] VM Stopped (Lifecycle Event)
Nov 29 03:30:19 np0005539563 nova_compute[252253]: 2025-11-29 08:30:19.888 252257 DEBUG nova.compute.manager [None req-cda7dcc5-82cb-4edb-b27a-545f01c70104 - - - - - -] [instance: df3ef43d-e67b-4d7f-8603-5cf61569ae1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:20.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:20 np0005539563 nova_compute[252253]: 2025-11-29 08:30:20.433 252257 DEBUG nova.compute.manager [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:20 np0005539563 nova_compute[252253]: 2025-11-29 08:30:20.434 252257 DEBUG nova.compute.manager [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:30:20 np0005539563 nova_compute[252253]: 2025-11-29 08:30:20.435 252257 DEBUG oslo_concurrency.lockutils [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:20 np0005539563 nova_compute[252253]: 2025-11-29 08:30:20.435 252257 DEBUG oslo_concurrency.lockutils [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:20 np0005539563 nova_compute[252253]: 2025-11-29 08:30:20.435 252257 DEBUG nova.network.neutron [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:30:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 381 KiB/s wr, 152 op/s
Nov 29 03:30:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:20Z|00712|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:30:20 np0005539563 nova_compute[252253]: 2025-11-29 08:30:20.902 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:21Z|00713|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:30:21 np0005539563 nova_compute[252253]: 2025-11-29 08:30:21.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:21.674 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:22 np0005539563 nova_compute[252253]: 2025-11-29 08:30:22.160 252257 DEBUG nova.network.neutron [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated VIF entry in instance network info cache for port fe638793-a58c-45c7-af31-561a212a980a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:30:22 np0005539563 nova_compute[252253]: 2025-11-29 08:30:22.162 252257 DEBUG nova.network.neutron [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:22 np0005539563 nova_compute[252253]: 2025-11-29 08:30:22.198 252257 DEBUG oslo_concurrency.lockutils [req-714f3b40-ef57-46fb-b76b-1e153c370c98 req-b459a42b-20fe-47a4-8708-53b110769931 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:30:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:22.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 372 KiB/s wr, 144 op/s
Nov 29 03:30:23 np0005539563 nova_compute[252253]: 2025-11-29 08:30:23.116 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405008.1146235, 5a603f26-2b4a-4025-8cc2-a31c8c89e652 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:23 np0005539563 nova_compute[252253]: 2025-11-29 08:30:23.117 252257 INFO nova.compute.manager [-] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] VM Stopped (Lifecycle Event)
Nov 29 03:30:23 np0005539563 nova_compute[252253]: 2025-11-29 08:30:23.138 252257 DEBUG nova.compute.manager [None req-b7b83b4d-1d7a-44fd-9857-d9048790d7cd - - - - - -] [instance: 5a603f26-2b4a-4025-8cc2-a31c8c89e652] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:23 np0005539563 nova_compute[252253]: 2025-11-29 08:30:23.176 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:23 np0005539563 nova_compute[252253]: 2025-11-29 08:30:23.532 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002197947201284199 of space, bias 1.0, pg target 0.6593841603852597 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008758527882467894 of space, bias 1.0, pg target 2.627558364740368 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:30:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:24.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 160 op/s
Nov 29 03:30:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 153 op/s
Nov 29 03:30:28 np0005539563 nova_compute[252253]: 2025-11-29 08:30:28.211 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:28.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:28 np0005539563 nova_compute[252253]: 2025-11-29 08:30:28.534 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 155 op/s
Nov 29 03:30:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:30.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 29 03:30:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:32.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 29 03:30:33 np0005539563 nova_compute[252253]: 2025-11-29 08:30:33.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:33 np0005539563 nova_compute[252253]: 2025-11-29 08:30:33.563 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:34.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Nov 29 03:30:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:36.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:36.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 902 KiB/s wr, 148 op/s
Nov 29 03:30:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:38.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:38 np0005539563 nova_compute[252253]: 2025-11-29 08:30:38.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:38 np0005539563 nova_compute[252253]: 2025-11-29 08:30:38.567 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 35 KiB/s wr, 125 op/s
Nov 29 03:30:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:38 np0005539563 NetworkManager[48981]: <info>  [1764405038.8032] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 29 03:30:38 np0005539563 nova_compute[252253]: 2025-11-29 08:30:38.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:38 np0005539563 NetworkManager[48981]: <info>  [1764405038.8043] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Nov 29 03:30:39 np0005539563 nova_compute[252253]: 2025-11-29 08:30:39.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:39 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:39Z|00714|binding|INFO|Releasing lport bf759292-fede-4172-b0b8-efd6e3442b62 from this chassis (sb_readonly=0)
Nov 29 03:30:39 np0005539563 nova_compute[252253]: 2025-11-29 08:30:39.091 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:40.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 120 op/s
Nov 29 03:30:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 60 op/s
Nov 29 03:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:43 np0005539563 nova_compute[252253]: 2025-11-29 08:30:43.325 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:43 np0005539563 nova_compute[252253]: 2025-11-29 08:30:43.568 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:30:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 46K writes, 178K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 16K syncs, 2.78 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8766 writes, 29K keys, 8766 commit groups, 1.0 writes per commit group, ingest: 29.55 MB, 0.05 MB/s#012Interval WAL: 8765 writes, 3516 syncs, 2.49 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:30:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:44.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 524 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 93 op/s
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.092 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.092 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.116 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.187 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.187 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.196 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.196 252257 INFO nova.compute.claims [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.300 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973454107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.784 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.792 252257 DEBUG nova.compute.provider_tree [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.810 252257 DEBUG nova.scheduler.client.report [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.835 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.836 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.895 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.895 252257 DEBUG nova.network.neutron [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.923 252257 INFO nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:30:45 np0005539563 nova_compute[252253]: 2025-11-29 08:30:45.950 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.099 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.100 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.100 252257 INFO nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Creating image(s)#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.128 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.155 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.181 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.185 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.216 252257 DEBUG nova.policy [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e6de0587a3794e30acefc687f435d388', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '37972b49ddde4c519c6523d2ea1569b5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.251 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.252 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.252 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.253 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:46.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:30:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:46.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.511 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:46 np0005539563 nova_compute[252253]: 2025-11-29 08:30:46.514 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 537 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.152 252257 DEBUG nova.network.neutron [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Successfully created port: 1576b647-a0ba-45ac-afa5-c62b909bb7e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.717 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.785 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] resizing rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.880 252257 DEBUG nova.objects.instance [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.902 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.903 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Ensure instance console log exists: /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.903 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.904 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.905 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:47 np0005539563 nova_compute[252253]: 2025-11-29 08:30:47.986 252257 DEBUG nova.network.neutron [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Successfully updated port: 1576b647-a0ba-45ac-afa5-c62b909bb7e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.017 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.017 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.017 252257 DEBUG nova.network.neutron [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.129 252257 DEBUG nova.compute.manager [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.129 252257 DEBUG nova.compute.manager [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing instance network info cache due to event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.129 252257 DEBUG oslo_concurrency.lockutils [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:48.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.274 252257 DEBUG nova.network.neutron [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.328 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:48.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:48 np0005539563 nova_compute[252253]: 2025-11-29 08:30:48.570 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 560 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 3.0 MiB/s wr, 82 op/s
Nov 29 03:30:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.197 252257 DEBUG nova.network.neutron [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.228 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.229 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance network_info: |[{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.230 252257 DEBUG oslo_concurrency.lockutils [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.231 252257 DEBUG nova.network.neutron [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.236 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Start _get_guest_xml network_info=[{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.241 252257 WARNING nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.246 252257 DEBUG nova.virt.libvirt.host [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.247 252257 DEBUG nova.virt.libvirt.host [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.250 252257 DEBUG nova.virt.libvirt.host [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.251 252257 DEBUG nova.virt.libvirt.host [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.252 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.252 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.253 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.253 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.253 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.253 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.253 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.254 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.254 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.254 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.255 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.255 252257 DEBUG nova.virt.hardware [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.258 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:49 np0005539563 podman[356746]: 2025-11-29 08:30:49.509965654 +0000 UTC m=+0.053144031 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:30:49 np0005539563 podman[356747]: 2025-11-29 08:30:49.514705452 +0000 UTC m=+0.057485469 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:49 np0005539563 podman[356748]: 2025-11-29 08:30:49.540628604 +0000 UTC m=+0.079835593 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:30:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030902154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.724 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.754 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:49 np0005539563 nova_compute[252253]: 2025-11-29 08:30:49.758 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:50.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.514 252257 DEBUG nova.network.neutron [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updated VIF entry in instance network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.515 252257 DEBUG nova.network.neutron [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.596 252257 DEBUG oslo_concurrency.lockutils [req-abaaad75-0e89-4177-ba54-d7462f72474d req-02ececad-d8de-45c9-8879-f34ed95ab675 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216055025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.619 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.620 252257 DEBUG nova.virt.libvirt.vif [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-935562196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-935562196',id=171,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/QbFQHfxyoj1W/t5pawyERQGZRClAr1DxU8gg8udDNKRDAgSRqjviYC9CV8DByogltybpLGJLh5e67lMbhPKRIYOrGJnVOyLrNIthayQV7k/8lr+xvE29t9ygQsTGfcQ==',key_name='tempest-keypair-1565500821',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-osvyk2n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=4d6c236c-ba8a-44dc-8413-3d4bfc16ec56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.620 252257 DEBUG nova.network.os_vif_util [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.621 252257 DEBUG nova.network.os_vif_util [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.622 252257 DEBUG nova.objects.instance [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.635 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <uuid>4d6c236c-ba8a-44dc-8413-3d4bfc16ec56</uuid>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <name>instance-000000ab</name>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-935562196</nova:name>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:30:49</nova:creationTime>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:user uuid="e6de0587a3794e30acefc687f435d388">tempest-AttachVolumeShelveTestJSON-1751768432-project-member</nova:user>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:project uuid="37972b49ddde4c519c6523d2ea1569b5">tempest-AttachVolumeShelveTestJSON-1751768432</nova:project>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <nova:port uuid="1576b647-a0ba-45ac-afa5-c62b909bb7e9">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <entry name="serial">4d6c236c-ba8a-44dc-8413-3d4bfc16ec56</entry>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <entry name="uuid">4d6c236c-ba8a-44dc-8413-3d4bfc16ec56</entry>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:53:4e:1f"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <target dev="tap1576b647-a0"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/console.log" append="off"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:30:50 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:30:50 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:30:50 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:30:50 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.636 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Preparing to wait for external event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.636 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.636 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.636 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.637 252257 DEBUG nova.virt.libvirt.vif [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-935562196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-935562196',id=171,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/QbFQHfxyoj1W/t5pawyERQGZRClAr1DxU8gg8udDNKRDAgSRqjviYC9CV8DByogltybpLGJLh5e67lMbhPKRIYOrGJnVOyLrNIthayQV7k/8lr+xvE29t9ygQsTGfcQ==',key_name='tempest-keypair-1565500821',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-osvyk2n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=4d6c236c-ba8a-44dc-8413-3d4bfc16ec56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.637 252257 DEBUG nova.network.os_vif_util [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.637 252257 DEBUG nova.network.os_vif_util [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.638 252257 DEBUG os_vif [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.638 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.639 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.639 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.642 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.643 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1576b647-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.643 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1576b647-a0, col_values=(('external_ids', {'iface-id': '1576b647-a0ba-45ac-afa5-c62b909bb7e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:4e:1f', 'vm-uuid': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.644 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:50 np0005539563 NetworkManager[48981]: <info>  [1764405050.6455] manager: (tap1576b647-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.651 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:50 np0005539563 nova_compute[252253]: 2025-11-29 08:30:50.652 252257 INFO os_vif [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0')#033[00m
Nov 29 03:30:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 588 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 4.1 MiB/s wr, 94 op/s
Nov 29 03:30:51 np0005539563 nova_compute[252253]: 2025-11-29 08:30:51.096 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:51 np0005539563 nova_compute[252253]: 2025-11-29 08:30:51.097 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:51 np0005539563 nova_compute[252253]: 2025-11-29 08:30:51.097 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No VIF found with MAC fa:16:3e:53:4e:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:51 np0005539563 nova_compute[252253]: 2025-11-29 08:30:51.098 252257 INFO nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Using config drive#033[00m
Nov 29 03:30:51 np0005539563 nova_compute[252253]: 2025-11-29 08:30:51.125 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:52 np0005539563 nova_compute[252253]: 2025-11-29 08:30:52.064 252257 INFO nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Creating config drive at /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config#033[00m
Nov 29 03:30:52 np0005539563 nova_compute[252253]: 2025-11-29 08:30:52.073 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3gbidy4r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:52 np0005539563 nova_compute[252253]: 2025-11-29 08:30:52.213 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3gbidy4r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:52 np0005539563 nova_compute[252253]: 2025-11-29 08:30:52.246 252257 DEBUG nova.storage.rbd_utils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] rbd image 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:52 np0005539563 nova_compute[252253]: 2025-11-29 08:30:52.251 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:52.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 588 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 4.1 MiB/s wr, 94 op/s
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/152890148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/152890148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:30:53 np0005539563 nova_compute[252253]: 2025-11-29 08:30:53.653 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539563 nova_compute[252253]: 2025-11-29 08:30:53.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/841218850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:53 np0005539563 nova_compute[252253]: 2025-11-29 08:30:53.723 252257 DEBUG oslo_concurrency.processutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:53 np0005539563 nova_compute[252253]: 2025-11-29 08:30:53.724 252257 INFO nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deleting local config drive /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56/disk.config because it was imported into RBD.#033[00m
Nov 29 03:30:53 np0005539563 kernel: tap1576b647-a0: entered promiscuous mode
Nov 29 03:30:53 np0005539563 NetworkManager[48981]: <info>  [1764405053.7742] manager: (tap1576b647-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Nov 29 03:30:53 np0005539563 systemd-udevd[356920]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:53Z|00715|binding|INFO|Claiming lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 for this chassis.
Nov 29 03:30:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:53Z|00716|binding|INFO|1576b647-a0ba-45ac-afa5-c62b909bb7e9: Claiming fa:16:3e:53:4e:1f 10.100.0.4
Nov 29 03:30:53 np0005539563 nova_compute[252253]: 2025-11-29 08:30:53.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.810 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:4e:1f 10.100.0.4'], port_security=['fa:16:3e:53:4e:1f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '496c1f15-8168-427c-a8c0-5ed474644583', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1576b647-a0ba-45ac-afa5-c62b909bb7e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.812 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 in datapath 4c541784-a3aa-4c55-a753-a31504941937 bound to our chassis#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.813 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c541784-a3aa-4c55-a753-a31504941937#033[00m
Nov 29 03:30:53 np0005539563 NetworkManager[48981]: <info>  [1764405053.8162] device (tap1576b647-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:53 np0005539563 NetworkManager[48981]: <info>  [1764405053.8172] device (tap1576b647-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:53Z|00717|binding|INFO|Setting lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 ovn-installed in OVS
Nov 29 03:30:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:53Z|00718|binding|INFO|Setting lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 up in Southbound
Nov 29 03:30:53 np0005539563 nova_compute[252253]: 2025-11-29 08:30:53.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.827 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[176efce0-284f-4a76-bcaf-6d37fa1983c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.828 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c541784-a1 in ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.830 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c541784-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.830 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b40ba56-0cd2-491c-baf8-aecb071f60a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.831 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b68c1a-12c8-4902-9fee-416b23fd29ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 systemd-machined[213024]: New machine qemu-83-instance-000000ab.
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.844 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3327fe-6728-4059-8230-0965027ee8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 systemd[1]: Started Virtual Machine qemu-83-instance-000000ab.
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.869 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f5cbc2-0f60-4a3c-9d7f-6c9141c6bb41]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.903 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[815a13cb-2d18-4c58-a232-b6bf713b8e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.909 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e99f0d-5651-4916-9262-372275f55287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 NetworkManager[48981]: <info>  [1764405053.9105] manager: (tap4c541784-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/307)
Nov 29 03:30:53 np0005539563 systemd-udevd[356924]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.942 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4e944f6a-8bb6-4f31-a297-8201a2b1f7ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.945 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5924f31b-621c-4a99-ab0c-8f653212428e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 NetworkManager[48981]: <info>  [1764405053.9646] device (tap4c541784-a0): carrier: link connected
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.970 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5c32099c-e09a-409d-a3ec-0fb2ed5797f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:53.986 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[90c70faa-a414-47fa-bee2-9b940fe89402]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802173, 'reachable_time': 21324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356956, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.001 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5473a1bb-dbb7-4009-bccf-dcf4b2aa8217]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6a:9545'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802173, 'tstamp': 802173}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356957, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.017 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfa4dab-1ab0-4fc4-b4f0-b98a164ec150]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c541784-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6a:95:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802173, 'reachable_time': 21324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356958, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.049 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[611b8a79-a389-4f57-8fb0-4259920bd784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.104 252257 DEBUG nova.compute.manager [req-39b643c9-e670-4782-8768-364033e7be52 req-97ffe1cb-51bb-4b0a-9773-5ce17b264f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.105 252257 DEBUG oslo_concurrency.lockutils [req-39b643c9-e670-4782-8768-364033e7be52 req-97ffe1cb-51bb-4b0a-9773-5ce17b264f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.105 252257 DEBUG oslo_concurrency.lockutils [req-39b643c9-e670-4782-8768-364033e7be52 req-97ffe1cb-51bb-4b0a-9773-5ce17b264f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.106 252257 DEBUG oslo_concurrency.lockutils [req-39b643c9-e670-4782-8768-364033e7be52 req-97ffe1cb-51bb-4b0a-9773-5ce17b264f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.106 252257 DEBUG nova.compute.manager [req-39b643c9-e670-4782-8768-364033e7be52 req-97ffe1cb-51bb-4b0a-9773-5ce17b264f03 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Processing event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.110 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5bd31b-740a-4baf-a542-2214cee4d1a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.111 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.111 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.112 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c541784-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.113 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:54 np0005539563 kernel: tap4c541784-a0: entered promiscuous mode
Nov 29 03:30:54 np0005539563 NetworkManager[48981]: <info>  [1764405054.1150] manager: (tap4c541784-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.118 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c541784-a0, col_values=(('external_ids', {'iface-id': '7f1f6d69-4406-4e27-a503-d839c5cccd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:30:54Z|00719|binding|INFO|Releasing lport 7f1f6d69-4406-4e27-a503-d839c5cccd04 from this chassis (sb_readonly=0)
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.136 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.137 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd8b91b-1347-4e35-9f14-a49f5a97abc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.137 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-4c541784-a3aa-4c55-a753-a31504941937
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/4c541784-a3aa-4c55-a753-a31504941937.pid.haproxy
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 4c541784-a3aa-4c55-a753-a31504941937
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:30:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:54.138 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'env', 'PROCESS_TAG=haproxy-4c541784-a3aa-4c55-a753-a31504941937', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c541784-a3aa-4c55-a753-a31504941937.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:30:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.394 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.396 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405054.393643, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.396 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.401 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.405 252257 INFO nova.virt.libvirt.driver [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance spawned successfully.#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.405 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.420 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.423 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.431 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.432 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.432 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.433 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.433 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.434 252257 DEBUG nova.virt.libvirt.driver [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.442 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.443 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405054.393856, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.443 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.467 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.472 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405054.4002028, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.472 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.508 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.511 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.521 252257 INFO nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Took 8.42 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.522 252257 DEBUG nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.532 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:54 np0005539563 podman[357030]: 2025-11-29 08:30:54.481642285 +0000 UTC m=+0.021302159 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.589 252257 INFO nova.compute.manager [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Took 9.43 seconds to build instance.
Nov 29 03:30:54 np0005539563 nova_compute[252253]: 2025-11-29 08:30:54.613 252257 DEBUG oslo_concurrency.lockutils [None req-a2b4aac8-fc68-465d-bbe7-e84e7f821d26 e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 536 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Nov 29 03:30:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 03:30:55 np0005539563 podman[357030]: 2025-11-29 08:30:55.566355081 +0000 UTC m=+1.106014935 container create fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:30:55 np0005539563 systemd[1]: Started libpod-conmon-fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300.scope.
Nov 29 03:30:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:30:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12cabaec788a1c26af276602b516e48e954ff78da1e1a5dda2409f0d80026f9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:55 np0005539563 nova_compute[252253]: 2025-11-29 08:30:55.646 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:55 np0005539563 podman[357030]: 2025-11-29 08:30:55.907995257 +0000 UTC m=+1.447655111 container init fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:30:55 np0005539563 podman[357030]: 2025-11-29 08:30:55.918467691 +0000 UTC m=+1.458127545 container start fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:30:55 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [NOTICE]   (357049) : New worker (357051) forked
Nov 29 03:30:55 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [NOTICE]   (357049) : Loading success.
Nov 29 03:30:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:56 np0005539563 nova_compute[252253]: 2025-11-29 08:30:56.345 252257 DEBUG nova.compute.manager [req-b1bdb597-413e-4bf5-9e52-7131ba30d8b7 req-6869701c-1782-4e81-b9a5-5025a4e9c0d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:56 np0005539563 nova_compute[252253]: 2025-11-29 08:30:56.346 252257 DEBUG oslo_concurrency.lockutils [req-b1bdb597-413e-4bf5-9e52-7131ba30d8b7 req-6869701c-1782-4e81-b9a5-5025a4e9c0d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:56 np0005539563 nova_compute[252253]: 2025-11-29 08:30:56.346 252257 DEBUG oslo_concurrency.lockutils [req-b1bdb597-413e-4bf5-9e52-7131ba30d8b7 req-6869701c-1782-4e81-b9a5-5025a4e9c0d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:56 np0005539563 nova_compute[252253]: 2025-11-29 08:30:56.347 252257 DEBUG oslo_concurrency.lockutils [req-b1bdb597-413e-4bf5-9e52-7131ba30d8b7 req-6869701c-1782-4e81-b9a5-5025a4e9c0d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:56 np0005539563 nova_compute[252253]: 2025-11-29 08:30:56.347 252257 DEBUG nova.compute.manager [req-b1bdb597-413e-4bf5-9e52-7131ba30d8b7 req-6869701c-1782-4e81-b9a5-5025a4e9c0d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] No waiting events found dispatching network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:30:56 np0005539563 nova_compute[252253]: 2025-11-29 08:30:56.347 252257 WARNING nova.compute.manager [req-b1bdb597-413e-4bf5-9e52-7131ba30d8b7 req-6869701c-1782-4e81-b9a5-5025a4e9c0d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received unexpected event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 for instance with vm_state active and task_state None.
Nov 29 03:30:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 726 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.148 252257 DEBUG nova.compute.manager [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.149 252257 DEBUG nova.compute.manager [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing instance network info cache due to event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.149 252257 DEBUG oslo_concurrency.lockutils [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.150 252257 DEBUG oslo_concurrency.lockutils [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.150 252257 DEBUG nova.network.neutron [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:30:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:58.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:30:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:58.640 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.642 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:58.643 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:30:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:30:58.644 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:58 np0005539563 nova_compute[252253]: 2025-11-29 08:30:58.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Nov 29 03:30:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:00.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:00 np0005539563 nova_compute[252253]: 2025-11-29 08:31:00.649 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 164 op/s
Nov 29 03:31:00 np0005539563 nova_compute[252253]: 2025-11-29 08:31:00.892 252257 DEBUG nova.network.neutron [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updated VIF entry in instance network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:31:00 np0005539563 nova_compute[252253]: 2025-11-29 08:31:00.893 252257 DEBUG nova.network.neutron [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:31:00 np0005539563 nova_compute[252253]: 2025-11-29 08:31:00.924 252257 DEBUG oslo_concurrency.lockutils [req-a0ae9a05-89b3-4e3e-99fb-f16b902887fc req-376e3af2-d0e6-4eb5-b488-c3de752b2371 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:31:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Nov 29 03:31:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 03:31:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Nov 29 03:31:01 np0005539563 podman[357234]: 2025-11-29 08:31:01.687810932 +0000 UTC m=+0.053553202 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:31:01 np0005539563 podman[357234]: 2025-11-29 08:31:01.801099392 +0000 UTC m=+0.166841642 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:31:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:02 np0005539563 podman[357435]: 2025-11-29 08:31:02.374710062 +0000 UTC m=+0.051599149 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:31:02 np0005539563 podman[357435]: 2025-11-29 08:31:02.385061862 +0000 UTC m=+0.061950919 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:31:02 np0005539563 podman[357502]: 2025-11-29 08:31:02.606296765 +0000 UTC m=+0.053626213 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, description=keepalived for Ceph, release=1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 29 03:31:02 np0005539563 podman[357502]: 2025-11-29 08:31:02.618232529 +0000 UTC m=+0.065561987 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vendor=Red Hat, Inc., distribution-scope=public, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, name=keepalived, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 03:31:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:31:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 467 KiB/s wr, 180 op/s
Nov 29 03:31:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:31:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:31:03 np0005539563 nova_compute[252253]: 2025-11-29 08:31:03.658 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:03 np0005539563 nova_compute[252253]: 2025-11-29 08:31:03.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d58422fa-496b-4af7-9a6d-b990812c89ae does not exist
Nov 29 03:31:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev edd4920c-222c-4af3-a948-e77aec84b317 does not exist
Nov 29 03:31:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 69602acc-11fd-4439-827d-14c53f712f98 does not exist
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:31:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:31:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:04.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.414865623 +0000 UTC m=+0.048389882 container create 6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sammet, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:04 np0005539563 systemd[1]: Started libpod-conmon-6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f.scope.
Nov 29 03:31:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.394959374 +0000 UTC m=+0.028483663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.497238044 +0000 UTC m=+0.130762333 container init 6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.504559203 +0000 UTC m=+0.138083472 container start 6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.508068198 +0000 UTC m=+0.141592527 container attach 6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:31:04 np0005539563 ecstatic_sammet[357824]: 167 167
Nov 29 03:31:04 np0005539563 systemd[1]: libpod-6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f.scope: Deactivated successfully.
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.512500758 +0000 UTC m=+0.146025027 container died 6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:31:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-93b0c0254fa262994aa7d9b8b1d5f2c638e4858193d924592acd77d5855cca5d-merged.mount: Deactivated successfully.
Nov 29 03:31:04 np0005539563 podman[357807]: 2025-11-29 08:31:04.549514491 +0000 UTC m=+0.183038760 container remove 6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:31:04 np0005539563 systemd[1]: libpod-conmon-6888fd6f79b3179d11840e8a8244b6f9c4fdd4beb3d851f3658055c1aaff1f2f.scope: Deactivated successfully.
Nov 29 03:31:04 np0005539563 nova_compute[252253]: 2025-11-29 08:31:04.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:04 np0005539563 nova_compute[252253]: 2025-11-29 08:31:04.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:04 np0005539563 nova_compute[252253]: 2025-11-29 08:31:04.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:04 np0005539563 nova_compute[252253]: 2025-11-29 08:31:04.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:31:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 252 KiB/s wr, 147 op/s
Nov 29 03:31:04 np0005539563 podman[357850]: 2025-11-29 08:31:04.714065689 +0000 UTC m=+0.038436422 container create 59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:31:04 np0005539563 systemd[1]: Started libpod-conmon-59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7.scope.
Nov 29 03:31:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b087fdfc48e06ce57d124f1e63d7b66ad39000fb567d3cbe8d526d2fbc8dfe62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:04 np0005539563 podman[357850]: 2025-11-29 08:31:04.697616923 +0000 UTC m=+0.021987686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b087fdfc48e06ce57d124f1e63d7b66ad39000fb567d3cbe8d526d2fbc8dfe62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b087fdfc48e06ce57d124f1e63d7b66ad39000fb567d3cbe8d526d2fbc8dfe62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b087fdfc48e06ce57d124f1e63d7b66ad39000fb567d3cbe8d526d2fbc8dfe62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b087fdfc48e06ce57d124f1e63d7b66ad39000fb567d3cbe8d526d2fbc8dfe62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:04.936 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:04.936 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:04.937 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:05 np0005539563 podman[357850]: 2025-11-29 08:31:05.230809508 +0000 UTC m=+0.555180261 container init 59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:31:05 np0005539563 podman[357850]: 2025-11-29 08:31:05.238490686 +0000 UTC m=+0.562861419 container start 59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:31:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:05 np0005539563 podman[357850]: 2025-11-29 08:31:05.621128202 +0000 UTC m=+0.945498975 container attach 59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:31:05 np0005539563 nova_compute[252253]: 2025-11-29 08:31:05.651 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:06 np0005539563 upbeat_heyrovsky[357867]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:31:06 np0005539563 upbeat_heyrovsky[357867]: --> relative data size: 1.0
Nov 29 03:31:06 np0005539563 upbeat_heyrovsky[357867]: --> All data devices are unavailable
Nov 29 03:31:06 np0005539563 systemd[1]: libpod-59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7.scope: Deactivated successfully.
Nov 29 03:31:06 np0005539563 podman[357883]: 2025-11-29 08:31:06.097434076 +0000 UTC m=+0.023778805 container died 59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:31:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b087fdfc48e06ce57d124f1e63d7b66ad39000fb567d3cbe8d526d2fbc8dfe62-merged.mount: Deactivated successfully.
Nov 29 03:31:06 np0005539563 podman[357883]: 2025-11-29 08:31:06.153933088 +0000 UTC m=+0.080277817 container remove 59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_heyrovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:31:06 np0005539563 systemd[1]: libpod-conmon-59fffda61594c93d8e5577d8a7c94411f32809939d717af3f6c4693eacd79af7.scope: Deactivated successfully.
Nov 29 03:31:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Nov 29 03:31:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 03:31:06 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Nov 29 03:31:06 np0005539563 nova_compute[252253]: 2025-11-29 08:31:06.671 252257 DEBUG nova.compute.manager [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-changed-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:06 np0005539563 nova_compute[252253]: 2025-11-29 08:31:06.674 252257 DEBUG nova.compute.manager [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing instance network info cache due to event network-changed-fe638793-a58c-45c7-af31-561a212a980a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:06 np0005539563 nova_compute[252253]: 2025-11-29 08:31:06.674 252257 DEBUG oslo_concurrency.lockutils [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:06 np0005539563 nova_compute[252253]: 2025-11-29 08:31:06.674 252257 DEBUG oslo_concurrency.lockutils [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:06 np0005539563 nova_compute[252253]: 2025-11-29 08:31:06.675 252257 DEBUG nova.network.neutron [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Refreshing network info cache for port fe638793-a58c-45c7-af31-561a212a980a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 273 KiB/s wr, 69 op/s
Nov 29 03:31:06 np0005539563 podman[358039]: 2025-11-29 08:31:06.813303781 +0000 UTC m=+0.039284946 container create 08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:06 np0005539563 systemd[1]: Started libpod-conmon-08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227.scope.
Nov 29 03:31:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:06 np0005539563 podman[358039]: 2025-11-29 08:31:06.796864925 +0000 UTC m=+0.022846110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:06 np0005539563 podman[358039]: 2025-11-29 08:31:06.906474835 +0000 UTC m=+0.132456080 container init 08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:31:06 np0005539563 podman[358039]: 2025-11-29 08:31:06.914916553 +0000 UTC m=+0.140897718 container start 08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:31:06 np0005539563 sad_kare[358056]: 167 167
Nov 29 03:31:06 np0005539563 systemd[1]: libpod-08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227.scope: Deactivated successfully.
Nov 29 03:31:06 np0005539563 podman[358039]: 2025-11-29 08:31:06.952156363 +0000 UTC m=+0.178137528 container attach 08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:31:06 np0005539563 podman[358039]: 2025-11-29 08:31:06.95278436 +0000 UTC m=+0.178765545 container died 08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:31:06 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Nov 29 03:31:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-67d8458d7f15e4b7826f805c6dd90f366913188cf48849bf9d9358bec21e1eaa-merged.mount: Deactivated successfully.
Nov 29 03:31:07 np0005539563 podman[358039]: 2025-11-29 08:31:07.117648226 +0000 UTC m=+0.343629391 container remove 08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 29 03:31:07 np0005539563 systemd[1]: libpod-conmon-08f9e025107e6f0365b7e0bb485655299dd061c2cc0e2a8499bac6dea691b227.scope: Deactivated successfully.
Nov 29 03:31:07 np0005539563 podman[358082]: 2025-11-29 08:31:07.319532616 +0000 UTC m=+0.065970109 container create c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:31:07 np0005539563 systemd[1]: Started libpod-conmon-c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3.scope.
Nov 29 03:31:07 np0005539563 podman[358082]: 2025-11-29 08:31:07.278677958 +0000 UTC m=+0.025115471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a7f330c0c3a98a118c5f91aed5b91dd27da321debe2ea2d16960db47f4dbfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a7f330c0c3a98a118c5f91aed5b91dd27da321debe2ea2d16960db47f4dbfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a7f330c0c3a98a118c5f91aed5b91dd27da321debe2ea2d16960db47f4dbfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9a7f330c0c3a98a118c5f91aed5b91dd27da321debe2ea2d16960db47f4dbfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:07 np0005539563 podman[358082]: 2025-11-29 08:31:07.409085832 +0000 UTC m=+0.155523345 container init c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:31:07 np0005539563 podman[358082]: 2025-11-29 08:31:07.415493835 +0000 UTC m=+0.161931328 container start c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:07 np0005539563 podman[358082]: 2025-11-29 08:31:07.419142024 +0000 UTC m=+0.165579537 container attach c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:31:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Nov 29 03:31:07 np0005539563 nova_compute[252253]: 2025-11-29 08:31:07.863 252257 DEBUG nova.network.neutron [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated VIF entry in instance network info cache for port fe638793-a58c-45c7-af31-561a212a980a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:07 np0005539563 nova_compute[252253]: 2025-11-29 08:31:07.864 252257 DEBUG nova.network.neutron [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:07 np0005539563 nova_compute[252253]: 2025-11-29 08:31:07.884 252257 DEBUG oslo_concurrency.lockutils [req-58e8d4a7-f44a-490a-ac87-d6a6d69c60b7 req-08d4b2b3-f806-4a76-af3e-590e655aeccd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 03:31:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Nov 29 03:31:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:07Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:4e:1f 10.100.0.4
Nov 29 03:31:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:07Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:4e:1f 10.100.0.4
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]: {
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:    "0": [
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:        {
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "devices": [
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "/dev/loop3"
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            ],
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "lv_name": "ceph_lv0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "lv_size": "7511998464",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "name": "ceph_lv0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "tags": {
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.cluster_name": "ceph",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.crush_device_class": "",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.encrypted": "0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.osd_id": "0",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.type": "block",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:                "ceph.vdo": "0"
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            },
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "type": "block",
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:            "vg_name": "ceph_vg0"
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:        }
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]:    ]
Nov 29 03:31:08 np0005539563 friendly_taussig[358098]: }
Nov 29 03:31:08 np0005539563 systemd[1]: libpod-c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3.scope: Deactivated successfully.
Nov 29 03:31:08 np0005539563 podman[358082]: 2025-11-29 08:31:08.275899785 +0000 UTC m=+1.022337278 container died c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_taussig, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:31:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:08.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f9a7f330c0c3a98a118c5f91aed5b91dd27da321debe2ea2d16960db47f4dbfc-merged.mount: Deactivated successfully.
Nov 29 03:31:08 np0005539563 podman[358082]: 2025-11-29 08:31:08.329557508 +0000 UTC m=+1.075994991 container remove c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_taussig, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:08 np0005539563 systemd[1]: libpod-conmon-c9d25cf4af7ea1513019dd093f42db62689beaa2174980c8fea53d5a4ad6abe3.scope: Deactivated successfully.
Nov 29 03:31:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 602 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.3 MiB/s wr, 96 op/s
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.755 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.756 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.757 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.757 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:31:08 np0005539563 nova_compute[252253]: 2025-11-29 08:31:08.757 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.816400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068816510, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1581, "num_deletes": 258, "total_data_size": 2491218, "memory_usage": 2527552, "flush_reason": "Manual Compaction"}
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068836888, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2449043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54917, "largest_seqno": 56497, "table_properties": {"data_size": 2441815, "index_size": 4171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16166, "raw_average_key_size": 20, "raw_value_size": 2426842, "raw_average_value_size": 3064, "num_data_blocks": 183, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404937, "oldest_key_time": 1764404937, "file_creation_time": 1764405068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 20662 microseconds, and 6693 cpu microseconds.
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.837102) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2449043 bytes OK
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.837177) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.908688) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.908797) EVENT_LOG_v1 {"time_micros": 1764405068908785, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.908823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2484345, prev total WAL file size 2484345, number of live WAL files 2.
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.909959) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303135' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2391KB)], [119(10MB)]
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405068910050, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 13770019, "oldest_snapshot_seqno": -1}
Nov 29 03:31:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:08.931067875 +0000 UTC m=+0.022884151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 9137 keys, 13628290 bytes, temperature: kUnknown
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405069146615, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 13628290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13566645, "index_size": 37725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 237881, "raw_average_key_size": 26, "raw_value_size": 13403616, "raw_average_value_size": 1466, "num_data_blocks": 1473, "num_entries": 9137, "num_filter_entries": 9137, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:09.147118798 +0000 UTC m=+0.238935054 container create aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.147130) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13628290 bytes
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.165291) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.2 rd, 57.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.8 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 9669, records dropped: 532 output_compression: NoCompression
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.165336) EVENT_LOG_v1 {"time_micros": 1764405069165318, "job": 72, "event": "compaction_finished", "compaction_time_micros": 236753, "compaction_time_cpu_micros": 30383, "output_level": 6, "num_output_files": 1, "total_output_size": 13628290, "num_input_records": 9669, "num_output_records": 9137, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405069166255, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405069171939, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:08.909822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.172018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.172022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.172023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.172025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:31:09.172026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857578055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:09 np0005539563 systemd[1]: Started libpod-conmon-aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b.scope.
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.212 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:09.257097227 +0000 UTC m=+0.348913503 container init aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:09.265620888 +0000 UTC m=+0.357437154 container start aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:09.268626009 +0000 UTC m=+0.360442315 container attach aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:31:09 np0005539563 systemd[1]: libpod-aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b.scope: Deactivated successfully.
Nov 29 03:31:09 np0005539563 eager_jones[358300]: 167 167
Nov 29 03:31:09 np0005539563 conmon[358300]: conmon aee7b8f67b93a78f4437 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b.scope/container/memory.events
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:09.274713235 +0000 UTC m=+0.366529501 container died aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.295 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.295 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ab as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.300 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.300 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.483 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.485 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3661MB free_disk=20.92129898071289GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.485 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.485 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.566 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance e412f1ba-217f-4c10-b176-528b2ef6ed0e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.566 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.566 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.566 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:31:09 np0005539563 nova_compute[252253]: 2025-11-29 08:31:09.612 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-19c49b823c80953f66ad694b17cdbe1d0986ad0337681414c8aec4ace43f4215-merged.mount: Deactivated successfully.
Nov 29 03:31:09 np0005539563 podman[358273]: 2025-11-29 08:31:09.785507753 +0000 UTC m=+0.877324029 container remove aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jones, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:09 np0005539563 systemd[1]: libpod-conmon-aee7b8f67b93a78f4437321ea13ef0e741746f6843885fb178e9be39c917105b.scope: Deactivated successfully.
Nov 29 03:31:09 np0005539563 podman[358345]: 2025-11-29 08:31:09.977879425 +0000 UTC m=+0.048959488 container create 1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cannon, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:10 np0005539563 systemd[1]: Started libpod-conmon-1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2.scope.
Nov 29 03:31:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248946213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:10 np0005539563 podman[358345]: 2025-11-29 08:31:09.95886848 +0000 UTC m=+0.029948573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:10 np0005539563 nova_compute[252253]: 2025-11-29 08:31:10.057 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:10 np0005539563 nova_compute[252253]: 2025-11-29 08:31:10.063 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a1132ff86b40e69aadbd6a0bdd443801983983c39eb4c8918483f9c9dce276/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a1132ff86b40e69aadbd6a0bdd443801983983c39eb4c8918483f9c9dce276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a1132ff86b40e69aadbd6a0bdd443801983983c39eb4c8918483f9c9dce276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49a1132ff86b40e69aadbd6a0bdd443801983983c39eb4c8918483f9c9dce276/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539563 nova_compute[252253]: 2025-11-29 08:31:10.090 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:31:10 np0005539563 nova_compute[252253]: 2025-11-29 08:31:10.112 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:31:10 np0005539563 nova_compute[252253]: 2025-11-29 08:31:10.112 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:10.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:10.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:10 np0005539563 podman[358345]: 2025-11-29 08:31:10.512255761 +0000 UTC m=+0.583335864 container init 1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:31:10 np0005539563 podman[358345]: 2025-11-29 08:31:10.520988538 +0000 UTC m=+0.592068611 container start 1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cannon, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:10 np0005539563 nova_compute[252253]: 2025-11-29 08:31:10.654 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 659 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 15 MiB/s wr, 339 op/s
Nov 29 03:31:10 np0005539563 podman[358345]: 2025-11-29 08:31:10.888719151 +0000 UTC m=+0.959799264 container attach 1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:31:11 np0005539563 strange_cannon[358363]: {
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:        "osd_id": 0,
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:        "type": "bluestore"
Nov 29 03:31:11 np0005539563 strange_cannon[358363]:    }
Nov 29 03:31:11 np0005539563 strange_cannon[358363]: }
Nov 29 03:31:11 np0005539563 systemd[1]: libpod-1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2.scope: Deactivated successfully.
Nov 29 03:31:11 np0005539563 podman[358345]: 2025-11-29 08:31:11.373301149 +0000 UTC m=+1.444381212 container died 1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.113 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.115 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.116 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:31:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:12.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:12.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.457 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.458 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.458 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:31:12 np0005539563 nova_compute[252253]: 2025-11-29 08:31:12.459 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:31:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 659 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 317 op/s
Nov 29 03:31:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-49a1132ff86b40e69aadbd6a0bdd443801983983c39eb4c8918483f9c9dce276-merged.mount: Deactivated successfully.
Nov 29 03:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:31:12
Nov 29 03:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control']
Nov 29 03:31:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:13 np0005539563 podman[358345]: 2025-11-29 08:31:13.307493069 +0000 UTC m=+3.378573132 container remove 1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:31:13 np0005539563 systemd[1]: libpod-conmon-1b57eb2dd8ee91db22b370bcad6a71b96934ed6cdfa841a2b733864491a54ed2.scope: Deactivated successfully.
Nov 29 03:31:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:31:13 np0005539563 nova_compute[252253]: 2025-11-29 08:31:13.699 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:31:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 47ecafa2-87d1-42ed-9d06-f9481f41b6e5 does not exist
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 626e9256-b7fa-491f-b435-7bff596421f2 does not exist
Nov 29 03:31:13 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 26f95aea-08c2-4d13-9d1e-b92ea34d6d56 does not exist
Nov 29 03:31:13 np0005539563 nova_compute[252253]: 2025-11-29 08:31:13.992 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [{"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.012 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-e412f1ba-217f-4c10-b176-528b2ef6ed0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.013 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.013 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:14 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:31:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:14.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.439 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.440 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.463 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.535 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.535 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.542 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.543 252257 INFO nova.compute.claims [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:31:14 np0005539563 nova_compute[252253]: 2025-11-29 08:31:14.666 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 259 op/s
Nov 29 03:31:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226655288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.120 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.128 252257 DEBUG nova.compute.provider_tree [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.150 252257 DEBUG nova.scheduler.client.report [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.171 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.172 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.232 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.232 252257 DEBUG nova.network.neutron [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.254 252257 INFO nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.278 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.391 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.392 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.392 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.393 252257 INFO nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Creating image(s)#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.426 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.464 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.499 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.503 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "77ca74694deeab2a68d74020437a0f97b2807e6d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.504 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "77ca74694deeab2a68d74020437a0f97b2807e6d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.511 252257 DEBUG nova.policy [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd45f9a4a44664af3884c15ce0f5697e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7e8e7407a7c44208a503e8225c1cf518', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.767 252257 DEBUG nova.virt.libvirt.imagebackend [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/29125612-6fcb-47b2-8690-67e6e3459b96/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/29125612-6fcb-47b2-8690-67e6e3459b96/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.903 252257 DEBUG nova.virt.libvirt.imagebackend [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/29125612-6fcb-47b2-8690-67e6e3459b96/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:31:15 np0005539563 nova_compute[252253]: 2025-11-29 08:31:15.904 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] cloning images/29125612-6fcb-47b2-8690-67e6e3459b96@snap to None/9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:31:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:16.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.683 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.683 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.683 252257 INFO nova.compute.manager [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Shelving#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.708 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "77ca74694deeab2a68d74020437a0f97b2807e6d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 8.1 MiB/s wr, 290 op/s
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.888 252257 DEBUG nova.virt.libvirt.driver [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.896 252257 DEBUG nova.objects.instance [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.912 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.913 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Ensure instance console log exists: /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.914 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.914 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:16 np0005539563 nova_compute[252253]: 2025-11-29 08:31:16.914 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:17 np0005539563 nova_compute[252253]: 2025-11-29 08:31:17.364 252257 DEBUG nova.network.neutron [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Successfully created port: 8b93f183-22da-4a1a-9b2c-52314f794aad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:31:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:18.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:18 np0005539563 nova_compute[252253]: 2025-11-29 08:31:18.702 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 274 op/s
Nov 29 03:31:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Nov 29 03:31:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 03:31:18 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.458 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.458 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.459 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.459 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.459 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.460 252257 INFO nova.compute.manager [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Terminating instance#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.462 252257 DEBUG nova.compute.manager [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:31:19 np0005539563 kernel: tapfe638793-a5 (unregistering): left promiscuous mode
Nov 29 03:31:19 np0005539563 NetworkManager[48981]: <info>  [1764405079.5314] device (tapfe638793-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:19Z|00720|binding|INFO|Releasing lport fe638793-a58c-45c7-af31-561a212a980a from this chassis (sb_readonly=0)
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:19Z|00721|binding|INFO|Setting lport fe638793-a58c-45c7-af31-561a212a980a down in Southbound
Nov 29 03:31:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:19Z|00722|binding|INFO|Removing iface tapfe638793-a5 ovn-installed in OVS
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.539 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.556 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Nov 29 03:31:19 np0005539563 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a7.scope: Consumed 20.610s CPU time.
Nov 29 03:31:19 np0005539563 systemd-machined[213024]: Machine qemu-82-instance-000000a7 terminated.
Nov 29 03:31:19 np0005539563 podman[358655]: 2025-11-29 08:31:19.652083906 +0000 UTC m=+0.081953701 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:31:19 np0005539563 podman[358657]: 2025-11-29 08:31:19.655264862 +0000 UTC m=+0.084996784 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.684 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 podman[358658]: 2025-11-29 08:31:19.687796144 +0000 UTC m=+0.112277543 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.689 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.702 252257 INFO nova.virt.libvirt.driver [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Instance destroyed successfully.#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.703 252257 DEBUG nova.objects.instance [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lazy-loading 'resources' on Instance uuid e412f1ba-217f-4c10-b176-528b2ef6ed0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:19.708 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:14:58 10.100.0.10'], port_security=['fa:16:3e:80:14:58 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e412f1ba-217f-4c10-b176-528b2ef6ed0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-371b699e-06e1-407e-ac77-9768d9a0e76e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '527c6a274d1e478eadfe67139e121185', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4e734722-bbf6-4c47-9bc6-bf8d5f52e07d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c0188f4-aa09-4b91-9f84-524ffee1218e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=fe638793-a58c-45c7-af31-561a212a980a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:19.709 158990 INFO neutron.agent.ovn.metadata.agent [-] Port fe638793-a58c-45c7-af31-561a212a980a in datapath 371b699e-06e1-407e-ac77-9768d9a0e76e unbound from our chassis#033[00m
Nov 29 03:31:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:19.710 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 371b699e-06e1-407e-ac77-9768d9a0e76e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:31:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:19.711 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[faf44f34-ced7-4f1a-86c5-3186955540c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:19.712 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e namespace which is not needed anymore#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.713 252257 DEBUG nova.network.neutron [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Successfully updated port: 8b93f183-22da-4a1a-9b2c-52314f794aad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.754 252257 DEBUG nova.virt.libvirt.vif [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1385893524',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1385893524',id=167,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDLuAg2lLvJL1IbHQI4zWjduPL00fGBTgnUuLmVxh8Papw1HN8YCJ1MjiVOY2IjiYFlPS7NCeNdc1wi8bfIbI4zqr01CElkg8VYpaZv/gY5PmkQnremSmt7jl09ZoO4cYg==',key_name='tempest-TestInstancesWithCinderVolumes-1453989920',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='527c6a274d1e478eadfe67139e121185',ramdisk_id='',reservation_id='r-6dg4rukr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-663978016',owner_user_name='tempest-TestInstancesWithCinderVolumes-663978016-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:07Z,user_data=None,user_id='d039e57f31de4717a235fc96ebd56559',uuid=e412f1ba-217f-4c10-b176-528b2ef6ed0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.755 252257 DEBUG nova.network.os_vif_util [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converting VIF {"id": "fe638793-a58c-45c7-af31-561a212a980a", "address": "fa:16:3e:80:14:58", "network": {"id": "371b699e-06e1-407e-ac77-9768d9a0e76e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-701115820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "527c6a274d1e478eadfe67139e121185", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe638793-a5", "ovs_interfaceid": "fe638793-a58c-45c7-af31-561a212a980a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.756 252257 DEBUG nova.network.os_vif_util [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.756 252257 DEBUG os_vif [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.758 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.758 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe638793-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.762 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.764 252257 INFO os_vif [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:14:58,bridge_name='br-int',has_traffic_filtering=True,id=fe638793-a58c-45c7-af31-561a212a980a,network=Network(371b699e-06e1-407e-ac77-9768d9a0e76e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe638793-a5')#033[00m
Nov 29 03:31:19 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [NOTICE]   (354703) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:19 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [NOTICE]   (354703) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:19 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [WARNING]  (354703) : Exiting Master process...
Nov 29 03:31:19 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [WARNING]  (354703) : Exiting Master process...
Nov 29 03:31:19 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [ALERT]    (354703) : Current worker (354705) exited with code 143 (Terminated)
Nov 29 03:31:19 np0005539563 neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e[354699]: [WARNING]  (354703) : All workers exited. Exiting... (0)
Nov 29 03:31:19 np0005539563 systemd[1]: libpod-6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b.scope: Deactivated successfully.
Nov 29 03:31:19 np0005539563 podman[358761]: 2025-11-29 08:31:19.833955053 +0000 UTC m=+0.039558772 container died 6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:31:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e0bf8b9330459520f1206bdeb60bdfcd354324d1ef1d0c6d65c2a72213be0aab-merged.mount: Deactivated successfully.
Nov 29 03:31:19 np0005539563 podman[358761]: 2025-11-29 08:31:19.872384815 +0000 UTC m=+0.077988524 container cleanup 6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:31:19 np0005539563 systemd[1]: libpod-conmon-6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b.scope: Deactivated successfully.
Nov 29 03:31:19 np0005539563 nova_compute[252253]: 2025-11-29 08:31:19.913 252257 INFO nova.virt.libvirt.driver [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.058 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.058 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquired lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.058 252257 DEBUG nova.network.neutron [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:20 np0005539563 podman[358795]: 2025-11-29 08:31:20.270705186 +0000 UTC m=+0.379538704 container remove 6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.280 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0867f508-c339-4a1e-afda-b388e1802377]: (4, ('Sat Nov 29 08:31:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e (6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b)\n6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b\nSat Nov 29 08:31:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e (6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b)\n6f9b3f42b7b19bf10328eb6b2ce2c00be7629ad34b879972f3aa926f7534ad0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.282 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[610a0e09-c593-40ae-8793-fc163b15337a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.283 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap371b699e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:20 np0005539563 kernel: tap1576b647-a0 (unregistering): left promiscuous mode
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:20 np0005539563 NetworkManager[48981]: <info>  [1764405080.2905] device (tap1576b647-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:20 np0005539563 kernel: tap371b699e-00: left promiscuous mode
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.322 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:20Z|00723|binding|INFO|Releasing lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 from this chassis (sb_readonly=0)
Nov 29 03:31:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:20Z|00724|binding|INFO|Setting lport 1576b647-a0ba-45ac-afa5-c62b909bb7e9 down in Southbound
Nov 29 03:31:20 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:20Z|00725|binding|INFO|Removing iface tap1576b647-a0 ovn-installed in OVS
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.329 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.332 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[49df4f59-ca9c-45fc-98e3-37a6b6220410]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.351 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.352 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d5203039-8380-40f9-bc42-604d687438f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.353 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b2729623-ad2d-45c9-8ddd-85d747ac6b56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.372 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d42cc9-bfbc-4f24-839d-e6359fae6dc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791037, 'reachable_time': 24808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358815, 'error': None, 'target': 'ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.376 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-371b699e-06e1-407e-ac77-9768d9a0e76e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:20 np0005539563 systemd[1]: run-netns-ovnmeta\x2d371b699e\x2d06e1\x2d407e\x2dac77\x2d9768d9a0e76e.mount: Deactivated successfully.
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.376 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[79eb25bb-d41d-4915-8cd6-00ec3b274e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Nov 29 03:31:20 np0005539563 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ab.scope: Consumed 14.795s CPU time.
Nov 29 03:31:20 np0005539563 systemd-machined[213024]: Machine qemu-83-instance-000000ab terminated.
Nov 29 03:31:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:20.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.526 252257 DEBUG nova.network.neutron [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.546 252257 INFO nova.virt.libvirt.driver [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance destroyed successfully.#033[00m
Nov 29 03:31:20 np0005539563 nova_compute[252253]: 2025-11-29 08:31:20.546 252257 DEBUG nova.objects.instance [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 132 KiB/s wr, 176 op/s
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.966 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:4e:1f 10.100.0.4'], port_security=['fa:16:3e:53:4e:1f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4d6c236c-ba8a-44dc-8413-3d4bfc16ec56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c541784-a3aa-4c55-a753-a31504941937', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37972b49ddde4c519c6523d2ea1569b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '496c1f15-8168-427c-a8c0-5ed474644583', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0f9e799-5b16-4c43-ac05-86721fcbe6ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=1576b647-a0ba-45ac-afa5-c62b909bb7e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.969 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 in datapath 4c541784-a3aa-4c55-a753-a31504941937 unbound from our chassis#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.972 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c541784-a3aa-4c55-a753-a31504941937, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.973 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5e27f78b-3f60-4e04-b8d9-8c523579d025]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:20.974 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 namespace which is not needed anymore#033[00m
Nov 29 03:31:21 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [NOTICE]   (357049) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:21 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [NOTICE]   (357049) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:21 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [WARNING]  (357049) : Exiting Master process...
Nov 29 03:31:21 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [WARNING]  (357049) : Exiting Master process...
Nov 29 03:31:21 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [ALERT]    (357049) : Current worker (357051) exited with code 143 (Terminated)
Nov 29 03:31:21 np0005539563 neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937[357045]: [WARNING]  (357049) : All workers exited. Exiting... (0)
Nov 29 03:31:21 np0005539563 systemd[1]: libpod-fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300.scope: Deactivated successfully.
Nov 29 03:31:21 np0005539563 podman[358845]: 2025-11-29 08:31:21.124545979 +0000 UTC m=+0.052153634 container died fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:31:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-12cabaec788a1c26af276602b516e48e954ff78da1e1a5dda2409f0d80026f9c-merged.mount: Deactivated successfully.
Nov 29 03:31:21 np0005539563 podman[358845]: 2025-11-29 08:31:21.164636785 +0000 UTC m=+0.092244410 container cleanup fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:21 np0005539563 systemd[1]: libpod-conmon-fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300.scope: Deactivated successfully.
Nov 29 03:31:21 np0005539563 podman[358873]: 2025-11-29 08:31:21.220289483 +0000 UTC m=+0.037338434 container remove fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.226 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[85f9c3bd-5f92-4e5b-a44a-633e55aebbef]: (4, ('Sat Nov 29 08:31:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300)\nfc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300\nSat Nov 29 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 (fc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300)\nfc7331ee13dcf526b260bd104f3651627fa8b0d369595333518828a589bc6300\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.228 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0776d49f-dbd7-4489-bc27-f9018282821e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.229 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c541784-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:21 np0005539563 kernel: tap4c541784-a0: left promiscuous mode
Nov 29 03:31:21 np0005539563 nova_compute[252253]: 2025-11-29 08:31:21.230 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539563 nova_compute[252253]: 2025-11-29 08:31:21.248 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.251 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7ae956-fbe3-4fd7-90a6-81e6ee33613e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.266 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c7d144-f90e-4ccf-9c0e-51bbd8c2e763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.267 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[262b843e-5155-49d9-962c-6a86bc7bec62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.288 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c013a8ca-2a1c-4c64-b6a3-bb1af2019b9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802167, 'reachable_time': 21867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358892, 'error': None, 'target': 'ovnmeta-4c541784-a3aa-4c55-a753-a31504941937', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.290 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c541784-a3aa-4c55-a753-a31504941937 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:21.290 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b8629998-3e44-4ad1-9d0b-e284b672fdef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:21 np0005539563 systemd[1]: run-netns-ovnmeta\x2d4c541784\x2da3aa\x2d4c55\x2da753\x2da31504941937.mount: Deactivated successfully.
Nov 29 03:31:21 np0005539563 nova_compute[252253]: 2025-11-29 08:31:21.403 252257 DEBUG nova.compute.manager [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-changed-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:21 np0005539563 nova_compute[252253]: 2025-11-29 08:31:21.404 252257 DEBUG nova.compute.manager [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Refreshing instance network info cache due to event network-changed-8b93f183-22da-4a1a-9b2c-52314f794aad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:21 np0005539563 nova_compute[252253]: 2025-11-29 08:31:21.404 252257 DEBUG oslo_concurrency.lockutils [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:21 np0005539563 nova_compute[252253]: 2025-11-29 08:31:21.712 252257 INFO nova.virt.libvirt.driver [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Beginning cold snapshot process#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.051 252257 DEBUG nova.compute.manager [req-e84f09dc-2d4d-413c-b00c-b0debe917c08 req-8916d1a5-abec-42bd-877b-5ebd603dc5fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-vif-unplugged-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.052 252257 DEBUG oslo_concurrency.lockutils [req-e84f09dc-2d4d-413c-b00c-b0debe917c08 req-8916d1a5-abec-42bd-877b-5ebd603dc5fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.052 252257 DEBUG oslo_concurrency.lockutils [req-e84f09dc-2d4d-413c-b00c-b0debe917c08 req-8916d1a5-abec-42bd-877b-5ebd603dc5fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.052 252257 DEBUG oslo_concurrency.lockutils [req-e84f09dc-2d4d-413c-b00c-b0debe917c08 req-8916d1a5-abec-42bd-877b-5ebd603dc5fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.052 252257 DEBUG nova.compute.manager [req-e84f09dc-2d4d-413c-b00c-b0debe917c08 req-8916d1a5-abec-42bd-877b-5ebd603dc5fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] No waiting events found dispatching network-vif-unplugged-fe638793-a58c-45c7-af31-561a212a980a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.052 252257 DEBUG nova.compute.manager [req-e84f09dc-2d4d-413c-b00c-b0debe917c08 req-8916d1a5-abec-42bd-877b-5ebd603dc5fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-vif-unplugged-fe638793-a58c-45c7-af31-561a212a980a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.103 252257 INFO nova.virt.libvirt.driver [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Deleting instance files /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e_del#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.104 252257 INFO nova.virt.libvirt.driver [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Deletion of /var/lib/nova/instances/e412f1ba-217f-4c10-b176-528b2ef6ed0e_del complete#033[00m
Nov 29 03:31:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 132 KiB/s wr, 176 op/s
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.851 252257 INFO nova.compute.manager [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Took 3.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.853 252257 DEBUG oslo.service.loopingcall [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.853 252257 DEBUG nova.compute.manager [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:31:22 np0005539563 nova_compute[252253]: 2025-11-29 08:31:22.853 252257 DEBUG nova.network.neutron [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.031 252257 DEBUG nova.virt.libvirt.imagebackend [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] No parent info for 1be11678-cfa4-4dee-b54c-6c7e547e5a6a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.060 252257 DEBUG nova.network.neutron [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating instance_info_cache with network_info: [{"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.278 252257 DEBUG nova.storage.rbd_utils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] creating snapshot(e41c51d3318f4c77848a1b37592a4eed) on rbd image(4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.464 252257 DEBUG nova.compute.manager [req-00171709-3852-4777-b9c1-5acd73a65d82 req-1467113b-8270-497c-9eb3-7cec8b821b43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-unplugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.464 252257 DEBUG oslo_concurrency.lockutils [req-00171709-3852-4777-b9c1-5acd73a65d82 req-1467113b-8270-497c-9eb3-7cec8b821b43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.465 252257 DEBUG oslo_concurrency.lockutils [req-00171709-3852-4777-b9c1-5acd73a65d82 req-1467113b-8270-497c-9eb3-7cec8b821b43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.465 252257 DEBUG oslo_concurrency.lockutils [req-00171709-3852-4777-b9c1-5acd73a65d82 req-1467113b-8270-497c-9eb3-7cec8b821b43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.465 252257 DEBUG nova.compute.manager [req-00171709-3852-4777-b9c1-5acd73a65d82 req-1467113b-8270-497c-9eb3-7cec8b821b43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] No waiting events found dispatching network-vif-unplugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.465 252257 WARNING nova.compute.manager [req-00171709-3852-4777-b9c1-5acd73a65d82 req-1467113b-8270-497c-9eb3-7cec8b821b43 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received unexpected event network-vif-unplugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0053584618695328495 of space, bias 1.0, pg target 1.607538560859855 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008766888551554067 of space, bias 1.0, pg target 2.621299676914666 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005021672308300763 of space, bias 1.0, pg target 1.4914366755653266 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.741 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Releasing lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.741 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Instance network_info: |[{"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.742 252257 DEBUG oslo_concurrency.lockutils [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.742 252257 DEBUG nova.network.neutron [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Refreshing network info cache for port 8b93f183-22da-4a1a-9b2c-52314f794aad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.744 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Start _get_guest_xml network_info=[{"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:31:03Z,direct_url=<?>,disk_format='raw',id=29125612-6fcb-47b2-8690-67e6e3459b96,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1177154409',owner='7e8e7407a7c44208a503e8225c1cf518',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:31:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '29125612-6fcb-47b2-8690-67e6e3459b96'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.748 252257 WARNING nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.759 252257 DEBUG nova.virt.libvirt.host [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.759 252257 DEBUG nova.virt.libvirt.host [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.763 252257 DEBUG nova.virt.libvirt.host [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.763 252257 DEBUG nova.virt.libvirt.host [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.764 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.764 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:31:03Z,direct_url=<?>,disk_format='raw',id=29125612-6fcb-47b2-8690-67e6e3459b96,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1177154409',owner='7e8e7407a7c44208a503e8225c1cf518',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:31:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.765 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.765 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.765 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.765 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.765 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.765 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.766 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.766 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.766 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.766 252257 DEBUG nova.virt.hardware [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:31:23 np0005539563 nova_compute[252253]: 2025-11-29 08:31:23.769 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Nov 29 03:31:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279430381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.191 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.215 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.219 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 03:31:24 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Nov 29 03:31:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614106452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.677 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.679 252257 DEBUG nova.virt.libvirt.vif [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1423980546',display_name='tempest-TestStampPattern-server-1423980546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1423980546',id=173,image_ref='29125612-6fcb-47b2-8690-67e6e3459b96',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI96Evf2Y0SwutlY6N1eO4BKjG4KN2PYNqztf6unh2meM8u5LoAdRPMughEalPkJvCxIIxu40dTok7DnjTnYJBYMbeg+H1BqLCO5M0zr1+eSR0VHUnp1o+KGiyZHQh121Q==',key_name='tempest-TestStampPattern-155113296',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e8e7407a7c44208a503e8225c1cf518',ramdisk_id='',reservation_id='r-bz3jgttx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='dc8140a9-7bef-42f8-867c-13e29f022673',image_min_disk='1',image_min_ram='0',image_owner_id='7e8e7407a7c44208a503e8225c1cf518',image_owner_project_name='tempest-TestStampPattern-1730119083',image_owner_user_name='tempest-TestStampPattern-1730119083-project-member',image_user_id='d45f9a4a44664af3884c15ce0f5697e0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1730119083',owner_user_name='tempest-TestStampPattern-1730119083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:15Z,user_data=None,user_id='d45f9a4a44664af3884c15ce0f5697e0',uuid=9a93f858-81bf-4fad-bdc5-df74e4bb0c75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.679 252257 DEBUG nova.network.os_vif_util [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converting VIF {"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.680 252257 DEBUG nova.network.os_vif_util [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.681 252257 DEBUG nova.objects.instance [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 48 KiB/s wr, 137 op/s
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.762 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.984 252257 DEBUG nova.compute.manager [req-d752b647-c408-448a-b5d5-90911a65bfa8 req-0b98ed53-f22d-4532-99e9-8e702c714ced 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.985 252257 DEBUG oslo_concurrency.lockutils [req-d752b647-c408-448a-b5d5-90911a65bfa8 req-0b98ed53-f22d-4532-99e9-8e702c714ced 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.985 252257 DEBUG oslo_concurrency.lockutils [req-d752b647-c408-448a-b5d5-90911a65bfa8 req-0b98ed53-f22d-4532-99e9-8e702c714ced 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.985 252257 DEBUG oslo_concurrency.lockutils [req-d752b647-c408-448a-b5d5-90911a65bfa8 req-0b98ed53-f22d-4532-99e9-8e702c714ced 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.986 252257 DEBUG nova.compute.manager [req-d752b647-c408-448a-b5d5-90911a65bfa8 req-0b98ed53-f22d-4532-99e9-8e702c714ced 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] No waiting events found dispatching network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:24 np0005539563 nova_compute[252253]: 2025-11-29 08:31:24.986 252257 WARNING nova.compute.manager [req-d752b647-c408-448a-b5d5-90911a65bfa8 req-0b98ed53-f22d-4532-99e9-8e702c714ced 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received unexpected event network-vif-plugged-fe638793-a58c-45c7-af31-561a212a980a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.329 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <uuid>9a93f858-81bf-4fad-bdc5-df74e4bb0c75</uuid>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <name>instance-000000ad</name>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestStampPattern-server-1423980546</nova:name>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:31:23</nova:creationTime>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:user uuid="d45f9a4a44664af3884c15ce0f5697e0">tempest-TestStampPattern-1730119083-project-member</nova:user>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:project uuid="7e8e7407a7c44208a503e8225c1cf518">tempest-TestStampPattern-1730119083</nova:project>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="29125612-6fcb-47b2-8690-67e6e3459b96"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <nova:port uuid="8b93f183-22da-4a1a-9b2c-52314f794aad">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <entry name="serial">9a93f858-81bf-4fad-bdc5-df74e4bb0c75</entry>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <entry name="uuid">9a93f858-81bf-4fad-bdc5-df74e4bb0c75</entry>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk.config">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:cb:cf:07"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <target dev="tap8b93f183-22"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/console.log" append="off"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:31:25 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:31:25 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:31:25 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:31:25 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.330 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Preparing to wait for external event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.330 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.330 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.330 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.331 252257 DEBUG nova.virt.libvirt.vif [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1423980546',display_name='tempest-TestStampPattern-server-1423980546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1423980546',id=173,image_ref='29125612-6fcb-47b2-8690-67e6e3459b96',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI96Evf2Y0SwutlY6N1eO4BKjG4KN2PYNqztf6unh2meM8u5LoAdRPMughEalPkJvCxIIxu40dTok7DnjTnYJBYMbeg+H1BqLCO5M0zr1+eSR0VHUnp1o+KGiyZHQh121Q==',key_name='tempest-TestStampPattern-155113296',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e8e7407a7c44208a503e8225c1cf518',ramdisk_id='',reservation_id='r-bz3jgttx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='dc8140a9-7bef-42f8-867c-13e29f022673',image_min_disk='1',image_min_ram='0',image_owner_id='7e8e7407a7c44208a503e8225c1cf518',image_owner_project_name='tempest-TestStampPattern-1730119083',image_owner_user_name='tempest-TestStampPattern-1730119083-project-member',image_user_id='d45f9a4a44664af3884c15ce0f5697e0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1730119083',owner_user_name='tempest-TestStampPattern-1730119083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:15Z,user_data=None,user_id='d45f9a4a44664af3884c15ce0f5697e0',uuid=9a93f858-81bf-4fad-bdc5-df74e4bb0c75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.331 252257 DEBUG nova.network.os_vif_util [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converting VIF {"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.332 252257 DEBUG nova.network.os_vif_util [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.333 252257 DEBUG os_vif [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.333 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.334 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.334 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.337 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.337 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b93f183-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.337 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b93f183-22, col_values=(('external_ids', {'iface-id': '8b93f183-22da-4a1a-9b2c-52314f794aad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:cf:07', 'vm-uuid': '9a93f858-81bf-4fad-bdc5-df74e4bb0c75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.339 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:25 np0005539563 NetworkManager[48981]: <info>  [1764405085.3400] manager: (tap8b93f183-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.342 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.346 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.347 252257 INFO os_vif [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22')#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.704 252257 DEBUG nova.network.neutron [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.820 252257 DEBUG nova.compute.manager [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.820 252257 DEBUG oslo_concurrency.lockutils [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.820 252257 DEBUG oslo_concurrency.lockutils [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.821 252257 DEBUG oslo_concurrency.lockutils [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.821 252257 DEBUG nova.compute.manager [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] No waiting events found dispatching network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.821 252257 WARNING nova.compute.manager [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received unexpected event network-vif-plugged-1576b647-a0ba-45ac-afa5-c62b909bb7e9 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.821 252257 DEBUG nova.compute.manager [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Received event network-vif-deleted-fe638793-a58c-45c7-af31-561a212a980a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.821 252257 INFO nova.compute.manager [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Neutron deleted interface fe638793-a58c-45c7-af31-561a212a980a; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.821 252257 DEBUG nova.network.neutron [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.914 252257 DEBUG nova.network.neutron [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updated VIF entry in instance network info cache for port 8b93f183-22da-4a1a-9b2c-52314f794aad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:25 np0005539563 nova_compute[252253]: 2025-11-29 08:31:25.914 252257 DEBUG nova.network.neutron [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating instance_info_cache with network_info: [{"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:26.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.400 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.400 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.400 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No VIF found with MAC fa:16:3e:cb:cf:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.401 252257 INFO nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Using config drive#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.430 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.482 252257 INFO nova.compute.manager [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Took 3.63 seconds to deallocate network for instance.#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.490 252257 DEBUG nova.compute.manager [req-e34b1673-a185-4443-a85c-deaba68818dd req-6192c785-9f6d-49d4-a5e0-1d26c09b5ea2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Detach interface failed, port_id=fe638793-a58c-45c7-af31-561a212a980a, reason: Instance e412f1ba-217f-4c10-b176-528b2ef6ed0e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.697 252257 DEBUG oslo_concurrency.lockutils [req-d64ecd73-16b6-45ad-9a54-ae120eca576f req-0318d376-f8ca-49bf-aff3-640aed37ff99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 665 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 435 KiB/s rd, 17 KiB/s wr, 62 op/s
Nov 29 03:31:26 np0005539563 nova_compute[252253]: 2025-11-29 08:31:26.986 252257 DEBUG nova.storage.rbd_utils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] cloning vms/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk@e41c51d3318f4c77848a1b37592a4eed to images/fde91722-ea74-45a9-b57b-7fc9203f0965 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:31:27 np0005539563 nova_compute[252253]: 2025-11-29 08:31:27.123 252257 DEBUG nova.storage.rbd_utils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] flattening images/fde91722-ea74-45a9-b57b-7fc9203f0965 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2106926969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2106926969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.103 252257 INFO nova.compute.manager [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Took 1.62 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:31:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:28.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:28.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.427 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.428 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.526 252257 DEBUG oslo_concurrency.processutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.562 252257 INFO nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Creating config drive at /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/disk.config#033[00m
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.567 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddkpre6s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.705 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 701 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 114 op/s
Nov 29 03:31:28 np0005539563 nova_compute[252253]: 2025-11-29 08:31:28.730 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpddkpre6s" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899069321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.277 252257 DEBUG nova.storage.rbd_utils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] rbd image 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.280 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/disk.config 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.323 252257 DEBUG oslo_concurrency.processutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.797s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.329 252257 DEBUG nova.compute.provider_tree [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.449 252257 DEBUG nova.scheduler.client.report [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:29 np0005539563 nova_compute[252253]: 2025-11-29 08:31:29.804 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.020 252257 INFO nova.scheduler.client.report [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Deleted allocations for instance e412f1ba-217f-4c10-b176-528b2ef6ed0e#033[00m
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.287 252257 DEBUG nova.storage.rbd_utils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] removing snapshot(e41c51d3318f4c77848a1b37592a4eed) on rbd image(4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:31:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:30.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.341 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.402 252257 DEBUG oslo_concurrency.lockutils [None req-82ced9ad-5dad-4745-9300-a6708c004a46 d039e57f31de4717a235fc96ebd56559 527c6a274d1e478eadfe67139e121185 - - default default] Lock "e412f1ba-217f-4c10-b176-528b2ef6ed0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.7 MiB/s wr, 99 op/s
Nov 29 03:31:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Nov 29 03:31:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 03:31:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.931 252257 DEBUG nova.storage.rbd_utils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] creating snapshot(snap) on rbd image(fde91722-ea74-45a9-b57b-7fc9203f0965) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.962 252257 DEBUG oslo_concurrency.processutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/disk.config 9a93f858-81bf-4fad-bdc5-df74e4bb0c75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:30 np0005539563 nova_compute[252253]: 2025-11-29 08:31:30.962 252257 INFO nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Deleting local config drive /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75/disk.config because it was imported into RBD.#033[00m
Nov 29 03:31:31 np0005539563 NetworkManager[48981]: <info>  [1764405091.0192] manager: (tap8b93f183-22): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Nov 29 03:31:31 np0005539563 kernel: tap8b93f183-22: entered promiscuous mode
Nov 29 03:31:31 np0005539563 systemd-udevd[359249]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.080 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:31Z|00726|binding|INFO|Claiming lport 8b93f183-22da-4a1a-9b2c-52314f794aad for this chassis.
Nov 29 03:31:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:31Z|00727|binding|INFO|8b93f183-22da-4a1a-9b2c-52314f794aad: Claiming fa:16:3e:cb:cf:07 10.100.0.10
Nov 29 03:31:31 np0005539563 NetworkManager[48981]: <info>  [1764405091.0951] device (tap8b93f183-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:31 np0005539563 NetworkManager[48981]: <info>  [1764405091.0968] device (tap8b93f183-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:31Z|00728|binding|INFO|Setting lport 8b93f183-22da-4a1a-9b2c-52314f794aad ovn-installed in OVS
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.101 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 systemd-machined[213024]: New machine qemu-84-instance-000000ad.
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.111 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 systemd[1]: Started Virtual Machine qemu-84-instance-000000ad.
Nov 29 03:31:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:31Z|00729|binding|INFO|Setting lport 8b93f183-22da-4a1a-9b2c-52314f794aad up in Southbound
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.329 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:cf:07 10.100.0.10'], port_security=['fa:16:3e:cb:cf:07 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9a93f858-81bf-4fad-bdc5-df74e4bb0c75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e8e7407a7c44208a503e8225c1cf518', 'neutron:revision_number': '2', 'neutron:security_group_ids': '056d3a24-7b10-4a45-884a-1b8e5def99f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a117267-2677-4e97-b3d9-4edd30f1b375, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8b93f183-22da-4a1a-9b2c-52314f794aad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.330 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8b93f183-22da-4a1a-9b2c-52314f794aad in datapath 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 bound to our chassis#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.331 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.344 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[457db5f6-0da9-42c0-ab36-79f1c45c19e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.345 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bbeaef7-11 in ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.347 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bbeaef7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.347 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5f48d5-890e-4f2b-9b0a-9ef3f4572bca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.348 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5528023f-cf23-4ce9-8fe5-01126becc4a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.361 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e5752a79-3ab6-4a16-a324-6037afa5a44e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.376 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b2ebb8-0834-452b-8ed1-a6bfda282709]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.408 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[730f9209-22de-46ba-8369-4edadd832e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 NetworkManager[48981]: <info>  [1764405091.4145] manager: (tap9bbeaef7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.414 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[29d5b8f9-d3b0-450d-95e5-932c5b014d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.448 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9acd0436-9196-47b9-9f0a-ad425bf41fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.451 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2ddbc7-e62c-4ca8-8cfb-165e964889d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 NetworkManager[48981]: <info>  [1764405091.4728] device (tap9bbeaef7-10): carrier: link connected
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.479 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[441c79e9-8b95-44c7-ab89-f90e898954cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.499 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0002b8-7553-43ad-900a-0419a1ca4a29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bbeaef7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:b9:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805924, 'reachable_time': 18462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359290, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.512 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4df0a48e-ddd5-46a3-aa85-be84c78ef196]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:b929'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 805924, 'tstamp': 805924}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359291, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.534 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[310a5221-6ad6-4f66-8f06-0be0a63da2ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bbeaef7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:b9:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805924, 'reachable_time': 18462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359292, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.559 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4e93eb13-bb10-47a9-a575-f1cf39012d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.618 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc085ef-2a35-4bc1-aca5-a0a13c614b08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.620 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bbeaef7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.620 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.621 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bbeaef7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:31 np0005539563 NetworkManager[48981]: <info>  [1764405091.6239] manager: (tap9bbeaef7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.623 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 kernel: tap9bbeaef7-10: entered promiscuous mode
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.628 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.630 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bbeaef7-10, col_values=(('external_ids', {'iface-id': '3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.631 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:31Z|00730|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 nova_compute[252253]: 2025-11-29 08:31:31.649 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.650 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.650 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1638264d-90bf-4876-bc24-ce75ffaef387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.651 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.pid.haproxy
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:31.652 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'env', 'PROCESS_TAG=haproxy-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:31:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Nov 29 03:31:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 03:31:31 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Nov 29 03:31:32 np0005539563 podman[359363]: 2025-11-29 08:31:32.031938339 +0000 UTC m=+0.048580017 container create c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:31:32 np0005539563 systemd[1]: Started libpod-conmon-c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c.scope.
Nov 29 03:31:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:31:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07685db7f06c3a1fae6525af4641c1023877351acb1c8ba4b5d443ba58d83295/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:32 np0005539563 podman[359363]: 2025-11-29 08:31:32.004958799 +0000 UTC m=+0.021600497 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:32 np0005539563 podman[359363]: 2025-11-29 08:31:32.1135475 +0000 UTC m=+0.130189198 container init c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:31:32 np0005539563 podman[359363]: 2025-11-29 08:31:32.120482528 +0000 UTC m=+0.137124206 container start c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:31:32 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [NOTICE]   (359389) : New worker (359391) forked
Nov 29 03:31:32 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [NOTICE]   (359389) : Loading success.
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.242 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405092.2419991, 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.243 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.268 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.272 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405092.244783, 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.272 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.294 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.296 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.320 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.431 252257 DEBUG nova.compute.manager [req-a0e03d82-997a-4bd1-8587-a12b50fabd93 req-83da6f78-a133-467c-a8a8-001779cc4359 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.431 252257 DEBUG oslo_concurrency.lockutils [req-a0e03d82-997a-4bd1-8587-a12b50fabd93 req-83da6f78-a133-467c-a8a8-001779cc4359 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.431 252257 DEBUG oslo_concurrency.lockutils [req-a0e03d82-997a-4bd1-8587-a12b50fabd93 req-83da6f78-a133-467c-a8a8-001779cc4359 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.431 252257 DEBUG oslo_concurrency.lockutils [req-a0e03d82-997a-4bd1-8587-a12b50fabd93 req-83da6f78-a133-467c-a8a8-001779cc4359 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.432 252257 DEBUG nova.compute.manager [req-a0e03d82-997a-4bd1-8587-a12b50fabd93 req-83da6f78-a133-467c-a8a8-001779cc4359 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Processing event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.432 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.435 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405092.4350135, 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.436 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.437 252257 DEBUG nova.virt.libvirt.driver [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.441 252257 INFO nova.virt.libvirt.driver [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Instance spawned successfully.#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.442 252257 INFO nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Took 17.05 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.443 252257 DEBUG nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.480 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.486 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.518 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.520 252257 INFO nova.compute.manager [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Took 18.00 seconds to build instance.#033[00m
Nov 29 03:31:32 np0005539563 nova_compute[252253]: 2025-11-29 08:31:32.542 252257 DEBUG oslo_concurrency.lockutils [None req-7829968d-10ee-4a6e-b99c-9e6494b0e35d d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.9 MiB/s wr, 104 op/s
Nov 29 03:31:33 np0005539563 nova_compute[252253]: 2025-11-29 08:31:33.707 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:34.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:34.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.502 252257 DEBUG nova.compute.manager [req-698d24d9-2c1c-478f-84ab-0503026aac24 req-ea29f531-706f-4eb6-8f8b-7569f29f9470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.503 252257 DEBUG oslo_concurrency.lockutils [req-698d24d9-2c1c-478f-84ab-0503026aac24 req-ea29f531-706f-4eb6-8f8b-7569f29f9470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.503 252257 DEBUG oslo_concurrency.lockutils [req-698d24d9-2c1c-478f-84ab-0503026aac24 req-ea29f531-706f-4eb6-8f8b-7569f29f9470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.503 252257 DEBUG oslo_concurrency.lockutils [req-698d24d9-2c1c-478f-84ab-0503026aac24 req-ea29f531-706f-4eb6-8f8b-7569f29f9470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.504 252257 DEBUG nova.compute.manager [req-698d24d9-2c1c-478f-84ab-0503026aac24 req-ea29f531-706f-4eb6-8f8b-7569f29f9470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] No waiting events found dispatching network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.504 252257 WARNING nova.compute.manager [req-698d24d9-2c1c-478f-84ab-0503026aac24 req-ea29f531-706f-4eb6-8f8b-7569f29f9470 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received unexpected event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad for instance with vm_state active and task_state None.#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.701 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405079.700245, e412f1ba-217f-4c10-b176-528b2ef6ed0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.701 252257 INFO nova.compute.manager [-] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:31:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 762 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 8.6 MiB/s wr, 202 op/s
Nov 29 03:31:34 np0005539563 nova_compute[252253]: 2025-11-29 08:31:34.719 252257 DEBUG nova.compute.manager [None req-108d63d2-80e4-46cf-8505-822be6600d34 - - - - - -] [instance: e412f1ba-217f-4c10-b176-528b2ef6ed0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.545 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405080.5447826, 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.546 252257 INFO nova.compute.manager [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.568 252257 DEBUG nova.compute.manager [None req-306b9742-72ef-45d9-8a69-2c8ab3b887d0 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.573 252257 DEBUG nova.compute.manager [None req-306b9742-72ef-45d9-8a69-2c8ab3b887d0 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.593 252257 INFO nova.compute.manager [None req-306b9742-72ef-45d9-8a69-2c8ab3b887d0 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.870 252257 INFO nova.virt.libvirt.driver [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Snapshot image upload complete#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.871 252257 DEBUG nova.compute.manager [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.937 252257 INFO nova.compute.manager [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Shelve offloading#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.943 252257 INFO nova.virt.libvirt.driver [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance destroyed successfully.#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.943 252257 DEBUG nova.compute.manager [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.945 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.945 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:35 np0005539563 nova_compute[252253]: 2025-11-29 08:31:35.946 252257 DEBUG nova.network.neutron [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:36.320 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:36.321 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:31:36 np0005539563 nova_compute[252253]: 2025-11-29 08:31:36.322 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.263 252257 DEBUG nova.compute.manager [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-changed-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.264 252257 DEBUG nova.compute.manager [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Refreshing instance network info cache due to event network-changed-8b93f183-22da-4a1a-9b2c-52314f794aad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.264 252257 DEBUG oslo_concurrency.lockutils [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.264 252257 DEBUG oslo_concurrency.lockutils [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.265 252257 DEBUG nova.network.neutron [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Refreshing network info cache for port 8b93f183-22da-4a1a-9b2c-52314f794aad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.335 252257 DEBUG nova.network.neutron [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:37 np0005539563 nova_compute[252253]: 2025-11-29 08:31:37.399 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:38.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539563 nova_compute[252253]: 2025-11-29 08:31:38.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 777 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Nov 29 03:31:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Nov 29 03:31:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 03:31:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Nov 29 03:31:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:31:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1181121692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:31:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:31:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1181121692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:31:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.349 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:40.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.572 252257 INFO nova.virt.libvirt.driver [-] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Instance destroyed successfully.#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.572 252257 DEBUG nova.objects.instance [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lazy-loading 'resources' on Instance uuid 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.586 252257 DEBUG nova.virt.libvirt.vif [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-935562196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-935562196',id=171,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/QbFQHfxyoj1W/t5pawyERQGZRClAr1DxU8gg8udDNKRDAgSRqjviYC9CV8DByogltybpLGJLh5e67lMbhPKRIYOrGJnVOyLrNIthayQV7k/8lr+xvE29t9ygQsTGfcQ==',key_name='tempest-keypair-1565500821',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='37972b49ddde4c519c6523d2ea1569b5',ramdisk_id='',reservation_id='r-osvyk2n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1751768432',owner_user_name='tempest-AttachVolumeShelveTestJSON-1751768432-project-member',shelved_at='2025-11-29T08:31:35.871379',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fde91722-ea74-45a9-b57b-7fc9203f0965'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e6de0587a3794e30acefc687f435d388',uuid=4d6c236c-ba8a-44dc-8413-3d4bfc16ec56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": 
"4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.587 252257 DEBUG nova.network.os_vif_util [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converting VIF {"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1576b647-a0", "ovs_interfaceid": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.588 252257 DEBUG nova.network.os_vif_util [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.588 252257 DEBUG os_vif [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.589 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.590 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1576b647-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.591 252257 DEBUG nova.network.neutron [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updated VIF entry in instance network info cache for port 8b93f183-22da-4a1a-9b2c-52314f794aad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.591 252257 DEBUG nova.network.neutron [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating instance_info_cache with network_info: [{"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.593 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.597 252257 INFO os_vif [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:4e:1f,bridge_name='br-int',has_traffic_filtering=True,id=1576b647-a0ba-45ac-afa5-c62b909bb7e9,network=Network(4c541784-a3aa-4c55-a753-a31504941937),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1576b647-a0')#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.618 252257 DEBUG oslo_concurrency.lockutils [req-782bf98a-89c5-413b-a083-7cc8e6cda4d4 req-02302326-fa73-49e9-83aa-3da6933de961 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.673 252257 DEBUG nova.compute.manager [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Received event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.674 252257 DEBUG nova.compute.manager [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing instance network info cache due to event network-changed-1576b647-a0ba-45ac-afa5-c62b909bb7e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.674 252257 DEBUG oslo_concurrency.lockutils [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.675 252257 DEBUG oslo_concurrency.lockutils [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:40 np0005539563 nova_compute[252253]: 2025-11-29 08:31:40.675 252257 DEBUG nova.network.neutron [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Refreshing network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 757 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.8 MiB/s wr, 240 op/s
Nov 29 03:31:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:31:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775002244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:31:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:31:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775002244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:31:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:31:41.323 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.284 252257 DEBUG nova.network.neutron [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updated VIF entry in instance network info cache for port 1576b647-a0ba-45ac-afa5-c62b909bb7e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.285 252257 DEBUG nova.network.neutron [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Updating instance_info_cache with network_info: [{"id": "1576b647-a0ba-45ac-afa5-c62b909bb7e9", "address": "fa:16:3e:53:4e:1f", "network": {"id": "4c541784-a3aa-4c55-a753-a31504941937", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1413927249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "37972b49ddde4c519c6523d2ea1569b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap1576b647-a0", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.304 252257 DEBUG oslo_concurrency.lockutils [req-723539cc-f447-4b07-82cc-c42f84924631 req-afa81b81-7624-4e5f-b5db-5577c6c47548 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:42.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.496 252257 INFO nova.virt.libvirt.driver [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deleting instance files /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_del#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.496 252257 INFO nova.virt.libvirt.driver [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Deletion of /var/lib/nova/instances/4d6c236c-ba8a-44dc-8413-3d4bfc16ec56_del complete#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.586 252257 INFO nova.scheduler.client.report [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Deleted allocations for instance 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.666 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.666 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 757 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 212 op/s
Nov 29 03:31:42 np0005539563 nova_compute[252253]: 2025-11-29 08:31:42.720 252257 DEBUG oslo_concurrency.processutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603678167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:43 np0005539563 nova_compute[252253]: 2025-11-29 08:31:43.209 252257 DEBUG oslo_concurrency.processutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:43 np0005539563 nova_compute[252253]: 2025-11-29 08:31:43.215 252257 DEBUG nova.compute.provider_tree [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:43 np0005539563 nova_compute[252253]: 2025-11-29 08:31:43.248 252257 DEBUG nova.scheduler.client.report [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:31:43 np0005539563 nova_compute[252253]: 2025-11-29 08:31:43.282 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:43 np0005539563 nova_compute[252253]: 2025-11-29 08:31:43.351 252257 DEBUG oslo_concurrency.lockutils [None req-b53fee3f-de87-428a-895d-6f24e506495a e6de0587a3794e30acefc687f435d388 37972b49ddde4c519c6523d2ea1569b5 - - default default] Lock "4d6c236c-ba8a-44dc-8413-3d4bfc16ec56" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 26.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:43 np0005539563 nova_compute[252253]: 2025-11-29 08:31:43.713 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:44.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 535 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 417 KiB/s wr, 207 op/s
Nov 29 03:31:45 np0005539563 nova_compute[252253]: 2025-11-29 08:31:45.592 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Nov 29 03:31:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 03:31:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Nov 29 03:31:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:46.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 464 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 415 KiB/s wr, 134 op/s
Nov 29 03:31:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:47Z|00079|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.10
Nov 29 03:31:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:47Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:cb:cf:07 10.100.0.10
Nov 29 03:31:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:48.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:48.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:48 np0005539563 nova_compute[252253]: 2025-11-29 08:31:48.714 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 628 KiB/s wr, 162 op/s
Nov 29 03:31:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:50Z|00081|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.10
Nov 29 03:31:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:50Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:cb:cf:07 10.100.0.10
Nov 29 03:31:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:50.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:50 np0005539563 podman[359502]: 2025-11-29 08:31:50.522571342 +0000 UTC m=+0.070890252 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:31:50 np0005539563 podman[359503]: 2025-11-29 08:31:50.530724753 +0000 UTC m=+0.078411865 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:31:50 np0005539563 podman[359504]: 2025-11-29 08:31:50.557664692 +0000 UTC m=+0.094501450 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:31:50 np0005539563 nova_compute[252253]: 2025-11-29 08:31:50.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 606 KiB/s wr, 174 op/s
Nov 29 03:31:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:52.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:52.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:52Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:cf:07 10.100.0.10
Nov 29 03:31:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:31:52Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:cf:07 10.100.0.10
Nov 29 03:31:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 606 KiB/s wr, 174 op/s
Nov 29 03:31:53 np0005539563 nova_compute[252253]: 2025-11-29 08:31:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:53 np0005539563 nova_compute[252253]: 2025-11-29 08:31:53.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:53 np0005539563 nova_compute[252253]: 2025-11-29 08:31:53.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:31:53 np0005539563 nova_compute[252253]: 2025-11-29 08:31:53.717 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Nov 29 03:31:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 03:31:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Nov 29 03:31:54 np0005539563 nova_compute[252253]: 2025-11-29 08:31:54.138 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:31:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:54.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:54.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 337 KiB/s wr, 79 op/s
Nov 29 03:31:55 np0005539563 nova_compute[252253]: 2025-11-29 08:31:55.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:31:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:56.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:31:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 347 KiB/s wr, 73 op/s
Nov 29 03:31:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:58.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:31:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:58.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:58 np0005539563 nova_compute[252253]: 2025-11-29 08:31:58.719 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 248 KiB/s rd, 62 KiB/s wr, 21 op/s
Nov 29 03:31:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:00.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:00.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:00 np0005539563 nova_compute[252253]: 2025-11-29 08:32:00.599 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 62 KiB/s wr, 4 op/s
Nov 29 03:32:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1357944057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:32:02Z|00731|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 03:32:02 np0005539563 nova_compute[252253]: 2025-11-29 08:32:02.144 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:02.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:02.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 62 KiB/s wr, 4 op/s
Nov 29 03:32:03 np0005539563 nova_compute[252253]: 2025-11-29 08:32:03.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 50 KiB/s wr, 5 op/s
Nov 29 03:32:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:04.937 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:04.938 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:04.938 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:05 np0005539563 nova_compute[252253]: 2025-11-29 08:32:05.137 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:05 np0005539563 nova_compute[252253]: 2025-11-29 08:32:05.137 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:05 np0005539563 nova_compute[252253]: 2025-11-29 08:32:05.138 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:32:05 np0005539563 nova_compute[252253]: 2025-11-29 08:32:05.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:06.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:06.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:06 np0005539563 nova_compute[252253]: 2025-11-29 08:32:06.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:06 np0005539563 nova_compute[252253]: 2025-11-29 08:32:06.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 48 KiB/s wr, 5 op/s
Nov 29 03:32:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:08.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:08.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:08 np0005539563 nova_compute[252253]: 2025-11-29 08:32:08.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 9.2 KiB/s wr, 4 op/s
Nov 29 03:32:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:09 np0005539563 nova_compute[252253]: 2025-11-29 08:32:09.961 252257 DEBUG oslo_concurrency.lockutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:09 np0005539563 nova_compute[252253]: 2025-11-29 08:32:09.962 252257 DEBUG oslo_concurrency.lockutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:09 np0005539563 nova_compute[252253]: 2025-11-29 08:32:09.977 252257 DEBUG nova.objects.instance [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'flavor' on Instance uuid 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.269 252257 DEBUG oslo_concurrency.lockutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:10.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:10.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.606 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.715 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 4d6c236c-ba8a-44dc-8413-3d4bfc16ec56] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.716 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.716 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 6.1 KiB/s wr, 4 op/s
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.793 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.794 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.794 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.794 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.794 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.970 252257 DEBUG oslo_concurrency.lockutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.971 252257 DEBUG oslo_concurrency.lockutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:10 np0005539563 nova_compute[252253]: 2025-11-29 08:32:10.971 252257 INFO nova.compute.manager [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Attaching volume 0176de29-c1f1-41e7-8476-91ff0e6c70a5 to /dev/vdb#033[00m
Nov 29 03:32:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508529758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.235 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.263 252257 DEBUG os_brick.utils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.264 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.275 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.276 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[75fdb3e6-56f4-4d37-b6ad-d4fcbbd64f22]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.276 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.284 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.285 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[11bd021e-8eed-4eb4-a225-1cf754ac0ce2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.286 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.296 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.296 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa9c8a0-0c93-4812-99d6-dabacbc3e45f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.298 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[608717fd-a57a-43ca-8e30-e2c6da56973b]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.298 252257 DEBUG oslo_concurrency.processutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.337 252257 DEBUG oslo_concurrency.processutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.339 252257 DEBUG os_brick.initiator.connectors.lightos [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.340 252257 DEBUG os_brick.initiator.connectors.lightos [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.340 252257 DEBUG os_brick.initiator.connectors.lightos [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.340 252257 DEBUG os_brick.utils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.341 252257 DEBUG nova.virt.block_device [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating existing volume attachment record: 5f58bd24-32fa-485e-9f3a-580b2e710da0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:32:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4214025956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.376 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.376 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.528 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.529 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4032MB free_disk=20.890789031982422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.529 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.530 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:11 np0005539563 ovn_controller[148841]: 2025-11-29T08:32:11Z|00732|binding|INFO|Releasing lport 3986ddbd-1b85-4e76-95e6-c4ab20dc3ca3 from this chassis (sb_readonly=0)
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.673 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.673 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.674 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.698 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:11 np0005539563 nova_compute[252253]: 2025-11-29 08:32:11.837 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.194 252257 DEBUG nova.objects.instance [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'flavor' on Instance uuid 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.223 252257 DEBUG nova.virt.libvirt.driver [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Attempting to attach volume 0176de29-c1f1-41e7-8476-91ff0e6c70a5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.225 252257 DEBUG nova.virt.libvirt.guest [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-0176de29-c1f1-41e7-8476-91ff0e6c70a5">
Nov 29 03:32:12 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  <auth username="openstack">
Nov 29 03:32:12 np0005539563 nova_compute[252253]:    <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  </auth>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:12 np0005539563 nova_compute[252253]:  <serial>0176de29-c1f1-41e7-8476-91ff0e6c70a5</serial>
Nov 29 03:32:12 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:32:12 np0005539563 nova_compute[252253]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:32:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043514117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.261 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.266 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.339 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:12.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.399 252257 DEBUG nova.virt.libvirt.driver [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.399 252257 DEBUG nova.virt.libvirt.driver [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.399 252257 DEBUG nova.virt.libvirt.driver [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.400 252257 DEBUG nova.virt.libvirt.driver [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] No VIF found with MAC fa:16:3e:cb:cf:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.405 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.405 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:12.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:12 np0005539563 nova_compute[252253]: 2025-11-29 08:32:12.684 252257 DEBUG oslo_concurrency.lockutils [None req-89f89bff-8295-470a-b038-97b345ebd7ed d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 457 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 6.1 KiB/s wr, 4 op/s
Nov 29 03:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:32:12
Nov 29 03:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Nov 29 03:32:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.483251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133483371, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 922, "num_deletes": 254, "total_data_size": 1295410, "memory_usage": 1325344, "flush_reason": "Manual Compaction"}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133498934, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1279833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56498, "largest_seqno": 57419, "table_properties": {"data_size": 1275156, "index_size": 2265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10750, "raw_average_key_size": 20, "raw_value_size": 1265604, "raw_average_value_size": 2396, "num_data_blocks": 99, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405069, "oldest_key_time": 1764405069, "file_creation_time": 1764405133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 15735 microseconds, and 4509 cpu microseconds.
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499027) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1279833 bytes OK
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.499069) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.507601) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.507645) EVENT_LOG_v1 {"time_micros": 1764405133507637, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.507668) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1290946, prev total WAL file size 1290946, number of live WAL files 2.
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.508806) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1249KB)], [122(12MB)]
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133508931, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 14908123, "oldest_snapshot_seqno": -1}
Nov 29 03:32:13 np0005539563 nova_compute[252253]: 2025-11-29 08:32:13.730 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 9140 keys, 12980847 bytes, temperature: kUnknown
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133841151, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 12980847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12919582, "index_size": 37281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 238783, "raw_average_key_size": 26, "raw_value_size": 12757054, "raw_average_value_size": 1395, "num_data_blocks": 1447, "num_entries": 9140, "num_filter_entries": 9140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.843631) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12980847 bytes
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.856531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 44.9 rd, 39.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(21.8) write-amplify(10.1) OK, records in: 9665, records dropped: 525 output_compression: NoCompression
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.856574) EVENT_LOG_v1 {"time_micros": 1764405133856559, "job": 74, "event": "compaction_finished", "compaction_time_micros": 332173, "compaction_time_cpu_micros": 30753, "output_level": 6, "num_output_files": 1, "total_output_size": 12980847, "num_input_records": 9665, "num_output_records": 9140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133857078, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405133859110, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.508569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.859215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.859223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.859225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.859227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:13 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:32:13.859230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:32:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 492 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 73 op/s
Nov 29 03:32:15 np0005539563 nova_compute[252253]: 2025-11-29 08:32:15.461 252257 DEBUG oslo_concurrency.lockutils [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:15 np0005539563 nova_compute[252253]: 2025-11-29 08:32:15.462 252257 DEBUG oslo_concurrency.lockutils [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:15 np0005539563 nova_compute[252253]: 2025-11-29 08:32:15.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:15 np0005539563 nova_compute[252253]: 2025-11-29 08:32:15.927 252257 INFO nova.compute.manager [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Detaching volume 0176de29-c1f1-41e7-8476-91ff0e6c70a5#033[00m
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.366 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:16.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:16.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:32:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:32:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 03:32:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.867 252257 INFO nova.virt.block_device [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Attempting to driver detach volume 0176de29-c1f1-41e7-8476-91ff0e6c70a5 from mountpoint /dev/vdb
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.877 252257 DEBUG nova.virt.libvirt.driver [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Attempting to detach device vdb from instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.878 252257 DEBUG nova.virt.libvirt.guest [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-0176de29-c1f1-41e7-8476-91ff0e6c70a5">
Nov 29 03:32:16 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <serial>0176de29-c1f1-41e7-8476-91ff0e6c70a5</serial>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:32:16 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.886 252257 INFO nova.virt.libvirt.driver [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully detached device vdb from instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 from the persistent domain config.
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.887 252257 DEBUG nova.virt.libvirt.driver [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:32:16 np0005539563 nova_compute[252253]: 2025-11-29 08:32:16.887 252257 DEBUG nova.virt.libvirt.guest [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <source protocol="rbd" name="volumes/volume-0176de29-c1f1-41e7-8476-91ff0e6c70a5">
Nov 29 03:32:16 np0005539563 nova_compute[252253]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  </source>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <serial>0176de29-c1f1-41e7-8476-91ff0e6c70a5</serial>
Nov 29 03:32:16 np0005539563 nova_compute[252253]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:16 np0005539563 nova_compute[252253]: </disk>
Nov 29 03:32:16 np0005539563 nova_compute[252253]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:32:17 np0005539563 nova_compute[252253]: 2025-11-29 08:32:17.008 252257 DEBUG nova.virt.libvirt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Received event <DeviceRemovedEvent: 1764405137.0082552, 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:32:17 np0005539563 nova_compute[252253]: 2025-11-29 08:32:17.010 252257 DEBUG nova.virt.libvirt.driver [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:32:17 np0005539563 nova_compute[252253]: 2025-11-29 08:32:17.012 252257 INFO nova.virt.libvirt.driver [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully detached device vdb from instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 from the live domain config.
Nov 29 03:32:17 np0005539563 nova_compute[252253]: 2025-11-29 08:32:17.404 252257 DEBUG nova.objects.instance [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'flavor' on Instance uuid 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:32:17 np0005539563 nova_compute[252253]: 2025-11-29 08:32:17.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4eed6600-b946-485d-a7f5-cf86f0c62436 does not exist
Nov 29 03:32:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cfb3d211-d317-4ab9-acf2-399982cf227f does not exist
Nov 29 03:32:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 557c4fb5-e597-475d-a933-7787ffac64e8 does not exist
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:32:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:32:17 np0005539563 nova_compute[252253]: 2025-11-29 08:32:17.867 252257 DEBUG oslo_concurrency.lockutils [None req-9dfb9d9a-c5b1-47e5-a38e-fdd4efc9d74c d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.31486183 +0000 UTC m=+0.040514659 container create 35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:32:18 np0005539563 systemd[1]: Started libpod-conmon-35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292.scope.
Nov 29 03:32:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:18.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.390468418 +0000 UTC m=+0.116121277 container init 35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.297248122 +0000 UTC m=+0.022900981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.398210157 +0000 UTC m=+0.123862996 container start 35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.401156587 +0000 UTC m=+0.126809456 container attach 35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:18 np0005539563 adoring_taussig[359986]: 167 167
Nov 29 03:32:18 np0005539563 systemd[1]: libpod-35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292.scope: Deactivated successfully.
Nov 29 03:32:18 np0005539563 conmon[359986]: conmon 35387855fdbd542c76d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292.scope/container/memory.events
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.407355675 +0000 UTC m=+0.133008524 container died 35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:32:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0e262797cbde54dd5770f39096705dc12522a59e3476118bd800ca28cbe84619-merged.mount: Deactivated successfully.
Nov 29 03:32:18 np0005539563 podman[359970]: 2025-11-29 08:32:18.451866511 +0000 UTC m=+0.177519370 container remove 35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_taussig, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:18 np0005539563 systemd[1]: libpod-conmon-35387855fdbd542c76d7e1db9a21a22a967cd034db16c4ecb8d90ea390bc8292.scope: Deactivated successfully.
Nov 29 03:32:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:18.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:18 np0005539563 podman[360008]: 2025-11-29 08:32:18.643944245 +0000 UTC m=+0.039304916 container create f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:32:18 np0005539563 systemd[1]: Started libpod-conmon-f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868.scope.
Nov 29 03:32:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:32:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2317e419ba9490e5f05d2c5b8441920eaf428e13eb898f3501bae297c33d3bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2317e419ba9490e5f05d2c5b8441920eaf428e13eb898f3501bae297c33d3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2317e419ba9490e5f05d2c5b8441920eaf428e13eb898f3501bae297c33d3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2317e419ba9490e5f05d2c5b8441920eaf428e13eb898f3501bae297c33d3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2317e419ba9490e5f05d2c5b8441920eaf428e13eb898f3501bae297c33d3bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:18 np0005539563 podman[360008]: 2025-11-29 08:32:18.62754925 +0000 UTC m=+0.022909951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:18 np0005539563 podman[360008]: 2025-11-29 08:32:18.725765951 +0000 UTC m=+0.121126642 container init f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:18 np0005539563 nova_compute[252253]: 2025-11-29 08:32:18.733 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 538 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.1 MiB/s wr, 110 op/s
Nov 29 03:32:18 np0005539563 podman[360008]: 2025-11-29 08:32:18.736160253 +0000 UTC m=+0.131520924 container start f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:18 np0005539563 podman[360008]: 2025-11-29 08:32:18.739935405 +0000 UTC m=+0.135296066 container attach f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:32:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.086 252257 DEBUG nova.compute.manager [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-changed-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.087 252257 DEBUG nova.compute.manager [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Refreshing instance network info cache due to event network-changed-8b93f183-22da-4a1a-9b2c-52314f794aad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.087 252257 DEBUG oslo_concurrency.lockutils [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.088 252257 DEBUG oslo_concurrency.lockutils [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.089 252257 DEBUG nova.network.neutron [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Refreshing network info cache for port 8b93f183-22da-4a1a-9b2c-52314f794aad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.137 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.138 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.138 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.138 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.139 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.140 252257 INFO nova.compute.manager [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Terminating instance
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.140 252257 DEBUG nova.compute.manager [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:32:19 np0005539563 kernel: tap8b93f183-22 (unregistering): left promiscuous mode
Nov 29 03:32:19 np0005539563 NetworkManager[48981]: <info>  [1764405139.2080] device (tap8b93f183-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:32:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:32:19Z|00733|binding|INFO|Releasing lport 8b93f183-22da-4a1a-9b2c-52314f794aad from this chassis (sb_readonly=0)
Nov 29 03:32:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:32:19Z|00734|binding|INFO|Setting lport 8b93f183-22da-4a1a-9b2c-52314f794aad down in Southbound
Nov 29 03:32:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:32:19Z|00735|binding|INFO|Removing iface tap8b93f183-22 ovn-installed in OVS
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.233 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:cf:07 10.100.0.10'], port_security=['fa:16:3e:cb:cf:07 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9a93f858-81bf-4fad-bdc5-df74e4bb0c75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e8e7407a7c44208a503e8225c1cf518', 'neutron:revision_number': '4', 'neutron:security_group_ids': '056d3a24-7b10-4a45-884a-1b8e5def99f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a117267-2677-4e97-b3d9-4edd30f1b375, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8b93f183-22da-4a1a-9b2c-52314f794aad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.235 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8b93f183-22da-4a1a-9b2c-52314f794aad in datapath 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 unbound from our chassis#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.237 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.238 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e15d4f75-fa09-4e1e-aad5-97a6961b0dc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.239 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 namespace which is not needed anymore#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.251 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Nov 29 03:32:19 np0005539563 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000ad.scope: Consumed 16.550s CPU time.
Nov 29 03:32:19 np0005539563 systemd-machined[213024]: Machine qemu-84-instance-000000ad terminated.
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.369 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.377 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.383 252257 INFO nova.virt.libvirt.driver [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Instance destroyed successfully.#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.383 252257 DEBUG nova.objects.instance [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lazy-loading 'resources' on Instance uuid 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:19 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [NOTICE]   (359389) : haproxy version is 2.8.14-c23fe91
Nov 29 03:32:19 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [NOTICE]   (359389) : path to executable is /usr/sbin/haproxy
Nov 29 03:32:19 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [WARNING]  (359389) : Exiting Master process...
Nov 29 03:32:19 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [ALERT]    (359389) : Current worker (359391) exited with code 143 (Terminated)
Nov 29 03:32:19 np0005539563 neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6[359385]: [WARNING]  (359389) : All workers exited. Exiting... (0)
Nov 29 03:32:19 np0005539563 systemd[1]: libpod-c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c.scope: Deactivated successfully.
Nov 29 03:32:19 np0005539563 podman[360055]: 2025-11-29 08:32:19.412588258 +0000 UTC m=+0.054066155 container died c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Nov 29 03:32:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:32:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-07685db7f06c3a1fae6525af4641c1023877351acb1c8ba4b5d443ba58d83295-merged.mount: Deactivated successfully.
Nov 29 03:32:19 np0005539563 podman[360055]: 2025-11-29 08:32:19.478690449 +0000 UTC m=+0.120168346 container cleanup c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:32:19 np0005539563 systemd[1]: libpod-conmon-c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c.scope: Deactivated successfully.
Nov 29 03:32:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 03:32:19 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Nov 29 03:32:19 np0005539563 podman[360097]: 2025-11-29 08:32:19.556320353 +0000 UTC m=+0.051606760 container remove c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.563 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d6697d-e954-41fd-ab84-ddeb02766912]: (4, ('Sat Nov 29 08:32:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 (c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c)\nc9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c\nSat Nov 29 08:32:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 (c9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c)\nc9dba2c495dcfa6a56ff460ac686871493d8a4210362eb1335ffe684cc89561c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.565 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[408aae8e-7023-4293-9869-049da768d51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.566 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bbeaef7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:19 np0005539563 kernel: tap9bbeaef7-10: left promiscuous mode
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.599 252257 DEBUG nova.virt.libvirt.vif [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1423980546',display_name='tempest-TestStampPattern-server-1423980546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1423980546',id=173,image_ref='29125612-6fcb-47b2-8690-67e6e3459b96',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI96Evf2Y0SwutlY6N1eO4BKjG4KN2PYNqztf6unh2meM8u5LoAdRPMughEalPkJvCxIIxu40dTok7DnjTnYJBYMbeg+H1BqLCO5M0zr1+eSR0VHUnp1o+KGiyZHQh121Q==',key_name='tempest-TestStampPattern-155113296',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7e8e7407a7c44208a503e8225c1cf518',ramdisk_id='',reservation_id='r-bz3jgttx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='dc8140a9-7bef-42f8-867c-13e29f022673',image_min_disk='1',image_min_ram='0',image_owner_id='7e8e7407a7c44208a503e8225c1cf518',image_owner_project_name='tempest-TestStampPattern-1730119083',image_owner_user_name='tempest-TestStampPattern-1730119083-project-member',image_user_id='d45f9a4a44664af3884c15ce0f5697e0',owner_project_name='tempest-TestStampPattern-1730119083',owner_user_name='tempest-TestStampPattern-1730119083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:32Z,user_data=None,user_id='d45f9a4a44664af3884c15ce0f5697e0',uuid=9a93f858-81bf-4fad-bdc5-df74e4bb0c75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state=
'active') vif={"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.599 252257 DEBUG nova.network.os_vif_util [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converting VIF {"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.601 252257 DEBUG nova.network.os_vif_util [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.601 252257 DEBUG os_vif [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.604 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b93f183-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.607 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.619 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.620 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.622 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdcc008-bf71-40dd-a14a-d11c0ae87373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 nova_compute[252253]: 2025-11-29 08:32:19.622 252257 INFO os_vif [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:cf:07,bridge_name='br-int',has_traffic_filtering=True,id=8b93f183-22da-4a1a-9b2c-52314f794aad,network=Network(9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b93f183-22')#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.637 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ccd655a-332c-4ea0-9b50-46a943760eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.639 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[87a0d7bb-4d7b-48c1-b6dc-d05f82b9c83a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.656 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[41c7a7e5-1e90-4140-bead-d3ecdd9ffb5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805917, 'reachable_time': 16314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360136, 'error': None, 'target': 'ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 systemd[1]: run-netns-ovnmeta\x2d9bbeaef7\x2d1d9b\x2d48d5\x2db82f\x2da3c3a4c84cd6.mount: Deactivated successfully.
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.658 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:32:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:19.659 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e512fa33-2158-4023-8a52-f2f38fb51eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:19 np0005539563 elated_rubin[360024]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:32:19 np0005539563 elated_rubin[360024]: --> relative data size: 1.0
Nov 29 03:32:19 np0005539563 elated_rubin[360024]: --> All data devices are unavailable
Nov 29 03:32:19 np0005539563 systemd[1]: libpod-f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868.scope: Deactivated successfully.
Nov 29 03:32:19 np0005539563 podman[360008]: 2025-11-29 08:32:19.691094734 +0000 UTC m=+1.086455405 container died f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:32:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e2317e419ba9490e5f05d2c5b8441920eaf428e13eb898f3501bae297c33d3bc-merged.mount: Deactivated successfully.
Nov 29 03:32:19 np0005539563 podman[360008]: 2025-11-29 08:32:19.757016469 +0000 UTC m=+1.152377140 container remove f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:32:19 np0005539563 systemd[1]: libpod-conmon-f15d09513a35ef0754680c5e080a6be2e379dc1d78b6967755bc573220c48868.scope: Deactivated successfully.
Nov 29 03:32:20 np0005539563 nova_compute[252253]: 2025-11-29 08:32:20.003 252257 DEBUG nova.compute.manager [req-246ffad4-2b2b-4ff5-b418-a3cfe99bf13b req-b7cf896f-cc68-4ca6-a886-e611016085ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-vif-unplugged-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:20 np0005539563 nova_compute[252253]: 2025-11-29 08:32:20.004 252257 DEBUG oslo_concurrency.lockutils [req-246ffad4-2b2b-4ff5-b418-a3cfe99bf13b req-b7cf896f-cc68-4ca6-a886-e611016085ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:20 np0005539563 nova_compute[252253]: 2025-11-29 08:32:20.004 252257 DEBUG oslo_concurrency.lockutils [req-246ffad4-2b2b-4ff5-b418-a3cfe99bf13b req-b7cf896f-cc68-4ca6-a886-e611016085ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:20 np0005539563 nova_compute[252253]: 2025-11-29 08:32:20.004 252257 DEBUG oslo_concurrency.lockutils [req-246ffad4-2b2b-4ff5-b418-a3cfe99bf13b req-b7cf896f-cc68-4ca6-a886-e611016085ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:20 np0005539563 nova_compute[252253]: 2025-11-29 08:32:20.004 252257 DEBUG nova.compute.manager [req-246ffad4-2b2b-4ff5-b418-a3cfe99bf13b req-b7cf896f-cc68-4ca6-a886-e611016085ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] No waiting events found dispatching network-vif-unplugged-8b93f183-22da-4a1a-9b2c-52314f794aad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:20 np0005539563 nova_compute[252253]: 2025-11-29 08:32:20.004 252257 DEBUG nova.compute.manager [req-246ffad4-2b2b-4ff5-b418-a3cfe99bf13b req-b7cf896f-cc68-4ca6-a886-e611016085ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-vif-unplugged-8b93f183-22da-4a1a-9b2c-52314f794aad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:32:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:20.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.395908678 +0000 UTC m=+0.042775040 container create ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:32:20 np0005539563 systemd[1]: Started libpod-conmon-ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c.scope.
Nov 29 03:32:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.466324666 +0000 UTC m=+0.113191059 container init ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamport, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.37787353 +0000 UTC m=+0.024739912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.473755237 +0000 UTC m=+0.120621599 container start ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.47753362 +0000 UTC m=+0.124400002 container attach ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:32:20 np0005539563 infallible_lamport[360312]: 167 167
Nov 29 03:32:20 np0005539563 systemd[1]: libpod-ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c.scope: Deactivated successfully.
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.479607806 +0000 UTC m=+0.126474168 container died ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:32:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:20.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1a2b8d16fc0eaba0f18b4da07c7eef0095a80d24cbbf8190a52bf63444ee28ed-merged.mount: Deactivated successfully.
Nov 29 03:32:20 np0005539563 podman[360296]: 2025-11-29 08:32:20.517235386 +0000 UTC m=+0.164101748 container remove ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:32:20 np0005539563 systemd[1]: libpod-conmon-ec17fd8c5db43dff15e6c03a018e41819b11de727b38c91a0a38419bcffb5c8c.scope: Deactivated successfully.
Nov 29 03:32:20 np0005539563 podman[360336]: 2025-11-29 08:32:20.68497629 +0000 UTC m=+0.043292374 container create 71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:32:20 np0005539563 systemd[1]: Started libpod-conmon-71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e.scope.
Nov 29 03:32:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.1 MiB/s wr, 197 op/s
Nov 29 03:32:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:32:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5298089f49a5dabe9111c17a6253d36393bf53a57875501de38adb7053c84515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5298089f49a5dabe9111c17a6253d36393bf53a57875501de38adb7053c84515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5298089f49a5dabe9111c17a6253d36393bf53a57875501de38adb7053c84515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5298089f49a5dabe9111c17a6253d36393bf53a57875501de38adb7053c84515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539563 podman[360336]: 2025-11-29 08:32:20.667459285 +0000 UTC m=+0.025775389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:20 np0005539563 podman[360336]: 2025-11-29 08:32:20.773561809 +0000 UTC m=+0.131877933 container init 71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:32:20 np0005539563 podman[360336]: 2025-11-29 08:32:20.783394726 +0000 UTC m=+0.141710820 container start 71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:32:20 np0005539563 podman[360336]: 2025-11-29 08:32:20.787145798 +0000 UTC m=+0.145461912 container attach 71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:32:20 np0005539563 podman[360350]: 2025-11-29 08:32:20.78759871 +0000 UTC m=+0.067533401 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:20 np0005539563 podman[360354]: 2025-11-29 08:32:20.797892639 +0000 UTC m=+0.077299476 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:32:21 np0005539563 podman[360355]: 2025-11-29 08:32:21.01090668 +0000 UTC m=+0.287215263 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.169 252257 INFO nova.virt.libvirt.driver [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Deleting instance files /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75_del
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.169 252257 INFO nova.virt.libvirt.driver [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Deletion of /var/lib/nova/instances/9a93f858-81bf-4fad-bdc5-df74e4bb0c75_del complete
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.509 252257 INFO nova.compute.manager [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Took 2.37 seconds to destroy the instance on the hypervisor.
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.510 252257 DEBUG oslo.service.loopingcall [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.511 252257 DEBUG nova.compute.manager [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.511 252257 DEBUG nova.network.neutron [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:32:21 np0005539563 sad_diffie[360365]: {
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:    "0": [
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:        {
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "devices": [
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "/dev/loop3"
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            ],
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "lv_name": "ceph_lv0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "lv_size": "7511998464",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "name": "ceph_lv0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "tags": {
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.cluster_name": "ceph",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.crush_device_class": "",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.encrypted": "0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.osd_id": "0",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.type": "block",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:                "ceph.vdo": "0"
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            },
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "type": "block",
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:            "vg_name": "ceph_vg0"
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:        }
Nov 29 03:32:21 np0005539563 sad_diffie[360365]:    ]
Nov 29 03:32:21 np0005539563 sad_diffie[360365]: }
Nov 29 03:32:21 np0005539563 systemd[1]: libpod-71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539563 podman[360421]: 2025-11-29 08:32:21.605553009 +0000 UTC m=+0.023434905 container died 71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 03:32:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5298089f49a5dabe9111c17a6253d36393bf53a57875501de38adb7053c84515-merged.mount: Deactivated successfully.
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.645 252257 DEBUG nova.network.neutron [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updated VIF entry in instance network info cache for port 8b93f183-22da-4a1a-9b2c-52314f794aad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.645 252257 DEBUG nova.network.neutron [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating instance_info_cache with network_info: [{"id": "8b93f183-22da-4a1a-9b2c-52314f794aad", "address": "fa:16:3e:cb:cf:07", "network": {"id": "9bbeaef7-1d9b-48d5-b82f-a3c3a4c84cd6", "bridge": "br-int", "label": "tempest-TestStampPattern-617297274-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e8e7407a7c44208a503e8225c1cf518", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b93f183-22", "ovs_interfaceid": "8b93f183-22da-4a1a-9b2c-52314f794aad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:21 np0005539563 podman[360421]: 2025-11-29 08:32:21.661460754 +0000 UTC m=+0.079342640 container remove 71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:32:21 np0005539563 systemd[1]: libpod-conmon-71577a9b6e5f3a131cd06395a7a44a189fb2fb1a9eb8c6ed66d8ca363cbd602e.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539563 nova_compute[252253]: 2025-11-29 08:32:21.722 252257 DEBUG oslo_concurrency.lockutils [req-ceb74d77-b6b5-45bb-be53-0d80eccccf38 req-257f23ff-64f6-4210-9f75-6877dd3d157d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-9a93f858-81bf-4fad-bdc5-df74e4bb0c75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:32:22 np0005539563 nova_compute[252253]: 2025-11-29 08:32:22.168 252257 DEBUG nova.compute.manager [req-50e0d16a-6460-45a8-a404-39b709087fda req-cea48a28-1b57-40ef-8fac-da971cb3cb05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:22 np0005539563 nova_compute[252253]: 2025-11-29 08:32:22.168 252257 DEBUG oslo_concurrency.lockutils [req-50e0d16a-6460-45a8-a404-39b709087fda req-cea48a28-1b57-40ef-8fac-da971cb3cb05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:22 np0005539563 nova_compute[252253]: 2025-11-29 08:32:22.168 252257 DEBUG oslo_concurrency.lockutils [req-50e0d16a-6460-45a8-a404-39b709087fda req-cea48a28-1b57-40ef-8fac-da971cb3cb05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:22 np0005539563 nova_compute[252253]: 2025-11-29 08:32:22.168 252257 DEBUG oslo_concurrency.lockutils [req-50e0d16a-6460-45a8-a404-39b709087fda req-cea48a28-1b57-40ef-8fac-da971cb3cb05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:22 np0005539563 nova_compute[252253]: 2025-11-29 08:32:22.168 252257 DEBUG nova.compute.manager [req-50e0d16a-6460-45a8-a404-39b709087fda req-cea48a28-1b57-40ef-8fac-da971cb3cb05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] No waiting events found dispatching network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:32:22 np0005539563 nova_compute[252253]: 2025-11-29 08:32:22.169 252257 WARNING nova.compute.manager [req-50e0d16a-6460-45a8-a404-39b709087fda req-cea48a28-1b57-40ef-8fac-da971cb3cb05 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received unexpected event network-vif-plugged-8b93f183-22da-4a1a-9b2c-52314f794aad for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.297498275 +0000 UTC m=+0.035986026 container create 844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:32:22 np0005539563 systemd[1]: Started libpod-conmon-844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d.scope.
Nov 29 03:32:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.368653004 +0000 UTC m=+0.107140775 container init 844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:32:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:22.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.375371435 +0000 UTC m=+0.113859186 container start 844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.28365834 +0000 UTC m=+0.022146111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.380142275 +0000 UTC m=+0.118630046 container attach 844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_easley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:32:22 np0005539563 blissful_easley[360591]: 167 167
Nov 29 03:32:22 np0005539563 systemd[1]: libpod-844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d.scope: Deactivated successfully.
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.381958453 +0000 UTC m=+0.120446254 container died 844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:32:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-17e6bfa97103931100464929c3d523337b4a728b5b81d3a7a295a94f6acff4f0-merged.mount: Deactivated successfully.
Nov 29 03:32:22 np0005539563 podman[360575]: 2025-11-29 08:32:22.425136364 +0000 UTC m=+0.163624115 container remove 844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:32:22 np0005539563 systemd[1]: libpod-conmon-844971e8ed61879937a227133fb8d74e14be576630e07aa75501e01861e2755d.scope: Deactivated successfully.
Nov 29 03:32:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:22.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:22 np0005539563 podman[360614]: 2025-11-29 08:32:22.56821707 +0000 UTC m=+0.036568862 container create cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:32:22 np0005539563 systemd[1]: Started libpod-conmon-cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1.scope.
Nov 29 03:32:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:32:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062beb79ddd84940ea8f59c861de058b463b054f986dd0cf6e5a4c93efbed9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062beb79ddd84940ea8f59c861de058b463b054f986dd0cf6e5a4c93efbed9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062beb79ddd84940ea8f59c861de058b463b054f986dd0cf6e5a4c93efbed9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f062beb79ddd84940ea8f59c861de058b463b054f986dd0cf6e5a4c93efbed9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:22 np0005539563 podman[360614]: 2025-11-29 08:32:22.553571753 +0000 UTC m=+0.021923565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.1 MiB/s wr, 197 op/s
Nov 29 03:32:23 np0005539563 podman[360614]: 2025-11-29 08:32:23.146763184 +0000 UTC m=+0.615114996 container init cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:32:23 np0005539563 podman[360614]: 2025-11-29 08:32:23.158384319 +0000 UTC m=+0.626736121 container start cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:32:23 np0005539563 podman[360614]: 2025-11-29 08:32:23.162360576 +0000 UTC m=+0.630712408 container attach cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.631 252257 DEBUG nova.network.neutron [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.736 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00665818240508096 of space, bias 1.0, pg target 1.9974547215242882 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0023506202889447235 of space, bias 1.0, pg target 0.7028354663944724 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005678348338916807 of space, bias 1.0, pg target 1.6978261533361252 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:32:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.866 252257 DEBUG nova.compute.manager [req-17c6b975-aeff-4d05-b62d-179d90c9ed51 req-0e4384e7-b79f-42d1-8194-2b86cf13b5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Received event network-vif-deleted-8b93f183-22da-4a1a-9b2c-52314f794aad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.866 252257 INFO nova.compute.manager [req-17c6b975-aeff-4d05-b62d-179d90c9ed51 req-0e4384e7-b79f-42d1-8194-2b86cf13b5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Neutron deleted interface 8b93f183-22da-4a1a-9b2c-52314f794aad; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.867 252257 DEBUG nova.network.neutron [req-17c6b975-aeff-4d05-b62d-179d90c9ed51 req-0e4384e7-b79f-42d1-8194-2b86cf13b5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.869 252257 INFO nova.compute.manager [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Took 2.36 seconds to deallocate network for instance.#033[00m
Nov 29 03:32:23 np0005539563 nova_compute[252253]: 2025-11-29 08:32:23.971 252257 DEBUG nova.compute.manager [req-17c6b975-aeff-4d05-b62d-179d90c9ed51 req-0e4384e7-b79f-42d1-8194-2b86cf13b5d9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Detach interface failed, port_id=8b93f183-22da-4a1a-9b2c-52314f794aad, reason: Instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:32:24 np0005539563 priceless_benz[360631]: {
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:        "osd_id": 0,
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:        "type": "bluestore"
Nov 29 03:32:24 np0005539563 priceless_benz[360631]:    }
Nov 29 03:32:24 np0005539563 priceless_benz[360631]: }
Nov 29 03:32:24 np0005539563 systemd[1]: libpod-cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1.scope: Deactivated successfully.
Nov 29 03:32:24 np0005539563 podman[360614]: 2025-11-29 08:32:24.167123857 +0000 UTC m=+1.635475659 container died cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:32:24 np0005539563 systemd[1]: libpod-cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1.scope: Consumed 1.009s CPU time.
Nov 29 03:32:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f062beb79ddd84940ea8f59c861de058b463b054f986dd0cf6e5a4c93efbed9e-merged.mount: Deactivated successfully.
Nov 29 03:32:24 np0005539563 podman[360614]: 2025-11-29 08:32:24.339189998 +0000 UTC m=+1.807541830 container remove cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:24 np0005539563 systemd[1]: libpod-conmon-cf9e205c7c962ee48a4a7d94195a53f31a1199ec6c0966ce59244f0cbd6beed1.scope: Deactivated successfully.
Nov 29 03:32:24 np0005539563 nova_compute[252253]: 2025-11-29 08:32:24.359 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:24 np0005539563 nova_compute[252253]: 2025-11-29 08:32:24.359 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:24.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:32:24 np0005539563 nova_compute[252253]: 2025-11-29 08:32:24.424 252257 DEBUG oslo_concurrency.processutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:24.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:32:24 np0005539563 nova_compute[252253]: 2025-11-29 08:32:24.664 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 180 op/s
Nov 29 03:32:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 571b71ec-8374-4e97-b143-75ae3d454e11 does not exist
Nov 29 03:32:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8b9d50fb-864a-46b6-993f-53d2b521d396 does not exist
Nov 29 03:32:24 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 63c26a43-8652-49e8-b08f-00e2d09fca02 does not exist
Nov 29 03:32:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052037109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:24 np0005539563 nova_compute[252253]: 2025-11-29 08:32:24.976 252257 DEBUG oslo_concurrency.processutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:24 np0005539563 nova_compute[252253]: 2025-11-29 08:32:24.983 252257 DEBUG nova.compute.provider_tree [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:25 np0005539563 nova_compute[252253]: 2025-11-29 08:32:25.244 252257 DEBUG nova.scheduler.client.report [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:32:25 np0005539563 nova_compute[252253]: 2025-11-29 08:32:25.409 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:25 np0005539563 nova_compute[252253]: 2025-11-29 08:32:25.526 252257 INFO nova.scheduler.client.report [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Deleted allocations for instance 9a93f858-81bf-4fad-bdc5-df74e4bb0c75#033[00m
Nov 29 03:32:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:25 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:32:25 np0005539563 nova_compute[252253]: 2025-11-29 08:32:25.724 252257 DEBUG oslo_concurrency.lockutils [None req-aa5d4c3e-ede6-4bc1-89e0-9be1f3ed8290 d45f9a4a44664af3884c15ce0f5697e0 7e8e7407a7c44208a503e8225c1cf518 - - default default] Lock "9a93f858-81bf-4fad-bdc5-df74e4bb0c75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:32:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:26.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:32:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:26.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 457 KiB/s wr, 192 op/s
Nov 29 03:32:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:28.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:28.679 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:28.679 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:32:28 np0005539563 nova_compute[252253]: 2025-11-29 08:32:28.680 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:28 np0005539563 nova_compute[252253]: 2025-11-29 08:32:28.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 247 KiB/s wr, 176 op/s
Nov 29 03:32:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Nov 29 03:32:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 03:32:28 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Nov 29 03:32:29 np0005539563 nova_compute[252253]: 2025-11-29 08:32:29.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:29 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:32:29.681 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:32:29 np0005539563 nova_compute[252253]: 2025-11-29 08:32:29.926 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:29 np0005539563 nova_compute[252253]: 2025-11-29 08:32:29.927 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Nov 29 03:32:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Nov 29 03:32:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/405876529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:32:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/405876529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:32:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 868 KiB/s rd, 21 KiB/s wr, 193 op/s
Nov 29 03:32:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 145 KiB/s rd, 17 KiB/s wr, 112 op/s
Nov 29 03:32:33 np0005539563 nova_compute[252253]: 2025-11-29 08:32:33.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:32:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848767357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:32:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:32:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848767357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:32:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Nov 29 03:32:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 03:32:34 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Nov 29 03:32:34 np0005539563 nova_compute[252253]: 2025-11-29 08:32:34.382 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405139.381144, 9a93f858-81bf-4fad-bdc5-df74e4bb0c75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:34 np0005539563 nova_compute[252253]: 2025-11-29 08:32:34.383 252257 INFO nova.compute.manager [-] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:32:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:34.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:34 np0005539563 nova_compute[252253]: 2025-11-29 08:32:34.454 252257 DEBUG nova.compute.manager [None req-96d7c592-cc32-4be3-8e35-c28609f19ae9 - - - - - -] [instance: 9a93f858-81bf-4fad-bdc5-df74e4bb0c75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:34.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:34 np0005539563 nova_compute[252253]: 2025-11-29 08:32:34.668 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 311 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 26 KiB/s wr, 155 op/s
Nov 29 03:32:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 281 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 897 KiB/s rd, 22 KiB/s wr, 181 op/s
Nov 29 03:32:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:38.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:38 np0005539563 nova_compute[252253]: 2025-11-29 08:32:38.743 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 283 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 37 KiB/s wr, 191 op/s
Nov 29 03:32:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Nov 29 03:32:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 03:32:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Nov 29 03:32:39 np0005539563 nova_compute[252253]: 2025-11-29 08:32:39.671 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:40.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:40.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 802 KiB/s rd, 24 KiB/s wr, 163 op/s
Nov 29 03:32:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:42.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:42.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:42 np0005539563 nova_compute[252253]: 2025-11-29 08:32:42.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 19 KiB/s wr, 98 op/s
Nov 29 03:32:42 np0005539563 nova_compute[252253]: 2025-11-29 08:32:42.847 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:43 np0005539563 nova_compute[252253]: 2025-11-29 08:32:43.746 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Nov 29 03:32:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 03:32:43 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Nov 29 03:32:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:32:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:32:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:44.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:32:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1974619220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:32:44 np0005539563 nova_compute[252253]: 2025-11-29 08:32:44.672 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:32:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1974619220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:32:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 19 KiB/s wr, 65 op/s
Nov 29 03:32:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:46.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 158 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.7 KiB/s wr, 73 op/s
Nov 29 03:32:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:48.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539563 nova_compute[252253]: 2025-11-29 08:32:48.748 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 91 op/s
Nov 29 03:32:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:49 np0005539563 nova_compute[252253]: 2025-11-29 08:32:49.675 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:50.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Nov 29 03:32:51 np0005539563 podman[360853]: 2025-11-29 08:32:51.523873571 +0000 UTC m=+0.065454234 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:32:51 np0005539563 podman[360855]: 2025-11-29 08:32:51.553651268 +0000 UTC m=+0.102487268 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:32:51 np0005539563 podman[360854]: 2025-11-29 08:32:51.560569455 +0000 UTC m=+0.102593409 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:32:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:52.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:52.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Nov 29 03:32:53 np0005539563 nova_compute[252253]: 2025-11-29 08:32:53.687 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:53 np0005539563 nova_compute[252253]: 2025-11-29 08:32:53.750 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:54.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:54 np0005539563 nova_compute[252253]: 2025-11-29 08:32:54.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.6 KiB/s wr, 78 op/s
Nov 29 03:32:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:56.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.3 KiB/s wr, 86 op/s
Nov 29 03:32:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:58.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:32:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 2.3 KiB/s wr, 116 op/s
Nov 29 03:32:58 np0005539563 nova_compute[252253]: 2025-11-29 08:32:58.753 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:59 np0005539563 nova_compute[252253]: 2025-11-29 08:32:59.678 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 426 B/s wr, 110 op/s
Nov 29 03:33:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 255 B/s wr, 107 op/s
Nov 29 03:33:03 np0005539563 nova_compute[252253]: 2025-11-29 08:33:03.755 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:04 np0005539563 nova_compute[252253]: 2025-11-29 08:33:04.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 136 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 684 KiB/s wr, 166 op/s
Nov 29 03:33:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:33:04.937 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:33:04.938 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:33:04.938 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:05 np0005539563 nova_compute[252253]: 2025-11-29 08:33:05.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:05 np0005539563 nova_compute[252253]: 2025-11-29 08:33:05.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:33:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:06.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:06 np0005539563 nova_compute[252253]: 2025-11-29 08:33:06.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:06 np0005539563 nova_compute[252253]: 2025-11-29 08:33:06.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 29 03:33:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:08.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:08 np0005539563 nova_compute[252253]: 2025-11-29 08:33:08.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:08 np0005539563 nova_compute[252253]: 2025-11-29 08:33:08.757 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Nov 29 03:33:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:09 np0005539563 nova_compute[252253]: 2025-11-29 08:33:09.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:10.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 29 03:33:11 np0005539563 nova_compute[252253]: 2025-11-29 08:33:11.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:11 np0005539563 nova_compute[252253]: 2025-11-29 08:33:11.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:33:11 np0005539563 nova_compute[252253]: 2025-11-29 08:33:11.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:33:11 np0005539563 nova_compute[252253]: 2025-11-29 08:33:11.693 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:33:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:12.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:12 np0005539563 nova_compute[252253]: 2025-11-29 08:33:12.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:12 np0005539563 nova_compute[252253]: 2025-11-29 08:33:12.740 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:12 np0005539563 nova_compute[252253]: 2025-11-29 08:33:12.740 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:12 np0005539563 nova_compute[252253]: 2025-11-29 08:33:12.740 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:12 np0005539563 nova_compute[252253]: 2025-11-29 08:33:12.741 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:33:12 np0005539563 nova_compute[252253]: 2025-11-29 08:33:12.741 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 764 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 29 03:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:33:12
Nov 29 03:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Nov 29 03:33:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:33:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868432709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.180 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.340 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.341 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4244MB free_disk=20.967361450195312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.341 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.342 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.609 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.610 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.648 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.678 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.678 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.698 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.723 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.745 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:13 np0005539563 nova_compute[252253]: 2025-11-29 08:33:13.775 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026191862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:14 np0005539563 nova_compute[252253]: 2025-11-29 08:33:14.200 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:14 np0005539563 nova_compute[252253]: 2025-11-29 08:33:14.205 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:14 np0005539563 nova_compute[252253]: 2025-11-29 08:33:14.225 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:14 np0005539563 nova_compute[252253]: 2025-11-29 08:33:14.251 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:33:14 np0005539563 nova_compute[252253]: 2025-11-29 08:33:14.251 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:14 np0005539563 nova_compute[252253]: 2025-11-29 08:33:14.683 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Nov 29 03:33:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:33:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 185 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 03:33:18 np0005539563 nova_compute[252253]: 2025-11-29 08:33:18.253 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:33:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:18 np0005539563 nova_compute[252253]: 2025-11-29 08:33:18.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:33:18 np0005539563 nova_compute[252253]: 2025-11-29 08:33:18.762 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 29 03:33:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:19 np0005539563 nova_compute[252253]: 2025-11-29 08:33:19.686 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2092077475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 03:33:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:22 np0005539563 podman[361024]: 2025-11-29 08:33:22.529404753 +0000 UTC m=+0.067953952 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:33:22 np0005539563 podman[361025]: 2025-11-29 08:33:22.541368347 +0000 UTC m=+0.092606030 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:33:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:22.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:22 np0005539563 podman[361026]: 2025-11-29 08:33:22.588515364 +0000 UTC m=+0.132797368 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:33:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019856589079657546 of space, bias 1.0, pg target 0.5956976723897264 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6485334659408152 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:33:23 np0005539563 nova_compute[252253]: 2025-11-29 08:33:23.763 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:24 np0005539563 nova_compute[252253]: 2025-11-29 08:33:24.695 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 233 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Nov 29 03:33:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:26.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:33:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:33:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:33:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:33:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:33:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 244 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c1e3c614-3335-4524-8965-99c65af87d22 does not exist
Nov 29 03:33:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c007eaf9-16fe-45a9-a856-45fa2ee599bd does not exist
Nov 29 03:33:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8f08b6cb-f852-4b85-a4e1-c0785d97868c does not exist
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:33:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:33:27 np0005539563 podman[361414]: 2025-11-29 08:33:27.82832618 +0000 UTC m=+0.026306124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:28 np0005539563 podman[361414]: 2025-11-29 08:33:28.009250721 +0000 UTC m=+0.207230605 container create ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:33:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:33:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:28 np0005539563 systemd[1]: Started libpod-conmon-ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567.scope.
Nov 29 03:33:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:33:28 np0005539563 podman[361414]: 2025-11-29 08:33:28.188493967 +0000 UTC m=+0.386473831 container init ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:33:28 np0005539563 podman[361414]: 2025-11-29 08:33:28.196581736 +0000 UTC m=+0.394561590 container start ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:33:28 np0005539563 nervous_rubin[361430]: 167 167
Nov 29 03:33:28 np0005539563 systemd[1]: libpod-ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567.scope: Deactivated successfully.
Nov 29 03:33:28 np0005539563 podman[361414]: 2025-11-29 08:33:28.220137194 +0000 UTC m=+0.418117138 container attach ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:33:28 np0005539563 podman[361414]: 2025-11-29 08:33:28.221128531 +0000 UTC m=+0.419108425 container died ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:33:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4a347cea47551248d80edb8cce4ebf42c69bcf75177fc5a77f742b6a8940ea64-merged.mount: Deactivated successfully.
Nov 29 03:33:28 np0005539563 podman[361414]: 2025-11-29 08:33:28.26944856 +0000 UTC m=+0.467428414 container remove ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:28 np0005539563 systemd[1]: libpod-conmon-ff0531642bda109d8988b365f2e501eb81a72832acb60e5d304dacec60340567.scope: Deactivated successfully.
Nov 29 03:33:28 np0005539563 podman[361457]: 2025-11-29 08:33:28.430846593 +0000 UTC m=+0.043246473 container create 4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:33:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:28.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539563 systemd[1]: Started libpod-conmon-4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636.scope.
Nov 29 03:33:28 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:33:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e49cfd0124c00ef0ec99eeac359a6dfedc285d610fb5153cfb6c24d38f6b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e49cfd0124c00ef0ec99eeac359a6dfedc285d610fb5153cfb6c24d38f6b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e49cfd0124c00ef0ec99eeac359a6dfedc285d610fb5153cfb6c24d38f6b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e49cfd0124c00ef0ec99eeac359a6dfedc285d610fb5153cfb6c24d38f6b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:28 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e49cfd0124c00ef0ec99eeac359a6dfedc285d610fb5153cfb6c24d38f6b4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:28 np0005539563 podman[361457]: 2025-11-29 08:33:28.413272056 +0000 UTC m=+0.025671946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:28 np0005539563 podman[361457]: 2025-11-29 08:33:28.519527185 +0000 UTC m=+0.131927145 container init 4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:33:28 np0005539563 podman[361457]: 2025-11-29 08:33:28.530459171 +0000 UTC m=+0.142859041 container start 4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:33:28 np0005539563 podman[361457]: 2025-11-29 08:33:28.534580664 +0000 UTC m=+0.146980534 container attach 4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:33:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Nov 29 03:33:28 np0005539563 nova_compute[252253]: 2025-11-29 08:33:28.766 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:29 np0005539563 sweet_swanson[361475]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:33:29 np0005539563 sweet_swanson[361475]: --> relative data size: 1.0
Nov 29 03:33:29 np0005539563 sweet_swanson[361475]: --> All data devices are unavailable
Nov 29 03:33:29 np0005539563 systemd[1]: libpod-4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636.scope: Deactivated successfully.
Nov 29 03:33:29 np0005539563 podman[361457]: 2025-11-29 08:33:29.354238329 +0000 UTC m=+0.966638209 container died 4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:33:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-14e49cfd0124c00ef0ec99eeac359a6dfedc285d610fb5153cfb6c24d38f6b4f-merged.mount: Deactivated successfully.
Nov 29 03:33:29 np0005539563 podman[361457]: 2025-11-29 08:33:29.457667231 +0000 UTC m=+1.070067101 container remove 4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:29 np0005539563 systemd[1]: libpod-conmon-4cc9f5f3c18730a3adcafa403560fa5784a94e47b1dd90fbb8d789b266891636.scope: Deactivated successfully.
Nov 29 03:33:29 np0005539563 nova_compute[252253]: 2025-11-29 08:33:29.697 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.065436907 +0000 UTC m=+0.037644051 container create 415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:33:30 np0005539563 systemd[1]: Started libpod-conmon-415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19.scope.
Nov 29 03:33:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.049388862 +0000 UTC m=+0.021596016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.198289757 +0000 UTC m=+0.170496921 container init 415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.205699457 +0000 UTC m=+0.177906641 container start 415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.209799458 +0000 UTC m=+0.182006602 container attach 415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:33:30 np0005539563 great_golick[361661]: 167 167
Nov 29 03:33:30 np0005539563 systemd[1]: libpod-415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19.scope: Deactivated successfully.
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.210900618 +0000 UTC m=+0.183107762 container died 415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:33:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-90e11b390e02939bf3e3acabbd37d62952142060df7fd11249433a21fe9aea8a-merged.mount: Deactivated successfully.
Nov 29 03:33:30 np0005539563 podman[361645]: 2025-11-29 08:33:30.248859056 +0000 UTC m=+0.221066200 container remove 415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:33:30 np0005539563 systemd[1]: libpod-conmon-415758e3c44cf7244868e0faa69707c1b70c104c0e7c2e63aa9f553a9b406a19.scope: Deactivated successfully.
Nov 29 03:33:30 np0005539563 podman[361686]: 2025-11-29 08:33:30.411278347 +0000 UTC m=+0.054247372 container create d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:33:30 np0005539563 systemd[1]: Started libpod-conmon-d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5.scope.
Nov 29 03:33:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:30.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539563 podman[361686]: 2025-11-29 08:33:30.378806367 +0000 UTC m=+0.021775412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:33:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e155baa9abcc16ed8e4c01e2c685fb71e72f9ca3b02477f593a9d424c1f956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e155baa9abcc16ed8e4c01e2c685fb71e72f9ca3b02477f593a9d424c1f956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e155baa9abcc16ed8e4c01e2c685fb71e72f9ca3b02477f593a9d424c1f956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2e155baa9abcc16ed8e4c01e2c685fb71e72f9ca3b02477f593a9d424c1f956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:30 np0005539563 podman[361686]: 2025-11-29 08:33:30.504248515 +0000 UTC m=+0.147217540 container init d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:30 np0005539563 podman[361686]: 2025-11-29 08:33:30.512222291 +0000 UTC m=+0.155191316 container start d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:30 np0005539563 podman[361686]: 2025-11-29 08:33:30.515024767 +0000 UTC m=+0.157993812 container attach d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:33:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Nov 29 03:33:31 np0005539563 funny_cohen[361702]: {
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:    "0": [
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:        {
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "devices": [
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "/dev/loop3"
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            ],
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "lv_name": "ceph_lv0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "lv_size": "7511998464",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "name": "ceph_lv0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "tags": {
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.cluster_name": "ceph",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.crush_device_class": "",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.encrypted": "0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.osd_id": "0",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.type": "block",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:                "ceph.vdo": "0"
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            },
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "type": "block",
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:            "vg_name": "ceph_vg0"
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:        }
Nov 29 03:33:31 np0005539563 funny_cohen[361702]:    ]
Nov 29 03:33:31 np0005539563 funny_cohen[361702]: }
Nov 29 03:33:31 np0005539563 systemd[1]: libpod-d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5.scope: Deactivated successfully.
Nov 29 03:33:31 np0005539563 podman[361686]: 2025-11-29 08:33:31.266401203 +0000 UTC m=+0.909370268 container died d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cohen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:33:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c2e155baa9abcc16ed8e4c01e2c685fb71e72f9ca3b02477f593a9d424c1f956-merged.mount: Deactivated successfully.
Nov 29 03:33:31 np0005539563 podman[361686]: 2025-11-29 08:33:31.423978572 +0000 UTC m=+1.066947607 container remove d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:33:31 np0005539563 systemd[1]: libpod-conmon-d31eaf0c49e02f319b2e42514a1fd0543d9c5967914eb48d936e7e7ba7d3ecf5.scope: Deactivated successfully.
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.104080007 +0000 UTC m=+0.053675305 container create c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.070078656 +0000 UTC m=+0.019673954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:32 np0005539563 systemd[1]: Started libpod-conmon-c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc.scope.
Nov 29 03:33:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.232804405 +0000 UTC m=+0.182399703 container init c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.238315164 +0000 UTC m=+0.187910442 container start c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:33:32 np0005539563 charming_jang[361881]: 167 167
Nov 29 03:33:32 np0005539563 systemd[1]: libpod-c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc.scope: Deactivated successfully.
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.26878183 +0000 UTC m=+0.218377108 container attach c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.269573231 +0000 UTC m=+0.219168509 container died c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:33:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-385d3545634bb6c1b1e9e6b7b6c06f4130ebf46a18ff15e92120f603f908f84f-merged.mount: Deactivated successfully.
Nov 29 03:33:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:32 np0005539563 podman[361865]: 2025-11-29 08:33:32.499203732 +0000 UTC m=+0.448799010 container remove c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:33:32 np0005539563 systemd[1]: libpod-conmon-c47a6751b519850c0d1ba63073ac06d09962059e290be29d6307de7b066deedc.scope: Deactivated successfully.
Nov 29 03:33:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:32.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:32 np0005539563 podman[361908]: 2025-11-29 08:33:32.740404306 +0000 UTC m=+0.079528855 container create c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:33:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Nov 29 03:33:32 np0005539563 systemd[1]: Started libpod-conmon-c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29.scope.
Nov 29 03:33:32 np0005539563 podman[361908]: 2025-11-29 08:33:32.702402906 +0000 UTC m=+0.041527445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:33:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf78c13e6e5e2bfcfc58a50d91f2c6324556bec8aca88ea3707a18386cb64db6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf78c13e6e5e2bfcfc58a50d91f2c6324556bec8aca88ea3707a18386cb64db6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf78c13e6e5e2bfcfc58a50d91f2c6324556bec8aca88ea3707a18386cb64db6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf78c13e6e5e2bfcfc58a50d91f2c6324556bec8aca88ea3707a18386cb64db6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:32 np0005539563 podman[361908]: 2025-11-29 08:33:32.835180094 +0000 UTC m=+0.174304653 container init c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:33:32 np0005539563 podman[361908]: 2025-11-29 08:33:32.844017083 +0000 UTC m=+0.183141622 container start c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:33:32 np0005539563 podman[361908]: 2025-11-29 08:33:32.909870458 +0000 UTC m=+0.248994987 container attach c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:33:33 np0005539563 nova_compute[252253]: 2025-11-29 08:33:33.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]: {
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:        "osd_id": 0,
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:        "type": "bluestore"
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]:    }
Nov 29 03:33:33 np0005539563 pedantic_villani[361925]: }
Nov 29 03:33:33 np0005539563 systemd[1]: libpod-c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29.scope: Deactivated successfully.
Nov 29 03:33:33 np0005539563 nova_compute[252253]: 2025-11-29 08:33:33.766 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:33 np0005539563 podman[361948]: 2025-11-29 08:33:33.784066531 +0000 UTC m=+0.025343287 container died c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:33:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bf78c13e6e5e2bfcfc58a50d91f2c6324556bec8aca88ea3707a18386cb64db6-merged.mount: Deactivated successfully.
Nov 29 03:33:34 np0005539563 podman[361948]: 2025-11-29 08:33:34.426227418 +0000 UTC m=+0.667504194 container remove c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:34 np0005539563 systemd[1]: libpod-conmon-c99ef8b5ef4057d06345f14df44ef90cf0c690560c58928fd0bd9d1c676c7c29.scope: Deactivated successfully.
Nov 29 03:33:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:33:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:33:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 10735200-4d2a-4334-ab87-a8e5d2aa5c68 does not exist
Nov 29 03:33:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev da653936-99b8-406d-89ad-d7960a015e27 does not exist
Nov 29 03:33:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b062cfc0-627c-4f94-a34e-a476296103da does not exist
Nov 29 03:33:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:34.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:34 np0005539563 nova_compute[252253]: 2025-11-29 08:33:34.699 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 249 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Nov 29 03:33:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:33:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:36.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 103 op/s
Nov 29 03:33:37 np0005539563 ovn_controller[148841]: 2025-11-29T08:33:37Z|00736|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 29 03:33:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:38 np0005539563 nova_compute[252253]: 2025-11-29 08:33:38.768 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 03:33:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Nov 29 03:33:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 03:33:39 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Nov 29 03:33:39 np0005539563 nova_compute[252253]: 2025-11-29 08:33:39.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Nov 29 03:33:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 119 op/s
Nov 29 03:33:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 03:33:40 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Nov 29 03:33:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Nov 29 03:33:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:42.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:42.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 03:33:42 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Nov 29 03:33:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 5.0 MiB/s wr, 126 op/s
Nov 29 03:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:43 np0005539563 nova_compute[252253]: 2025-11-29 08:33:43.770 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Nov 29 03:33:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 03:33:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Nov 29 03:33:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:44.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:44.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:44 np0005539563 nova_compute[252253]: 2025-11-29 08:33:44.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 332 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.0 MiB/s wr, 63 op/s
Nov 29 03:33:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:46.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 52 op/s
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.327 252257 DEBUG nova.compute.manager [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.448 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.449 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.473 252257 DEBUG nova.objects.instance [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'pci_requests' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.486 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.487 252257 INFO nova.compute.claims [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.487 252257 DEBUG nova.objects.instance [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'resources' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.501 252257 DEBUG nova.objects.instance [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'numa_topology' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.513 252257 DEBUG nova.objects.instance [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.559 252257 INFO nova.compute.resource_tracker [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating resource usage from migration 29af94fb-b970-4b57-a643-370ba5509b9a#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.560 252257 DEBUG nova.compute.resource_tracker [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Starting to track incoming migration 29af94fb-b970-4b57-a643-370ba5509b9a with flavor b3f6a6d1-4abb-4332-8391-2e39c8fa168a _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:33:47 np0005539563 nova_compute[252253]: 2025-11-29 08:33:47.604 252257 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825895591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:48 np0005539563 nova_compute[252253]: 2025-11-29 08:33:48.029 252257 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:48 np0005539563 nova_compute[252253]: 2025-11-29 08:33:48.035 252257 DEBUG nova.compute.provider_tree [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:48 np0005539563 nova_compute[252253]: 2025-11-29 08:33:48.065 252257 DEBUG nova.scheduler.client.report [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:48 np0005539563 nova_compute[252253]: 2025-11-29 08:33:48.093 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:48 np0005539563 nova_compute[252253]: 2025-11-29 08:33:48.093 252257 INFO nova.compute.manager [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Migrating#033[00m
Nov 29 03:33:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:48.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:48.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:48 np0005539563 nova_compute[252253]: 2025-11-29 08:33:48.771 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Nov 29 03:33:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Nov 29 03:33:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 03:33:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Nov 29 03:33:49 np0005539563 nova_compute[252253]: 2025-11-29 08:33:49.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:50 np0005539563 systemd-logind[785]: New session 64 of user nova.
Nov 29 03:33:50 np0005539563 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:33:50 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:33:50 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:33:50 np0005539563 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:33:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:50 np0005539563 systemd[362098]: Queued start job for default target Main User Target.
Nov 29 03:33:50 np0005539563 systemd[362098]: Created slice User Application Slice.
Nov 29 03:33:50 np0005539563 systemd[362098]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:33:50 np0005539563 systemd[362098]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:33:50 np0005539563 systemd[362098]: Reached target Paths.
Nov 29 03:33:50 np0005539563 systemd[362098]: Reached target Timers.
Nov 29 03:33:50 np0005539563 systemd[362098]: Starting D-Bus User Message Bus Socket...
Nov 29 03:33:50 np0005539563 systemd[362098]: Starting Create User's Volatile Files and Directories...
Nov 29 03:33:50 np0005539563 systemd[362098]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:33:50 np0005539563 systemd[362098]: Reached target Sockets.
Nov 29 03:33:50 np0005539563 systemd[362098]: Finished Create User's Volatile Files and Directories.
Nov 29 03:33:50 np0005539563 systemd[362098]: Reached target Basic System.
Nov 29 03:33:50 np0005539563 systemd[362098]: Reached target Main User Target.
Nov 29 03:33:50 np0005539563 systemd[362098]: Startup finished in 144ms.
Nov 29 03:33:50 np0005539563 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:33:50 np0005539563 systemd[1]: Started Session 64 of User nova.
Nov 29 03:33:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:50 np0005539563 systemd[1]: session-64.scope: Deactivated successfully.
Nov 29 03:33:50 np0005539563 systemd-logind[785]: Session 64 logged out. Waiting for processes to exit.
Nov 29 03:33:50 np0005539563 systemd-logind[785]: Removed session 64.
Nov 29 03:33:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 63 op/s
Nov 29 03:33:50 np0005539563 systemd-logind[785]: New session 66 of user nova.
Nov 29 03:33:50 np0005539563 systemd[1]: Started Session 66 of User nova.
Nov 29 03:33:50 np0005539563 systemd[1]: session-66.scope: Deactivated successfully.
Nov 29 03:33:50 np0005539563 systemd-logind[785]: Session 66 logged out. Waiting for processes to exit.
Nov 29 03:33:50 np0005539563 systemd-logind[785]: Removed session 66.
Nov 29 03:33:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Nov 29 03:33:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 03:33:51 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Nov 29 03:33:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:52.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 29 03:33:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Nov 29 03:33:53 np0005539563 podman[362123]: 2025-11-29 08:33:53.505055296 +0000 UTC m=+0.062686289 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:53 np0005539563 podman[362124]: 2025-11-29 08:33:53.540000233 +0000 UTC m=+0.096270469 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:33:53 np0005539563 podman[362125]: 2025-11-29 08:33:53.56054248 +0000 UTC m=+0.112593822 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:33:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 03:33:53 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.773 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.788 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.938 252257 DEBUG nova.compute.manager [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.939 252257 DEBUG oslo_concurrency.lockutils [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.939 252257 DEBUG oslo_concurrency.lockutils [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.939 252257 DEBUG oslo_concurrency.lockutils [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.939 252257 DEBUG nova.compute.manager [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:53 np0005539563 nova_compute[252253]: 2025-11-29 08:33:53.940 252257 WARNING nova.compute.manager [req-3b58d15d-3c57-496c-b25e-baa9d37c6092 req-a0bbe84c-0a81-440b-821d-98ff4721a951 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:33:54 np0005539563 nova_compute[252253]: 2025-11-29 08:33:54.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:33:54.103 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:33:54.104 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:33:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Nov 29 03:33:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 03:33:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Nov 29 03:33:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 410 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 77 op/s
Nov 29 03:33:54 np0005539563 nova_compute[252253]: 2025-11-29 08:33:54.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:55 np0005539563 nova_compute[252253]: 2025-11-29 08:33:55.015 252257 INFO nova.network.neutron [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:33:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Nov 29 03:33:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 03:33:55 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.048 252257 DEBUG nova.compute.manager [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.048 252257 DEBUG oslo_concurrency.lockutils [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.048 252257 DEBUG oslo_concurrency.lockutils [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.049 252257 DEBUG oslo_concurrency.lockutils [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.049 252257 DEBUG nova.compute.manager [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.049 252257 WARNING nova.compute.manager [req-004ed692-b413-48fc-b4e1-e66696b92d63 req-167ee234-4aa1-42c9-aac0-38b8ca758840 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.432 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.432 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.433 252257 DEBUG nova.network.neutron [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:56.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.543 252257 DEBUG nova.compute.manager [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.544 252257 DEBUG nova.compute.manager [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing instance network info cache due to event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:56 np0005539563 nova_compute[252253]: 2025-11-29 08:33:56.545 252257 DEBUG oslo_concurrency.lockutils [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 450 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.8 MiB/s wr, 145 op/s
Nov 29 03:33:57 np0005539563 nova_compute[252253]: 2025-11-29 08:33:57.933 252257 DEBUG nova.network.neutron [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:57 np0005539563 nova_compute[252253]: 2025-11-29 08:33:57.969 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:57 np0005539563 nova_compute[252253]: 2025-11-29 08:33:57.975 252257 DEBUG oslo_concurrency.lockutils [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:57 np0005539563 nova_compute[252253]: 2025-11-29 08:33:57.976 252257 DEBUG nova.network.neutron [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.068 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.069 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.070 252257 INFO nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Creating image(s)#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.106 252257 DEBUG nova.storage.rbd_utils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] creating snapshot(nova-resize) on rbd image(782511b8-9841-4558-bc21-9a81d3913b54_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:33:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:33:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:33:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:33:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:58.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Nov 29 03:33:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 03:33:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.751 252257 DEBUG nova.objects.instance [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.775 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 9.7 MiB/s wr, 212 op/s
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.891 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.891 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Ensure instance console log exists: /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.891 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.892 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.892 252257 DEBUG oslo_concurrency.lockutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.894 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Start _get_guest_xml network_info=[{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.898 252257 WARNING nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.905 252257 DEBUG nova.virt.libvirt.host [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.906 252257 DEBUG nova.virt.libvirt.host [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.909 252257 DEBUG nova.virt.libvirt.host [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.910 252257 DEBUG nova.virt.libvirt.host [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.911 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.911 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.911 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.912 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.912 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.912 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.912 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.912 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.913 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.913 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.913 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.913 252257 DEBUG nova.virt.hardware [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.913 252257 DEBUG nova.objects.instance [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:58 np0005539563 nova_compute[252253]: 2025-11-29 08:33:58.931 252257 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813860604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.367 252257 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.405 252257 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085468899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.828 252257 DEBUG oslo_concurrency.processutils [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.832 252257 DEBUG nova.virt.libvirt.vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:54Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.833 252257 DEBUG nova.network.os_vif_util [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.835 252257 DEBUG nova.network.os_vif_util [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.840 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <uuid>782511b8-9841-4558-bc21-9a81d3913b54</uuid>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <name>instance-000000af</name>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1326828449</nova:name>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:33:58</nova:creationTime>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <nova:port uuid="8a130a46-1e4c-4c18-8d1f-c60c770a5f49">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <entry name="serial">782511b8-9841-4558-bc21-9a81d3913b54</entry>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <entry name="uuid">782511b8-9841-4558-bc21-9a81d3913b54</entry>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/782511b8-9841-4558-bc21-9a81d3913b54_disk">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/782511b8-9841-4558-bc21-9a81d3913b54_disk.config">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:c0:6e:5a"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <target dev="tap8a130a46-1e"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54/console.log" append="off"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:33:59 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:33:59 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:33:59 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:33:59 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.842 252257 DEBUG nova.virt.libvirt.vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:54Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.842 252257 DEBUG nova.network.os_vif_util [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1013637972", "vif_mac": "fa:16:3e:c0:6e:5a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.843 252257 DEBUG nova.network.os_vif_util [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.843 252257 DEBUG os_vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.844 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.844 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.844 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.847 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.848 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a130a46-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.848 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a130a46-1e, col_values=(('external_ids', {'iface-id': '8a130a46-1e4c-4c18-8d1f-c60c770a5f49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:6e:5a', 'vm-uuid': '782511b8-9841-4558-bc21-9a81d3913b54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539563 NetworkManager[48981]: <info>  [1764405239.8518] manager: (tap8a130a46-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.853 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.856 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.857 252257 INFO os_vif [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e')#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.951 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.951 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.952 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] No VIF found with MAC fa:16:3e:c0:6e:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:59 np0005539563 nova_compute[252253]: 2025-11-29 08:33:59.953 252257 INFO nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Using config drive#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.011 252257 DEBUG nova.network.neutron [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updated VIF entry in instance network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.012 252257 DEBUG nova.network.neutron [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.040 252257 DEBUG oslo_concurrency.lockutils [req-866eab19-e059-4ff9-b853-ff98ccab44c2 req-e5bfc02c-3e8e-4498-83d2-37d627360c5c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:00 np0005539563 kernel: tap8a130a46-1e: entered promiscuous mode
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.0812] manager: (tap8a130a46-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.081 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:00Z|00737|binding|INFO|Claiming lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for this chassis.
Nov 29 03:34:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:00Z|00738|binding|INFO|8a130a46-1e4c-4c18-8d1f-c60c770a5f49: Claiming fa:16:3e:c0:6e:5a 10.100.0.4
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.088 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.1072] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.108 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.1083] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.111 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:6e:5a 10.100.0.4'], port_security=['fa:16:3e:c0:6e:5a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '782511b8-9841-4558-bc21-9a81d3913b54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bfa4c1a9-d993-4c80-84c8-af76e286907f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d02d2157-b362-405a-8753-8c1be0d0ef4c, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8a130a46-1e4c-4c18-8d1f-c60c770a5f49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.112 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 in datapath f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c bound to our chassis#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.113 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c#033[00m
Nov 29 03:34:00 np0005539563 systemd-machined[213024]: New machine qemu-85-instance-000000af.
Nov 29 03:34:00 np0005539563 systemd-udevd[362355]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.126 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9b679c27-37bb-453e-8af7-51b5f0b89b73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.127 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4afd5c3-f1 in ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.129 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4afd5c3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.129 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b751f9b-3b2a-4063-9781-18ee8ab71a1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.130 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e09e7a33-8c53-4d89-b91d-558af5aa2fba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 systemd[1]: Started Virtual Machine qemu-85-instance-000000af.
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.1462] device (tap8a130a46-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.1472] device (tap8a130a46-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.147 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c4461b-12d3-46ec-8b57-46295b838306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.170 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[62e81d9a-160c-4da5-9b83-7e272733033f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.211 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[54d103ef-cdde-48dc-a527-90f52c4d4afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 systemd-udevd[362358]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.2273] manager: (tapf4afd5c3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.226 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef2076d-a47e-46e5-9885-960044560d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.266 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c69038cb-b4db-4e9c-885a-1c48d35f9c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.269 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e12ee7ea-353f-4f0a-90a5-3cb80d389be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.2966] device (tapf4afd5c3-f0): carrier: link connected
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.308 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6faf45f2-8865-403e-bb4a-a1612116e05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.334 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb9edfe-44f7-4ef8-a68f-76ed67833d86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4afd5c3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:97:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820807, 'reachable_time': 25768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362387, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.337 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:00Z|00739|binding|INFO|Setting lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 ovn-installed in OVS
Nov 29 03:34:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:00Z|00740|binding|INFO|Setting lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 up in Southbound
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.350 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.359 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ea01dc68-ea02-4b3f-8606-0c4fa80d4d57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:97be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 820807, 'tstamp': 820807}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362388, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.377 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc96b6d7-d812-4a86-a543-cd0c88001f73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4afd5c3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:97:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820807, 'reachable_time': 25768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362404, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.412 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0c1d4a-5164-48e6-9e84-ec6563345e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.478 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d6014d42-9a33-48f1-bade-5cdd2ac5b233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.479 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4afd5c3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.479 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.479 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4afd5c3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:00.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.522 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 NetworkManager[48981]: <info>  [1764405240.5234] manager: (tapf4afd5c3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 29 03:34:00 np0005539563 kernel: tapf4afd5c3-f0: entered promiscuous mode
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.526 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4afd5c3-f0, col_values=(('external_ids', {'iface-id': 'ff03e0e0-7321-4974-89e3-44f271d6956a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.527 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:00Z|00741|binding|INFO|Releasing lport ff03e0e0-7321-4974-89e3-44f271d6956a from this chassis (sb_readonly=0)
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.542 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.543 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.544 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c561d33f-cd13-4e9d-bbba-6d66341979ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.545 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.pid.haproxy
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:34:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:00.546 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'env', 'PROCESS_TAG=haproxy-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:34:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:00.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.747 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.747 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.761 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:34:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 8.1 MiB/s wr, 213 op/s
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.796 252257 DEBUG nova.compute.manager [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.797 252257 DEBUG oslo_concurrency.lockutils [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.797 252257 DEBUG oslo_concurrency.lockutils [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.797 252257 DEBUG oslo_concurrency.lockutils [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.797 252257 DEBUG nova.compute.manager [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.797 252257 WARNING nova.compute.manager [req-d040e18d-2472-4806-922a-4ce4b8ea9d29 req-40a8923b-0f64-408e-b572-1a530d86bcbd 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.848 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.848 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.855 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:34:00 np0005539563 nova_compute[252253]: 2025-11-29 08:34:00.855 252257 INFO nova.compute.claims [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:34:00 np0005539563 podman[362440]: 2025-11-29 08:34:00.919671563 +0000 UTC m=+0.051665610 container create 95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:34:00 np0005539563 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:34:00 np0005539563 systemd[362098]: Activating special unit Exit the Session...
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped target Main User Target.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped target Basic System.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped target Paths.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped target Sockets.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped target Timers.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:34:00 np0005539563 systemd[362098]: Closed D-Bus User Message Bus Socket.
Nov 29 03:34:00 np0005539563 systemd[362098]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:34:00 np0005539563 systemd[362098]: Removed slice User Application Slice.
Nov 29 03:34:00 np0005539563 systemd[362098]: Reached target Shutdown.
Nov 29 03:34:00 np0005539563 systemd[362098]: Finished Exit the Session.
Nov 29 03:34:00 np0005539563 systemd[362098]: Reached target Exit the Session.
Nov 29 03:34:00 np0005539563 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:34:00 np0005539563 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:34:00 np0005539563 systemd[1]: Started libpod-conmon-95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58.scope.
Nov 29 03:34:00 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:34:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:00 np0005539563 podman[362440]: 2025-11-29 08:34:00.891602872 +0000 UTC m=+0.023596929 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:34:00 np0005539563 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:34:00 np0005539563 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:34:00 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:34:00 np0005539563 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:34:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688415a79a03588f21fd6fd598567da41fa86222fb1eea186c24a8ab3bbebbe1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:01 np0005539563 podman[362440]: 2025-11-29 08:34:01.009865776 +0000 UTC m=+0.141859853 container init 95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:01 np0005539563 podman[362440]: 2025-11-29 08:34:01.015997682 +0000 UTC m=+0.147991729 container start 95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.034 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:01 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [NOTICE]   (362463) : New worker (362465) forked
Nov 29 03:34:01 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [NOTICE]   (362463) : Loading success.
Nov 29 03:34:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:01.106 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.210 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405241.2096689, 782511b8-9841-4558-bc21-9a81d3913b54 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.210 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.217 252257 DEBUG nova.compute.manager [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.241 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.245 252257 INFO nova.virt.libvirt.driver [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance running successfully.#033[00m
Nov 29 03:34:01 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.249 252257 DEBUG nova.virt.libvirt.guest [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.250 252257 DEBUG nova.virt.libvirt.driver [None req-1fa9aac2-6281-4898-81be-fefc4cb03263 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.252 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.291 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.291 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405241.2162793, 782511b8-9841-4558-bc21-9a81d3913b54 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.291 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Started (Lifecycle Event)#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.341 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.345 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.403 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:34:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185830496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.521 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.526 252257 DEBUG nova.compute.provider_tree [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.546 252257 DEBUG nova.scheduler.client.report [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.567 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.567 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.623 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.624 252257 DEBUG nova.network.neutron [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.640 252257 INFO nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.656 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.745 252257 INFO nova.virt.block_device [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Booting with volume f3c8b389-36e8-459a-b7d9-3ed0e65767ad at /dev/vda#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.839 252257 DEBUG nova.policy [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ad82d69b01a4929b20a4d3c4dbe0135', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '340d97a89c434bedbead3110819c581d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.929 252257 DEBUG os_brick.utils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.931 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.944 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.945 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[82de7d5c-dab9-498f-81c1-5da38497b3d5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.946 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.955 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.955 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[ae34c56e-77e1-4082-a169-5cc1b5df760a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.957 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.965 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.966 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2e3648-9a2a-4c98-acf0-cbcdca6de888]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.967 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[65d69abd-e698-4ba4-bb81-940becd5891b]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:01 np0005539563 nova_compute[252253]: 2025-11-29 08:34:01.968 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.002 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.004 252257 DEBUG os_brick.initiator.connectors.lightos [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.005 252257 DEBUG os_brick.initiator.connectors.lightos [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.005 252257 DEBUG os_brick.initiator.connectors.lightos [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.006 252257 DEBUG os_brick.utils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.006 252257 DEBUG nova.virt.block_device [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating existing volume attachment record: e38d3b87-65f2-4aa4-af13-d30b22c19194 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:34:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:02.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:02.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.3 MiB/s wr, 164 op/s
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.827 252257 DEBUG nova.network.neutron [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Successfully created port: c8756ae8-709a-481a-b200-35a8980cff93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.959 252257 DEBUG nova.compute.manager [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.960 252257 DEBUG oslo_concurrency.lockutils [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.961 252257 DEBUG oslo_concurrency.lockutils [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.961 252257 DEBUG oslo_concurrency.lockutils [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.961 252257 DEBUG nova.compute.manager [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:02 np0005539563 nova_compute[252253]: 2025-11-29 08:34:02.962 252257 WARNING nova.compute.manager [req-49e6286c-7cb6-47b5-8b74-42da4e56c345 req-8377d59c-151d-48be-a42a-78ea4ad42684 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.215 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.218 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.218 252257 INFO nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Creating image(s)#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.219 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.219 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Ensure instance console log exists: /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.220 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.220 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.220 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.604 252257 DEBUG nova.network.neutron [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Successfully updated port: c8756ae8-709a-481a-b200-35a8980cff93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.626 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.626 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquired lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.627 252257 DEBUG nova.network.neutron [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.673 252257 DEBUG nova.compute.manager [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-changed-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.674 252257 DEBUG nova.compute.manager [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing instance network info cache due to event network-changed-c8756ae8-709a-481a-b200-35a8980cff93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.674 252257 DEBUG oslo_concurrency.lockutils [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.815 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:03 np0005539563 nova_compute[252253]: 2025-11-29 08:34:03.869 252257 DEBUG nova.network.neutron [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:34:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Nov 29 03:34:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 03:34:04 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Nov 29 03:34:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:04.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.689 252257 DEBUG nova.network.neutron [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.714 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Releasing lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.714 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Instance network_info: |[{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.714 252257 DEBUG oslo_concurrency.lockutils [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.715 252257 DEBUG nova.network.neutron [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing network info cache for port c8756ae8-709a-481a-b200-35a8980cff93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.718 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Start _get_guest_xml network_info=[{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3c8b389-36e8-459a-b7d9-3ed0e65767ad', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3c8b389-36e8-459a-b7d9-3ed0e65767ad', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6ba3ce9b-c4ad-471a-bba5-1755f9a9babd', 'attached_at': '', 'detached_at': '', 'volume_id': 'f3c8b389-36e8-459a-b7d9-3ed0e65767ad', 'serial': 'f3c8b389-36e8-459a-b7d9-3ed0e65767ad'}, 'attachment_id': 'e38d3b87-65f2-4aa4-af13-d30b22c19194', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.724 252257 WARNING nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.730 252257 DEBUG nova.virt.libvirt.host [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.731 252257 DEBUG nova.virt.libvirt.host [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.735 252257 DEBUG nova.virt.libvirt.host [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.735 252257 DEBUG nova.virt.libvirt.host [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.736 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.736 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.737 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.737 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.737 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.738 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.738 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.738 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.739 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.739 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.739 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.739 252257 DEBUG nova.virt.hardware [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.770 252257 DEBUG nova.storage.rbd_utils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] rbd image 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.776 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 459 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Nov 29 03:34:04 np0005539563 nova_compute[252253]: 2025-11-29 08:34:04.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:04.938 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:04.939 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:04.939 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2893780491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.219 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.243 252257 DEBUG nova.virt.libvirt.vif [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-11214697',display_name='tempest-TestVolumeBackupRestore-server-11214697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-11214697',id=176,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJP1MxO5y1zdIUbHIEoxTtC/SuZQSrgpYtTIlN5AnKzpTQu0zImzztXtiv58Io2sEdQDVqMGxwfPFmTVhCDnsvqKdA6A/j5g/dzpRord4axz42zZ9NClNKCtE3Mjy4fJA==',key_name='tempest-TestVolumeBackupRestore-564851940',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='340d97a89c434bedbead3110819c581d',ramdisk_id='',reservation_id='r-ew2x7c80',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-760261171',owner_user_name='tempest-TestVolumeBackupRestore-760261171-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:01Z,user_data=None,user_id='2ad82d69b01a4929b20a4d3c4dbe0135',uuid=6ba3ce9b-c4ad-471a-bba5-1755f9a9babd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.244 252257 DEBUG nova.network.os_vif_util [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Converting VIF {"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.244 252257 DEBUG nova.network.os_vif_util [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.245 252257 DEBUG nova.objects.instance [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.262 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <uuid>6ba3ce9b-c4ad-471a-bba5-1755f9a9babd</uuid>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <name>instance-000000b0</name>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestVolumeBackupRestore-server-11214697</nova:name>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:34:04</nova:creationTime>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:user uuid="2ad82d69b01a4929b20a4d3c4dbe0135">tempest-TestVolumeBackupRestore-760261171-project-member</nova:user>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:project uuid="340d97a89c434bedbead3110819c581d">tempest-TestVolumeBackupRestore-760261171</nova:project>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <nova:port uuid="c8756ae8-709a-481a-b200-35a8980cff93">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <entry name="serial">6ba3ce9b-c4ad-471a-bba5-1755f9a9babd</entry>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <entry name="uuid">6ba3ce9b-c4ad-471a-bba5-1755f9a9babd</entry>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_disk.config">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-f3c8b389-36e8-459a-b7d9-3ed0e65767ad">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <serial>f3c8b389-36e8-459a-b7d9-3ed0e65767ad</serial>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:34:69:43"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <target dev="tapc8756ae8-70"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/console.log" append="off"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:34:05 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:34:05 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:34:05 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:34:05 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.263 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Preparing to wait for external event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.264 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.264 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.264 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.265 252257 DEBUG nova.virt.libvirt.vif [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-11214697',display_name='tempest-TestVolumeBackupRestore-server-11214697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-11214697',id=176,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJP1MxO5y1zdIUbHIEoxTtC/SuZQSrgpYtTIlN5AnKzpTQu0zImzztXtiv58Io2sEdQDVqMGxwfPFmTVhCDnsvqKdA6A/j5g/dzpRord4axz42zZ9NClNKCtE3Mjy4fJA==',key_name='tempest-TestVolumeBackupRestore-564851940',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='340d97a89c434bedbead3110819c581d',ramdisk_id='',reservation_id='r-ew2x7c80',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-760261171',owner_user_name='tempest-TestVolumeBackupRestore-760261171-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:01Z,user_data=None,user_id='2ad82d69b01a4929b20a4d3c4dbe0135',uuid=6ba3ce9b-c4ad-471a-bba5-1755f9a9babd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.265 252257 DEBUG nova.network.os_vif_util [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Converting VIF {"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.266 252257 DEBUG nova.network.os_vif_util [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.266 252257 DEBUG os_vif [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.267 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.267 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.268 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.272 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.272 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8756ae8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.273 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8756ae8-70, col_values=(('external_ids', {'iface-id': 'c8756ae8-709a-481a-b200-35a8980cff93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:69:43', 'vm-uuid': '6ba3ce9b-c4ad-471a-bba5-1755f9a9babd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.274 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539563 NetworkManager[48981]: <info>  [1764405245.2755] manager: (tapc8756ae8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.277 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.281 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.282 252257 INFO os_vif [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70')#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.330 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.330 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.330 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] No VIF found with MAC fa:16:3e:34:69:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.331 252257 INFO nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Using config drive#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.360 252257 DEBUG nova.storage.rbd_utils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] rbd image 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.717 252257 INFO nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Creating config drive at /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/disk.config#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.721 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzcrau08d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.859 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzcrau08d" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.891 252257 DEBUG nova.storage.rbd_utils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] rbd image 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:05 np0005539563 nova_compute[252253]: 2025-11-29 08:34:05.895 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/disk.config 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.095 252257 DEBUG oslo_concurrency.processutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/disk.config 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.096 252257 INFO nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Deleting local config drive /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:34:06 np0005539563 kernel: tapc8756ae8-70: entered promiscuous mode
Nov 29 03:34:06 np0005539563 NetworkManager[48981]: <info>  [1764405246.1436] manager: (tapc8756ae8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Nov 29 03:34:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:06Z|00742|binding|INFO|Claiming lport c8756ae8-709a-481a-b200-35a8980cff93 for this chassis.
Nov 29 03:34:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:06Z|00743|binding|INFO|c8756ae8-709a-481a-b200-35a8980cff93: Claiming fa:16:3e:34:69:43 10.100.0.8
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.145 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.152 252257 DEBUG nova.network.neutron [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updated VIF entry in instance network info cache for port c8756ae8-709a-481a-b200-35a8980cff93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.153 252257 DEBUG nova.network.neutron [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.156 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:69:43 10.100.0.8'], port_security=['fa:16:3e:34:69:43 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6ba3ce9b-c4ad-471a-bba5-1755f9a9babd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '340d97a89c434bedbead3110819c581d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eecfd5ef-bcfc-45f1-8227-4a86138cddce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7527fd93-29b4-43c7-b4dd-e316caa7227c, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c8756ae8-709a-481a-b200-35a8980cff93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.157 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c8756ae8-709a-481a-b200-35a8980cff93 in datapath eb589b81-4c58-4ebd-a9b9-74e187f0139b bound to our chassis#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.158 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb589b81-4c58-4ebd-a9b9-74e187f0139b#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.170 252257 DEBUG oslo_concurrency.lockutils [req-ce3beb73-52b0-469a-a3d8-e17c75b01786 req-50d582d3-4e74-4f67-a600-1be8e95f2791 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.173 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4031061b-be1e-468d-8bd5-d6d1f90f6f3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.173 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb589b81-41 in ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:34:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:06Z|00744|binding|INFO|Setting lport c8756ae8-709a-481a-b200-35a8980cff93 ovn-installed in OVS
Nov 29 03:34:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:06Z|00745|binding|INFO|Setting lport c8756ae8-709a-481a-b200-35a8980cff93 up in Southbound
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.177 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb589b81-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.177 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[df8c35cc-8217-4f17-93a7-72a653a92c1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.177 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:06 np0005539563 systemd-machined[213024]: New machine qemu-86-instance-000000b0.
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.179 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[605350ce-2fa9-4b74-a8eb-5ac586def794]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.180 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:06 np0005539563 systemd[1]: Started Virtual Machine qemu-86-instance-000000b0.
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.194 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[de52c5d2-a535-41b5-bc82-3b133c02380c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 systemd-udevd[362694]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:06 np0005539563 NetworkManager[48981]: <info>  [1764405246.2090] device (tapc8756ae8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:06 np0005539563 NetworkManager[48981]: <info>  [1764405246.2100] device (tapc8756ae8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.227 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3e95c0f2-dc35-467e-a5c7-d5e41aff0102]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.255 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[769872b9-5bae-4edf-8396-7077d4655c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.262 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ef785480-baa4-48a8-b35e-855946f12c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 NetworkManager[48981]: <info>  [1764405246.2639] manager: (tapeb589b81-40): new Veth device (/org/freedesktop/NetworkManager/Devices/321)
Nov 29 03:34:06 np0005539563 systemd-udevd[362697]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.294 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c47e4ec1-990e-4a4e-b5ef-da6b9e7612b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.297 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[8545f707-c93e-4ec7-b7d0-944845e909ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 NetworkManager[48981]: <info>  [1764405246.3189] device (tapeb589b81-40): carrier: link connected
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.323 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7ef4a2-f18d-4991-ad5c-d13f5b1d8091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.338 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d8a003-f90b-40ea-8bad-a0c5435811ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb589b81-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:56:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821409, 'reachable_time': 32298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362727, 'error': None, 'target': 'ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.349 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[569e0b31-d86d-4e48-b869-1b4d00dbd2c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:5644'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 821409, 'tstamp': 821409}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362728, 'error': None, 'target': 'ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.363 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9890dc-d712-41e1-bf7f-15ae23a71033]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb589b81-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:56:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821409, 'reachable_time': 32298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362729, 'error': None, 'target': 'ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.391 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d33c8bc8-4bed-40b9-8ad6-d040c5026e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.400 252257 DEBUG nova.compute.manager [req-7ded5cc1-a78b-48fa-80b9-fb7cf1ebcf07 req-eb5b0bc7-2b23-4e2a-8bc0-07fc3a41ad0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.400 252257 DEBUG oslo_concurrency.lockutils [req-7ded5cc1-a78b-48fa-80b9-fb7cf1ebcf07 req-eb5b0bc7-2b23-4e2a-8bc0-07fc3a41ad0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.400 252257 DEBUG oslo_concurrency.lockutils [req-7ded5cc1-a78b-48fa-80b9-fb7cf1ebcf07 req-eb5b0bc7-2b23-4e2a-8bc0-07fc3a41ad0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.401 252257 DEBUG oslo_concurrency.lockutils [req-7ded5cc1-a78b-48fa-80b9-fb7cf1ebcf07 req-eb5b0bc7-2b23-4e2a-8bc0-07fc3a41ad0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.402 252257 DEBUG nova.compute.manager [req-7ded5cc1-a78b-48fa-80b9-fb7cf1ebcf07 req-eb5b0bc7-2b23-4e2a-8bc0-07fc3a41ad0e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Processing event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.452 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ece6fd69-0737-449c-beb5-d3a072097b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.453 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb589b81-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.453 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.454 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb589b81-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.455 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:06 np0005539563 NetworkManager[48981]: <info>  [1764405246.4564] manager: (tapeb589b81-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Nov 29 03:34:06 np0005539563 kernel: tapeb589b81-40: entered promiscuous mode
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.458 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.458 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb589b81-40, col_values=(('external_ids', {'iface-id': '3bb78c6a-35fe-489a-94ca-dc919c4d5253'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:06 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:06Z|00746|binding|INFO|Releasing lport 3bb78c6a-35fe-489a-94ca-dc919c4d5253 from this chassis (sb_readonly=0)
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.473 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb589b81-4c58-4ebd-a9b9-74e187f0139b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb589b81-4c58-4ebd-a9b9-74e187f0139b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.474 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2f12519d-b03d-48d2-ae37-5ba2000fc156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.476 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-eb589b81-4c58-4ebd-a9b9-74e187f0139b
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/eb589b81-4c58-4ebd-a9b9-74e187f0139b.pid.haproxy
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID eb589b81-4c58-4ebd-a9b9-74e187f0139b
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:34:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:06.476 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'env', 'PROCESS_TAG=haproxy-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb589b81-4c58-4ebd-a9b9-74e187f0139b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:34:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:06.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.632 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.633 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405246.6331508, 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.633 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] VM Started (Lifecycle Event)
Nov 29 03:34:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:06.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.643 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.649 252257 INFO nova.virt.libvirt.driver [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Instance spawned successfully.
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.649 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.670 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.676 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.688 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.688 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.689 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.689 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.689 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.690 252257 DEBUG nova.virt.libvirt.driver [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.714 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.714 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405246.6332462, 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.714 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] VM Paused (Lifecycle Event)
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.756 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.760 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405246.6429162, 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.760 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] VM Resumed (Lifecycle Event)
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.766 252257 INFO nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Took 3.55 seconds to spawn the instance on the hypervisor.
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.767 252257 DEBUG nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.782 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.785 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:34:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 KiB/s wr, 179 op/s
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.837 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:34:06 np0005539563 podman[362800]: 2025-11-29 08:34:06.86009527 +0000 UTC m=+0.057337195 container create f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.864 252257 INFO nova.compute.manager [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Took 6.04 seconds to build instance.
Nov 29 03:34:06 np0005539563 nova_compute[252253]: 2025-11-29 08:34:06.888 252257 DEBUG oslo_concurrency.lockutils [None req-563814dd-1983-4057-ad20-21982c1c4b2d 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:06 np0005539563 systemd[1]: Started libpod-conmon-f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d.scope.
Nov 29 03:34:06 np0005539563 podman[362800]: 2025-11-29 08:34:06.832468191 +0000 UTC m=+0.029710136 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:34:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec7536d683991b11c781d9aa1cb576d33922063763e6b7b2aa55fb30001208b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:06 np0005539563 podman[362800]: 2025-11-29 08:34:06.964412436 +0000 UTC m=+0.161654361 container init f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:34:06 np0005539563 podman[362800]: 2025-11-29 08:34:06.970109 +0000 UTC m=+0.167350925 container start f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:34:07 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [NOTICE]   (362820) : New worker (362823) forked
Nov 29 03:34:07 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [NOTICE]   (362820) : Loading success.
Nov 29 03:34:07 np0005539563 nova_compute[252253]: 2025-11-29 08:34:07.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:07 np0005539563 nova_compute[252253]: 2025-11-29 08:34:07.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.499 252257 DEBUG nova.compute.manager [req-ed85add6-b2a9-4668-b430-0c646bf15923 req-2f215d7c-2335-4b43-b6aa-29f9012976ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.499 252257 DEBUG oslo_concurrency.lockutils [req-ed85add6-b2a9-4668-b430-0c646bf15923 req-2f215d7c-2335-4b43-b6aa-29f9012976ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.500 252257 DEBUG oslo_concurrency.lockutils [req-ed85add6-b2a9-4668-b430-0c646bf15923 req-2f215d7c-2335-4b43-b6aa-29f9012976ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.500 252257 DEBUG oslo_concurrency.lockutils [req-ed85add6-b2a9-4668-b430-0c646bf15923 req-2f215d7c-2335-4b43-b6aa-29f9012976ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.500 252257 DEBUG nova.compute.manager [req-ed85add6-b2a9-4668-b430-0c646bf15923 req-2f215d7c-2335-4b43-b6aa-29f9012976ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] No waiting events found dispatching network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.500 252257 WARNING nova.compute.manager [req-ed85add6-b2a9-4668-b430-0c646bf15923 req-2f215d7c-2335-4b43-b6aa-29f9012976ba 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received unexpected event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 for instance with vm_state active and task_state None.
Nov 29 03:34:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:08.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:08.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 199 op/s
Nov 29 03:34:08 np0005539563 nova_compute[252253]: 2025-11-29 08:34:08.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:10 np0005539563 nova_compute[252253]: 2025-11-29 08:34:10.275 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:10.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:10.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 17 KiB/s wr, 199 op/s
Nov 29 03:34:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:12.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:34:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 17 KiB/s wr, 199 op/s
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.790 252257 DEBUG nova.compute.manager [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-changed-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.790 252257 DEBUG nova.compute.manager [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing instance network info cache due to event network-changed-c8756ae8-709a-481a-b200-35a8980cff93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.791 252257 DEBUG oslo_concurrency.lockutils [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.791 252257 DEBUG oslo_concurrency.lockutils [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:34:12 np0005539563 nova_compute[252253]: 2025-11-29 08:34:12.791 252257 DEBUG nova.network.neutron [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing network info cache for port c8756ae8-709a-481a-b200-35a8980cff93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:34:12
Nov 29 03:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'backups']
Nov 29 03:34:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:13 np0005539563 nova_compute[252253]: 2025-11-29 08:34:13.632 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:13 np0005539563 nova_compute[252253]: 2025-11-29 08:34:13.632 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:13 np0005539563 nova_compute[252253]: 2025-11-29 08:34:13.633 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:34:13 np0005539563 nova_compute[252253]: 2025-11-29 08:34:13.633 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:13 np0005539563 nova_compute[252253]: 2025-11-29 08:34:13.818 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Nov 29 03:34:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 03:34:14 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Nov 29 03:34:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:14.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:14 np0005539563 nova_compute[252253]: 2025-11-29 08:34:14.670 252257 DEBUG nova.compute.manager [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-changed-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:14 np0005539563 nova_compute[252253]: 2025-11-29 08:34:14.671 252257 DEBUG nova.compute.manager [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing instance network info cache due to event network-changed-c8756ae8-709a-481a-b200-35a8980cff93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:14 np0005539563 nova_compute[252253]: 2025-11-29 08:34:14.671 252257 DEBUG oslo_concurrency.lockutils [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.0 MiB/s wr, 211 op/s
Nov 29 03:34:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:15Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:6e:5a 10.100.0.4
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.311 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.598 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.613 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.613 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.613 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.617 252257 DEBUG nova.network.neutron [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updated VIF entry in instance network info cache for port c8756ae8-709a-481a-b200-35a8980cff93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.617 252257 DEBUG nova.network.neutron [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.643 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.643 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.644 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.644 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.644 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.684 252257 DEBUG oslo_concurrency.lockutils [req-2d657a13-96c7-44dd-9b6d-8aefc796b7ba req-681f2988-0b00-40bf-9152-637df65a06d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.685 252257 DEBUG oslo_concurrency.lockutils [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:15 np0005539563 nova_compute[252253]: 2025-11-29 08:34:15.685 252257 DEBUG nova.network.neutron [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing network info cache for port c8756ae8-709a-481a-b200-35a8980cff93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142441442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.107 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.192 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.193 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.195 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.195 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.371 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.373 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=20.922813415527344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.374 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.374 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:16.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:34:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:16.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.784 252257 DEBUG nova.compute.manager [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-changed-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.785 252257 DEBUG nova.compute.manager [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing instance network info cache due to event network-changed-c8756ae8-709a-481a-b200-35a8980cff93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.785 252257 DEBUG oslo_concurrency.lockutils [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 479 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 3.7 MiB/s wr, 208 op/s
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.915 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 782511b8-9841-4558-bc21-9a81d3913b54 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.916 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.917 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:34:16 np0005539563 nova_compute[252253]: 2025-11-29 08:34:16.918 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:34:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Nov 29 03:34:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 03:34:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Nov 29 03:34:17 np0005539563 nova_compute[252253]: 2025-11-29 08:34:17.604 252257 DEBUG nova.network.neutron [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updated VIF entry in instance network info cache for port c8756ae8-709a-481a-b200-35a8980cff93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:17 np0005539563 nova_compute[252253]: 2025-11-29 08:34:17.604 252257 DEBUG nova.network.neutron [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:17 np0005539563 nova_compute[252253]: 2025-11-29 08:34:17.625 252257 DEBUG oslo_concurrency.lockutils [req-d346347c-3b6f-4044-b7d9-4fe40d60e72a req-b35350bd-1a34-46bd-b58f-0355d575efe9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:17 np0005539563 nova_compute[252253]: 2025-11-29 08:34:17.625 252257 DEBUG oslo_concurrency.lockutils [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:17 np0005539563 nova_compute[252253]: 2025-11-29 08:34:17.626 252257 DEBUG nova.network.neutron [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing network info cache for port c8756ae8-709a-481a-b200-35a8980cff93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:17 np0005539563 nova_compute[252253]: 2025-11-29 08:34:17.752 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374812936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.185 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.192 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.208 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.236 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.236 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:18.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:18.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.763 252257 DEBUG nova.network.neutron [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updated VIF entry in instance network info cache for port c8756ae8-709a-481a-b200-35a8980cff93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.764 252257 DEBUG nova.network.neutron [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.788 252257 DEBUG oslo_concurrency.lockutils [req-bd70150b-a6c0-42b4-b90d-26aa6b2c9e26 req-3514037b-f207-4b61-bef2-f3e868b679c3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 451 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 5.9 MiB/s wr, 318 op/s
Nov 29 03:34:18 np0005539563 nova_compute[252253]: 2025-11-29 08:34:18.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:20 np0005539563 nova_compute[252253]: 2025-11-29 08:34:20.301 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:20 np0005539563 nova_compute[252253]: 2025-11-29 08:34:20.314 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:20.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:20.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:20 np0005539563 nova_compute[252253]: 2025-11-29 08:34:20.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:20 np0005539563 nova_compute[252253]: 2025-11-29 08:34:20.762 252257 INFO nova.compute.manager [None req-c06a2a9c-7a2b-4996-b722-e30bad6e9bb5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Get console output#033[00m
Nov 29 03:34:20 np0005539563 nova_compute[252253]: 2025-11-29 08:34:20.768 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:34:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 433 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.3 MiB/s wr, 357 op/s
Nov 29 03:34:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:21Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:69:43 10.100.0.8
Nov 29 03:34:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:21Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:69:43 10.100.0.8
Nov 29 03:34:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:22.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.760 252257 DEBUG nova.compute.manager [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.760 252257 DEBUG nova.compute.manager [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing instance network info cache due to event network-changed-8a130a46-1e4c-4c18-8d1f-c60c770a5f49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.760 252257 DEBUG oslo_concurrency.lockutils [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.761 252257 DEBUG oslo_concurrency.lockutils [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.761 252257 DEBUG nova.network.neutron [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Refreshing network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 433 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.808 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.808 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.809 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.809 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.809 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.811 252257 INFO nova.compute.manager [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Terminating instance#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.812 252257 DEBUG nova.compute.manager [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:34:22 np0005539563 kernel: tap8a130a46-1e (unregistering): left promiscuous mode
Nov 29 03:34:22 np0005539563 NetworkManager[48981]: <info>  [1764405262.8892] device (tap8a130a46-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.901 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:22Z|00747|binding|INFO|Releasing lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 from this chassis (sb_readonly=0)
Nov 29 03:34:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:22Z|00748|binding|INFO|Setting lport 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 down in Southbound
Nov 29 03:34:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:22Z|00749|binding|INFO|Removing iface tap8a130a46-1e ovn-installed in OVS
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.903 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:22.909 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:6e:5a 10.100.0.4'], port_security=['fa:16:3e:c0:6e:5a 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '782511b8-9841-4558-bc21-9a81d3913b54', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'bfa4c1a9-d993-4c80-84c8-af76e286907f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d02d2157-b362-405a-8753-8c1be0d0ef4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=8a130a46-1e4c-4c18-8d1f-c60c770a5f49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:22.910 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49 in datapath f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c unbound from our chassis#033[00m
Nov 29 03:34:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:22.912 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:34:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:22.913 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f323ed6e-dec0-4994-8c1f-2f53d0283c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:22.914 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c namespace which is not needed anymore#033[00m
Nov 29 03:34:22 np0005539563 nova_compute[252253]: 2025-11-29 08:34:22.916 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:22 np0005539563 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000af.scope: Deactivated successfully.
Nov 29 03:34:22 np0005539563 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000af.scope: Consumed 13.858s CPU time.
Nov 29 03:34:22 np0005539563 systemd-machined[213024]: Machine qemu-85-instance-000000af terminated.
Nov 29 03:34:23 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [NOTICE]   (362463) : haproxy version is 2.8.14-c23fe91
Nov 29 03:34:23 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [NOTICE]   (362463) : path to executable is /usr/sbin/haproxy
Nov 29 03:34:23 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [WARNING]  (362463) : Exiting Master process...
Nov 29 03:34:23 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [WARNING]  (362463) : Exiting Master process...
Nov 29 03:34:23 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [ALERT]    (362463) : Current worker (362465) exited with code 143 (Terminated)
Nov 29 03:34:23 np0005539563 neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c[362457]: [WARNING]  (362463) : All workers exited. Exiting... (0)
Nov 29 03:34:23 np0005539563 systemd[1]: libpod-95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58.scope: Deactivated successfully.
Nov 29 03:34:23 np0005539563 podman[362908]: 2025-11-29 08:34:23.04921712 +0000 UTC m=+0.052747270 container died 95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.055 252257 INFO nova.virt.libvirt.driver [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Instance destroyed successfully.#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.057 252257 DEBUG nova.objects.instance [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 782511b8-9841-4558-bc21-9a81d3913b54 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.071 252257 DEBUG nova.virt.libvirt.vif [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1326828449',display_name='tempest-TestNetworkAdvancedServerOps-server-1326828449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1326828449',id=175,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSsTatB0ntZvgpT1iFQOjTdjEe6U2LspHqhHVlH5yZ8EV93LX7uxMrpvCyJRoDivS5erw2JcnGjpRKngF+GjO4y0hQO2CgxrKJ2TL+ibBoOMIlLXbWea/NN/kfP4yKXpw==',key_name='tempest-TestNetworkAdvancedServerOps-1995448580',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:01Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-d1ar0502',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:05Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=782511b8-9841-4558-bc21-9a81d3913b54,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.071 252257 DEBUG nova.network.os_vif_util [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.072 252257 DEBUG nova.network.os_vif_util [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.072 252257 DEBUG os_vif [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.074 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.075 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a130a46-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.076 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.082 252257 INFO os_vif [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:6e:5a,bridge_name='br-int',has_traffic_filtering=True,id=8a130a46-1e4c-4c18-8d1f-c60c770a5f49,network=Network(f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a130a46-1e')#033[00m
Nov 29 03:34:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58-userdata-shm.mount: Deactivated successfully.
Nov 29 03:34:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-688415a79a03588f21fd6fd598567da41fa86222fb1eea186c24a8ab3bbebbe1-merged.mount: Deactivated successfully.
Nov 29 03:34:23 np0005539563 podman[362908]: 2025-11-29 08:34:23.112909915 +0000 UTC m=+0.116440035 container cleanup 95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:34:23 np0005539563 systemd[1]: libpod-conmon-95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58.scope: Deactivated successfully.
Nov 29 03:34:23 np0005539563 podman[362962]: 2025-11-29 08:34:23.178279607 +0000 UTC m=+0.042554895 container remove 95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.183 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cea3e309-d505-4af2-88af-a7437c6ead0e]: (4, ('Sat Nov 29 08:34:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c (95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58)\n95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58\nSat Nov 29 08:34:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c (95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58)\n95cba220f28b90c4635f9caeac36db4c2278b10197639531cc8bd0ac2e535b58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.185 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2b5881-4ed5-4359-8d76-7ae86833aea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.186 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4afd5c3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.188 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539563 kernel: tapf4afd5c3-f0: left promiscuous mode
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.220 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[86462521-4c77-470f-b10c-d2c95b66dea5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.245 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e42f59d1-43e9-4512-8ead-6184ff20a228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.247 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a62c1926-c783-4803-b83f-0c28b0ad559a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.268 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[83509521-fc1c-4cc5-b957-aeb8fedfe866]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 820797, 'reachable_time': 16104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362977, 'error': None, 'target': 'ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.273 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:34:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:23.274 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[35ef41b7-4500-469d-a50b-4bef48cd2bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:23 np0005539563 systemd[1]: run-netns-ovnmeta\x2df4afd5c3\x2df1f9\x2d4e62\x2d9e1b\x2dd55edb60d97c.mount: Deactivated successfully.
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.520 252257 DEBUG nova.compute.manager [req-f2426473-3dad-424c-ba63-787e83fff310 req-c3142fc0-d1b2-43b6-8146-0e682dfe7419 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.520 252257 DEBUG oslo_concurrency.lockutils [req-f2426473-3dad-424c-ba63-787e83fff310 req-c3142fc0-d1b2-43b6-8146-0e682dfe7419 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.521 252257 DEBUG oslo_concurrency.lockutils [req-f2426473-3dad-424c-ba63-787e83fff310 req-c3142fc0-d1b2-43b6-8146-0e682dfe7419 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.521 252257 DEBUG oslo_concurrency.lockutils [req-f2426473-3dad-424c-ba63-787e83fff310 req-c3142fc0-d1b2-43b6-8146-0e682dfe7419 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.522 252257 DEBUG nova.compute.manager [req-f2426473-3dad-424c-ba63-787e83fff310 req-c3142fc0-d1b2-43b6-8146-0e682dfe7419 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.522 252257 DEBUG nova.compute.manager [req-f2426473-3dad-424c-ba63-787e83fff310 req-c3142fc0-d1b2-43b6-8146-0e682dfe7419 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-unplugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.568 252257 INFO nova.virt.libvirt.driver [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Deleting instance files /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54_del
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.569 252257 INFO nova.virt.libvirt.driver [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Deletion of /var/lib/nova/instances/782511b8-9841-4558-bc21-9a81d3913b54_del complete
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.655 252257 INFO nova.compute.manager [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.656 252257 DEBUG oslo.service.loopingcall [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.656 252257 DEBUG nova.compute.manager [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.656 252257 DEBUG nova.network.neutron [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004355363332868044 of space, bias 1.0, pg target 1.3066089998604131 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004657801449376512 of space, bias 1.0, pg target 1.3926826333635771 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.294807006676903 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:34:23 np0005539563 nova_compute[252253]: 2025-11-29 08:34:23.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:24 np0005539563 podman[363003]: 2025-11-29 08:34:24.094859849 +0000 UTC m=+0.068379735 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:34:24 np0005539563 podman[363004]: 2025-11-29 08:34:24.095064524 +0000 UTC m=+0.068444506 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:34:24 np0005539563 podman[363005]: 2025-11-29 08:34:24.139775515 +0000 UTC m=+0.109490737 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:34:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:24.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:24.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 431 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 294 op/s
Nov 29 03:34:25 np0005539563 nova_compute[252253]: 2025-11-29 08:34:25.041 252257 DEBUG nova.network.neutron [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updated VIF entry in instance network info cache for port 8a130a46-1e4c-4c18-8d1f-c60c770a5f49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:34:25 np0005539563 nova_compute[252253]: 2025-11-29 08:34:25.042 252257 DEBUG nova.network.neutron [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [{"id": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "address": "fa:16:3e:c0:6e:5a", "network": {"id": "f4afd5c3-f1f9-4e62-9e1b-d55edb60d97c", "bridge": "br-int", "label": "tempest-network-smoke--1013637972", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a130a46-1e", "ovs_interfaceid": "8a130a46-1e4c-4c18-8d1f-c60c770a5f49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:34:25 np0005539563 nova_compute[252253]: 2025-11-29 08:34:25.113 252257 DEBUG oslo_concurrency.lockutils [req-1be1aa25-6c97-4e76-9e16-864b8d811e8e req-8e45ce52-86dc-4642-88e6-7aca3070aa31 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-782511b8-9841-4558-bc21-9a81d3913b54" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:34:25 np0005539563 nova_compute[252253]: 2025-11-29 08:34:25.123 252257 DEBUG nova.network.neutron [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:34:25 np0005539563 nova_compute[252253]: 2025-11-29 08:34:25.139 252257 INFO nova.compute.manager [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Took 1.48 seconds to deallocate network for instance.
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.042 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.043 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.153 252257 DEBUG oslo_concurrency.processutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.421 252257 DEBUG nova.compute.manager [req-f701100d-9d2c-4f07-b8be-f65a73a4a5f8 req-978c4f79-401f-4304-964b-f9082699e0c1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-deleted-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:26.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1350265111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.593 252257 DEBUG oslo_concurrency.processutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.602 252257 DEBUG nova.compute.provider_tree [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:34:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:26.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.747 252257 DEBUG nova.compute.manager [req-f1c50003-9a2d-48a0-817b-ba44de3fe501 req-1a031f13-0110-4e7e-bd86-0e2044d935eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.748 252257 DEBUG oslo_concurrency.lockutils [req-f1c50003-9a2d-48a0-817b-ba44de3fe501 req-1a031f13-0110-4e7e-bd86-0e2044d935eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "782511b8-9841-4558-bc21-9a81d3913b54-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.748 252257 DEBUG oslo_concurrency.lockutils [req-f1c50003-9a2d-48a0-817b-ba44de3fe501 req-1a031f13-0110-4e7e-bd86-0e2044d935eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.748 252257 DEBUG oslo_concurrency.lockutils [req-f1c50003-9a2d-48a0-817b-ba44de3fe501 req-1a031f13-0110-4e7e-bd86-0e2044d935eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.748 252257 DEBUG nova.compute.manager [req-f1c50003-9a2d-48a0-817b-ba44de3fe501 req-1a031f13-0110-4e7e-bd86-0e2044d935eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] No waiting events found dispatching network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.749 252257 WARNING nova.compute.manager [req-f1c50003-9a2d-48a0-817b-ba44de3fe501 req-1a031f13-0110-4e7e-bd86-0e2044d935eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Received unexpected event network-vif-plugged-8a130a46-1e4c-4c18-8d1f-c60c770a5f49 for instance with vm_state deleted and task_state None.
Nov 29 03:34:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 406 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 262 op/s
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.883 252257 DEBUG nova.scheduler.client.report [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.928 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539563 nova_compute[252253]: 2025-11-29 08:34:26.964 252257 INFO nova.scheduler.client.report [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 782511b8-9841-4558-bc21-9a81d3913b54
Nov 29 03:34:27 np0005539563 nova_compute[252253]: 2025-11-29 08:34:27.046 252257 DEBUG oslo_concurrency.lockutils [None req-17afd61c-52ce-44ec-8ad0-6638c5eb6aa6 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "782511b8-9841-4558-bc21-9a81d3913b54" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:28 np0005539563 nova_compute[252253]: 2025-11-29 08:34:28.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:28.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:28.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Nov 29 03:34:28 np0005539563 nova_compute[252253]: 2025-11-29 08:34:28.871 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Nov 29 03:34:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 03:34:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Nov 29 03:34:30 np0005539563 nova_compute[252253]: 2025-11-29 08:34:30.342 252257 DEBUG nova.compute.manager [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-changed-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:30 np0005539563 nova_compute[252253]: 2025-11-29 08:34:30.343 252257 DEBUG nova.compute.manager [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing instance network info cache due to event network-changed-c8756ae8-709a-481a-b200-35a8980cff93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:30 np0005539563 nova_compute[252253]: 2025-11-29 08:34:30.343 252257 DEBUG oslo_concurrency.lockutils [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:30 np0005539563 nova_compute[252253]: 2025-11-29 08:34:30.344 252257 DEBUG oslo_concurrency.lockutils [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:30 np0005539563 nova_compute[252253]: 2025-11-29 08:34:30.344 252257 DEBUG nova.network.neutron [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Refreshing network info cache for port c8756ae8-709a-481a-b200-35a8980cff93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:30.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 1.5 MiB/s wr, 128 op/s
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.905 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.906 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.906 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.907 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.907 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.909 252257 INFO nova.compute.manager [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Terminating instance#033[00m
Nov 29 03:34:31 np0005539563 nova_compute[252253]: 2025-11-29 08:34:31.911 252257 DEBUG nova.compute.manager [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:34:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:32.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 1.5 MiB/s wr, 128 op/s
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.082 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539563 kernel: tapc8756ae8-70 (unregistering): left promiscuous mode
Nov 29 03:34:33 np0005539563 NetworkManager[48981]: <info>  [1764405273.2326] device (tapc8756ae8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:33Z|00750|binding|INFO|Releasing lport c8756ae8-709a-481a-b200-35a8980cff93 from this chassis (sb_readonly=0)
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.293 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:33Z|00751|binding|INFO|Setting lport c8756ae8-709a-481a-b200-35a8980cff93 down in Southbound
Nov 29 03:34:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:34:33Z|00752|binding|INFO|Removing iface tapc8756ae8-70 ovn-installed in OVS
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.297 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539563 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Nov 29 03:34:33 np0005539563 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b0.scope: Consumed 14.280s CPU time.
Nov 29 03:34:33 np0005539563 systemd-machined[213024]: Machine qemu-86-instance-000000b0 terminated.
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.563 252257 INFO nova.virt.libvirt.driver [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Instance destroyed successfully.#033[00m
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.564 252257 DEBUG nova.objects.instance [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lazy-loading 'resources' on Instance uuid 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:33 np0005539563 nova_compute[252253]: 2025-11-29 08:34:33.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:34.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:34.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 29 KiB/s wr, 80 op/s
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.184 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:69:43 10.100.0.8'], port_security=['fa:16:3e:34:69:43 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6ba3ce9b-c4ad-471a-bba5-1755f9a9babd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '340d97a89c434bedbead3110819c581d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eecfd5ef-bcfc-45f1-8227-4a86138cddce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7527fd93-29b4-43c7-b4dd-e316caa7227c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c8756ae8-709a-481a-b200-35a8980cff93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.185 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c8756ae8-709a-481a-b200-35a8980cff93 in datapath eb589b81-4c58-4ebd-a9b9-74e187f0139b unbound from our chassis#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.186 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb589b81-4c58-4ebd-a9b9-74e187f0139b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.187 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cb88e8-5bbf-4463-867c-1446ec4567e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.187 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b namespace which is not needed anymore#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.297 252257 DEBUG nova.virt.libvirt.vif [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:33:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-11214697',display_name='tempest-TestVolumeBackupRestore-server-11214697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-11214697',id=176,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEJP1MxO5y1zdIUbHIEoxTtC/SuZQSrgpYtTIlN5AnKzpTQu0zImzztXtiv58Io2sEdQDVqMGxwfPFmTVhCDnsvqKdA6A/j5g/dzpRord4axz42zZ9NClNKCtE3Mjy4fJA==',key_name='tempest-TestVolumeBackupRestore-564851940',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='340d97a89c434bedbead3110819c581d',ramdisk_id='',reservation_id='r-ew2x7c80',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-760261171',owner_user_name='tempest-TestVolumeBackupRestore-760261171-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:06Z,user_data=None,user_id='2ad82d69b01a4929b20a4d3c4dbe0135',uuid=6ba3ce9b-c4ad-471a-bba5-1755f9a9babd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.297 252257 DEBUG nova.network.os_vif_util [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Converting VIF {"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.298 252257 DEBUG nova.network.os_vif_util [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.298 252257 DEBUG os_vif [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.300 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.301 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8756ae8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.303 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.303 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.305 252257 INFO os_vif [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:69:43,bridge_name='br-int',has_traffic_filtering=True,id=c8756ae8-709a-481a-b200-35a8980cff93,network=Network(eb589b81-4c58-4ebd-a9b9-74e187f0139b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8756ae8-70')#033[00m
Nov 29 03:34:35 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [NOTICE]   (362820) : haproxy version is 2.8.14-c23fe91
Nov 29 03:34:35 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [NOTICE]   (362820) : path to executable is /usr/sbin/haproxy
Nov 29 03:34:35 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [WARNING]  (362820) : Exiting Master process...
Nov 29 03:34:35 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [WARNING]  (362820) : Exiting Master process...
Nov 29 03:34:35 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [ALERT]    (362820) : Current worker (362823) exited with code 143 (Terminated)
Nov 29 03:34:35 np0005539563 neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b[362816]: [WARNING]  (362820) : All workers exited. Exiting... (0)
Nov 29 03:34:35 np0005539563 systemd[1]: libpod-f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d.scope: Deactivated successfully.
Nov 29 03:34:35 np0005539563 podman[363227]: 2025-11-29 08:34:35.336276525 +0000 UTC m=+0.050252932 container died f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:34:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:34:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7ec7536d683991b11c781d9aa1cb576d33922063763e6b7b2aa55fb30001208b-merged.mount: Deactivated successfully.
Nov 29 03:34:35 np0005539563 podman[363227]: 2025-11-29 08:34:35.374849809 +0000 UTC m=+0.088826206 container cleanup f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:34:35 np0005539563 systemd[1]: libpod-conmon-f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d.scope: Deactivated successfully.
Nov 29 03:34:35 np0005539563 podman[363298]: 2025-11-29 08:34:35.449491262 +0000 UTC m=+0.053113370 container remove f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.455 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e70284b8-6787-43b3-807b-44ab1eda36cb]: (4, ('Sat Nov 29 08:34:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b (f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d)\nf83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d\nSat Nov 29 08:34:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b (f83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d)\nf83433f495496a3cffe3fced32388b7dfdd45c6f0e03326730f5164cd6daa56d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.458 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e965d3dd-fe83-4a02-be42-a5ba8378efe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.459 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb589b81-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.461 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:35 np0005539563 kernel: tapeb589b81-40: left promiscuous mode
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.477 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.481 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6d6901-a00b-441e-a0c2-dca9e41bbe92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.496 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4fea8b-e2e4-4e1c-8039-1e937215c718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.498 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a634609f-de39-426c-89f2-2d22d704a16f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.519 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[30cc8762-aadf-48a2-a017-a644f586b402]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821402, 'reachable_time': 28149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363325, 'error': None, 'target': 'ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 systemd[1]: run-netns-ovnmeta\x2deb589b81\x2d4c58\x2d4ebd\x2da9b9\x2d74e187f0139b.mount: Deactivated successfully.
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.523 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb589b81-4c58-4ebd-a9b9-74e187f0139b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:34:35 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:35.523 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0484ade0-f5e6-449c-8a18-bf3440ca60e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:34:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:34:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.844 252257 DEBUG nova.compute.manager [req-2ea7c719-766d-4156-934e-a78311488cca req-d791736f-c93b-4afc-9a5f-d218ead994d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-vif-unplugged-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.844 252257 DEBUG oslo_concurrency.lockutils [req-2ea7c719-766d-4156-934e-a78311488cca req-d791736f-c93b-4afc-9a5f-d218ead994d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.844 252257 DEBUG oslo_concurrency.lockutils [req-2ea7c719-766d-4156-934e-a78311488cca req-d791736f-c93b-4afc-9a5f-d218ead994d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.845 252257 DEBUG oslo_concurrency.lockutils [req-2ea7c719-766d-4156-934e-a78311488cca req-d791736f-c93b-4afc-9a5f-d218ead994d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.845 252257 DEBUG nova.compute.manager [req-2ea7c719-766d-4156-934e-a78311488cca req-d791736f-c93b-4afc-9a5f-d218ead994d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] No waiting events found dispatching network-vif-unplugged-c8756ae8-709a-481a-b200-35a8980cff93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:35 np0005539563 nova_compute[252253]: 2025-11-29 08:34:35.845 252257 DEBUG nova.compute.manager [req-2ea7c719-766d-4156-934e-a78311488cca req-d791736f-c93b-4afc-9a5f-d218ead994d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-vif-unplugged-c8756ae8-709a-481a-b200-35a8980cff93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:34:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:34:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:36.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:36.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 381 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 29 KiB/s wr, 65 op/s
Nov 29 03:34:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.052 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405263.0509949, 782511b8-9841-4558-bc21-9a81d3913b54 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.053 252257 INFO nova.compute.manager [-] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:34:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.620 252257 DEBUG nova.compute.manager [req-ba653e77-4ef1-4c95-a75a-a351a86182cc req-ce6f25b9-8c46-4721-8e20-878e3069b26a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.621 252257 DEBUG oslo_concurrency.lockutils [req-ba653e77-4ef1-4c95-a75a-a351a86182cc req-ce6f25b9-8c46-4721-8e20-878e3069b26a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.622 252257 DEBUG oslo_concurrency.lockutils [req-ba653e77-4ef1-4c95-a75a-a351a86182cc req-ce6f25b9-8c46-4721-8e20-878e3069b26a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.622 252257 DEBUG oslo_concurrency.lockutils [req-ba653e77-4ef1-4c95-a75a-a351a86182cc req-ce6f25b9-8c46-4721-8e20-878e3069b26a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.623 252257 DEBUG nova.compute.manager [req-ba653e77-4ef1-4c95-a75a-a351a86182cc req-ce6f25b9-8c46-4721-8e20-878e3069b26a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] No waiting events found dispatching network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.623 252257 WARNING nova.compute.manager [req-ba653e77-4ef1-4c95-a75a-a351a86182cc req-ce6f25b9-8c46-4721-8e20-878e3069b26a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received unexpected event network-vif-plugged-c8756ae8-709a-481a-b200-35a8980cff93 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:34:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:38.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:34:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 644 KiB/s rd, 40 KiB/s wr, 63 op/s
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.864 252257 DEBUG nova.compute.manager [None req-03463345-19f9-4dc5-b7e6-0a03e4abe31e - - - - - -] [instance: 782511b8-9841-4558-bc21-9a81d3913b54] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:38 np0005539563 nova_compute[252253]: 2025-11-29 08:34:38.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.015 252257 DEBUG nova.network.neutron [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updated VIF entry in instance network info cache for port c8756ae8-709a-481a-b200-35a8980cff93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.015 252257 DEBUG nova.network.neutron [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [{"id": "c8756ae8-709a-481a-b200-35a8980cff93", "address": "fa:16:3e:34:69:43", "network": {"id": "eb589b81-4c58-4ebd-a9b9-74e187f0139b", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1698796486-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "340d97a89c434bedbead3110819c581d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8756ae8-70", "ovs_interfaceid": "c8756ae8-709a-481a-b200-35a8980cff93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.169 252257 INFO nova.virt.libvirt.driver [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Deleting instance files /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_del#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.170 252257 INFO nova.virt.libvirt.driver [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Deletion of /var/lib/nova/instances/6ba3ce9b-c4ad-471a-bba5-1755f9a9babd_del complete#033[00m
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.249 252257 DEBUG oslo_concurrency.lockutils [req-0f7854cd-4c34-4541-87e5-e4ad818b8ea3 req-137759a3-fad4-4e89-8f72-3ed7bdc2fdd8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.377 252257 INFO nova.compute.manager [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Took 7.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.378 252257 DEBUG oslo.service.loopingcall [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.378 252257 DEBUG nova.compute.manager [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:34:39 np0005539563 nova_compute[252253]: 2025-11-29 08:34:39.379 252257 DEBUG nova.network.neutron [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.422550) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279422677, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1860, "num_deletes": 262, "total_data_size": 3019026, "memory_usage": 3066656, "flush_reason": "Manual Compaction"}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279456581, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 2950223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57420, "largest_seqno": 59279, "table_properties": {"data_size": 2941679, "index_size": 5230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17721, "raw_average_key_size": 19, "raw_value_size": 2924274, "raw_average_value_size": 3274, "num_data_blocks": 226, "num_entries": 893, "num_filter_entries": 893, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405134, "oldest_key_time": 1764405134, "file_creation_time": 1764405279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 34041 microseconds, and 8686 cpu microseconds.
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.456644) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 2950223 bytes OK
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.456671) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.459451) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.459469) EVENT_LOG_v1 {"time_micros": 1764405279459465, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.459486) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3011084, prev total WAL file size 3011084, number of live WAL files 2.
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.460570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(2881KB)], [125(12MB)]
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279460667, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 15931070, "oldest_snapshot_seqno": -1}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 9491 keys, 14787577 bytes, temperature: kUnknown
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279654077, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 14787577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14722314, "index_size": 40446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 248100, "raw_average_key_size": 26, "raw_value_size": 14551727, "raw_average_value_size": 1533, "num_data_blocks": 1565, "num_entries": 9491, "num_filter_entries": 9491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.654366) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14787577 bytes
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.656343) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.3 rd, 76.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 12.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.4) write-amplify(5.0) OK, records in: 10033, records dropped: 542 output_compression: NoCompression
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.656369) EVENT_LOG_v1 {"time_micros": 1764405279656358, "job": 76, "event": "compaction_finished", "compaction_time_micros": 193488, "compaction_time_cpu_micros": 33458, "output_level": 6, "num_output_files": 1, "total_output_size": 14787577, "num_input_records": 10033, "num_output_records": 9491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279657045, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405279659540, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.460343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:34:39.659586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.039 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.289 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:40 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f81df75b-aaed-4bf1-b29c-e76afaef0bab does not exist
Nov 29 03:34:40 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 49aaf8da-ad5d-4ac9-94b9-5086ca3a2df1 does not exist
Nov 29 03:34:40 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c1cb0bd8-b54a-4cb5-88c3-e0fa6329a62c does not exist
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.302 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:34:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:34:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.645 252257 DEBUG nova.network.neutron [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.672 252257 DEBUG nova.compute.manager [req-49528041-2a24-4f55-b71a-7229fce6e0cf req-aab0cee4-86d0-47a2-bc4e-ba37f7a1159f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Received event network-vif-deleted-c8756ae8-709a-481a-b200-35a8980cff93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.672 252257 INFO nova.compute.manager [req-49528041-2a24-4f55-b71a-7229fce6e0cf req-aab0cee4-86d0-47a2-bc4e-ba37f7a1159f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Neutron deleted interface c8756ae8-709a-481a-b200-35a8980cff93; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.673 252257 DEBUG nova.network.neutron [req-49528041-2a24-4f55-b71a-7229fce6e0cf req-aab0cee4-86d0-47a2-bc4e-ba37f7a1159f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:40.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.747 252257 INFO nova.compute.manager [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Took 1.37 seconds to deallocate network for instance.#033[00m
Nov 29 03:34:40 np0005539563 nova_compute[252253]: 2025-11-29 08:34:40.757 252257 DEBUG nova.compute.manager [req-49528041-2a24-4f55-b71a-7229fce6e0cf req-aab0cee4-86d0-47a2-bc4e-ba37f7a1159f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Detach interface failed, port_id=c8756ae8-709a-481a-b200-35a8980cff93, reason: Instance 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:34:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 36 KiB/s wr, 61 op/s
Nov 29 03:34:40 np0005539563 podman[363610]: 2025-11-29 08:34:40.928313923 +0000 UTC m=+0.056939874 container create ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:34:40 np0005539563 systemd[1]: Started libpod-conmon-ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a.scope.
Nov 29 03:34:40 np0005539563 podman[363610]: 2025-11-29 08:34:40.898040803 +0000 UTC m=+0.026666834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:41 np0005539563 podman[363610]: 2025-11-29 08:34:41.04261092 +0000 UTC m=+0.171236911 container init ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:34:41 np0005539563 podman[363610]: 2025-11-29 08:34:41.051699566 +0000 UTC m=+0.180325527 container start ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:34:41 np0005539563 podman[363610]: 2025-11-29 08:34:41.056276759 +0000 UTC m=+0.184902750 container attach ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:41 np0005539563 reverent_snyder[363627]: 167 167
Nov 29 03:34:41 np0005539563 systemd[1]: libpod-ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a.scope: Deactivated successfully.
Nov 29 03:34:41 np0005539563 podman[363610]: 2025-11-29 08:34:41.05888884 +0000 UTC m=+0.187514801 container died ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d5c4d9c8cf057b4bde624479a244d6ad025950b10cc68fc18b8cfd8e25c995eb-merged.mount: Deactivated successfully.
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.086 252257 INFO nova.compute.manager [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Took 0.34 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:34:41 np0005539563 podman[363610]: 2025-11-29 08:34:41.097702282 +0000 UTC m=+0.226328223 container remove ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:34:41 np0005539563 systemd[1]: libpod-conmon-ddf429e94e05e8ca94df77e283b19cc87b7d864747819f1d68661293b92f1f8a.scope: Deactivated successfully.
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.252 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.253 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:41 np0005539563 podman[363650]: 2025-11-29 08:34:41.295824589 +0000 UTC m=+0.069512924 container create 45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.321 252257 DEBUG oslo_concurrency.processutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:41 np0005539563 systemd[1]: Started libpod-conmon-45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2.scope.
Nov 29 03:34:41 np0005539563 podman[363650]: 2025-11-29 08:34:41.265763015 +0000 UTC m=+0.039451390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c82b4a793611ff08b70c519533317640642b875b41fdb1cdbd02b1dcc7eb3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c82b4a793611ff08b70c519533317640642b875b41fdb1cdbd02b1dcc7eb3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c82b4a793611ff08b70c519533317640642b875b41fdb1cdbd02b1dcc7eb3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c82b4a793611ff08b70c519533317640642b875b41fdb1cdbd02b1dcc7eb3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c82b4a793611ff08b70c519533317640642b875b41fdb1cdbd02b1dcc7eb3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:41 np0005539563 podman[363650]: 2025-11-29 08:34:41.417850486 +0000 UTC m=+0.191538821 container init 45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:34:41 np0005539563 podman[363650]: 2025-11-29 08:34:41.432520353 +0000 UTC m=+0.206208658 container start 45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:41 np0005539563 podman[363650]: 2025-11-29 08:34:41.444368034 +0000 UTC m=+0.218056399 container attach 45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477834669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.841 252257 DEBUG oslo_concurrency.processutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.849 252257 DEBUG nova.compute.provider_tree [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:41 np0005539563 nova_compute[252253]: 2025-11-29 08:34:41.949 252257 DEBUG nova.scheduler.client.report [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:42 np0005539563 nova_compute[252253]: 2025-11-29 08:34:42.049 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:42 np0005539563 nova_compute[252253]: 2025-11-29 08:34:42.082 252257 INFO nova.scheduler.client.report [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Deleted allocations for instance 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd#033[00m
Nov 29 03:34:42 np0005539563 condescending_elbakyan[363667]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:34:42 np0005539563 condescending_elbakyan[363667]: --> relative data size: 1.0
Nov 29 03:34:42 np0005539563 condescending_elbakyan[363667]: --> All data devices are unavailable
Nov 29 03:34:42 np0005539563 systemd[1]: libpod-45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2.scope: Deactivated successfully.
Nov 29 03:34:42 np0005539563 conmon[363667]: conmon 45e3c086eff4377f5b27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2.scope/container/memory.events
Nov 29 03:34:42 np0005539563 podman[363650]: 2025-11-29 08:34:42.327985442 +0000 UTC m=+1.101673747 container died 45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:34:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-61c82b4a793611ff08b70c519533317640642b875b41fdb1cdbd02b1dcc7eb3e-merged.mount: Deactivated successfully.
Nov 29 03:34:42 np0005539563 nova_compute[252253]: 2025-11-29 08:34:42.373 252257 DEBUG oslo_concurrency.lockutils [None req-bbd2b6f6-6abd-40e4-b0ab-d849e8bdacbc 2ad82d69b01a4929b20a4d3c4dbe0135 340d97a89c434bedbead3110819c581d - - default default] Lock "6ba3ce9b-c4ad-471a-bba5-1755f9a9babd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:42 np0005539563 podman[363650]: 2025-11-29 08:34:42.395974334 +0000 UTC m=+1.169662629 container remove 45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:34:42 np0005539563 systemd[1]: libpod-conmon-45e3c086eff4377f5b27e94ea4cb21f535fc60fb39675b6a843d6bd56f7d96b2.scope: Deactivated successfully.
Nov 29 03:34:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:42.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 22 KiB/s wr, 46 op/s
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.099845433 +0000 UTC m=+0.054454265 container create b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 03:34:43 np0005539563 systemd[1]: Started libpod-conmon-b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050.scope.
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.075937205 +0000 UTC m=+0.030546057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.200645734 +0000 UTC m=+0.155254556 container init b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.209928975 +0000 UTC m=+0.164537787 container start b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.214349896 +0000 UTC m=+0.168958708 container attach b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:34:43 np0005539563 mystifying_albattani[363875]: 167 167
Nov 29 03:34:43 np0005539563 systemd[1]: libpod-b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050.scope: Deactivated successfully.
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.217936522 +0000 UTC m=+0.172545334 container died b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:34:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-93417002f59750daf10e869d5dba1a37886fcdd04c0d2f2452c6fc7d10c98b4a-merged.mount: Deactivated successfully.
Nov 29 03:34:43 np0005539563 podman[363858]: 2025-11-29 08:34:43.260594738 +0000 UTC m=+0.215203550 container remove b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:34:43 np0005539563 systemd[1]: libpod-conmon-b2185557ea0e179ae1b870941f88f6a2674c555669284dd105a276275feb1050.scope: Deactivated successfully.
Nov 29 03:34:43 np0005539563 podman[363900]: 2025-11-29 08:34:43.435269481 +0000 UTC m=+0.045100813 container create 5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:43 np0005539563 systemd[1]: Started libpod-conmon-5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71.scope.
Nov 29 03:34:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73f5af2ddc68f79b99d6626e98b2391f6de69c171c53c68c94b19dd412a0246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73f5af2ddc68f79b99d6626e98b2391f6de69c171c53c68c94b19dd412a0246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:43 np0005539563 podman[363900]: 2025-11-29 08:34:43.416264986 +0000 UTC m=+0.026096348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73f5af2ddc68f79b99d6626e98b2391f6de69c171c53c68c94b19dd412a0246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73f5af2ddc68f79b99d6626e98b2391f6de69c171c53c68c94b19dd412a0246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:43 np0005539563 podman[363900]: 2025-11-29 08:34:43.524378815 +0000 UTC m=+0.134210197 container init 5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:34:43 np0005539563 podman[363900]: 2025-11-29 08:34:43.53235613 +0000 UTC m=+0.142187462 container start 5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:34:43 np0005539563 podman[363900]: 2025-11-29 08:34:43.535454354 +0000 UTC m=+0.145285726 container attach 5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:43 np0005539563 nova_compute[252253]: 2025-11-29 08:34:43.914 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]: {
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:    "0": [
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:        {
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "devices": [
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "/dev/loop3"
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            ],
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "lv_name": "ceph_lv0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "lv_size": "7511998464",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "name": "ceph_lv0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "tags": {
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.cluster_name": "ceph",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.crush_device_class": "",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.encrypted": "0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.osd_id": "0",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.type": "block",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:                "ceph.vdo": "0"
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            },
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "type": "block",
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:            "vg_name": "ceph_vg0"
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:        }
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]:    ]
Nov 29 03:34:44 np0005539563 busy_mccarthy[363916]: }
Nov 29 03:34:44 np0005539563 systemd[1]: libpod-5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71.scope: Deactivated successfully.
Nov 29 03:34:44 np0005539563 podman[363900]: 2025-11-29 08:34:44.376826739 +0000 UTC m=+0.986658071 container died 5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f73f5af2ddc68f79b99d6626e98b2391f6de69c171c53c68c94b19dd412a0246-merged.mount: Deactivated successfully.
Nov 29 03:34:44 np0005539563 podman[363900]: 2025-11-29 08:34:44.446182038 +0000 UTC m=+1.056013380 container remove 5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mccarthy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:34:44 np0005539563 systemd[1]: libpod-conmon-5315be8fab49f26958d20a4ea8d4cd2f9f4e2b848576310e6e8aee93fd118f71.scope: Deactivated successfully.
Nov 29 03:34:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Nov 29 03:34:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 03:34:44 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Nov 29 03:34:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:44.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 17 KiB/s wr, 16 op/s
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.132031268 +0000 UTC m=+0.041990718 container create 1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:34:45 np0005539563 systemd[1]: Started libpod-conmon-1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6.scope.
Nov 29 03:34:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.112837149 +0000 UTC m=+0.022796639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.212792246 +0000 UTC m=+0.122751736 container init 1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.225045389 +0000 UTC m=+0.135004849 container start 1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.228115102 +0000 UTC m=+0.138074572 container attach 1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:34:45 np0005539563 competent_curran[364146]: 167 167
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.229249062 +0000 UTC m=+0.139208522 container died 1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:34:45 np0005539563 systemd[1]: libpod-1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6.scope: Deactivated successfully.
Nov 29 03:34:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2112d38d6f0e0e67246d1e2fcc78f0607479f61f90bac8a14e1933f5fb49184c-merged.mount: Deactivated successfully.
Nov 29 03:34:45 np0005539563 podman[364130]: 2025-11-29 08:34:45.262837412 +0000 UTC m=+0.172796872 container remove 1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:34:45 np0005539563 systemd[1]: libpod-conmon-1097039b41437606cdc2af743d3aac6e3a95dedf8b6d093e12574df7ff4effd6.scope: Deactivated successfully.
Nov 29 03:34:45 np0005539563 nova_compute[252253]: 2025-11-29 08:34:45.305 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:45 np0005539563 podman[364170]: 2025-11-29 08:34:45.505557919 +0000 UTC m=+0.101578644 container create 7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:45 np0005539563 podman[364170]: 2025-11-29 08:34:45.435952813 +0000 UTC m=+0.031973618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:45 np0005539563 systemd[1]: Started libpod-conmon-7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d.scope.
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292646434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/292646434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:34:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:34:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416a7bd410b26de555b2351d4d297b55cc1d7c0da36718f697e28089249bffc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416a7bd410b26de555b2351d4d297b55cc1d7c0da36718f697e28089249bffc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416a7bd410b26de555b2351d4d297b55cc1d7c0da36718f697e28089249bffc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/416a7bd410b26de555b2351d4d297b55cc1d7c0da36718f697e28089249bffc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:45 np0005539563 podman[364170]: 2025-11-29 08:34:45.585650268 +0000 UTC m=+0.181671023 container init 7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:34:45 np0005539563 podman[364170]: 2025-11-29 08:34:45.592007681 +0000 UTC m=+0.188028406 container start 7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:34:45 np0005539563 podman[364170]: 2025-11-29 08:34:45.595171276 +0000 UTC m=+0.191192001 container attach 7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 03:34:45 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]: {
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:        "osd_id": 0,
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:        "type": "bluestore"
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]:    }
Nov 29 03:34:46 np0005539563 funny_sinoussi[364188]: }
Nov 29 03:34:46 np0005539563 systemd[1]: libpod-7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d.scope: Deactivated successfully.
Nov 29 03:34:46 np0005539563 podman[364170]: 2025-11-29 08:34:46.522134319 +0000 UTC m=+1.118155044 container died 7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:34:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-416a7bd410b26de555b2351d4d297b55cc1d7c0da36718f697e28089249bffc6-merged.mount: Deactivated successfully.
Nov 29 03:34:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:46 np0005539563 podman[364170]: 2025-11-29 08:34:46.581286442 +0000 UTC m=+1.177307167 container remove 7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_sinoussi, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:34:46 np0005539563 systemd[1]: libpod-conmon-7b64dd847e52832c0001d88080e120d9e06cee68f86ad810e1327b93bb02ee8d.scope: Deactivated successfully.
Nov 29 03:34:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:34:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:34:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:46.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 38453a17-682a-453d-b5cb-10e90779508c does not exist
Nov 29 03:34:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d3ab8aee-5496-44d8-9cd3-efdddf6f4d66 does not exist
Nov 29 03:34:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 43fb3726-f03d-40fd-96b6-ea5432585b8d does not exist
Nov 29 03:34:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 4.9 KiB/s wr, 18 op/s
Nov 29 03:34:46 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:34:48 np0005539563 nova_compute[252253]: 2025-11-29 08:34:48.558 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405273.5572286, 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:48 np0005539563 nova_compute[252253]: 2025-11-29 08:34:48.559 252257 INFO nova.compute.manager [-] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:34:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:48.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:48 np0005539563 nova_compute[252253]: 2025-11-29 08:34:48.581 252257 DEBUG nova.compute.manager [None req-f4f5f2f6-16b0-4eae-81d5-4b6158489288 - - - - - -] [instance: 6ba3ce9b-c4ad-471a-bba5-1755f9a9babd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 232 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 8.0 KiB/s wr, 77 op/s
Nov 29 03:34:48 np0005539563 nova_compute[252253]: 2025-11-29 08:34:48.954 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Nov 29 03:34:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 03:34:49 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Nov 29 03:34:50 np0005539563 nova_compute[252253]: 2025-11-29 08:34:50.310 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:50.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 14 KiB/s wr, 124 op/s
Nov 29 03:34:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:52.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 10 KiB/s wr, 95 op/s
Nov 29 03:34:53 np0005539563 nova_compute[252253]: 2025-11-29 08:34:53.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:34:53 np0005539563 nova_compute[252253]: 2025-11-29 08:34:53.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Nov 29 03:34:54 np0005539563 podman[364278]: 2025-11-29 08:34:54.506430226 +0000 UTC m=+0.062338460 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:34:54 np0005539563 podman[364277]: 2025-11-29 08:34:54.530513758 +0000 UTC m=+0.086590907 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 29 03:34:54 np0005539563 podman[364279]: 2025-11-29 08:34:54.532648656 +0000 UTC m=+0.084538481 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:34:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:34:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:54.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:34:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 03:34:54 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Nov 29 03:34:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:54.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 157 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 18 KiB/s wr, 105 op/s
Nov 29 03:34:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:55.143 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:55 np0005539563 nova_compute[252253]: 2025-11-29 08:34:55.143 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:55.144 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:34:55 np0005539563 nova_compute[252253]: 2025-11-29 08:34:55.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:34:56.145 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:56.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 15 KiB/s wr, 45 op/s
Nov 29 03:34:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:58.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:34:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:58.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 53 op/s
Nov 29 03:34:58 np0005539563 nova_compute[252253]: 2025-11-29 08:34:58.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:00 np0005539563 nova_compute[252253]: 2025-11-29 08:35:00.315 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:00.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:00.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 7.4 KiB/s wr, 35 op/s
Nov 29 03:35:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:02.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 7.4 KiB/s wr, 35 op/s
Nov 29 03:35:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087804535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087804535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:03 np0005539563 nova_compute[252253]: 2025-11-29 08:35:03.982 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:04.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:04.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 29 03:35:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:04.938 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:04.939 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:04.939 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:05 np0005539563 nova_compute[252253]: 2025-11-29 08:35:05.318 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:05 np0005539563 nova_compute[252253]: 2025-11-29 08:35:05.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:05 np0005539563 nova_compute[252253]: 2025-11-29 08:35:05.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.373 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.373 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.408 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.500 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.501 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.512 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.513 252257 INFO nova.compute.claims [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:35:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:06 np0005539563 nova_compute[252253]: 2025-11-29 08:35:06.638 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:06.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1023 B/s wr, 39 op/s
Nov 29 03:35:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388779141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.095 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.102 252257 DEBUG nova.compute.provider_tree [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.124 252257 DEBUG nova.scheduler.client.report [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.150 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.151 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.203 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.204 252257 DEBUG nova.network.neutron [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.226 252257 INFO nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.241 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.336 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.338 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.339 252257 INFO nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Creating image(s)#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.363 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.389 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.413 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.417 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.523 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.524 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.526 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.526 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.567 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.575 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.745 252257 DEBUG nova.policy [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.903 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:07 np0005539563 nova_compute[252253]: 2025-11-29 08:35:07.996 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.127 252257 DEBUG nova.objects.instance [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.152 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.153 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Ensure instance console log exists: /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.154 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.154 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.154 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:08.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 142 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 924 KiB/s wr, 51 op/s
Nov 29 03:35:08 np0005539563 nova_compute[252253]: 2025-11-29 08:35:08.984 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539563 nova_compute[252253]: 2025-11-29 08:35:09.390 252257 DEBUG nova.network.neutron [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Successfully created port: 56a99c82-c7f3-45ce-8952-bb1fdd178381 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.617440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309617470, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 600, "num_deletes": 253, "total_data_size": 664330, "memory_usage": 676000, "flush_reason": "Manual Compaction"}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309623024, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 503252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59280, "largest_seqno": 59879, "table_properties": {"data_size": 500227, "index_size": 932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8401, "raw_average_key_size": 21, "raw_value_size": 493784, "raw_average_value_size": 1253, "num_data_blocks": 40, "num_entries": 394, "num_filter_entries": 394, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405279, "oldest_key_time": 1764405279, "file_creation_time": 1764405309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 5628 microseconds, and 2110 cpu microseconds.
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.623067) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 503252 bytes OK
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.623088) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.624471) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.624486) EVENT_LOG_v1 {"time_micros": 1764405309624480, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.624504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 661029, prev total WAL file size 661029, number of live WAL files 2.
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.625020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303134' seq:72057594037927935, type:22 .. '6D6772737461740032323636' seq:0, type:0; will stop at (end)
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(491KB)], [128(14MB)]
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309625073, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 15290829, "oldest_snapshot_seqno": -1}
Nov 29 03:35:09 np0005539563 nova_compute[252253]: 2025-11-29 08:35:09.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 9373 keys, 11574571 bytes, temperature: kUnknown
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309714990, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11574571, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11514395, "index_size": 35615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 245961, "raw_average_key_size": 26, "raw_value_size": 11350037, "raw_average_value_size": 1210, "num_data_blocks": 1363, "num_entries": 9373, "num_filter_entries": 9373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405309, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.715249) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11574571 bytes
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.722721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.9 rd, 128.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(53.4) write-amplify(23.0) OK, records in: 9885, records dropped: 512 output_compression: NoCompression
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.722753) EVENT_LOG_v1 {"time_micros": 1764405309722730, "job": 78, "event": "compaction_finished", "compaction_time_micros": 90004, "compaction_time_cpu_micros": 29695, "output_level": 6, "num_output_files": 1, "total_output_size": 11574571, "num_input_records": 9885, "num_output_records": 9373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309722934, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405309725355, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.624959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.725457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.725464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.725465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.725467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:09 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:09.725469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.161 252257 DEBUG nova.network.neutron [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Successfully updated port: 56a99c82-c7f3-45ce-8952-bb1fdd178381 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.181 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.181 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.181 252257 DEBUG nova.network.neutron [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.321 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.394 252257 DEBUG nova.compute.manager [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.394 252257 DEBUG nova.compute.manager [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.395 252257 DEBUG oslo_concurrency.lockutils [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.465 252257 DEBUG nova.network.neutron [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:35:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:10 np0005539563 nova_compute[252253]: 2025-11-29 08:35:10.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:10.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.655 252257 DEBUG nova.network.neutron [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.690 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.691 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance network_info: |[{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.691 252257 DEBUG oslo_concurrency.lockutils [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.692 252257 DEBUG nova.network.neutron [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.696 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start _get_guest_xml network_info=[{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.701 252257 WARNING nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.707 252257 DEBUG nova.virt.libvirt.host [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.708 252257 DEBUG nova.virt.libvirt.host [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.712 252257 DEBUG nova.virt.libvirt.host [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.712 252257 DEBUG nova.virt.libvirt.host [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.713 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.713 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.714 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.714 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.714 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.714 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.715 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.715 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.715 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.715 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.716 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.716 252257 DEBUG nova.virt.hardware [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:35:11 np0005539563 nova_compute[252253]: 2025-11-29 08:35:11.718 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400392762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.155 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.185 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.190 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:12.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431350111' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.644 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.646 252257 DEBUG nova.virt.libvirt.vif [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:07Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.646 252257 DEBUG nova.network.os_vif_util [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.647 252257 DEBUG nova.network.os_vif_util [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.648 252257 DEBUG nova.objects.instance [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.668 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <uuid>e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</uuid>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <name>instance-000000b1</name>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1486764034</nova:name>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:35:11</nova:creationTime>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <nova:port uuid="56a99c82-c7f3-45ce-8952-bb1fdd178381">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <entry name="serial">e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</entry>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <entry name="uuid">e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</entry>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:dc:ee:a4"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <target dev="tap56a99c82-c7"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/console.log" append="off"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:35:12 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:35:12 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:35:12 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:35:12 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.670 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Preparing to wait for external event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.671 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.671 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.671 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.672 252257 DEBUG nova.virt.libvirt.vif [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:07Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.672 252257 DEBUG nova.network.os_vif_util [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.673 252257 DEBUG nova.network.os_vif_util [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.673 252257 DEBUG os_vif [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.675 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.675 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.679 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56a99c82-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.680 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56a99c82-c7, col_values=(('external_ids', {'iface-id': '56a99c82-c7f3-45ce-8952-bb1fdd178381', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:ee:a4', 'vm-uuid': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:12 np0005539563 NetworkManager[48981]: <info>  [1764405312.6832] manager: (tap56a99c82-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.684 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.689 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.691 252257 INFO os_vif [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7')#033[00m
Nov 29 03:35:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:12.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.759 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.760 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.760 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:dc:ee:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.761 252257 INFO nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Using config drive#033[00m
Nov 29 03:35:12 np0005539563 nova_compute[252253]: 2025-11-29 08:35:12.787 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 03:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:35:12
Nov 29 03:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'images', 'vms', 'default.rgw.meta', 'default.rgw.log', 'volumes']
Nov 29 03:35:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.704 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.872 252257 DEBUG nova.network.neutron [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.873 252257 DEBUG nova.network.neutron [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.893 252257 DEBUG oslo_concurrency.lockutils [req-85048727-bda3-4077-b481-754fc6870827 req-3dcb7048-d9a8-4fa3-8275-243b5deb0569 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.981 252257 INFO nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Creating config drive at /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/disk.config#033[00m
Nov 29 03:35:13 np0005539563 nova_compute[252253]: 2025-11-29 08:35:13.987 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppbo33yzh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.022 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.133 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppbo33yzh" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.167 252257 DEBUG nova.storage.rbd_utils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.172 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/disk.config e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.348 252257 DEBUG oslo_concurrency.processutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/disk.config e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.351 252257 INFO nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Deleting local config drive /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:35:14 np0005539563 kernel: tap56a99c82-c7: entered promiscuous mode
Nov 29 03:35:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:14Z|00753|binding|INFO|Claiming lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 for this chassis.
Nov 29 03:35:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:14Z|00754|binding|INFO|56a99c82-c7f3-45ce-8952-bb1fdd178381: Claiming fa:16:3e:dc:ee:a4 10.100.0.11
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.411 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 NetworkManager[48981]: <info>  [1764405314.4120] manager: (tap56a99c82-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.418 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.424 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ee:a4 10.100.0.11'], port_security=['fa:16:3e:dc:ee:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd444d77c-01c4-4fb8-ba04-a10761695979', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0167e73-34b5-4b34-9484-783b07e45b22, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=56a99c82-c7f3-45ce-8952-bb1fdd178381) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.426 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 in datapath e259a30d-7e3f-48b9-abdf-dc7aa571c14c bound to our chassis#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.427 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e259a30d-7e3f-48b9-abdf-dc7aa571c14c#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.439 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c9823161-911b-4586-ac5a-f823a0dea60d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.440 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape259a30d-71 in ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.442 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape259a30d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.442 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae95916-115c-41b4-bc10-43f478b91357]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.443 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e41aa4-b8c8-4733-a053-08a4576a02b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 systemd-udevd[364728]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:35:14 np0005539563 systemd-machined[213024]: New machine qemu-87-instance-000000b1.
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.455 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e86f86-0def-44af-908c-8d3ee4d24c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 NetworkManager[48981]: <info>  [1764405314.4591] device (tap56a99c82-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:35:14 np0005539563 NetworkManager[48981]: <info>  [1764405314.4599] device (tap56a99c82-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:35:14 np0005539563 systemd[1]: Started Virtual Machine qemu-87-instance-000000b1.
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.470 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4d1024-bfd5-4447-a3ff-e88658170345]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:14Z|00755|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 ovn-installed in OVS
Nov 29 03:35:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:14Z|00756|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 up in Southbound
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.490 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.509 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d807c0dd-5f37-4113-8624-862263d86139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 systemd-udevd[364732]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:35:14 np0005539563 NetworkManager[48981]: <info>  [1764405314.5158] manager: (tape259a30d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/325)
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.514 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[89bc86d2-e5fe-4236-91f1-d2e6ba952d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.546 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9dcad193-4617-4cb0-987f-27435b845b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.550 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b8666b7f-5327-4d12-8cd3-cc5f42f68d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 NetworkManager[48981]: <info>  [1764405314.5712] device (tape259a30d-70): carrier: link connected
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.576 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d578ed-d499-45f0-adfb-4cd367bd28ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.594 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[63997c98-bab5-4ef4-beb3-31662e9e2b4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape259a30d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:ff:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828234, 'reachable_time': 44468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364760, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:14.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.610 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb797f5-4008-4c3d-a7cf-5f916f72f6dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:ff36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 828234, 'tstamp': 828234}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364761, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.625 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2517565d-439d-4771-b0e9-38baf36adfb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape259a30d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:ff:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828234, 'reachable_time': 44468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364762, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.653 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[04a5a01c-6fb2-4b3a-a656-cbf472af0651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.712 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.712 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.712 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:14.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.828 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8a04e90a-0112-4a15-b003-b46583a3bc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.831 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape259a30d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.831 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.831 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape259a30d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:14 np0005539563 NetworkManager[48981]: <info>  [1764405314.8336] manager: (tape259a30d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Nov 29 03:35:14 np0005539563 kernel: tape259a30d-70: entered promiscuous mode
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.833 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.835 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape259a30d-70, col_values=(('external_ids', {'iface-id': '26e698e1-ae82-4653-b3c1-2c8f8d7f1139'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:14Z|00757|binding|INFO|Releasing lport 26e698e1-ae82-4653-b3c1-2c8f8d7f1139 from this chassis (sb_readonly=0)
Nov 29 03:35:14 np0005539563 nova_compute[252253]: 2025-11-29 08:35:14.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.850 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.851 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[939cf342-fde2-4ab3-a939-e3407cda1d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.852 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-e259a30d-7e3f-48b9-abdf-dc7aa571c14c
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID e259a30d-7e3f-48b9-abdf-dc7aa571c14c
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:35:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:14.852 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'env', 'PROCESS_TAG=haproxy-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.093 252257 DEBUG nova.compute.manager [req-b922a148-ccd4-4f1d-8e6c-da8933fd22ee req-ec544f39-02f8-4e3d-bd89-5ca74d1c212f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.093 252257 DEBUG oslo_concurrency.lockutils [req-b922a148-ccd4-4f1d-8e6c-da8933fd22ee req-ec544f39-02f8-4e3d-bd89-5ca74d1c212f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.094 252257 DEBUG oslo_concurrency.lockutils [req-b922a148-ccd4-4f1d-8e6c-da8933fd22ee req-ec544f39-02f8-4e3d-bd89-5ca74d1c212f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.094 252257 DEBUG oslo_concurrency.lockutils [req-b922a148-ccd4-4f1d-8e6c-da8933fd22ee req-ec544f39-02f8-4e3d-bd89-5ca74d1c212f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.094 252257 DEBUG nova.compute.manager [req-b922a148-ccd4-4f1d-8e6c-da8933fd22ee req-ec544f39-02f8-4e3d-bd89-5ca74d1c212f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Processing event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:35:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247046449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.159 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:15 np0005539563 podman[364815]: 2025-11-29 08:35:15.193767832 +0000 UTC m=+0.049976685 container create 8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.213 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.213 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:15 np0005539563 systemd[1]: Started libpod-conmon-8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa.scope.
Nov 29 03:35:15 np0005539563 podman[364815]: 2025-11-29 08:35:15.166464713 +0000 UTC m=+0.022673576 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:35:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce64f81134d70ac7a015c2fbcb46cad6b51ba4c47a350370c8bbee7edeb88b20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:15 np0005539563 podman[364815]: 2025-11-29 08:35:15.293160445 +0000 UTC m=+0.149369308 container init 8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:35:15 np0005539563 podman[364815]: 2025-11-29 08:35:15.301343327 +0000 UTC m=+0.157552180 container start 8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:35:15 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [NOTICE]   (364837) : New worker (364839) forked
Nov 29 03:35:15 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [NOTICE]   (364837) : Loading success.
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.394 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.395 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4097MB free_disk=20.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.395 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.396 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.481 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.482 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.482 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:35:15 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:35:15 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.508 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.735 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405315.7351015, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.736 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Started (Lifecycle Event)
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.738 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.741 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.744 252257 INFO nova.virt.libvirt.driver [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance spawned successfully.
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.744 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.752 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.754 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.765 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.766 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.766 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.767 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.768 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.768 252257 DEBUG nova.virt.libvirt.driver [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.813 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.814 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405315.7359955, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.814 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Paused (Lifecycle Event)
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.858 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.863 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405315.7402194, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.863 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Resumed (Lifecycle Event)
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.875 252257 INFO nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Took 8.54 seconds to spawn the instance on the hypervisor.
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.875 252257 DEBUG nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.886 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.889 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.918 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:35:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440048892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.943 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.954 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.956 252257 INFO nova.compute.manager [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Took 9.50 seconds to build instance.
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.973 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.978 252257 DEBUG oslo_concurrency.lockutils [None req-e206e44f-7304-4f42-a8ec-a6506d6d7659 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.996 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:35:15 np0005539563 nova_compute[252253]: 2025-11-29 08:35:15.997 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.327864) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316327934, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 322, "num_deletes": 251, "total_data_size": 116391, "memory_usage": 122808, "flush_reason": "Manual Compaction"}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316331693, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 115345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59880, "largest_seqno": 60201, "table_properties": {"data_size": 113274, "index_size": 234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5302, "raw_average_key_size": 18, "raw_value_size": 109211, "raw_average_value_size": 381, "num_data_blocks": 10, "num_entries": 286, "num_filter_entries": 286, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405310, "oldest_key_time": 1764405310, "file_creation_time": 1764405316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 3902 microseconds, and 1640 cpu microseconds.
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.331771) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 115345 bytes OK
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.331797) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.333792) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.333806) EVENT_LOG_v1 {"time_micros": 1764405316333801, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.333823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 114138, prev total WAL file size 114138, number of live WAL files 2.
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.334175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(112KB)], [131(11MB)]
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316334215, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11689916, "oldest_snapshot_seqno": -1}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 9149 keys, 9787854 bytes, temperature: kUnknown
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316426693, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9787854, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9730846, "index_size": 33023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 242009, "raw_average_key_size": 26, "raw_value_size": 9572010, "raw_average_value_size": 1046, "num_data_blocks": 1248, "num_entries": 9149, "num_filter_entries": 9149, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.427085) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9787854 bytes
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.429787) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 105.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(186.2) write-amplify(84.9) OK, records in: 9659, records dropped: 510 output_compression: NoCompression
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.429807) EVENT_LOG_v1 {"time_micros": 1764405316429799, "job": 80, "event": "compaction_finished", "compaction_time_micros": 92654, "compaction_time_cpu_micros": 24139, "output_level": 6, "num_output_files": 1, "total_output_size": 9787854, "num_input_records": 9659, "num_output_records": 9149, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316429954, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405316431719, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.334084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.431931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.431938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.431940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.431942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:35:16.431943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:35:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:16.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.265 252257 DEBUG nova.compute.manager [req-85bc3066-9845-48a0-8f13-567c8e1e4a06 req-8d71305f-9183-43b4-8f86-7713ee6a8920 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.265 252257 DEBUG oslo_concurrency.lockutils [req-85bc3066-9845-48a0-8f13-567c8e1e4a06 req-8d71305f-9183-43b4-8f86-7713ee6a8920 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.266 252257 DEBUG oslo_concurrency.lockutils [req-85bc3066-9845-48a0-8f13-567c8e1e4a06 req-8d71305f-9183-43b4-8f86-7713ee6a8920 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.267 252257 DEBUG oslo_concurrency.lockutils [req-85bc3066-9845-48a0-8f13-567c8e1e4a06 req-8d71305f-9183-43b4-8f86-7713ee6a8920 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.267 252257 DEBUG nova.compute.manager [req-85bc3066-9845-48a0-8f13-567c8e1e4a06 req-8d71305f-9183-43b4-8f86-7713ee6a8920 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.267 252257 WARNING nova.compute.manager [req-85bc3066-9845-48a0-8f13-567c8e1e4a06 req-8d71305f-9183-43b4-8f86-7713ee6a8920 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state None.
Nov 29 03:35:17 np0005539563 nova_compute[252253]: 2025-11-29 08:35:17.684 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:18.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 03:35:19 np0005539563 nova_compute[252253]: 2025-11-29 08:35:19.000 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:19 np0005539563 nova_compute[252253]: 2025-11-29 08:35:19.997 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:35:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:20.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:20.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 904 KiB/s wr, 87 op/s
Nov 29 03:35:21 np0005539563 nova_compute[252253]: 2025-11-29 08:35:21.138 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:21 np0005539563 NetworkManager[48981]: <info>  [1764405321.1395] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Nov 29 03:35:21 np0005539563 NetworkManager[48981]: <info>  [1764405321.1406] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Nov 29 03:35:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:21Z|00758|binding|INFO|Releasing lport 26e698e1-ae82-4653-b3c1-2c8f8d7f1139 from this chassis (sb_readonly=0)
Nov 29 03:35:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:21Z|00759|binding|INFO|Releasing lport 26e698e1-ae82-4653-b3c1-2c8f8d7f1139 from this chassis (sb_readonly=0)
Nov 29 03:35:21 np0005539563 nova_compute[252253]: 2025-11-29 08:35:21.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:21 np0005539563 nova_compute[252253]: 2025-11-29 08:35:21.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:35:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:22.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:22 np0005539563 nova_compute[252253]: 2025-11-29 08:35:22.687 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:22.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:35:24 np0005539563 nova_compute[252253]: 2025-11-29 08:35:24.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:24 np0005539563 podman[364943]: 2025-11-29 08:35:24.630495799 +0000 UTC m=+0.057246932 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:35:24 np0005539563 podman[364942]: 2025-11-29 08:35:24.65047764 +0000 UTC m=+0.073568004 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:24.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:24 np0005539563 podman[364944]: 2025-11-29 08:35:24.732444881 +0000 UTC m=+0.152581095 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:35:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 29 03:35:25 np0005539563 nova_compute[252253]: 2025-11-29 08:35:25.786 252257 DEBUG nova.compute.manager [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:25 np0005539563 nova_compute[252253]: 2025-11-29 08:35:25.787 252257 DEBUG nova.compute.manager [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:25 np0005539563 nova_compute[252253]: 2025-11-29 08:35:25.787 252257 DEBUG oslo_concurrency.lockutils [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:25 np0005539563 nova_compute[252253]: 2025-11-29 08:35:25.788 252257 DEBUG oslo_concurrency.lockutils [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:25 np0005539563 nova_compute[252253]: 2025-11-29 08:35:25.788 252257 DEBUG nova.network.neutron [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:26.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:35:27 np0005539563 nova_compute[252253]: 2025-11-29 08:35:27.690 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:28.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 510 KiB/s wr, 90 op/s
Nov 29 03:35:29 np0005539563 nova_compute[252253]: 2025-11-29 08:35:29.005 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:29Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:ee:a4 10.100.0.11
Nov 29 03:35:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:29Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:ee:a4 10.100.0.11
Nov 29 03:35:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:29 np0005539563 nova_compute[252253]: 2025-11-29 08:35:29.883 252257 DEBUG nova.network.neutron [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:29 np0005539563 nova_compute[252253]: 2025-11-29 08:35:29.884 252257 DEBUG nova.network.neutron [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:30.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 182 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 71 op/s
Nov 29 03:35:31 np0005539563 nova_compute[252253]: 2025-11-29 08:35:31.112 252257 DEBUG oslo_concurrency.lockutils [req-9c17ef21-c1da-4ec6-9843-4db08ab82dde req-14d6a6d7-1c10-4e11-a917-7dc072b99e7d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:32.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:32 np0005539563 nova_compute[252253]: 2025-11-29 08:35:32.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 182 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 208 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Nov 29 03:35:33 np0005539563 nova_compute[252253]: 2025-11-29 08:35:33.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:34 np0005539563 nova_compute[252253]: 2025-11-29 08:35:34.047 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:34.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:35:36 np0005539563 nova_compute[252253]: 2025-11-29 08:35:36.257 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:36 np0005539563 nova_compute[252253]: 2025-11-29 08:35:36.295 252257 INFO nova.compute.manager [None req-85a443c5-cfe6-492a-a6a4-ced30a381451 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Get console output#033[00m
Nov 29 03:35:36 np0005539563 nova_compute[252253]: 2025-11-29 08:35:36.303 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:35:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:35:37 np0005539563 nova_compute[252253]: 2025-11-29 08:35:37.724 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:38.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:38.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:35:39 np0005539563 nova_compute[252253]: 2025-11-29 08:35:39.050 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:40.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Nov 29 03:35:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:42 np0005539563 nova_compute[252253]: 2025-11-29 08:35:42.726 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:35:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:35:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 665 KiB/s wr, 26 op/s
Nov 29 03:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:43 np0005539563 nova_compute[252253]: 2025-11-29 08:35:43.857 252257 INFO nova.compute.manager [None req-219c6876-f89c-414d-830b-7b67463092bb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Get console output#033[00m
Nov 29 03:35:43 np0005539563 nova_compute[252253]: 2025-11-29 08:35:43.864 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:35:44 np0005539563 nova_compute[252253]: 2025-11-29 08:35:44.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 665 KiB/s wr, 26 op/s
Nov 29 03:35:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 03:35:47 np0005539563 nova_compute[252253]: 2025-11-29 08:35:47.729 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:35:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:48.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bd48a8b4-321f-44d1-9866-a561fc443ee8 does not exist
Nov 29 03:35:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bba6df49-679e-4334-8a11-ea3a95048156 does not exist
Nov 29 03:35:48 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 54f9c605-ccaa-4e3f-8ab4-9b14b05b0a5d does not exist
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:35:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:35:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:48.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 03:35:49 np0005539563 nova_compute[252253]: 2025-11-29 08:35:49.055 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.239164571 +0000 UTC m=+0.042042181 container create cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:35:49 np0005539563 systemd[1]: Started libpod-conmon-cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd.scope.
Nov 29 03:35:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.219019415 +0000 UTC m=+0.021897045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.438171313 +0000 UTC m=+0.241048953 container init cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.445141731 +0000 UTC m=+0.248019351 container start cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.449065297 +0000 UTC m=+0.251942907 container attach cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:35:49 np0005539563 affectionate_satoshi[365384]: 167 167
Nov 29 03:35:49 np0005539563 systemd[1]: libpod-cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd.scope: Deactivated successfully.
Nov 29 03:35:49 np0005539563 conmon[365384]: conmon cbced389ab7eb4652d0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd.scope/container/memory.events
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.454387761 +0000 UTC m=+0.257265371 container died cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:35:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7c206bae98a460df1ac3f8c5c003fa357d2b632c9e374940a846935f3c327ee5-merged.mount: Deactivated successfully.
Nov 29 03:35:49 np0005539563 podman[365367]: 2025-11-29 08:35:49.60529369 +0000 UTC m=+0.408171300 container remove cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:35:49 np0005539563 systemd[1]: libpod-conmon-cbced389ab7eb4652d0afefcfbfac1f4726a327ff0dfd69a03bf79f47f00e7cd.scope: Deactivated successfully.
Nov 29 03:35:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:49 np0005539563 podman[365410]: 2025-11-29 08:35:49.780513237 +0000 UTC m=+0.039787769 container create 114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:35:49 np0005539563 systemd[1]: Started libpod-conmon-114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467.scope.
Nov 29 03:35:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e540e87bc26cb9f43a5682b4061a66dc6a985710030d095e800e163b6832b8f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e540e87bc26cb9f43a5682b4061a66dc6a985710030d095e800e163b6832b8f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e540e87bc26cb9f43a5682b4061a66dc6a985710030d095e800e163b6832b8f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e540e87bc26cb9f43a5682b4061a66dc6a985710030d095e800e163b6832b8f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:49 np0005539563 podman[365410]: 2025-11-29 08:35:49.764216055 +0000 UTC m=+0.023490607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e540e87bc26cb9f43a5682b4061a66dc6a985710030d095e800e163b6832b8f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:50 np0005539563 podman[365410]: 2025-11-29 08:35:50.217368262 +0000 UTC m=+0.476642844 container init 114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:50 np0005539563 podman[365410]: 2025-11-29 08:35:50.225467492 +0000 UTC m=+0.484742034 container start 114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:35:50 np0005539563 podman[365410]: 2025-11-29 08:35:50.229535762 +0000 UTC m=+0.488810344 container attach 114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 29 03:35:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 03:35:51 np0005539563 frosty_swartz[365428]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:35:51 np0005539563 frosty_swartz[365428]: --> relative data size: 1.0
Nov 29 03:35:51 np0005539563 frosty_swartz[365428]: --> All data devices are unavailable
Nov 29 03:35:51 np0005539563 systemd[1]: libpod-114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467.scope: Deactivated successfully.
Nov 29 03:35:51 np0005539563 podman[365410]: 2025-11-29 08:35:51.093912279 +0000 UTC m=+1.353186801 container died 114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swartz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:35:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e540e87bc26cb9f43a5682b4061a66dc6a985710030d095e800e163b6832b8f0-merged.mount: Deactivated successfully.
Nov 29 03:35:51 np0005539563 podman[365410]: 2025-11-29 08:35:51.152665481 +0000 UTC m=+1.411940013 container remove 114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:35:51 np0005539563 systemd[1]: libpod-conmon-114f3adbf1f2868f66831c6825f0b3aa38474a569bfbe95c60512cb038cd9467.scope: Deactivated successfully.
Nov 29 03:35:51 np0005539563 nova_compute[252253]: 2025-11-29 08:35:51.648 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.706174456 +0000 UTC m=+0.036579492 container create 67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:35:51 np0005539563 systemd[1]: Started libpod-conmon-67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f.scope.
Nov 29 03:35:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.690526873 +0000 UTC m=+0.020931929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.787089229 +0000 UTC m=+0.117494285 container init 67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.792874955 +0000 UTC m=+0.123279991 container start 67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.796046841 +0000 UTC m=+0.126451877 container attach 67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:35:51 np0005539563 boring_newton[365614]: 167 167
Nov 29 03:35:51 np0005539563 systemd[1]: libpod-67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f.scope: Deactivated successfully.
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.798154338 +0000 UTC m=+0.128559384 container died 67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:35:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8bbb7f2d65f474a8a060623a22156ac3b1ec739a19ba46aaa7dfc62d2c3fd4b0-merged.mount: Deactivated successfully.
Nov 29 03:35:51 np0005539563 podman[365598]: 2025-11-29 08:35:51.841153153 +0000 UTC m=+0.171558199 container remove 67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:35:51 np0005539563 systemd[1]: libpod-conmon-67a93ab68fea48ba6ce0393de9cf6b9968ca48557bc8bc09aac7522b6295d91f.scope: Deactivated successfully.
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:52.009779141 +0000 UTC m=+0.044472255 container create 6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gauss, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:35:52 np0005539563 systemd[1]: Started libpod-conmon-6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc.scope.
Nov 29 03:35:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:51.991177848 +0000 UTC m=+0.025870952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbefb893c9a6138c3a8a6cd9306d46c7a53f6d6a2a421b3140a7bf50aa9192a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbefb893c9a6138c3a8a6cd9306d46c7a53f6d6a2a421b3140a7bf50aa9192a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbefb893c9a6138c3a8a6cd9306d46c7a53f6d6a2a421b3140a7bf50aa9192a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dbefb893c9a6138c3a8a6cd9306d46c7a53f6d6a2a421b3140a7bf50aa9192a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:52.108110226 +0000 UTC m=+0.142803400 container init 6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:52.118700652 +0000 UTC m=+0.153393766 container start 6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:52.123877642 +0000 UTC m=+0.158570766 container attach 6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gauss, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:35:52 np0005539563 nova_compute[252253]: 2025-11-29 08:35:52.434 252257 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:52 np0005539563 nova_compute[252253]: 2025-11-29 08:35:52.435 252257 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:52 np0005539563 nova_compute[252253]: 2025-11-29 08:35:52.435 252257 DEBUG nova.network.neutron [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:35:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:52.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:52 np0005539563 nova_compute[252253]: 2025-11-29 08:35:52.773 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]: {
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:    "0": [
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:        {
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "devices": [
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "/dev/loop3"
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            ],
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "lv_name": "ceph_lv0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "lv_size": "7511998464",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "name": "ceph_lv0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "tags": {
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.cluster_name": "ceph",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.crush_device_class": "",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.encrypted": "0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.osd_id": "0",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.type": "block",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:                "ceph.vdo": "0"
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            },
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "type": "block",
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:            "vg_name": "ceph_vg0"
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:        }
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]:    ]
Nov 29 03:35:52 np0005539563 laughing_gauss[365653]: }
Nov 29 03:35:52 np0005539563 systemd[1]: libpod-6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc.scope: Deactivated successfully.
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:52.911544451 +0000 UTC m=+0.946237545 container died 6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gauss, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:35:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5dbefb893c9a6138c3a8a6cd9306d46c7a53f6d6a2a421b3140a7bf50aa9192a-merged.mount: Deactivated successfully.
Nov 29 03:35:52 np0005539563 podman[365636]: 2025-11-29 08:35:52.974344113 +0000 UTC m=+1.009037187 container remove 6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:35:52 np0005539563 systemd[1]: libpod-conmon-6fce983509889d430603babdbe0d83aa2e5f4f4b4f7fb142cef3faf1dd22d4cc.scope: Deactivated successfully.
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.596503522 +0000 UTC m=+0.059581024 container create ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:35:53 np0005539563 systemd[1]: Started libpod-conmon-ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d.scope.
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.5700001 +0000 UTC m=+0.033077652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.710694545 +0000 UTC m=+0.173772097 container init ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.719453103 +0000 UTC m=+0.182530615 container start ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.723305168 +0000 UTC m=+0.186382690 container attach ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:35:53 np0005539563 gifted_curie[365832]: 167 167
Nov 29 03:35:53 np0005539563 systemd[1]: libpod-ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d.scope: Deactivated successfully.
Nov 29 03:35:53 np0005539563 conmon[365832]: conmon ade7fccae8a19f488667 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d.scope/container/memory.events
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.728987383 +0000 UTC m=+0.192064925 container died ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:35:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5e18e98ac5df997ae7459d66e6e694c8e91b1331be29825761b311acadabf1cc-merged.mount: Deactivated successfully.
Nov 29 03:35:53 np0005539563 podman[365816]: 2025-11-29 08:35:53.780388875 +0000 UTC m=+0.243466417 container remove ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:35:53 np0005539563 systemd[1]: libpod-conmon-ade7fccae8a19f488667ed850ffc71147264c69224f1fb68454d474a4607374d.scope: Deactivated successfully.
Nov 29 03:35:54 np0005539563 podman[365857]: 2025-11-29 08:35:53.974794993 +0000 UTC m=+0.031654634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:54 np0005539563 podman[365857]: 2025-11-29 08:35:54.150335807 +0000 UTC m=+0.207195448 container create 1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:35:54 np0005539563 nova_compute[252253]: 2025-11-29 08:35:54.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:54 np0005539563 systemd[1]: Started libpod-conmon-1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d.scope.
Nov 29 03:35:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:35:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3671c17bd54f282d7fa2ca199ed807e54f816efb27e50fa7d3f8f5f021e311/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3671c17bd54f282d7fa2ca199ed807e54f816efb27e50fa7d3f8f5f021e311/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3671c17bd54f282d7fa2ca199ed807e54f816efb27e50fa7d3f8f5f021e311/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a3671c17bd54f282d7fa2ca199ed807e54f816efb27e50fa7d3f8f5f021e311/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:54 np0005539563 podman[365857]: 2025-11-29 08:35:54.249176891 +0000 UTC m=+0.306036572 container init 1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sammet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:35:54 np0005539563 podman[365857]: 2025-11-29 08:35:54.263873291 +0000 UTC m=+0.320732892 container start 1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:35:54 np0005539563 podman[365857]: 2025-11-29 08:35:54.268624161 +0000 UTC m=+0.325483772 container attach 1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 03:35:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:35:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:35:54 np0005539563 nova_compute[252253]: 2025-11-29 08:35:54.778 252257 DEBUG nova.network.neutron [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:54 np0005539563 nova_compute[252253]: 2025-11-29 08:35:54.812 252257 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 03:35:54 np0005539563 nova_compute[252253]: 2025-11-29 08:35:54.930 252257 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:35:54 np0005539563 nova_compute[252253]: 2025-11-29 08:35:54.931 252257 DEBUG nova.virt.libvirt.volume.remotefs [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Creating file /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/afc0ac49a15f4f29a803cc242a6db6b0.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:35:54 np0005539563 nova_compute[252253]: 2025-11-29 08:35:54.931 252257 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/afc0ac49a15f4f29a803cc242a6db6b0.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]: {
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:        "osd_id": 0,
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:        "type": "bluestore"
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]:    }
Nov 29 03:35:55 np0005539563 relaxed_sammet[365874]: }
Nov 29 03:35:55 np0005539563 systemd[1]: libpod-1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d.scope: Deactivated successfully.
Nov 29 03:35:55 np0005539563 podman[365857]: 2025-11-29 08:35:55.155142752 +0000 UTC m=+1.212002353 container died 1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sammet, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:35:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2a3671c17bd54f282d7fa2ca199ed807e54f816efb27e50fa7d3f8f5f021e311-merged.mount: Deactivated successfully.
Nov 29 03:35:55 np0005539563 podman[365857]: 2025-11-29 08:35:55.21963994 +0000 UTC m=+1.276499541 container remove 1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:35:55 np0005539563 systemd[1]: libpod-conmon-1337b622091b006eb16e879b8ef7b5ebe9d20c261dd423b3f02f6ddb7a3f520d.scope: Deactivated successfully.
Nov 29 03:35:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:35:55 np0005539563 podman[365898]: 2025-11-29 08:35:55.265443339 +0000 UTC m=+0.077304058 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:35:55 np0005539563 podman[365905]: 2025-11-29 08:35:55.26548678 +0000 UTC m=+0.074041469 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:35:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7f7c1f64-d173-4e0e-bd23-1fe6d0022e1e does not exist
Nov 29 03:35:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 48ab8871-9bbf-4ebb-9afe-c123766f7eda does not exist
Nov 29 03:35:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 911832e7-647f-4ac1-a5a9-5eb959a0dc0a does not exist
Nov 29 03:35:55 np0005539563 podman[365906]: 2025-11-29 08:35:55.320667543 +0000 UTC m=+0.128694078 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:35:55 np0005539563 nova_compute[252253]: 2025-11-29 08:35:55.412 252257 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/afc0ac49a15f4f29a803cc242a6db6b0.tmp" returned: 1 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:55 np0005539563 nova_compute[252253]: 2025-11-29 08:35:55.414 252257 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/afc0ac49a15f4f29a803cc242a6db6b0.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:35:55 np0005539563 nova_compute[252253]: 2025-11-29 08:35:55.415 252257 DEBUG nova.virt.libvirt.volume.remotefs [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Creating directory /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:35:55 np0005539563 nova_compute[252253]: 2025-11-29 08:35:55.416 252257 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:55 np0005539563 nova_compute[252253]: 2025-11-29 08:35:55.655 252257 DEBUG oslo_concurrency.processutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:55 np0005539563 nova_compute[252253]: 2025-11-29 08:35:55.661 252257 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:35:56 np0005539563 nova_compute[252253]: 2025-11-29 08:35:56.116 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:35:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:56.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 03:35:57 np0005539563 nova_compute[252253]: 2025-11-29 08:35:57.811 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:57 np0005539563 nova_compute[252253]: 2025-11-29 08:35:57.939 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:57 np0005539563 kernel: tap56a99c82-c7 (unregistering): left promiscuous mode
Nov 29 03:35:57 np0005539563 NetworkManager[48981]: <info>  [1764405357.9789] device (tap56a99c82-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:57Z|00760|binding|INFO|Releasing lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 from this chassis (sb_readonly=0)
Nov 29 03:35:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:57Z|00761|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 down in Southbound
Nov 29 03:35:57 np0005539563 nova_compute[252253]: 2025-11-29 08:35:57.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:57 np0005539563 ovn_controller[148841]: 2025-11-29T08:35:57Z|00762|binding|INFO|Removing iface tap56a99c82-c7 ovn-installed in OVS
Nov 29 03:35:57 np0005539563 nova_compute[252253]: 2025-11-29 08:35:57.992 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.000 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ee:a4 10.100.0.11'], port_security=['fa:16:3e:dc:ee:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd444d77c-01c4-4fb8-ba04-a10761695979', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0167e73-34b5-4b34-9484-783b07e45b22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=56a99c82-c7f3-45ce-8952-bb1fdd178381) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.001 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 in datapath e259a30d-7e3f-48b9-abdf-dc7aa571c14c unbound from our chassis#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.003 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.006 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4c3a78-0f6f-48d4-8f5a-b6cb8397747a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.007 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c namespace which is not needed anymore#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.007 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Nov 29 03:35:58 np0005539563 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b1.scope: Consumed 15.843s CPU time.
Nov 29 03:35:58 np0005539563 systemd-machined[213024]: Machine qemu-87-instance-000000b1 terminated.
Nov 29 03:35:58 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [NOTICE]   (364837) : haproxy version is 2.8.14-c23fe91
Nov 29 03:35:58 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [NOTICE]   (364837) : path to executable is /usr/sbin/haproxy
Nov 29 03:35:58 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [WARNING]  (364837) : Exiting Master process...
Nov 29 03:35:58 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [ALERT]    (364837) : Current worker (364839) exited with code 143 (Terminated)
Nov 29 03:35:58 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[364833]: [WARNING]  (364837) : All workers exited. Exiting... (0)
Nov 29 03:35:58 np0005539563 systemd[1]: libpod-8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa.scope: Deactivated successfully.
Nov 29 03:35:58 np0005539563 podman[366049]: 2025-11-29 08:35:58.16407147 +0000 UTC m=+0.052956975 container died 8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:35:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ce64f81134d70ac7a015c2fbcb46cad6b51ba4c47a350370c8bbee7edeb88b20-merged.mount: Deactivated successfully.
Nov 29 03:35:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa-userdata-shm.mount: Deactivated successfully.
Nov 29 03:35:58 np0005539563 podman[366049]: 2025-11-29 08:35:58.201583333 +0000 UTC m=+0.090468798 container cleanup 8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:58 np0005539563 systemd[1]: libpod-conmon-8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa.scope: Deactivated successfully.
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 podman[366080]: 2025-11-29 08:35:58.272921256 +0000 UTC m=+0.047104745 container remove 8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.279 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc2aea4-4875-4164-96e2-cc85c7309128]: (4, ('Sat Nov 29 08:35:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c (8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa)\n8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa\nSat Nov 29 08:35:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c (8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa)\n8fe309ee2551d4f93d3179f83850d76e831a411dd98f0a179ded06ccba8abafa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.281 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa34169-5739-4daa-bf94-fc33917be626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.282 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape259a30d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 kernel: tape259a30d-70: left promiscuous mode
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.302 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.305 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4710bd-d6ba-4038-9d64-35b5bede214a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.322 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[11190552-78b3-4f5d-95ac-8ec995d3e5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.323 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e753e556-b4d6-404f-b049-f65d4326d9ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.341 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e590e7-4f0c-4d99-9588-1303096f8660]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828227, 'reachable_time': 16721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366106, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.343 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.344 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a540bc-cb21-4db9-86d2-30d35a625550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:58 np0005539563 systemd[1]: run-netns-ovnmeta\x2de259a30d\x2d7e3f\x2d48b9\x2dabdf\x2ddc7aa571c14c.mount: Deactivated successfully.
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.467 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.466 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:35:58.472 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:35:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:58.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.680 252257 INFO nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.686 252257 INFO nova.virt.libvirt.driver [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance destroyed successfully.#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.687 252257 DEBUG nova.virt.libvirt.vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:35:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:35:51Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.687 252257 DEBUG nova.network.os_vif_util [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--413829212", "vif_mac": "fa:16:3e:dc:ee:a4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.688 252257 DEBUG nova.network.os_vif_util [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.689 252257 DEBUG os_vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.691 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.691 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56a99c82-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.693 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.701 252257 INFO os_vif [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7')#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.707 252257 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.707 252257 DEBUG nova.virt.libvirt.driver [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:35:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:58.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 12 KiB/s wr, 2 op/s
Nov 29 03:35:58 np0005539563 nova_compute[252253]: 2025-11-29 08:35:58.954 252257 DEBUG neutronclient.v2_0.client [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:35:59 np0005539563 nova_compute[252253]: 2025-11-29 08:35:59.072 252257 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:59 np0005539563 nova_compute[252253]: 2025-11-29 08:35:59.072 252257 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:59 np0005539563 nova_compute[252253]: 2025-11-29 08:35:59.072 252257 DEBUG oslo_concurrency.lockutils [None req-c194419e-9f9b-4c67-881b-3eba93a8a280 7b4e953ac9d64d2b8bf3cbf02c1c9371 112d5da6ad864b7d8fdc6c67f60d3c7d - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:59 np0005539563 nova_compute[252253]: 2025-11-29 08:35:59.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:00.474 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:00.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:00.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 18 KiB/s wr, 3 op/s
Nov 29 03:36:01 np0005539563 nova_compute[252253]: 2025-11-29 08:36:01.076 252257 DEBUG nova.compute.manager [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:01 np0005539563 nova_compute[252253]: 2025-11-29 08:36:01.076 252257 DEBUG oslo_concurrency.lockutils [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:01 np0005539563 nova_compute[252253]: 2025-11-29 08:36:01.076 252257 DEBUG oslo_concurrency.lockutils [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:01 np0005539563 nova_compute[252253]: 2025-11-29 08:36:01.077 252257 DEBUG oslo_concurrency.lockutils [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:01 np0005539563 nova_compute[252253]: 2025-11-29 08:36:01.077 252257 DEBUG nova.compute.manager [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:01 np0005539563 nova_compute[252253]: 2025-11-29 08:36:01.077 252257 WARNING nova.compute.manager [req-5da8448f-80ae-44ea-ac11-203de8e6e93a req-eb368cb7-c7c7-46f0-a38b-f99b4cc2d4db 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:36:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:02 np0005539563 nova_compute[252253]: 2025-11-29 08:36:02.745 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:02.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 17 KiB/s wr, 3 op/s
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.264 252257 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.265 252257 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.266 252257 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.266 252257 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.266 252257 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.267 252257 WARNING nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.267 252257 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.268 252257 DEBUG nova.compute.manager [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.268 252257 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.268 252257 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.269 252257 DEBUG nova.network.neutron [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:03 np0005539563 nova_compute[252253]: 2025-11-29 08:36:03.694 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:04 np0005539563 nova_compute[252253]: 2025-11-29 08:36:04.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Nov 29 03:36:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 03:36:04 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Nov 29 03:36:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:04.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 20 KiB/s wr, 4 op/s
Nov 29 03:36:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:04.940 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:04.941 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:04.941 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:05 np0005539563 nova_compute[252253]: 2025-11-29 08:36:05.038 252257 DEBUG nova.network.neutron [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:05 np0005539563 nova_compute[252253]: 2025-11-29 08:36:05.038 252257 DEBUG nova.network.neutron [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:05 np0005539563 nova_compute[252253]: 2025-11-29 08:36:05.065 252257 DEBUG oslo_concurrency.lockutils [req-d949bb4e-0050-4f9a-98a5-c86a84e7ec51 req-a3251cb9-2861-4535-9748-c57d2c6d5a0f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:06 np0005539563 nova_compute[252253]: 2025-11-29 08:36:06.555 252257 DEBUG nova.compute.manager [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:06 np0005539563 nova_compute[252253]: 2025-11-29 08:36:06.557 252257 DEBUG oslo_concurrency.lockutils [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:06 np0005539563 nova_compute[252253]: 2025-11-29 08:36:06.558 252257 DEBUG oslo_concurrency.lockutils [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:06 np0005539563 nova_compute[252253]: 2025-11-29 08:36:06.558 252257 DEBUG oslo_concurrency.lockutils [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:06 np0005539563 nova_compute[252253]: 2025-11-29 08:36:06.559 252257 DEBUG nova.compute.manager [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:06 np0005539563 nova_compute[252253]: 2025-11-29 08:36:06.559 252257 WARNING nova.compute.manager [req-f9a21f27-aee5-4d0e-aa5b-7b9f01728cc6 req-5952340b-947f-46c5-93a0-e008fecd9ca4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:36:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:06.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 20 KiB/s wr, 16 op/s
Nov 29 03:36:07 np0005539563 nova_compute[252253]: 2025-11-29 08:36:07.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:07 np0005539563 nova_compute[252253]: 2025-11-29 08:36:07.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:36:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.720 252257 DEBUG nova.compute.manager [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.721 252257 DEBUG oslo_concurrency.lockutils [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.721 252257 DEBUG oslo_concurrency.lockutils [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.721 252257 DEBUG oslo_concurrency.lockutils [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.721 252257 DEBUG nova.compute.manager [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:08 np0005539563 nova_compute[252253]: 2025-11-29 08:36:08.721 252257 WARNING nova.compute.manager [req-462184b7-f9e8-49d9-856f-430fdbf72d02 req-f8ae818f-0745-4c97-958e-1bad7ef3c5ee 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:36:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 984 KiB/s rd, 7.2 KiB/s wr, 56 op/s
Nov 29 03:36:09 np0005539563 nova_compute[252253]: 2025-11-29 08:36:09.096 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:09 np0005539563 nova_compute[252253]: 2025-11-29 08:36:09.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 511 B/s wr, 97 op/s
Nov 29 03:36:11 np0005539563 nova_compute[252253]: 2025-11-29 08:36:11.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.711 252257 DEBUG nova.compute.manager [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.711 252257 DEBUG oslo_concurrency.lockutils [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.711 252257 DEBUG oslo_concurrency.lockutils [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.711 252257 DEBUG oslo_concurrency.lockutils [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.712 252257 DEBUG nova.compute.manager [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:12 np0005539563 nova_compute[252253]: 2025-11-29 08:36:12.712 252257 WARNING nova.compute.manager [req-6e9fa8d7-726b-4bf6-8107-8fedff1d2171 req-bbafa856-aa0c-4f25-9b3b-b550c7aa5ddc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:36:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:12.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 511 B/s wr, 97 op/s
Nov 29 03:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:36:12
Nov 29 03:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images', 'volumes']
Nov 29 03:36:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.233 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405358.2316904, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.233 252257 INFO nova.compute.manager [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.262 252257 DEBUG nova.compute.manager [None req-544eeded-5b07-4e25-ad6d-ec6b909c8591 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.265 252257 DEBUG nova.compute.manager [None req-544eeded-5b07-4e25-ad6d-ec6b909c8591 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2159527247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.296 252257 INFO nova.compute.manager [None req-544eeded-5b07-4e25-ad6d-ec6b909c8591 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.741 252257 INFO nova.compute.manager [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Swapping old allocation on dict_keys(['190eff98-dce8-46c0-8a7d-870d6fa5cbbd']) held by migration c2890240-6091-4f4d-923d-07b9818675b5 for instance#033[00m
Nov 29 03:36:13 np0005539563 nova_compute[252253]: 2025-11-29 08:36:13.801 252257 DEBUG nova.scheduler.client.report [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Overwriting current allocation {'allocations': {'ea190a43-1246-44b8-8f8b-a61b155a1d3b': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 83}}, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'user_id': '686f527a5723407b85ed34c8a312583f', 'consumer_generation': 1} on consumer e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.098 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.116 252257 INFO nova.network.neutron [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating port 56a99c82-c7f3-45ce-8952-bb1fdd178381 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:36:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:36:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:36:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.706 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.706 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 490 B/s wr, 99 op/s
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.927 252257 DEBUG nova.compute.manager [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.928 252257 DEBUG oslo_concurrency.lockutils [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.928 252257 DEBUG oslo_concurrency.lockutils [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.928 252257 DEBUG oslo_concurrency.lockutils [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.929 252257 DEBUG nova.compute.manager [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:14 np0005539563 nova_compute[252253]: 2025-11-29 08:36:14.929 252257 WARNING nova.compute.manager [req-44628e13-5b38-42e9-9251-58f284ce2f58 req-29dcbbc5-8a22-49b7-9664-a8453d88b5cc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:36:15 np0005539563 nova_compute[252253]: 2025-11-29 08:36:15.715 252257 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:15 np0005539563 nova_compute[252253]: 2025-11-29 08:36:15.943 252257 DEBUG nova.compute.manager [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:15 np0005539563 nova_compute[252253]: 2025-11-29 08:36:15.943 252257 DEBUG nova.compute.manager [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:15 np0005539563 nova_compute[252253]: 2025-11-29 08:36:15.944 252257 DEBUG oslo_concurrency.lockutils [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:36:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:16.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 89 op/s
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.822 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": null, "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap56a99c82-c7", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.880 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.881 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.882 252257 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.882 252257 DEBUG nova.network.neutron [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.884 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.886 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.923 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.924 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.924 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.925 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:36:17 np0005539563 nova_compute[252253]: 2025-11-29 08:36:17.925 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111071294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.456 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.538 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.538 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:36:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:18.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.708 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.709 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=20.942657470703125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.737 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.796 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.797 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.798 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:36:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 84 op/s
Nov 29 03:36:18 np0005539563 nova_compute[252253]: 2025-11-29 08:36:18.852 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:19 np0005539563 nova_compute[252253]: 2025-11-29 08:36:19.100 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692800489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:19 np0005539563 nova_compute[252253]: 2025-11-29 08:36:19.315 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:19 np0005539563 nova_compute[252253]: 2025-11-29 08:36:19.321 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:19 np0005539563 nova_compute[252253]: 2025-11-29 08:36:19.353 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:19 np0005539563 nova_compute[252253]: 2025-11-29 08:36:19.397 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:36:19 np0005539563 nova_compute[252253]: 2025-11-29 08:36:19.397 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.190 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.331 252257 DEBUG nova.network.neutron [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.368 252257 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.369 252257 DEBUG nova.virt.libvirt.driver [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.413 252257 DEBUG oslo_concurrency.lockutils [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.414 252257 DEBUG nova.network.neutron [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.462 252257 DEBUG nova.storage.rbd_utils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rolling back rbd image(e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.582 252257 DEBUG nova.storage.rbd_utils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] removing snapshot(nova-resize) on rbd image(e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:36:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Nov 29 03:36:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 03:36:20 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Nov 29 03:36:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:20.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.712 252257 DEBUG nova.virt.libvirt.driver [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start _get_guest_xml network_info=[{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.715 252257 WARNING nova.virt.libvirt.driver [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.720 252257 DEBUG nova.virt.libvirt.host [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.721 252257 DEBUG nova.virt.libvirt.host [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.725 252257 DEBUG nova.virt.libvirt.host [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.725 252257 DEBUG nova.virt.libvirt.host [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.727 252257 DEBUG nova.virt.libvirt.driver [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.727 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.728 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.728 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.728 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.728 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.729 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.729 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.729 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.730 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.730 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.730 252257 DEBUG nova.virt.hardware [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.731 252257 DEBUG nova.objects.instance [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:20 np0005539563 nova_compute[252253]: 2025-11-29 08:36:20.750 252257 DEBUG oslo_concurrency.processutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 2.2 MiB/s wr, 51 op/s
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.231 252257 DEBUG oslo_concurrency.processutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.274 252257 DEBUG oslo_concurrency.processutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567423183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.712 252257 DEBUG oslo_concurrency.processutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.714 252257 DEBUG nova.virt.libvirt.vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:08Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.714 252257 DEBUG nova.network.os_vif_util [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.715 252257 DEBUG nova.network.os_vif_util [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.718 252257 DEBUG nova.virt.libvirt.driver [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <uuid>e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</uuid>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <name>instance-000000b1</name>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1486764034</nova:name>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:36:20</nova:creationTime>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <nova:port uuid="56a99c82-c7f3-45ce-8952-bb1fdd178381">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <entry name="serial">e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</entry>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <entry name="uuid">e2ac4a3e-8e9f-481b-9493-37a7fcdddec0</entry>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_disk.config">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:dc:ee:a4"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <target dev="tap56a99c82-c7"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0/console.log" append="off"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:36:21 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:36:21 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:36:21 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:36:21 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.719 252257 DEBUG nova.compute.manager [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Preparing to wait for external event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.719 252257 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.720 252257 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.720 252257 DEBUG oslo_concurrency.lockutils [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.720 252257 DEBUG nova.virt.libvirt.vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:08Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.721 252257 DEBUG nova.network.os_vif_util [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.721 252257 DEBUG nova.network.os_vif_util [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.722 252257 DEBUG os_vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.723 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.723 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.726 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.727 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56a99c82-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.727 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56a99c82-c7, col_values=(('external_ids', {'iface-id': '56a99c82-c7f3-45ce-8952-bb1fdd178381', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:ee:a4', 'vm-uuid': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.768 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 NetworkManager[48981]: <info>  [1764405381.7692] manager: (tap56a99c82-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.771 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.773 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.774 252257 INFO os_vif [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7')#033[00m
Nov 29 03:36:21 np0005539563 kernel: tap56a99c82-c7: entered promiscuous mode
Nov 29 03:36:21 np0005539563 NetworkManager[48981]: <info>  [1764405381.8576] manager: (tap56a99c82-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Nov 29 03:36:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:21Z|00763|binding|INFO|Claiming lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 for this chassis.
Nov 29 03:36:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:21Z|00764|binding|INFO|56a99c82-c7f3-45ce-8952-bb1fdd178381: Claiming fa:16:3e:dc:ee:a4 10.100.0.11
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.859 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.869 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ee:a4 10.100.0.11'], port_security=['fa:16:3e:dc:ee:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'd444d77c-01c4-4fb8-ba04-a10761695979', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0167e73-34b5-4b34-9484-783b07e45b22, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=56a99c82-c7f3-45ce-8952-bb1fdd178381) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.870 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 in datapath e259a30d-7e3f-48b9-abdf-dc7aa571c14c bound to our chassis#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.871 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e259a30d-7e3f-48b9-abdf-dc7aa571c14c#033[00m
Nov 29 03:36:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:21Z|00765|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 ovn-installed in OVS
Nov 29 03:36:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:21Z|00766|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 up in Southbound
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 nova_compute[252253]: 2025-11-29 08:36:21.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.886 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f55b4617-89aa-40d9-ae9a-c56ad9b68c6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.887 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape259a30d-71 in ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.889 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape259a30d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.889 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8d9b34-a808-45be-99d8-4afb36273cd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.890 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9130cb23-51d5-451f-a526-0294868a755d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 systemd-udevd[366345]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:21 np0005539563 systemd-machined[213024]: New machine qemu-88-instance-000000b1.
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.903 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fd3204-9535-45ed-b1cf-0145cae7acf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 NetworkManager[48981]: <info>  [1764405381.9081] device (tap56a99c82-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:21 np0005539563 NetworkManager[48981]: <info>  [1764405381.9100] device (tap56a99c82-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:21 np0005539563 systemd[1]: Started Virtual Machine qemu-88-instance-000000b1.
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.928 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[abb8e6ff-7b78-4f8f-8cdd-ff2d58524e92]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.956 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[35b2a80f-3c21-40a4-a93a-5bbfdac3d31d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.962 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd3b5ff-132b-44d4-8ab3-59cc94eb2133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:21 np0005539563 systemd-udevd[366348]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:21 np0005539563 NetworkManager[48981]: <info>  [1764405381.9637] manager: (tape259a30d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/331)
Nov 29 03:36:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:21.998 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[52cc3e78-f3f2-4a27-8128-6c989cb4a5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.001 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[56776af5-e293-4285-8c38-f3bf9dda2634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 NetworkManager[48981]: <info>  [1764405382.0266] device (tape259a30d-70): carrier: link connected
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.032 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ac614bd6-be75-408f-a461-7604975b84ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.050 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f98df280-971a-4456-988a-c5b817a7c760]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape259a30d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:ff:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834980, 'reachable_time': 21645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366377, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.069 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6e5334-7527-4165-9cba-31d630f989e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:ff36'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 834980, 'tstamp': 834980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366378, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.091 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7748d6cc-a252-4acb-bd6a-798db7b5c25f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape259a30d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:ff:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834980, 'reachable_time': 21645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366379, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.126 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ca49aa-e1c9-4a4c-aaae-3db33568e34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.181 252257 DEBUG nova.compute.manager [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.182 252257 DEBUG oslo_concurrency.lockutils [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.181 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[60bbc9df-d76f-4dec-b0e1-b43b17a6d762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.182 252257 DEBUG oslo_concurrency.lockutils [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.182 252257 DEBUG oslo_concurrency.lockutils [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.182 252257 DEBUG nova.compute.manager [req-0a40aa66-01ca-4725-a579-fb8d21ed4cdf req-3173d7ab-076e-4877-90b6-e0afd477bb3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Processing event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.183 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape259a30d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.183 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.183 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape259a30d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539563 NetworkManager[48981]: <info>  [1764405382.1858] manager: (tape259a30d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Nov 29 03:36:22 np0005539563 kernel: tape259a30d-70: entered promiscuous mode
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.190 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape259a30d-70, col_values=(('external_ids', {'iface-id': '26e698e1-ae82-4653-b3c1-2c8f8d7f1139'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.192 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:36:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:22Z|00767|binding|INFO|Releasing lport 26e698e1-ae82-4653-b3c1-2c8f8d7f1139 from this chassis (sb_readonly=0)
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.193 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ef7b55-fda1-4e8c-8478-bfa785ac859b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.193 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-e259a30d-7e3f-48b9-abdf-dc7aa571c14c
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.pid.haproxy
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID e259a30d-7e3f-48b9-abdf-dc7aa571c14c
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:36:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:22.194 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'env', 'PROCESS_TAG=haproxy-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e259a30d-7e3f-48b9-abdf-dc7aa571c14c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.206 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.208 252257 DEBUG nova.network.neutron [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.218 252257 DEBUG nova.network.neutron [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.247 252257 DEBUG oslo_concurrency.lockutils [req-54e54a49-ed37-4973-bb61-7091ff87ad3b req-d274f428-80e4-4c39-a2ff-80af3e3bada5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.392 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405382.3920207, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.392 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.394 252257 DEBUG nova.compute.manager [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.400 252257 INFO nova.virt.libvirt.driver [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance running successfully.#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.401 252257 DEBUG nova.virt.libvirt.driver [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.442 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.446 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.492 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.494 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405382.3922558, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.494 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.525 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.528 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405382.397077, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.528 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.533 252257 INFO nova.compute.manager [None req-8b28db10-ee49-4251-863a-3f3e7649dcfc 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance to original state: 'active'#033[00m
Nov 29 03:36:22 np0005539563 podman[366453]: 2025-11-29 08:36:22.553290712 +0000 UTC m=+0.046579170 container create 0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.576 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.579 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:36:22 np0005539563 systemd[1]: Started libpod-conmon-0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1.scope.
Nov 29 03:36:22 np0005539563 nova_compute[252253]: 2025-11-29 08:36:22.609 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:36:22 np0005539563 podman[366453]: 2025-11-29 08:36:22.529374801 +0000 UTC m=+0.022663269 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:36:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e8fd4bfba5e93b606a4fbc7af59b03ab89c511a321bad759f942f04f7a5ded/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:22 np0005539563 podman[366453]: 2025-11-29 08:36:22.651401277 +0000 UTC m=+0.144689775 container init 0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:22 np0005539563 podman[366453]: 2025-11-29 08:36:22.65668558 +0000 UTC m=+0.149974048 container start 0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:36:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:22 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [NOTICE]   (366472) : New worker (366474) forked
Nov 29 03:36:22 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [NOTICE]   (366472) : Loading success.
Nov 29 03:36:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 2.2 MiB/s wr, 51 op/s
Nov 29 03:36:23 np0005539563 nova_compute[252253]: 2025-11-29 08:36:23.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031625139586823003 of space, bias 1.0, pg target 0.9487541876046901 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173410455053043 of space, bias 1.0, pg target 0.6520231365159129 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.145 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.582 252257 DEBUG nova.compute.manager [req-a67f08e1-43c4-42d1-99a6-35e1457d9a8f req-97a86aaf-f4ae-460f-a47d-f4c1bf568376 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.583 252257 DEBUG oslo_concurrency.lockutils [req-a67f08e1-43c4-42d1-99a6-35e1457d9a8f req-97a86aaf-f4ae-460f-a47d-f4c1bf568376 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.584 252257 DEBUG oslo_concurrency.lockutils [req-a67f08e1-43c4-42d1-99a6-35e1457d9a8f req-97a86aaf-f4ae-460f-a47d-f4c1bf568376 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.584 252257 DEBUG oslo_concurrency.lockutils [req-a67f08e1-43c4-42d1-99a6-35e1457d9a8f req-97a86aaf-f4ae-460f-a47d-f4c1bf568376 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.584 252257 DEBUG nova.compute.manager [req-a67f08e1-43c4-42d1-99a6-35e1457d9a8f req-97a86aaf-f4ae-460f-a47d-f4c1bf568376 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:24 np0005539563 nova_compute[252253]: 2025-11-29 08:36:24.585 252257 WARNING nova.compute.manager [req-a67f08e1-43c4-42d1-99a6-35e1457d9a8f req-97a86aaf-f4ae-460f-a47d-f4c1bf568376 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:36:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Nov 29 03:36:25 np0005539563 podman[366536]: 2025-11-29 08:36:25.507778456 +0000 UTC m=+0.064152079 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:25 np0005539563 podman[366535]: 2025-11-29 08:36:25.530620289 +0000 UTC m=+0.089379277 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:36:25 np0005539563 podman[366537]: 2025-11-29 08:36:25.531456702 +0000 UTC m=+0.084931686 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:36:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:26 np0005539563 nova_compute[252253]: 2025-11-29 08:36:26.767 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 29 03:36:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:28.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 877 KiB/s wr, 182 op/s
Nov 29 03:36:29 np0005539563 nova_compute[252253]: 2025-11-29 08:36:29.149 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Nov 29 03:36:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 03:36:29 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Nov 29 03:36:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:30.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:30.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 18 KiB/s wr, 182 op/s
Nov 29 03:36:31 np0005539563 nova_compute[252253]: 2025-11-29 08:36:31.770 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 18 KiB/s wr, 182 op/s
Nov 29 03:36:34 np0005539563 nova_compute[252253]: 2025-11-29 08:36:34.151 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:34.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 18 KiB/s wr, 159 op/s
Nov 29 03:36:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:35Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:ee:a4 10.100.0.11
Nov 29 03:36:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:36 np0005539563 nova_compute[252253]: 2025-11-29 08:36:36.772 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 19 KiB/s wr, 122 op/s
Nov 29 03:36:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:36:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:36:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 309 KiB/s wr, 91 op/s
Nov 29 03:36:39 np0005539563 nova_compute[252253]: 2025-11-29 08:36:39.154 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 128 op/s
Nov 29 03:36:41 np0005539563 nova_compute[252253]: 2025-11-29 08:36:41.774 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:42 np0005539563 nova_compute[252253]: 2025-11-29 08:36:42.548 252257 INFO nova.compute.manager [None req-e2d87a3b-19b6-4c02-9ba9-88ccc4cea2ab 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Get console output#033[00m
Nov 29 03:36:42 np0005539563 nova_compute[252253]: 2025-11-29 08:36:42.557 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:36:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:42.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 814 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Nov 29 03:36:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.157 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:44.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.721 252257 DEBUG nova.compute.manager [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.721 252257 DEBUG nova.compute.manager [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing instance network info cache due to event network-changed-56a99c82-c7f3-45ce-8952-bb1fdd178381. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.721 252257 DEBUG oslo_concurrency.lockutils [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.722 252257 DEBUG oslo_concurrency.lockutils [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.722 252257 DEBUG nova.network.neutron [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Refreshing network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 281 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 902 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.921 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.922 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.922 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.922 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.923 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.924 252257 INFO nova.compute.manager [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Terminating instance#033[00m
Nov 29 03:36:44 np0005539563 nova_compute[252253]: 2025-11-29 08:36:44.925 252257 DEBUG nova.compute.manager [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:36:45 np0005539563 kernel: tap56a99c82-c7 (unregistering): left promiscuous mode
Nov 29 03:36:45 np0005539563 NetworkManager[48981]: <info>  [1764405405.1785] device (tap56a99c82-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:45Z|00768|binding|INFO|Releasing lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 from this chassis (sb_readonly=0)
Nov 29 03:36:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:45Z|00769|binding|INFO|Setting lport 56a99c82-c7f3-45ce-8952-bb1fdd178381 down in Southbound
Nov 29 03:36:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:36:45Z|00770|binding|INFO|Removing iface tap56a99c82-c7 ovn-installed in OVS
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.196 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ee:a4 10.100.0.11'], port_security=['fa:16:3e:dc:ee:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2ac4a3e-8e9f-481b-9493-37a7fcdddec0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'd444d77c-01c4-4fb8-ba04-a10761695979', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0167e73-34b5-4b34-9484-783b07e45b22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=56a99c82-c7f3-45ce-8952-bb1fdd178381) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.198 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 56a99c82-c7f3-45ce-8952-bb1fdd178381 in datapath e259a30d-7e3f-48b9-abdf-dc7aa571c14c unbound from our chassis#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.199 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e259a30d-7e3f-48b9-abdf-dc7aa571c14c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.202 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ee982965-fb7c-4519-bf25-52d2383cc880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.203 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c namespace which is not needed anymore#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.212 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000b1.scope: Deactivated successfully.
Nov 29 03:36:45 np0005539563 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000b1.scope: Consumed 14.149s CPU time.
Nov 29 03:36:45 np0005539563 systemd-machined[213024]: Machine qemu-88-instance-000000b1 terminated.
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.350 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.357 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [NOTICE]   (366472) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:45 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [NOTICE]   (366472) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:45 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [WARNING]  (366472) : Exiting Master process...
Nov 29 03:36:45 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [WARNING]  (366472) : Exiting Master process...
Nov 29 03:36:45 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [ALERT]    (366472) : Current worker (366474) exited with code 143 (Terminated)
Nov 29 03:36:45 np0005539563 neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c[366468]: [WARNING]  (366472) : All workers exited. Exiting... (0)
Nov 29 03:36:45 np0005539563 systemd[1]: libpod-0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1.scope: Deactivated successfully.
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.367 252257 INFO nova.virt.libvirt.driver [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Instance destroyed successfully.#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.368 252257 DEBUG nova.objects.instance [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:45 np0005539563 podman[366682]: 2025-11-29 08:36:45.370150158 +0000 UTC m=+0.056860750 container died 0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:36:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-83e8fd4bfba5e93b606a4fbc7af59b03ab89c511a321bad759f942f04f7a5ded-merged.mount: Deactivated successfully.
Nov 29 03:36:45 np0005539563 podman[366682]: 2025-11-29 08:36:45.409649125 +0000 UTC m=+0.096359717 container cleanup 0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:36:45 np0005539563 systemd[1]: libpod-conmon-0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1.scope: Deactivated successfully.
Nov 29 03:36:45 np0005539563 podman[366721]: 2025-11-29 08:36:45.482969534 +0000 UTC m=+0.046087878 container remove 0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.493 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d10f8901-ff18-402e-a1a9-d803a3175f23]: (4, ('Sat Nov 29 08:36:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c (0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1)\n0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1\nSat Nov 29 08:36:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c (0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1)\n0e48466a91b770f0121578ed37abf34fb44ab8428cecdec525c58d613b8594d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.496 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae90e5e-0b35-4226-aa9f-9647baa7ce30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.498 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape259a30d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.500 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 kernel: tape259a30d-70: left promiscuous mode
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.519 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.521 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.524 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[85766540-c571-4168-bac9-d0d1cf8cd7e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.540 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdfb249-b417-474f-aa9a-3987bec50ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.541 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a96e8c-57e0-4254-848e-cbf2c1c300c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.556 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc2f15b-6a1f-4621-92ce-9054f01cffd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 834972, 'reachable_time': 25308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366740, 'error': None, 'target': 'ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 systemd[1]: run-netns-ovnmeta\x2de259a30d\x2d7e3f\x2d48b9\x2dabdf\x2ddc7aa571c14c.mount: Deactivated successfully.
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.561 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e259a30d-7e3f-48b9-abdf-dc7aa571c14c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:45.561 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[40146ff9-bac3-4049-a012-34fbba8268f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.981 252257 DEBUG nova.virt.libvirt.vif [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1486764034',display_name='tempest-TestNetworkAdvancedServerOps-server-1486764034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1486764034',id=177,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJlZsMiAKOr+1mOJ9gsr3FgWBE+mKwRnJkRBUHqhee24xo71b8dlrKwXDFbukNzcIWmQZvBI4Ju6SAH+rRZvrJVzvxQlKC2PN7cQRHMeK9LWhS/kLn4nic2/QWwXvrAG3A==',key_name='tempest-TestNetworkAdvancedServerOps-1855444884',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-wqp7y19b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:22Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=e2ac4a3e-8e9f-481b-9493-37a7fcdddec0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.982 252257 DEBUG nova.network.os_vif_util [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.983 252257 DEBUG nova.network.os_vif_util [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.983 252257 DEBUG os_vif [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.984 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.985 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56a99c82-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.986 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.988 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:45 np0005539563 nova_compute[252253]: 2025-11-29 08:36:45.991 252257 INFO os_vif [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=56a99c82-c7f3-45ce-8952-bb1fdd178381,network=Network(e259a30d-7e3f-48b9-abdf-dc7aa571c14c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56a99c82-c7')#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.164 252257 DEBUG nova.compute.manager [req-a4f50000-5dac-4144-bb7e-912fd4e2cb8d req-d58751da-4d70-4ad3-87a5-6411b67d9464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.165 252257 DEBUG oslo_concurrency.lockutils [req-a4f50000-5dac-4144-bb7e-912fd4e2cb8d req-d58751da-4d70-4ad3-87a5-6411b67d9464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.166 252257 DEBUG oslo_concurrency.lockutils [req-a4f50000-5dac-4144-bb7e-912fd4e2cb8d req-d58751da-4d70-4ad3-87a5-6411b67d9464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.166 252257 DEBUG oslo_concurrency.lockutils [req-a4f50000-5dac-4144-bb7e-912fd4e2cb8d req-d58751da-4d70-4ad3-87a5-6411b67d9464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.166 252257 DEBUG nova.compute.manager [req-a4f50000-5dac-4144-bb7e-912fd4e2cb8d req-d58751da-4d70-4ad3-87a5-6411b67d9464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.167 252257 DEBUG nova.compute.manager [req-a4f50000-5dac-4144-bb7e-912fd4e2cb8d req-d58751da-4d70-4ad3-87a5-6411b67d9464 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-unplugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.526 252257 DEBUG nova.network.neutron [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updated VIF entry in instance network info cache for port 56a99c82-c7f3-45ce-8952-bb1fdd178381. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.526 252257 DEBUG nova.network.neutron [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [{"id": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "address": "fa:16:3e:dc:ee:a4", "network": {"id": "e259a30d-7e3f-48b9-abdf-dc7aa571c14c", "bridge": "br-int", "label": "tempest-network-smoke--413829212", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56a99c82-c7", "ovs_interfaceid": "56a99c82-c7f3-45ce-8952-bb1fdd178381", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.625 252257 DEBUG oslo_concurrency.lockutils [req-8264e1fa-2e14-4d06-94af-522e49b370d9 req-842a6092-2a15-42a8-ab1e-cd0c9cee7cf6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:46.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.720 252257 INFO nova.virt.libvirt.driver [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Deleting instance files /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_del#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.721 252257 INFO nova.virt.libvirt.driver [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Deletion of /var/lib/nova/instances/e2ac4a3e-8e9f-481b-9493-37a7fcdddec0_del complete#033[00m
Nov 29 03:36:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:46.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.830 252257 INFO nova.compute.manager [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Took 1.90 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.830 252257 DEBUG oslo.service.loopingcall [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.831 252257 DEBUG nova.compute.manager [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:36:46 np0005539563 nova_compute[252253]: 2025-11-29 08:36:46.831 252257 DEBUG nova.network.neutron [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:36:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 281 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Nov 29 03:36:47 np0005539563 nova_compute[252253]: 2025-11-29 08:36:47.927 252257 DEBUG nova.network.neutron [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:47 np0005539563 nova_compute[252253]: 2025-11-29 08:36:47.945 252257 INFO nova.compute.manager [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Took 1.11 seconds to deallocate network for instance.#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.003 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.003 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.006 252257 DEBUG nova.compute.manager [req-c99bdc30-a5aa-438a-a927-29f6cd1c8d88 req-2712abfe-1aa0-4069-ae4e-1788e1ab3283 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-deleted-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.054 252257 DEBUG oslo_concurrency.processutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.258 252257 DEBUG nova.compute.manager [req-1942393f-a654-47a5-8684-373472194b8c req-aa07df99-3bbd-4f1d-9c11-4d11567f176c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.259 252257 DEBUG oslo_concurrency.lockutils [req-1942393f-a654-47a5-8684-373472194b8c req-aa07df99-3bbd-4f1d-9c11-4d11567f176c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.259 252257 DEBUG oslo_concurrency.lockutils [req-1942393f-a654-47a5-8684-373472194b8c req-aa07df99-3bbd-4f1d-9c11-4d11567f176c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.260 252257 DEBUG oslo_concurrency.lockutils [req-1942393f-a654-47a5-8684-373472194b8c req-aa07df99-3bbd-4f1d-9c11-4d11567f176c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.260 252257 DEBUG nova.compute.manager [req-1942393f-a654-47a5-8684-373472194b8c req-aa07df99-3bbd-4f1d-9c11-4d11567f176c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] No waiting events found dispatching network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.260 252257 WARNING nova.compute.manager [req-1942393f-a654-47a5-8684-373472194b8c req-aa07df99-3bbd-4f1d-9c11-4d11567f176c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Received unexpected event network-vif-plugged-56a99c82-c7f3-45ce-8952-bb1fdd178381 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:36:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048150539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.494 252257 DEBUG oslo_concurrency.processutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.500 252257 DEBUG nova.compute.provider_tree [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.521 252257 DEBUG nova.scheduler.client.report [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.542 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.564 252257 INFO nova.scheduler.client.report [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance e2ac4a3e-8e9f-481b-9493-37a7fcdddec0#033[00m
Nov 29 03:36:48 np0005539563 nova_compute[252253]: 2025-11-29 08:36:48.646 252257 DEBUG oslo_concurrency.lockutils [None req-ec3cbc89-4fc0-4197-b650-0c730d8d11e5 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "e2ac4a3e-8e9f-481b-9493-37a7fcdddec0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 29 03:36:49 np0005539563 nova_compute[252253]: 2025-11-29 08:36:49.159 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:50.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 1.9 MiB/s wr, 91 op/s
Nov 29 03:36:50 np0005539563 nova_compute[252253]: 2025-11-29 08:36:50.988 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:52.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 646 KiB/s wr, 48 op/s
Nov 29 03:36:53 np0005539563 nova_compute[252253]: 2025-11-29 08:36:53.674 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:54 np0005539563 nova_compute[252253]: 2025-11-29 08:36:54.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:36:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:54.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:36:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 645 KiB/s wr, 48 op/s
Nov 29 03:36:55 np0005539563 nova_compute[252253]: 2025-11-29 08:36:55.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:55 np0005539563 nova_compute[252253]: 2025-11-29 08:36:55.216 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:55 np0005539563 podman[366812]: 2025-11-29 08:36:55.889783271 +0000 UTC m=+0.059155074 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:36:55 np0005539563 podman[366813]: 2025-11-29 08:36:55.893690377 +0000 UTC m=+0.060235313 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 29 03:36:55 np0005539563 podman[366814]: 2025-11-29 08:36:55.951623586 +0000 UTC m=+0.114128022 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:36:55 np0005539563 nova_compute[252253]: 2025-11-29 08:36:55.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:36:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:36:56 np0005539563 nova_compute[252253]: 2025-11-29 08:36:56.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:56 np0005539563 nova_compute[252253]: 2025-11-29 08:36:56.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:36:56 np0005539563 nova_compute[252253]: 2025-11-29 08:36:56.694 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:36:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:36:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:36:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a86815d4-2b35-47b7-ba1d-7c3a8a0896ba does not exist
Nov 29 03:36:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 837102f4-c590-4e3e-9a8b-1475cfb676cc does not exist
Nov 29 03:36:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 42731b8c-0d19-4a47-9c6e-b55f765a7052 does not exist
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:36:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.685487872 +0000 UTC m=+0.040917697 container create 947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:36:57 np0005539563 systemd[1]: Started libpod-conmon-947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60.scope.
Nov 29 03:36:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.668116378 +0000 UTC m=+0.023546233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.774211369 +0000 UTC m=+0.129641244 container init 947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.781896319 +0000 UTC m=+0.137326144 container start 947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.785377024 +0000 UTC m=+0.140806869 container attach 947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:36:57 np0005539563 dreamy_mendel[367140]: 167 167
Nov 29 03:36:57 np0005539563 systemd[1]: libpod-947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60.scope: Deactivated successfully.
Nov 29 03:36:57 np0005539563 conmon[367140]: conmon 947e4003a5020ecd1bac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60.scope/container/memory.events
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.789565638 +0000 UTC m=+0.144995463 container died 947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:36:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-656d2cf517dc981eea4c3587211bfc2cb09989b54444032a8e9bd5199ac0b291-merged.mount: Deactivated successfully.
Nov 29 03:36:57 np0005539563 podman[367124]: 2025-11-29 08:36:57.834151253 +0000 UTC m=+0.189581078 container remove 947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:36:57 np0005539563 systemd[1]: libpod-conmon-947e4003a5020ecd1bacefc13fc4677cc90fb6fff6160f6a4618dfd3cc57ca60.scope: Deactivated successfully.
Nov 29 03:36:58 np0005539563 podman[367164]: 2025-11-29 08:36:58.002276236 +0000 UTC m=+0.048313479 container create 7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:36:58 np0005539563 systemd[1]: Started libpod-conmon-7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7.scope.
Nov 29 03:36:58 np0005539563 podman[367164]: 2025-11-29 08:36:57.97899317 +0000 UTC m=+0.025030403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:36:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5651a6db60c90318cc9c5ac892fab5f4422738dacdd310c6375b74268bb6acf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5651a6db60c90318cc9c5ac892fab5f4422738dacdd310c6375b74268bb6acf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5651a6db60c90318cc9c5ac892fab5f4422738dacdd310c6375b74268bb6acf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5651a6db60c90318cc9c5ac892fab5f4422738dacdd310c6375b74268bb6acf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5651a6db60c90318cc9c5ac892fab5f4422738dacdd310c6375b74268bb6acf5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:58 np0005539563 podman[367164]: 2025-11-29 08:36:58.11877514 +0000 UTC m=+0.164812363 container init 7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:36:58 np0005539563 podman[367164]: 2025-11-29 08:36:58.132185646 +0000 UTC m=+0.178222879 container start 7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:36:58 np0005539563 podman[367164]: 2025-11-29 08:36:58.136968217 +0000 UTC m=+0.183005450 container attach 7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:36:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:36:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 27 op/s
Nov 29 03:36:58 np0005539563 nifty_hypatia[367180]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:36:58 np0005539563 nifty_hypatia[367180]: --> relative data size: 1.0
Nov 29 03:36:58 np0005539563 nifty_hypatia[367180]: --> All data devices are unavailable
Nov 29 03:36:58 np0005539563 systemd[1]: libpod-7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7.scope: Deactivated successfully.
Nov 29 03:36:59 np0005539563 podman[367195]: 2025-11-29 08:36:59.001522839 +0000 UTC m=+0.024099257 container died 7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:36:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5651a6db60c90318cc9c5ac892fab5f4422738dacdd310c6375b74268bb6acf5-merged.mount: Deactivated successfully.
Nov 29 03:36:59 np0005539563 podman[367195]: 2025-11-29 08:36:59.05364558 +0000 UTC m=+0.076222008 container remove 7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:36:59 np0005539563 systemd[1]: libpod-conmon-7c8ef217e81d85f5d96b730cf40bc7d142e1646dd95d2577d3f5abea9658fdd7.scope: Deactivated successfully.
Nov 29 03:36:59 np0005539563 nova_compute[252253]: 2025-11-29 08:36:59.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:59.298 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:36:59.299 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:36:59 np0005539563 nova_compute[252253]: 2025-11-29 08:36:59.299 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.745026264 +0000 UTC m=+0.043952349 container create c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:36:59 np0005539563 systemd[1]: Started libpod-conmon-c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865.scope.
Nov 29 03:36:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.816301566 +0000 UTC m=+0.115227671 container init c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.727850336 +0000 UTC m=+0.026776441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.824337505 +0000 UTC m=+0.123263590 container start c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.827234944 +0000 UTC m=+0.126161019 container attach c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:36:59 np0005539563 modest_jackson[367366]: 167 167
Nov 29 03:36:59 np0005539563 systemd[1]: libpod-c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865.scope: Deactivated successfully.
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.83076542 +0000 UTC m=+0.129691505 container died c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:36:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6ca074adc3f0f99b6e5fec7eea47488b4acba91bfb212ce379f485a1240f8fdf-merged.mount: Deactivated successfully.
Nov 29 03:36:59 np0005539563 podman[367349]: 2025-11-29 08:36:59.864025867 +0000 UTC m=+0.162951952 container remove c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:36:59 np0005539563 systemd[1]: libpod-conmon-c4a7a882d2aacde026f7e27dbf2cc0125c7afe96a57ff7cae5119a41723b0865.scope: Deactivated successfully.
Nov 29 03:37:00 np0005539563 podman[367390]: 2025-11-29 08:37:00.009531263 +0000 UTC m=+0.024390556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:00 np0005539563 podman[367390]: 2025-11-29 08:37:00.339726632 +0000 UTC m=+0.354585915 container create 75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:00 np0005539563 nova_compute[252253]: 2025-11-29 08:37:00.364 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405405.362866, e2ac4a3e-8e9f-481b-9493-37a7fcdddec0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:00 np0005539563 nova_compute[252253]: 2025-11-29 08:37:00.364 252257 INFO nova.compute.manager [-] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:37:00 np0005539563 systemd[1]: Started libpod-conmon-75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3.scope.
Nov 29 03:37:00 np0005539563 nova_compute[252253]: 2025-11-29 08:37:00.392 252257 DEBUG nova.compute.manager [None req-8a507ace-acf5-44b3-8108-74f520d9f4d7 - - - - - -] [instance: e2ac4a3e-8e9f-481b-9493-37a7fcdddec0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:37:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653ae8d993fa352ee4ed716e974ccc94b706de21a7f884dbb2dbd335a0b65eeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653ae8d993fa352ee4ed716e974ccc94b706de21a7f884dbb2dbd335a0b65eeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653ae8d993fa352ee4ed716e974ccc94b706de21a7f884dbb2dbd335a0b65eeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653ae8d993fa352ee4ed716e974ccc94b706de21a7f884dbb2dbd335a0b65eeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:00 np0005539563 podman[367390]: 2025-11-29 08:37:00.4537648 +0000 UTC m=+0.468624093 container init 75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_albattani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:37:00 np0005539563 podman[367390]: 2025-11-29 08:37:00.465278014 +0000 UTC m=+0.480137287 container start 75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:37:00 np0005539563 podman[367390]: 2025-11-29 08:37:00.469252892 +0000 UTC m=+0.484112185 container attach 75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:37:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:00.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:00.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 10 op/s
Nov 29 03:37:00 np0005539563 nova_compute[252253]: 2025-11-29 08:37:00.992 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]: {
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:    "0": [
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:        {
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "devices": [
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "/dev/loop3"
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            ],
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "lv_name": "ceph_lv0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "lv_size": "7511998464",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "name": "ceph_lv0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "tags": {
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.cluster_name": "ceph",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.crush_device_class": "",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.encrypted": "0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.osd_id": "0",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.type": "block",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:                "ceph.vdo": "0"
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            },
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "type": "block",
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:            "vg_name": "ceph_vg0"
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:        }
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]:    ]
Nov 29 03:37:01 np0005539563 infallible_albattani[367406]: }
Nov 29 03:37:01 np0005539563 systemd[1]: libpod-75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3.scope: Deactivated successfully.
Nov 29 03:37:01 np0005539563 podman[367390]: 2025-11-29 08:37:01.257327312 +0000 UTC m=+1.272186575 container died 75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-653ae8d993fa352ee4ed716e974ccc94b706de21a7f884dbb2dbd335a0b65eeb-merged.mount: Deactivated successfully.
Nov 29 03:37:01 np0005539563 podman[367390]: 2025-11-29 08:37:01.317715657 +0000 UTC m=+1.332574930 container remove 75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_albattani, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:37:01 np0005539563 systemd[1]: libpod-conmon-75a09147cf23f927e81a08ae166644a781b41c1a3d47173ca916dc0f527847b3.scope: Deactivated successfully.
Nov 29 03:37:01 np0005539563 podman[367568]: 2025-11-29 08:37:01.894512157 +0000 UTC m=+0.043813685 container create 8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:37:01 np0005539563 systemd[1]: Started libpod-conmon-8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1.scope.
Nov 29 03:37:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:37:01 np0005539563 podman[367568]: 2025-11-29 08:37:01.875883359 +0000 UTC m=+0.025184907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:01 np0005539563 podman[367568]: 2025-11-29 08:37:01.975087704 +0000 UTC m=+0.124389242 container init 8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:01 np0005539563 podman[367568]: 2025-11-29 08:37:01.983072852 +0000 UTC m=+0.132374380 container start 8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:01 np0005539563 podman[367568]: 2025-11-29 08:37:01.987204114 +0000 UTC m=+0.136505662 container attach 8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:37:01 np0005539563 intelligent_bell[367584]: 167 167
Nov 29 03:37:01 np0005539563 systemd[1]: libpod-8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1.scope: Deactivated successfully.
Nov 29 03:37:01 np0005539563 podman[367568]: 2025-11-29 08:37:01.989411364 +0000 UTC m=+0.138712892 container died 8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:37:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7f1537829e5e7699819eccdf8a7f0f17fdadf0a260b13588f0036bb30ae02efb-merged.mount: Deactivated successfully.
Nov 29 03:37:02 np0005539563 podman[367568]: 2025-11-29 08:37:02.022548627 +0000 UTC m=+0.171850155 container remove 8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bell, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:37:02 np0005539563 systemd[1]: libpod-conmon-8555396e3fed2653e5c1fcf03de626dc7d4c6f55020651efd16b550f482310a1.scope: Deactivated successfully.
Nov 29 03:37:02 np0005539563 podman[367607]: 2025-11-29 08:37:02.185286773 +0000 UTC m=+0.043107277 container create 143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:37:02 np0005539563 systemd[1]: Started libpod-conmon-143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf.scope.
Nov 29 03:37:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:37:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc921fe3130af22255968113b8041b40c551ebdf8802dc0508a72d3e41d96e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc921fe3130af22255968113b8041b40c551ebdf8802dc0508a72d3e41d96e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc921fe3130af22255968113b8041b40c551ebdf8802dc0508a72d3e41d96e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc921fe3130af22255968113b8041b40c551ebdf8802dc0508a72d3e41d96e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:02 np0005539563 podman[367607]: 2025-11-29 08:37:02.256163574 +0000 UTC m=+0.113984078 container init 143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:02 np0005539563 podman[367607]: 2025-11-29 08:37:02.262284481 +0000 UTC m=+0.120104985 container start 143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:37:02 np0005539563 podman[367607]: 2025-11-29 08:37:02.167805696 +0000 UTC m=+0.025626230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:02 np0005539563 podman[367607]: 2025-11-29 08:37:02.26518068 +0000 UTC m=+0.123001184 container attach 143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]: {
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:        "osd_id": 0,
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:        "type": "bluestore"
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]:    }
Nov 29 03:37:03 np0005539563 heuristic_mirzakhani[367624]: }
Nov 29 03:37:03 np0005539563 systemd[1]: libpod-143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf.scope: Deactivated successfully.
Nov 29 03:37:03 np0005539563 podman[367607]: 2025-11-29 08:37:03.196410361 +0000 UTC m=+1.054230875 container died 143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:37:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bc921fe3130af22255968113b8041b40c551ebdf8802dc0508a72d3e41d96e2f-merged.mount: Deactivated successfully.
Nov 29 03:37:03 np0005539563 podman[367607]: 2025-11-29 08:37:03.253795544 +0000 UTC m=+1.111616048 container remove 143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:37:03 np0005539563 systemd[1]: libpod-conmon-143c95d0ef875d184c91bc6625a2be7eabf4975d7f971e674101ace3ac24bbcf.scope: Deactivated successfully.
Nov 29 03:37:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:37:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:03.301 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:37:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:37:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:37:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 75e3b4bb-017f-4476-a926-7660dca32a44 does not exist
Nov 29 03:37:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c4529902-25f9-48b6-969c-081268acec49 does not exist
Nov 29 03:37:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9f39a864-d3fb-45fa-b1ae-478dda389bdd does not exist
Nov 29 03:37:04 np0005539563 nova_compute[252253]: 2025-11-29 08:37:04.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:37:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:37:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:04.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 03:37:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:04.941 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:04.942 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:04.942 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:05 np0005539563 nova_compute[252253]: 2025-11-29 08:37:05.995 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:06.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 2 op/s
Nov 29 03:37:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 2 op/s
Nov 29 03:37:09 np0005539563 nova_compute[252253]: 2025-11-29 08:37:09.169 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:09 np0005539563 nova_compute[252253]: 2025-11-29 08:37:09.695 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:09 np0005539563 nova_compute[252253]: 2025-11-29 08:37:09.695 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:37:10 np0005539563 nova_compute[252253]: 2025-11-29 08:37:10.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:10.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Nov 29 03:37:10 np0005539563 nova_compute[252253]: 2025-11-29 08:37:10.997 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:11 np0005539563 nova_compute[252253]: 2025-11-29 08:37:11.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:12 np0005539563 nova_compute[252253]: 2025-11-29 08:37:12.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:12.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Nov 29 03:37:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:37:12
Nov 29 03:37:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:37:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:37:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control']
Nov 29 03:37:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:37:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:14 np0005539563 nova_compute[252253]: 2025-11-29 08:37:14.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:14 np0005539563 nova_compute[252253]: 2025-11-29 08:37:14.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:14 np0005539563 nova_compute[252253]: 2025-11-29 08:37:14.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:37:14 np0005539563 nova_compute[252253]: 2025-11-29 08:37:14.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:37:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.3 KiB/s wr, 3 op/s
Nov 29 03:37:15 np0005539563 nova_compute[252253]: 2025-11-29 08:37:15.290 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.000 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.730 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.731 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.731 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.731 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:37:16 np0005539563 nova_compute[252253]: 2025-11-29 08:37:16.732 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:16.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 4.0 KiB/s wr, 3 op/s
Nov 29 03:37:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854436147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.186 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.338 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.339 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4154MB free_disk=20.942684173583984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.339 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.339 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.770 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:37:17 np0005539563 nova_compute[252253]: 2025-11-29 08:37:17.770 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:37:18 np0005539563 nova_compute[252253]: 2025-11-29 08:37:18.009 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638903916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:18 np0005539563 nova_compute[252253]: 2025-11-29 08:37:18.448 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:18 np0005539563 nova_compute[252253]: 2025-11-29 08:37:18.454 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:18 np0005539563 nova_compute[252253]: 2025-11-29 08:37:18.642 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:18 np0005539563 nova_compute[252253]: 2025-11-29 08:37:18.808 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:37:18 np0005539563 nova_compute[252253]: 2025-11-29 08:37:18.808 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:18.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Nov 29 03:37:19 np0005539563 nova_compute[252253]: 2025-11-29 08:37:19.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:20 np0005539563 nova_compute[252253]: 2025-11-29 08:37:20.567 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:20 np0005539563 nova_compute[252253]: 2025-11-29 08:37:20.568 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:20.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:20.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 156 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.6 KiB/s wr, 23 op/s
Nov 29 03:37:21 np0005539563 nova_compute[252253]: 2025-11-29 08:37:21.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:22.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:22.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 156 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.9 KiB/s wr, 23 op/s
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001079980341522427 of space, bias 1.0, pg target 0.3239941024567281 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173410455053043 of space, bias 1.0, pg target 0.6520231365159129 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:37:24 np0005539563 nova_compute[252253]: 2025-11-29 08:37:24.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:24.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:24.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 153 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 29 03:37:25 np0005539563 nova_compute[252253]: 2025-11-29 08:37:25.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:26 np0005539563 nova_compute[252253]: 2025-11-29 08:37:26.005 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:26 np0005539563 podman[367864]: 2025-11-29 08:37:26.512875142 +0000 UTC m=+0.057960031 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:26 np0005539563 podman[367866]: 2025-11-29 08:37:26.549153241 +0000 UTC m=+0.095194126 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:37:26 np0005539563 podman[367865]: 2025-11-29 08:37:26.549601593 +0000 UTC m=+0.095594026 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, tcib_managed=true)
Nov 29 03:37:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:26.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 29 03:37:27 np0005539563 nova_compute[252253]: 2025-11-29 08:37:27.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:37:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3778855845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:37:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:37:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3778855845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:37:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 29 03:37:29 np0005539563 nova_compute[252253]: 2025-11-29 08:37:29.261 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:29 np0005539563 nova_compute[252253]: 2025-11-29 08:37:29.760 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:29 np0005539563 nova_compute[252253]: 2025-11-29 08:37:29.761 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:37:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:30.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 29 03:37:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:31 np0005539563 nova_compute[252253]: 2025-11-29 08:37:31.009 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:32.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 03:37:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:34 np0005539563 nova_compute[252253]: 2025-11-29 08:37:34.263 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 03:37:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.016915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455017027, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1491, "num_deletes": 257, "total_data_size": 2494282, "memory_usage": 2539264, "flush_reason": "Manual Compaction"}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455036276, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2453955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60202, "largest_seqno": 61692, "table_properties": {"data_size": 2447023, "index_size": 4002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14745, "raw_average_key_size": 20, "raw_value_size": 2432972, "raw_average_value_size": 3301, "num_data_blocks": 176, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405317, "oldest_key_time": 1764405317, "file_creation_time": 1764405455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 19388 microseconds, and 6132 cpu microseconds.
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.036348) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2453955 bytes OK
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.036377) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.050067) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.050089) EVENT_LOG_v1 {"time_micros": 1764405455050084, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.050106) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2487896, prev total WAL file size 2487896, number of live WAL files 2.
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.051137) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2396KB)], [134(9558KB)]
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455051273, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12241809, "oldest_snapshot_seqno": -1}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 9353 keys, 12080783 bytes, temperature: kUnknown
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455188554, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 12080783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12019896, "index_size": 36390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 247265, "raw_average_key_size": 26, "raw_value_size": 11855171, "raw_average_value_size": 1267, "num_data_blocks": 1389, "num_entries": 9353, "num_filter_entries": 9353, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.189118) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 12080783 bytes
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.191668) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.1 rd, 87.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.3 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 9886, records dropped: 533 output_compression: NoCompression
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.191798) EVENT_LOG_v1 {"time_micros": 1764405455191690, "job": 82, "event": "compaction_finished", "compaction_time_micros": 137422, "compaction_time_cpu_micros": 42409, "output_level": 6, "num_output_files": 1, "total_output_size": 12080783, "num_input_records": 9886, "num_output_records": 9353, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455193019, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405455196613, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.050871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:37:35.196681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:37:36 np0005539563 nova_compute[252253]: 2025-11-29 08:37:36.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:36.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 474 KiB/s wr, 15 op/s
Nov 29 03:37:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:36.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:38.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:38 np0005539563 nova_compute[252253]: 2025-11-29 08:37:38.838 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Nov 29 03:37:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:38.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:39 np0005539563 nova_compute[252253]: 2025-11-29 08:37:39.264 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:40.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Nov 29 03:37:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:41 np0005539563 nova_compute[252253]: 2025-11-29 08:37:41.013 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:42.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Nov 29 03:37:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:42 np0005539563 nova_compute[252253]: 2025-11-29 08:37:42.968 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:42.968 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:42.969 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:37:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:37:42.970 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:44 np0005539563 nova_compute[252253]: 2025-11-29 08:37:44.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:44.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 72 op/s
Nov 29 03:37:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:44.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539563 nova_compute[252253]: 2025-11-29 08:37:46.067 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Nov 29 03:37:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:46.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:48.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 29 03:37:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:49 np0005539563 nova_compute[252253]: 2025-11-29 08:37:49.268 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 29 03:37:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:51 np0005539563 nova_compute[252253]: 2025-11-29 08:37:51.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:52.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 03:37:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:52.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:54 np0005539563 nova_compute[252253]: 2025-11-29 08:37:54.270 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:54 np0005539563 nova_compute[252253]: 2025-11-29 08:37:54.717 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:37:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:54.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 03:37:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:37:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:54.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:37:56 np0005539563 nova_compute[252253]: 2025-11-29 08:37:56.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:56.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 176 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 567 KiB/s rd, 784 KiB/s wr, 33 op/s
Nov 29 03:37:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:56.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:57 np0005539563 podman[367989]: 2025-11-29 08:37:57.489772979 +0000 UTC m=+0.048918955 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:37:57 np0005539563 podman[367990]: 2025-11-29 08:37:57.49569638 +0000 UTC m=+0.051610248 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:37:57 np0005539563 podman[367991]: 2025-11-29 08:37:57.529716278 +0000 UTC m=+0.081532374 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 03:37:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:58.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 03:37:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:37:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:37:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:58.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:37:59 np0005539563 nova_compute[252253]: 2025-11-29 08:37:59.271 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:38:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:00.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:01 np0005539563 nova_compute[252253]: 2025-11-29 08:38:01.074 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:02.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:38:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:02.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:04 np0005539563 nova_compute[252253]: 2025-11-29 08:38:04.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f7cde740-b9ec-49f9-bdcc-e09708f77fa4 does not exist
Nov 29 03:38:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6fd9dbaf-bc4f-4905-8b6f-f4e1c68405de does not exist
Nov 29 03:38:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 220abc91-8bc3-49fb-8e8b-fc75f5831737 does not exist
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:38:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:38:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:04.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:38:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:04.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:38:04.943 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:38:04.943 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:38:04.943 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.269420943 +0000 UTC m=+0.037624118 container create b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:05 np0005539563 systemd[1]: Started libpod-conmon-b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b.scope.
Nov 29 03:38:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.252492771 +0000 UTC m=+0.020695976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.351007716 +0000 UTC m=+0.119210921 container init b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_grothendieck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.358112719 +0000 UTC m=+0.126315904 container start b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.361145842 +0000 UTC m=+0.129349057 container attach b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:05 np0005539563 beautiful_grothendieck[368343]: 167 167
Nov 29 03:38:05 np0005539563 systemd[1]: libpod-b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b.scope: Deactivated successfully.
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.364194586 +0000 UTC m=+0.132397771 container died b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dd89293c014df1514bfbc1dc926f18ddf0258795649a168ec5f2fbd602a84f58-merged.mount: Deactivated successfully.
Nov 29 03:38:05 np0005539563 podman[368327]: 2025-11-29 08:38:05.403487617 +0000 UTC m=+0.171690802 container remove b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_grothendieck, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:38:05 np0005539563 systemd[1]: libpod-conmon-b19404ac02002baee54cb684f2ab19fd0c63e35dcb51df4cd26ae89fd96d208b.scope: Deactivated successfully.
Nov 29 03:38:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:05 np0005539563 podman[368366]: 2025-11-29 08:38:05.650201191 +0000 UTC m=+0.104896301 container create 9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:05 np0005539563 podman[368366]: 2025-11-29 08:38:05.56837356 +0000 UTC m=+0.023068640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:05 np0005539563 systemd[1]: Started libpod-conmon-9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf.scope.
Nov 29 03:38:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:38:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09aa115c86f06f20ff4e5db60f780b66ed6bf7629597d14e97c8dc70b7d85e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09aa115c86f06f20ff4e5db60f780b66ed6bf7629597d14e97c8dc70b7d85e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09aa115c86f06f20ff4e5db60f780b66ed6bf7629597d14e97c8dc70b7d85e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09aa115c86f06f20ff4e5db60f780b66ed6bf7629597d14e97c8dc70b7d85e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09aa115c86f06f20ff4e5db60f780b66ed6bf7629597d14e97c8dc70b7d85e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:05 np0005539563 podman[368366]: 2025-11-29 08:38:05.736013399 +0000 UTC m=+0.190708479 container init 9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 03:38:05 np0005539563 podman[368366]: 2025-11-29 08:38:05.745816566 +0000 UTC m=+0.200511626 container start 9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:38:05 np0005539563 podman[368366]: 2025-11-29 08:38:05.748808228 +0000 UTC m=+0.203503308 container attach 9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:06 np0005539563 nova_compute[252253]: 2025-11-29 08:38:06.076 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:06 np0005539563 magical_jemison[368383]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:38:06 np0005539563 magical_jemison[368383]: --> relative data size: 1.0
Nov 29 03:38:06 np0005539563 magical_jemison[368383]: --> All data devices are unavailable
Nov 29 03:38:06 np0005539563 systemd[1]: libpod-9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf.scope: Deactivated successfully.
Nov 29 03:38:06 np0005539563 podman[368448]: 2025-11-29 08:38:06.659511619 +0000 UTC m=+0.023619885 container died 9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c09aa115c86f06f20ff4e5db60f780b66ed6bf7629597d14e97c8dc70b7d85e2-merged.mount: Deactivated successfully.
Nov 29 03:38:06 np0005539563 podman[368448]: 2025-11-29 08:38:06.783806246 +0000 UTC m=+0.147914472 container remove 9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:38:06 np0005539563 systemd[1]: libpod-conmon-9bc48ebca777f1c9b9f694d8ddd5c696d44cd2af073b61c7e903bc5dfefeecaf.scope: Deactivated successfully.
Nov 29 03:38:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:06.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:38:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:06.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:07 np0005539563 podman[368601]: 2025-11-29 08:38:07.363330651 +0000 UTC m=+0.020999883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:07 np0005539563 podman[368601]: 2025-11-29 08:38:07.462486874 +0000 UTC m=+0.120156086 container create fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:38:07 np0005539563 systemd[1]: Started libpod-conmon-fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3.scope.
Nov 29 03:38:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:38:07 np0005539563 podman[368601]: 2025-11-29 08:38:07.640837034 +0000 UTC m=+0.298506336 container init fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:38:07 np0005539563 podman[368601]: 2025-11-29 08:38:07.654505037 +0000 UTC m=+0.312174249 container start fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:07 np0005539563 sweet_dijkstra[368618]: 167 167
Nov 29 03:38:07 np0005539563 systemd[1]: libpod-fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3.scope: Deactivated successfully.
Nov 29 03:38:07 np0005539563 podman[368601]: 2025-11-29 08:38:07.859339939 +0000 UTC m=+0.517009201 container attach fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:38:07 np0005539563 podman[368601]: 2025-11-29 08:38:07.861096568 +0000 UTC m=+0.518765800 container died fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:38:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-058c4164ecdebf34c456db80f591b2677bbc195967d5c9cd9cdfdf9cf91150d5-merged.mount: Deactivated successfully.
Nov 29 03:38:08 np0005539563 podman[368601]: 2025-11-29 08:38:08.137663265 +0000 UTC m=+0.795332487 container remove fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:38:08 np0005539563 systemd[1]: libpod-conmon-fe2197f149f32669499104a21d90f3965aeaba5dca75b7cbf60c48d517d55cd3.scope: Deactivated successfully.
Nov 29 03:38:08 np0005539563 podman[368643]: 2025-11-29 08:38:08.321021872 +0000 UTC m=+0.049497100 container create a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:08 np0005539563 systemd[1]: Started libpod-conmon-a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d.scope.
Nov 29 03:38:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:38:08 np0005539563 podman[368643]: 2025-11-29 08:38:08.295150077 +0000 UTC m=+0.023625335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5afbd2a22d4b427e12c9595b45a3219eb80cb88a07a428785ddb1590e4a9a19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5afbd2a22d4b427e12c9595b45a3219eb80cb88a07a428785ddb1590e4a9a19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5afbd2a22d4b427e12c9595b45a3219eb80cb88a07a428785ddb1590e4a9a19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5afbd2a22d4b427e12c9595b45a3219eb80cb88a07a428785ddb1590e4a9a19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:08 np0005539563 podman[368643]: 2025-11-29 08:38:08.414768937 +0000 UTC m=+0.143244185 container init a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:08 np0005539563 podman[368643]: 2025-11-29 08:38:08.421115381 +0000 UTC m=+0.149590599 container start a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:08 np0005539563 podman[368643]: 2025-11-29 08:38:08.433102277 +0000 UTC m=+0.161577515 container attach a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:38:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:08.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Nov 29 03:38:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:08.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]: {
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:    "0": [
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:        {
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "devices": [
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "/dev/loop3"
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            ],
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "lv_name": "ceph_lv0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "lv_size": "7511998464",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "name": "ceph_lv0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "tags": {
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.cluster_name": "ceph",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.crush_device_class": "",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.encrypted": "0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.osd_id": "0",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.type": "block",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:                "ceph.vdo": "0"
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            },
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "type": "block",
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:            "vg_name": "ceph_vg0"
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:        }
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]:    ]
Nov 29 03:38:09 np0005539563 hardcore_jackson[368660]: }
Nov 29 03:38:09 np0005539563 systemd[1]: libpod-a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d.scope: Deactivated successfully.
Nov 29 03:38:09 np0005539563 podman[368643]: 2025-11-29 08:38:09.227134228 +0000 UTC m=+0.955609446 container died a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f5afbd2a22d4b427e12c9595b45a3219eb80cb88a07a428785ddb1590e4a9a19-merged.mount: Deactivated successfully.
Nov 29 03:38:09 np0005539563 nova_compute[252253]: 2025-11-29 08:38:09.274 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:09 np0005539563 podman[368643]: 2025-11-29 08:38:09.280092051 +0000 UTC m=+1.008567269 container remove a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:09 np0005539563 systemd[1]: libpod-conmon-a6cb2135cca681f0cebc39268c9e570676f578a51ab5185c7247846a5952ce5d.scope: Deactivated successfully.
Nov 29 03:38:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.825552368 +0000 UTC m=+0.035251642 container create 9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:38:09 np0005539563 systemd[1]: Started libpod-conmon-9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547.scope.
Nov 29 03:38:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.904571751 +0000 UTC m=+0.114271055 container init 9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.809699625 +0000 UTC m=+0.019398919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.910965896 +0000 UTC m=+0.120665170 container start 9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:38:09 np0005539563 systemd[1]: libpod-9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547.scope: Deactivated successfully.
Nov 29 03:38:09 np0005539563 happy_bohr[368841]: 167 167
Nov 29 03:38:09 np0005539563 conmon[368841]: conmon 9bf90a39e6d565d10bfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547.scope/container/memory.events
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.922451609 +0000 UTC m=+0.132150913 container attach 9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.922966033 +0000 UTC m=+0.132665307 container died 9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:38:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e59f53f08a062b347138f7d09d22734315734e2c549108d855cce350480b05c3-merged.mount: Deactivated successfully.
Nov 29 03:38:09 np0005539563 podman[368824]: 2025-11-29 08:38:09.966252262 +0000 UTC m=+0.175951526 container remove 9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:38:09 np0005539563 systemd[1]: libpod-conmon-9bf90a39e6d565d10bfc06325b5d910b32a3efcd2629a73a7e26097337664547.scope: Deactivated successfully.
Nov 29 03:38:10 np0005539563 podman[368863]: 2025-11-29 08:38:10.125085891 +0000 UTC m=+0.036652990 container create efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:38:10 np0005539563 systemd[1]: Started libpod-conmon-efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5.scope.
Nov 29 03:38:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:38:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b39b7904f2228492f8cb385455b6bd1fe5176fecfa4eec51a629605935013d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b39b7904f2228492f8cb385455b6bd1fe5176fecfa4eec51a629605935013d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b39b7904f2228492f8cb385455b6bd1fe5176fecfa4eec51a629605935013d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b39b7904f2228492f8cb385455b6bd1fe5176fecfa4eec51a629605935013d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:10 np0005539563 podman[368863]: 2025-11-29 08:38:10.108373896 +0000 UTC m=+0.019940995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:10 np0005539563 podman[368863]: 2025-11-29 08:38:10.228643093 +0000 UTC m=+0.140210212 container init efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:38:10 np0005539563 podman[368863]: 2025-11-29 08:38:10.235032267 +0000 UTC m=+0.146599366 container start efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:10 np0005539563 podman[368863]: 2025-11-29 08:38:10.23845298 +0000 UTC m=+0.150020179 container attach efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:38:10 np0005539563 nova_compute[252253]: 2025-11-29 08:38:10.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:10 np0005539563 nova_compute[252253]: 2025-11-29 08:38:10.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:38:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:10.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 30 KiB/s wr, 4 op/s
Nov 29 03:38:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:38:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:10.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]: {
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:        "osd_id": 0,
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:        "type": "bluestore"
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]:    }
Nov 29 03:38:11 np0005539563 jovial_sinoussi[368879]: }
Nov 29 03:38:11 np0005539563 systemd[1]: libpod-efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5.scope: Deactivated successfully.
Nov 29 03:38:11 np0005539563 podman[368863]: 2025-11-29 08:38:11.060806324 +0000 UTC m=+0.972373443 container died efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:11 np0005539563 nova_compute[252253]: 2025-11-29 08:38:11.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a4b39b7904f2228492f8cb385455b6bd1fe5176fecfa4eec51a629605935013d-merged.mount: Deactivated successfully.
Nov 29 03:38:11 np0005539563 podman[368863]: 2025-11-29 08:38:11.283258766 +0000 UTC m=+1.194825865 container remove efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sinoussi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:38:11 np0005539563 systemd[1]: libpod-conmon-efce6e056a22dd3fabaf438b111111670a1e6d514cc0366864b10edd8938f5e5.scope: Deactivated successfully.
Nov 29 03:38:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:38:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:38:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev aa691160-8135-4ba1-a1d1-271f7a38eabe does not exist
Nov 29 03:38:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6f14ded0-60de-473f-8f6d-4b4c7b32e973 does not exist
Nov 29 03:38:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 47fa5a40-9506-4301-bdcb-1ae83fe0c5e8 does not exist
Nov 29 03:38:11 np0005539563 nova_compute[252253]: 2025-11-29 08:38:11.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:38:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:12.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 29 03:38:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:12.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:38:12
Nov 29 03:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root']
Nov 29 03:38:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:13 np0005539563 nova_compute[252253]: 2025-11-29 08:38:13.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:14 np0005539563 nova_compute[252253]: 2025-11-29 08:38:14.277 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:14 np0005539563 nova_compute[252253]: 2025-11-29 08:38:14.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:14 np0005539563 nova_compute[252253]: 2025-11-29 08:38:14.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:38:14 np0005539563 nova_compute[252253]: 2025-11-29 08:38:14.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:38:14 np0005539563 nova_compute[252253]: 2025-11-29 08:38:14.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:38:14 np0005539563 nova_compute[252253]: 2025-11-29 08:38:14.697 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 03:38:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:14.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:16 np0005539563 nova_compute[252253]: 2025-11-29 08:38:16.132 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:38:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:16.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:38:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1698 writes, 7789 keys, 1697 commit groups, 1.0 writes per commit group, ingest: 11.07 MB, 0.02 MB/s#012Interval WAL: 1698 writes, 1697 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     14.3      5.66              0.28        41    0.138       0      0       0.0       0.0#012  L6      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     31.9     27.3     14.79              1.26        40    0.370    289K    21K       0.0       0.0#012 Sum      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     23.1     23.7     20.45              1.55        81    0.252    289K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     69.8     70.8      1.33              0.25        14    0.095     68K   3680       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     31.9     27.3     14.79              1.26        40    0.370    289K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     14.3      5.65              0.28        40    0.141       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.079, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.47 GB write, 0.09 MB/s write, 0.46 GB read, 0.09 MB/s read, 20.4 seconds#012Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 53.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000584 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2962,51.50 MB,16.9393%) FilterBlock(82,808.80 KB,0.259816%) IndexBlock(82,1.31 MB,0.430012%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:38:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Nov 29 03:38:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:16.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:17 np0005539563 nova_compute[252253]: 2025-11-29 08:38:17.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:17 np0005539563 nova_compute[252253]: 2025-11-29 08:38:17.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:17 np0005539563 nova_compute[252253]: 2025-11-29 08:38:17.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:17 np0005539563 nova_compute[252253]: 2025-11-29 08:38:17.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:17 np0005539563 nova_compute[252253]: 2025-11-29 08:38:17.703 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:38:17 np0005539563 nova_compute[252253]: 2025-11-29 08:38:17.703 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679710500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.123 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.287 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.288 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4155MB free_disk=20.942729949951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.288 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.288 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.369 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.370 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.386 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.406 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.407 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.443 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.473 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.488 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:18.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 2.0 KiB/s wr, 2 op/s
Nov 29 03:38:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/572325142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.924 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.929 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.954 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.955 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:38:18 np0005539563 nova_compute[252253]: 2025-11-29 08:38:18.955 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:19 np0005539563 nova_compute[252253]: 2025-11-29 08:38:19.277 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Nov 29 03:38:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:21 np0005539563 nova_compute[252253]: 2025-11-29 08:38:21.167 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:21 np0005539563 nova_compute[252253]: 2025-11-29 08:38:21.957 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Nov 29 03:38:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021703206425646754 of space, bias 1.0, pg target 0.6510961927694027 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:38:24 np0005539563 nova_compute[252253]: 2025-11-29 08:38:24.279 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Nov 29 03:38:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:25 np0005539563 nova_compute[252253]: 2025-11-29 08:38:25.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:26 np0005539563 nova_compute[252253]: 2025-11-29 08:38:26.171 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 2.2 KiB/s wr, 7 op/s
Nov 29 03:38:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:38:26Z|00771|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:38:28 np0005539563 podman[369067]: 2025-11-29 08:38:28.508588058 +0000 UTC m=+0.063226464 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:38:28 np0005539563 podman[369066]: 2025-11-29 08:38:28.511082706 +0000 UTC m=+0.064400937 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:38:28 np0005539563 podman[369068]: 2025-11-29 08:38:28.535369608 +0000 UTC m=+0.083713783 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:38:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 03:38:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:28.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:29 np0005539563 nova_compute[252253]: 2025-11-29 08:38:29.281 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:30 np0005539563 nova_compute[252253]: 2025-11-29 08:38:30.548 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 03:38:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:30.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:31 np0005539563 nova_compute[252253]: 2025-11-29 08:38:31.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:32.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 03:38:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:34 np0005539563 nova_compute[252253]: 2025-11-29 08:38:34.282 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:34.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 199 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 03:38:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:34.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:36 np0005539563 nova_compute[252253]: 2025-11-29 08:38:36.175 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:38:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:38:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 151 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 03:38:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:38.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 29 03:38:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:39 np0005539563 nova_compute[252253]: 2025-11-29 08:38:39.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:40.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 29 03:38:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:40.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:41 np0005539563 nova_compute[252253]: 2025-11-29 08:38:41.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 29 03:38:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:43 np0005539563 nova_compute[252253]: 2025-11-29 08:38:43.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:38:43.587 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:38:43.589 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:38:44 np0005539563 nova_compute[252253]: 2025-11-29 08:38:44.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:44.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 29 03:38:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:46 np0005539563 nova_compute[252253]: 2025-11-29 08:38:46.227 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:46.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Nov 29 03:38:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:48.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 188 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 106 op/s
Nov 29 03:38:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:48.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:49 np0005539563 nova_compute[252253]: 2025-11-29 08:38:49.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:38:49.590 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:50.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 03:38:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:50.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:51 np0005539563 nova_compute[252253]: 2025-11-29 08:38:51.229 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.394222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531394367, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 894, "num_deletes": 251, "total_data_size": 1347348, "memory_usage": 1363760, "flush_reason": "Manual Compaction"}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531413420, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 1332192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61693, "largest_seqno": 62586, "table_properties": {"data_size": 1327755, "index_size": 2088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9879, "raw_average_key_size": 19, "raw_value_size": 1318860, "raw_average_value_size": 2632, "num_data_blocks": 93, "num_entries": 501, "num_filter_entries": 501, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405455, "oldest_key_time": 1764405455, "file_creation_time": 1764405531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 19239 microseconds, and 5784 cpu microseconds.
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.413492) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 1332192 bytes OK
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.413526) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415341) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415362) EVENT_LOG_v1 {"time_micros": 1764405531415355, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415386) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1343107, prev total WAL file size 1343107, number of live WAL files 2.
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.416065) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(1300KB)], [137(11MB)]
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531416101, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 13412975, "oldest_snapshot_seqno": -1}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 9339 keys, 11503218 bytes, temperature: kUnknown
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531515278, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 11503218, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11443115, "index_size": 35670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 247682, "raw_average_key_size": 26, "raw_value_size": 11279213, "raw_average_value_size": 1207, "num_data_blocks": 1354, "num_entries": 9339, "num_filter_entries": 9339, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.515691) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 11503218 bytes
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.517095) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.0 rd, 115.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 11.5 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(18.7) write-amplify(8.6) OK, records in: 9854, records dropped: 515 output_compression: NoCompression
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.517122) EVENT_LOG_v1 {"time_micros": 1764405531517109, "job": 84, "event": "compaction_finished", "compaction_time_micros": 99325, "compaction_time_cpu_micros": 39835, "output_level": 6, "num_output_files": 1, "total_output_size": 11503218, "num_input_records": 9854, "num_output_records": 9339, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531517872, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405531521162, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.415995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.521349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.521360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.521364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.521372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:38:51.521376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:52.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 03:38:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:52.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:54 np0005539563 nova_compute[252253]: 2025-11-29 08:38:54.349 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 03:38:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:55 np0005539563 nova_compute[252253]: 2025-11-29 08:38:55.705 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:56 np0005539563 nova_compute[252253]: 2025-11-29 08:38:56.231 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:56.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:38:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.537 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.538 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.556 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.639 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.639 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.648 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.648 252257 INFO nova.compute.claims [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:38:57 np0005539563 nova_compute[252253]: 2025-11-29 08:38:57.783 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116186033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.206 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.212 252257 DEBUG nova.compute.provider_tree [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.243 252257 DEBUG nova.scheduler.client.report [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.269 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.270 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.340 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.340 252257 DEBUG nova.network.neutron [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.373 252257 INFO nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.399 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.516 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.518 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.518 252257 INFO nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Creating image(s)#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.547 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.572 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.597 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.601 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.634 252257 DEBUG nova.policy [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.673 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.674 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.674 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.675 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.700 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:58 np0005539563 nova_compute[252253]: 2025-11-29 08:38:58.703 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 03:38:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:38:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:59.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.012 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.096 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.206 252257 DEBUG nova.objects.instance [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.236 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.236 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Ensure instance console log exists: /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.237 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.238 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.238 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.351 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:59 np0005539563 podman[369385]: 2025-11-29 08:38:59.500429552 +0000 UTC m=+0.053827388 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:38:59 np0005539563 podman[369384]: 2025-11-29 08:38:59.500767641 +0000 UTC m=+0.054237209 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:38:59 np0005539563 podman[369386]: 2025-11-29 08:38:59.530474761 +0000 UTC m=+0.079094076 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:59 np0005539563 nova_compute[252253]: 2025-11-29 08:38:59.554 252257 DEBUG nova.network.neutron [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Successfully created port: 42ad2c69-185b-46ac-ba56-15bd589c9c41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:38:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.285 252257 DEBUG nova.network.neutron [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Successfully updated port: 42ad2c69-185b-46ac-ba56-15bd589c9c41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.303 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.303 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.304 252257 DEBUG nova.network.neutron [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.402 252257 DEBUG nova.compute.manager [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-changed-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.403 252257 DEBUG nova.compute.manager [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Refreshing instance network info cache due to event network-changed-42ad2c69-185b-46ac-ba56-15bd589c9c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.403 252257 DEBUG oslo_concurrency.lockutils [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:00 np0005539563 nova_compute[252253]: 2025-11-29 08:39:00.547 252257 DEBUG nova.network.neutron [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 225 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Nov 29 03:39:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.233 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.535 252257 DEBUG nova.network.neutron [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updating instance_info_cache with network_info: [{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.565 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.565 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Instance network_info: |[{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.566 252257 DEBUG oslo_concurrency.lockutils [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.566 252257 DEBUG nova.network.neutron [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Refreshing network info cache for port 42ad2c69-185b-46ac-ba56-15bd589c9c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.569 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Start _get_guest_xml network_info=[{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.572 252257 WARNING nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.577 252257 DEBUG nova.virt.libvirt.host [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.578 252257 DEBUG nova.virt.libvirt.host [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.580 252257 DEBUG nova.virt.libvirt.host [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.581 252257 DEBUG nova.virt.libvirt.host [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.582 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.582 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.582 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.583 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.583 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.583 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.583 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.583 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.584 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.584 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.584 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.584 252257 DEBUG nova.virt.hardware [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:39:01 np0005539563 nova_compute[252253]: 2025-11-29 08:39:01.587 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944985667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.033 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.065 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.069 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557532631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.500 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.502 252257 DEBUG nova.virt.libvirt.vif [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-114419784',display_name='tempest-TestNetworkAdvancedServerOps-server-114419784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-114419784',id=181,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM86dRwr+/hL3YQDqfUe1N2xX+b9qjNeI304LzU46zce1gqAq9EJxKzKVBw43NzawtddoKkC4CmhvrbCNUf4DGEf47ZvlBIG3NSViUSjF5j3n4f59ugj/zCtdXPRWz37UQ==',key_name='tempest-TestNetworkAdvancedServerOps-1033424177',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-0dob1ar8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:58Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.502 252257 DEBUG nova.network.os_vif_util [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.503 252257 DEBUG nova.network.os_vif_util [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.504 252257 DEBUG nova.objects.instance [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.523 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <uuid>744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee</uuid>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <name>instance-000000b5</name>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-114419784</nova:name>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:39:01</nova:creationTime>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <nova:port uuid="42ad2c69-185b-46ac-ba56-15bd589c9c41">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <entry name="serial">744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee</entry>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <entry name="uuid">744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee</entry>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk.config">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:2e:95:81"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <target dev="tap42ad2c69-18"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/console.log" append="off"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:39:02 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:39:02 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:39:02 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:39:02 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.525 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Preparing to wait for external event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.525 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.526 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.526 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.527 252257 DEBUG nova.virt.libvirt.vif [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:38:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-114419784',display_name='tempest-TestNetworkAdvancedServerOps-server-114419784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-114419784',id=181,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM86dRwr+/hL3YQDqfUe1N2xX+b9qjNeI304LzU46zce1gqAq9EJxKzKVBw43NzawtddoKkC4CmhvrbCNUf4DGEf47ZvlBIG3NSViUSjF5j3n4f59ugj/zCtdXPRWz37UQ==',key_name='tempest-TestNetworkAdvancedServerOps-1033424177',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-0dob1ar8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:58Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.527 252257 DEBUG nova.network.os_vif_util [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.528 252257 DEBUG nova.network.os_vif_util [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.528 252257 DEBUG os_vif [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.529 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.529 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.530 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.534 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42ad2c69-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.534 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42ad2c69-18, col_values=(('external_ids', {'iface-id': '42ad2c69-185b-46ac-ba56-15bd589c9c41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:95:81', 'vm-uuid': '744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539563 NetworkManager[48981]: <info>  [1764405542.5370] manager: (tap42ad2c69-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.544 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.546 252257 INFO os_vif [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18')#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.795 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.796 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.796 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:2e:95:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.797 252257 INFO nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Using config drive#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.823 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.880 252257 DEBUG nova.network.neutron [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updated VIF entry in instance network info cache for port 42ad2c69-185b-46ac-ba56-15bd589c9c41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.880 252257 DEBUG nova.network.neutron [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updating instance_info_cache with network_info: [{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 225 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 869 KiB/s wr, 3 op/s
Nov 29 03:39:02 np0005539563 nova_compute[252253]: 2025-11-29 08:39:02.914 252257 DEBUG oslo_concurrency.lockutils [req-b8d4f1f6-a975-4ccd-aabb-d9395382aca3 req-9e891ed5-50c7-4d88-b73a-56cd66e23022 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.253 252257 INFO nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Creating config drive at /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/disk.config#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.259 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp42tcm4p3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.406 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp42tcm4p3" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.432 252257 DEBUG nova.storage.rbd_utils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.436 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/disk.config 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.602 252257 DEBUG oslo_concurrency.processutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/disk.config 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.603 252257 INFO nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Deleting local config drive /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee/disk.config because it was imported into RBD.#033[00m
Nov 29 03:39:03 np0005539563 kernel: tap42ad2c69-18: entered promiscuous mode
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.668 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:03Z|00772|binding|INFO|Claiming lport 42ad2c69-185b-46ac-ba56-15bd589c9c41 for this chassis.
Nov 29 03:39:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:03Z|00773|binding|INFO|42ad2c69-185b-46ac-ba56-15bd589c9c41: Claiming fa:16:3e:2e:95:81 10.100.0.12
Nov 29 03:39:03 np0005539563 NetworkManager[48981]: <info>  [1764405543.6708] manager: (tap42ad2c69-18): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.673 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:03 np0005539563 systemd-udevd[369584]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:03 np0005539563 systemd-machined[213024]: New machine qemu-89-instance-000000b5.
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.706 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:95:81 10.100.0.12'], port_security=['fa:16:3e:2e:95:81 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'db4eb6f4-94a3-48cf-bbfe-2e542e978551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f3c02391-33c5-4dc6-9e30-a8ea4f1f06d9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=42ad2c69-185b-46ac-ba56-15bd589c9c41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.707 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 42ad2c69-185b-46ac-ba56-15bd589c9c41 in datapath 394ebb6b-ea36-4f10-a9a9-83350ba9a0ee bound to our chassis#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.709 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 394ebb6b-ea36-4f10-a9a9-83350ba9a0ee#033[00m
Nov 29 03:39:03 np0005539563 systemd[1]: Started Virtual Machine qemu-89-instance-000000b5.
Nov 29 03:39:03 np0005539563 NetworkManager[48981]: <info>  [1764405543.7252] device (tap42ad2c69-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.724 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[96c85ccd-2ea4-4c89-b029-25daf88fb563]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.725 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap394ebb6b-e1 in ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:39:03 np0005539563 NetworkManager[48981]: <info>  [1764405543.7264] device (tap42ad2c69-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.729 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap394ebb6b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.729 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc0d00d-79ba-4704-984b-3a6743b486bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.730 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1e81f470-3d3f-4831-b31e-f700b692804b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.743 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c74684-4132-4a86-8e27-9a201e334449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.746 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:03Z|00774|binding|INFO|Setting lport 42ad2c69-185b-46ac-ba56-15bd589c9c41 ovn-installed in OVS
Nov 29 03:39:03 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:03Z|00775|binding|INFO|Setting lport 42ad2c69-185b-46ac-ba56-15bd589c9c41 up in Southbound
Nov 29 03:39:03 np0005539563 nova_compute[252253]: 2025-11-29 08:39:03.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.768 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e52342fa-8da6-4642-8725-ce7f52f2ebd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.798 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3e97edcc-aa40-4aa5-aa5e-aa33afa2d721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.803 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b83f5b29-b786-455f-8991-78536f2c5f6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 NetworkManager[48981]: <info>  [1764405543.8039] manager: (tap394ebb6b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.834 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3cb3d3-b062-4fe0-abe5-b97ce0141e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.837 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b90318fc-4b2c-4414-af12-8d60c8e885e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 NetworkManager[48981]: <info>  [1764405543.8588] device (tap394ebb6b-e0): carrier: link connected
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.864 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[605fb849-323f-4bbf-ba78-63a7eaff20f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.881 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b22df8bd-64d7-4bf9-8cb8-2666c10dae15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap394ebb6b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:a8:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851163, 'reachable_time': 35760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369618, 'error': None, 'target': 'ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.897 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[efdf27e8-5921-439c-b40d-b78a4824eb1b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:a8a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851163, 'tstamp': 851163}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369619, 'error': None, 'target': 'ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.914 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7752d0-30bd-4062-8f89-2ea089627f1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap394ebb6b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:a8:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851163, 'reachable_time': 35760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369620, 'error': None, 'target': 'ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:03.948 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[720d353b-e9d7-4475-a66d-de338f70c1f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.008 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3b1c80-9826-4adb-80f8-e19723447a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.010 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap394ebb6b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.010 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.011 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap394ebb6b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:04 np0005539563 NetworkManager[48981]: <info>  [1764405544.0135] manager: (tap394ebb6b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.012 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539563 kernel: tap394ebb6b-e0: entered promiscuous mode
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.015 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.016 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap394ebb6b-e0, col_values=(('external_ids', {'iface-id': '33eb6424-28bc-4627-8619-a5d079a6a854'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.017 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:04Z|00776|binding|INFO|Releasing lport 33eb6424-28bc-4627-8619-a5d079a6a854 from this chassis (sb_readonly=0)
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.030 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/394ebb6b-ea36-4f10-a9a9-83350ba9a0ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/394ebb6b-ea36-4f10-a9a9-83350ba9a0ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.031 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe366c2-1c89-425b-af91-9178f11cbac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.032 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/394ebb6b-ea36-4f10-a9a9-83350ba9a0ee.pid.haproxy
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 394ebb6b-ea36-4f10-a9a9-83350ba9a0ee
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.032 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'env', 'PROCESS_TAG=haproxy-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/394ebb6b-ea36-4f10-a9a9-83350ba9a0ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.127 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405544.1266632, 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.128 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] VM Started (Lifecycle Event)
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.153 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.157 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405544.1267793, 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.158 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] VM Paused (Lifecycle Event)
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.184 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.188 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.219 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:39:04 np0005539563 podman[369694]: 2025-11-29 08:39:04.359033123 +0000 UTC m=+0.047464395 container create 88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:04 np0005539563 nova_compute[252253]: 2025-11-29 08:39:04.395 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:04 np0005539563 systemd[1]: Started libpod-conmon-88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21.scope.
Nov 29 03:39:04 np0005539563 podman[369694]: 2025-11-29 08:39:04.333472096 +0000 UTC m=+0.021903388 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0307175e8143f4a37ef80335960f2c411d320caca6aa6248a7b36b0c84cc07ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:04 np0005539563 podman[369694]: 2025-11-29 08:39:04.461123255 +0000 UTC m=+0.149554547 container init 88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:04 np0005539563 podman[369694]: 2025-11-29 08:39:04.467981242 +0000 UTC m=+0.156412514 container start 88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:39:04 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [NOTICE]   (369713) : New worker (369715) forked
Nov 29 03:39:04 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [NOTICE]   (369713) : Loading success.
Nov 29 03:39:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:39:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:39:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.944 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.945 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:04.946 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 03:39:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:07.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:07 np0005539563 nova_compute[252253]: 2025-11-29 08:39:07.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 29 03:39:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:09 np0005539563 nova_compute[252253]: 2025-11-29 08:39:09.398 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.231 252257 DEBUG nova.compute.manager [req-33c26faf-3939-481a-acb5-fcdb5ce91059 req-b315ccc4-85f0-4c94-a19b-9dfb3f16cc11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.231 252257 DEBUG oslo_concurrency.lockutils [req-33c26faf-3939-481a-acb5-fcdb5ce91059 req-b315ccc4-85f0-4c94-a19b-9dfb3f16cc11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.232 252257 DEBUG oslo_concurrency.lockutils [req-33c26faf-3939-481a-acb5-fcdb5ce91059 req-b315ccc4-85f0-4c94-a19b-9dfb3f16cc11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.232 252257 DEBUG oslo_concurrency.lockutils [req-33c26faf-3939-481a-acb5-fcdb5ce91059 req-b315ccc4-85f0-4c94-a19b-9dfb3f16cc11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.232 252257 DEBUG nova.compute.manager [req-33c26faf-3939-481a-acb5-fcdb5ce91059 req-b315ccc4-85f0-4c94-a19b-9dfb3f16cc11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Processing event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.233 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.236 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405550.2363305, 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.236 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] VM Resumed (Lifecycle Event)
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.238 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.246 252257 INFO nova.virt.libvirt.driver [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Instance spawned successfully.
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.246 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.263 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.270 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.273 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.274 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.274 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.274 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.275 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.275 252257 DEBUG nova.virt.libvirt.driver [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.308 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.350 252257 INFO nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Took 11.83 seconds to spawn the instance on the hypervisor.
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.350 252257 DEBUG nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.441 252257 INFO nova.compute.manager [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Took 12.84 seconds to build instance.
Nov 29 03:39:10 np0005539563 nova_compute[252253]: 2025-11-29 08:39:10.472 252257 DEBUG oslo_concurrency.lockutils [None req-67da7a2b-5fc7-4ee1-8d22-2fd93732ed96 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 29 03:39:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.353 252257 DEBUG nova.compute.manager [req-67bd736d-3342-4ea9-adcd-44f401dcf01a req-1880e12d-4a23-4240-89d1-2e0b60f74f5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.353 252257 DEBUG oslo_concurrency.lockutils [req-67bd736d-3342-4ea9-adcd-44f401dcf01a req-1880e12d-4a23-4240-89d1-2e0b60f74f5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.353 252257 DEBUG oslo_concurrency.lockutils [req-67bd736d-3342-4ea9-adcd-44f401dcf01a req-1880e12d-4a23-4240-89d1-2e0b60f74f5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.354 252257 DEBUG oslo_concurrency.lockutils [req-67bd736d-3342-4ea9-adcd-44f401dcf01a req-1880e12d-4a23-4240-89d1-2e0b60f74f5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.354 252257 DEBUG nova.compute.manager [req-67bd736d-3342-4ea9-adcd-44f401dcf01a req-1880e12d-4a23-4240-89d1-2e0b60f74f5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] No waiting events found dispatching network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.354 252257 WARNING nova.compute.manager [req-67bd736d-3342-4ea9-adcd-44f401dcf01a req-1880e12d-4a23-4240-89d1-2e0b60f74f5f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received unexpected event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 for instance with vm_state active and task_state None.
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.538 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:39:12 np0005539563 nova_compute[252253]: 2025-11-29 08:39:12.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev eaa24dfe-6694-4898-b6c5-d6588e71946a does not exist
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev aef911e5-4624-4072-b5a7-e809d0aada4b does not exist
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7dfa8f48-32e3-42a0-93e7-305ec3c23748 does not exist
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:39:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:39:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 972 KiB/s wr, 33 op/s
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:39:12
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'vms', '.mgr']
Nov 29 03:39:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:39:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:39:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:13 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.348982393 +0000 UTC m=+0.042714675 container create e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:39:13 np0005539563 systemd[1]: Started libpod-conmon-e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561.scope.
Nov 29 03:39:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.328890245 +0000 UTC m=+0.022622547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.427458641 +0000 UTC m=+0.121190943 container init e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.434928894 +0000 UTC m=+0.128661176 container start e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.438172524 +0000 UTC m=+0.131904806 container attach e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:39:13 np0005539563 gifted_chandrasekhar[370069]: 167 167
Nov 29 03:39:13 np0005539563 systemd[1]: libpod-e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561.scope: Deactivated successfully.
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.443118978 +0000 UTC m=+0.136851260 container died e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:39:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-efc65e1354d5c00cd99cfbf73e28344f907b724322ac92fb4fde2581f7dca6d9-merged.mount: Deactivated successfully.
Nov 29 03:39:13 np0005539563 podman[370051]: 2025-11-29 08:39:13.486792628 +0000 UTC m=+0.180524920 container remove e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:39:13 np0005539563 systemd[1]: libpod-conmon-e0e9385d7de64ee4bc16ea759f69271b35bc0ae00cdd917f6e71a85253e19561.scope: Deactivated successfully.
Nov 29 03:39:13 np0005539563 podman[370094]: 2025-11-29 08:39:13.656297798 +0000 UTC m=+0.044033381 container create 7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:39:13 np0005539563 nova_compute[252253]: 2025-11-29 08:39:13.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:13 np0005539563 systemd[1]: Started libpod-conmon-7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5.scope.
Nov 29 03:39:13 np0005539563 podman[370094]: 2025-11-29 08:39:13.637575728 +0000 UTC m=+0.025311321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e859b643dcd9acd3534d1cd6130ac265423bc05c6e6c261bf2c8ab1ef6bb16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e859b643dcd9acd3534d1cd6130ac265423bc05c6e6c261bf2c8ab1ef6bb16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e859b643dcd9acd3534d1cd6130ac265423bc05c6e6c261bf2c8ab1ef6bb16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e859b643dcd9acd3534d1cd6130ac265423bc05c6e6c261bf2c8ab1ef6bb16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e859b643dcd9acd3534d1cd6130ac265423bc05c6e6c261bf2c8ab1ef6bb16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:13 np0005539563 podman[370094]: 2025-11-29 08:39:13.773983336 +0000 UTC m=+0.161718939 container init 7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:39:13 np0005539563 podman[370094]: 2025-11-29 08:39:13.782798366 +0000 UTC m=+0.170533949 container start 7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:39:13 np0005539563 podman[370094]: 2025-11-29 08:39:13.785964552 +0000 UTC m=+0.173700135 container attach 7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539563 NetworkManager[48981]: <info>  [1764405554.0741] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Nov 29 03:39:14 np0005539563 NetworkManager[48981]: <info>  [1764405554.0754] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:14Z|00777|binding|INFO|Releasing lport 33eb6424-28bc-4627-8619-a5d079a6a854 from this chassis (sb_readonly=0)
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.444 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539563 bold_spence[370111]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:39:14 np0005539563 bold_spence[370111]: --> relative data size: 1.0
Nov 29 03:39:14 np0005539563 bold_spence[370111]: --> All data devices are unavailable
Nov 29 03:39:14 np0005539563 systemd[1]: libpod-7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5.scope: Deactivated successfully.
Nov 29 03:39:14 np0005539563 podman[370094]: 2025-11-29 08:39:14.611645096 +0000 UTC m=+0.999380700 container died 7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.661 252257 DEBUG nova.compute.manager [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-changed-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.661 252257 DEBUG nova.compute.manager [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Refreshing instance network info cache due to event network-changed-42ad2c69-185b-46ac-ba56-15bd589c9c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.662 252257 DEBUG oslo_concurrency.lockutils [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.662 252257 DEBUG oslo_concurrency.lockutils [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.662 252257 DEBUG nova.network.neutron [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Refreshing network info cache for port 42ad2c69-185b-46ac-ba56-15bd589c9c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d8e859b643dcd9acd3534d1cd6130ac265423bc05c6e6c261bf2c8ab1ef6bb16-merged.mount: Deactivated successfully.
Nov 29 03:39:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:39:14 np0005539563 nova_compute[252253]: 2025-11-29 08:39:14.682 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:39:14 np0005539563 podman[370094]: 2025-11-29 08:39:14.715346023 +0000 UTC m=+1.103081606 container remove 7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_spence, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:39:14 np0005539563 systemd[1]: libpod-conmon-7098858104ea88d65246ed6d65571a938e108ffefdbaa6a8cb20ebb5105bc0e5.scope: Deactivated successfully.
Nov 29 03:39:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 274 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 86 op/s
Nov 29 03:39:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:15 np0005539563 nova_compute[252253]: 2025-11-29 08:39:15.077 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.306602327 +0000 UTC m=+0.041897003 container create a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:39:15 np0005539563 systemd[1]: Started libpod-conmon-a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc.scope.
Nov 29 03:39:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.286070397 +0000 UTC m=+0.021365063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.390149754 +0000 UTC m=+0.125444400 container init a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.396691732 +0000 UTC m=+0.131986378 container start a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.400493296 +0000 UTC m=+0.135787942 container attach a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:39:15 np0005539563 silly_carson[370298]: 167 167
Nov 29 03:39:15 np0005539563 systemd[1]: libpod-a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc.scope: Deactivated successfully.
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.40282994 +0000 UTC m=+0.138124586 container died a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:39:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dab1f6ae32f27cc8497eac0fa7f844c362362663bc57f9ebfb1af6c3460e0515-merged.mount: Deactivated successfully.
Nov 29 03:39:15 np0005539563 podman[370281]: 2025-11-29 08:39:15.439102748 +0000 UTC m=+0.174397394 container remove a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:39:15 np0005539563 systemd[1]: libpod-conmon-a7338bca78f96683a166f7f933f3ad860eb493589f71d4edeff9260cf418c4dc.scope: Deactivated successfully.
Nov 29 03:39:15 np0005539563 podman[370321]: 2025-11-29 08:39:15.603667544 +0000 UTC m=+0.046335955 container create fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilbur, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:39:15 np0005539563 systemd[1]: Started libpod-conmon-fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971.scope.
Nov 29 03:39:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47688cdcda89cf988ec5c16cf6fbb8ca9096845682dd51552b4f696166010005/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47688cdcda89cf988ec5c16cf6fbb8ca9096845682dd51552b4f696166010005/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47688cdcda89cf988ec5c16cf6fbb8ca9096845682dd51552b4f696166010005/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47688cdcda89cf988ec5c16cf6fbb8ca9096845682dd51552b4f696166010005/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:15 np0005539563 podman[370321]: 2025-11-29 08:39:15.580400999 +0000 UTC m=+0.023069430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:15 np0005539563 podman[370321]: 2025-11-29 08:39:15.691204949 +0000 UTC m=+0.133873380 container init fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilbur, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:39:15 np0005539563 podman[370321]: 2025-11-29 08:39:15.699204087 +0000 UTC m=+0.141872498 container start fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilbur, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:39:15 np0005539563 podman[370321]: 2025-11-29 08:39:15.706774033 +0000 UTC m=+0.149442464 container attach fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilbur, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:39:15 np0005539563 ceph-mgr[74636]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]: {
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:    "0": [
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:        {
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "devices": [
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "/dev/loop3"
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            ],
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "lv_name": "ceph_lv0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "lv_size": "7511998464",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "name": "ceph_lv0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "tags": {
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.cluster_name": "ceph",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.crush_device_class": "",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.encrypted": "0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.osd_id": "0",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.type": "block",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:                "ceph.vdo": "0"
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            },
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "type": "block",
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:            "vg_name": "ceph_vg0"
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:        }
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]:    ]
Nov 29 03:39:16 np0005539563 fervent_wilbur[370337]: }
Nov 29 03:39:16 np0005539563 systemd[1]: libpod-fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971.scope: Deactivated successfully.
Nov 29 03:39:16 np0005539563 podman[370321]: 2025-11-29 08:39:16.529697762 +0000 UTC m=+0.972366173 container died fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilbur, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:39:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-47688cdcda89cf988ec5c16cf6fbb8ca9096845682dd51552b4f696166010005-merged.mount: Deactivated successfully.
Nov 29 03:39:16 np0005539563 podman[370321]: 2025-11-29 08:39:16.588236078 +0000 UTC m=+1.030904489 container remove fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:39:16 np0005539563 systemd[1]: libpod-conmon-fad11b2d8d61e565d78f8a2bdb26fd2e9492dcbc545e2dd6297c9fcaf729f971.scope: Deactivated successfully.
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:39:16 np0005539563 nova_compute[252253]: 2025-11-29 08:39:16.660 252257 DEBUG nova.network.neutron [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updated VIF entry in instance network info cache for port 42ad2c69-185b-46ac-ba56-15bd589c9c41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:16 np0005539563 nova_compute[252253]: 2025-11-29 08:39:16.663 252257 DEBUG nova.network.neutron [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updating instance_info_cache with network_info: [{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:16 np0005539563 nova_compute[252253]: 2025-11-29 08:39:16.694 252257 DEBUG oslo_concurrency.lockutils [req-1ef18b08-a727-4bf7-aaea-0a039ddf5d77 req-3d51aa47-65cf-41cd-85e5-8f3cb02eff8d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:16 np0005539563 nova_compute[252253]: 2025-11-29 08:39:16.696 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:16 np0005539563 nova_compute[252253]: 2025-11-29 08:39:16.696 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:39:16 np0005539563 nova_compute[252253]: 2025-11-29 08:39:16.697 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 03:39:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.186807942 +0000 UTC m=+0.040555827 container create 9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curran, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:39:17 np0005539563 systemd[1]: Started libpod-conmon-9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b.scope.
Nov 29 03:39:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.170340283 +0000 UTC m=+0.024088188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.279000475 +0000 UTC m=+0.132748380 container init 9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.28618961 +0000 UTC m=+0.139937495 container start 9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curran, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.289765538 +0000 UTC m=+0.143513453 container attach 9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curran, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:39:17 np0005539563 frosty_curran[370516]: 167 167
Nov 29 03:39:17 np0005539563 systemd[1]: libpod-9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b.scope: Deactivated successfully.
Nov 29 03:39:17 np0005539563 conmon[370516]: conmon 9e34bbb20db9010904dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b.scope/container/memory.events
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.292691658 +0000 UTC m=+0.146439553 container died 9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curran, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:39:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-17a1dbdbd02d72375db48faa84b3280946cb4084ce329e870cc763297c89db87-merged.mount: Deactivated successfully.
Nov 29 03:39:17 np0005539563 podman[370499]: 2025-11-29 08:39:17.328332488 +0000 UTC m=+0.182080373 container remove 9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:39:17 np0005539563 systemd[1]: libpod-conmon-9e34bbb20db9010904dcf0062949042a954bf1d104b3defc031480b937fb4c4b.scope: Deactivated successfully.
Nov 29 03:39:17 np0005539563 podman[370541]: 2025-11-29 08:39:17.478366978 +0000 UTC m=+0.032688592 container create 75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:39:17 np0005539563 systemd[1]: Started libpod-conmon-75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258.scope.
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:39:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb633c37b94092eabfb19a6f11ad7d8bfd298b3d6d95f477782636f71c188b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb633c37b94092eabfb19a6f11ad7d8bfd298b3d6d95f477782636f71c188b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb633c37b94092eabfb19a6f11ad7d8bfd298b3d6d95f477782636f71c188b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb633c37b94092eabfb19a6f11ad7d8bfd298b3d6d95f477782636f71c188b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:17 np0005539563 podman[370541]: 2025-11-29 08:39:17.463905024 +0000 UTC m=+0.018226658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:17 np0005539563 podman[370541]: 2025-11-29 08:39:17.579165865 +0000 UTC m=+0.133487509 container init 75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:39:17 np0005539563 podman[370541]: 2025-11-29 08:39:17.588499329 +0000 UTC m=+0.142820943 container start 75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:39:17 np0005539563 podman[370541]: 2025-11-29 08:39:17.591523302 +0000 UTC m=+0.145844936 container attach 75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.834 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updating instance_info_cache with network_info: [{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.847 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.848 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.848 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.849 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.849 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.870 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.871 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.871 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.871 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:39:17 np0005539563 nova_compute[252253]: 2025-11-29 08:39:17.871 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077211369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.385 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]: {
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:        "osd_id": 0,
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:        "type": "bluestore"
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]:    }
Nov 29 03:39:18 np0005539563 compassionate_margulis[370558]: }
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.461 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.462 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:18 np0005539563 systemd[1]: libpod-75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258.scope: Deactivated successfully.
Nov 29 03:39:18 np0005539563 podman[370541]: 2025-11-29 08:39:18.466466409 +0000 UTC m=+1.020788053 container died 75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:39:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bb633c37b94092eabfb19a6f11ad7d8bfd298b3d6d95f477782636f71c188b09-merged.mount: Deactivated successfully.
Nov 29 03:39:18 np0005539563 podman[370541]: 2025-11-29 08:39:18.518535227 +0000 UTC m=+1.072856841 container remove 75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:39:18 np0005539563 systemd[1]: libpod-conmon-75f74c142461a18b01e793a5ebc39f17f22a60d9dd2b6149dc01bcd477df3258.scope: Deactivated successfully.
Nov 29 03:39:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:39:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:39:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a09ba61a-6086-4355-bceb-aff11fab21f8 does not exist
Nov 29 03:39:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cbec7eb1-d3cb-4ac5-ac9e-a3d733757bc2 does not exist
Nov 29 03:39:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 13f20584-42b7-455e-84a0-13c3c0a1466f does not exist
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.669 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.670 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3962MB free_disk=20.901081085205078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.670 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.670 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.822 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.823 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.823 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:39:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:18.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 03:39:18 np0005539563 nova_compute[252253]: 2025-11-29 08:39:18.952 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:39:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233993549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:19 np0005539563 nova_compute[252253]: 2025-11-29 08:39:19.399 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:19 np0005539563 nova_compute[252253]: 2025-11-29 08:39:19.405 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:19 np0005539563 nova_compute[252253]: 2025-11-29 08:39:19.419 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:19 np0005539563 nova_compute[252253]: 2025-11-29 08:39:19.441 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:39:19 np0005539563 nova_compute[252253]: 2025-11-29 08:39:19.441 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:19 np0005539563 nova_compute[252253]: 2025-11-29 08:39:19.448 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:20.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 29 03:39:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:22.504 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:22 np0005539563 nova_compute[252253]: 2025-11-29 08:39:22.504 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:22.505 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:39:22 np0005539563 nova_compute[252253]: 2025-11-29 08:39:22.543 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:22.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Nov 29 03:39:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:23.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004161795668155593 of space, bias 1.0, pg target 1.248538700446678 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6462629990228922 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:39:24 np0005539563 nova_compute[252253]: 2025-11-29 08:39:24.271 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:24 np0005539563 nova_compute[252253]: 2025-11-29 08:39:24.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 309 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 183 op/s
Nov 29 03:39:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:25.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:25Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:95:81 10.100.0.12
Nov 29 03:39:25 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:25Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:95:81 10.100.0.12
Nov 29 03:39:25 np0005539563 nova_compute[252253]: 2025-11-29 08:39:25.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:26.506 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Nov 29 03:39:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:27.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:27 np0005539563 nova_compute[252253]: 2025-11-29 08:39:27.545 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 29 03:39:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:29.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:29 np0005539563 nova_compute[252253]: 2025-11-29 08:39:29.488 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:30 np0005539563 podman[370745]: 2025-11-29 08:39:30.525021049 +0000 UTC m=+0.074071950 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 29 03:39:30 np0005539563 podman[370744]: 2025-11-29 08:39:30.54013438 +0000 UTC m=+0.090053635 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:39:30 np0005539563 podman[370746]: 2025-11-29 08:39:30.557055271 +0000 UTC m=+0.094256129 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 03:39:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:30.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Nov 29 03:39:30 np0005539563 nova_compute[252253]: 2025-11-29 08:39:30.992 252257 INFO nova.compute.manager [None req-db99172e-d99e-4607-a4e7-a02b7816d615 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Get console output
Nov 29 03:39:30 np0005539563 nova_compute[252253]: 2025-11-29 08:39:30.997 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 03:39:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:31.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.282 252257 INFO nova.compute.manager [None req-01b3595f-95ef-4b88-b857-59c56b4eb0aa 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Pausing
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.283 252257 DEBUG nova.objects.instance [None req-01b3595f-95ef-4b88-b857-59c56b4eb0aa 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'flavor' on Instance uuid 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.319 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405571.3193567, 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.320 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] VM Paused (Lifecycle Event)
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.322 252257 DEBUG nova.compute.manager [None req-01b3595f-95ef-4b88-b857-59c56b4eb0aa 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.377 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.382 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:39:31 np0005539563 nova_compute[252253]: 2025-11-29 08:39:31.409 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 29 03:39:32 np0005539563 nova_compute[252253]: 2025-11-29 08:39:32.547 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:32.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 03:39:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539563 nova_compute[252253]: 2025-11-29 08:39:34.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Nov 29 03:39:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:34.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:35 np0005539563 nova_compute[252253]: 2025-11-29 08:39:35.920 252257 INFO nova.compute.manager [None req-f45e5cad-ecdd-4d97-8e3e-6c0bf99adbdb 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Get console output
Nov 29 03:39:35 np0005539563 nova_compute[252253]: 2025-11-29 08:39:35.924 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.337 252257 INFO nova.compute.manager [None req-ca46de6a-78a2-4f6f-b71b-02e12eb782cd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Unpausing
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.338 252257 DEBUG nova.objects.instance [None req-ca46de6a-78a2-4f6f-b71b-02e12eb782cd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'flavor' on Instance uuid 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.495 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405576.4952583, 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.495 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] VM Resumed (Lifecycle Event)
Nov 29 03:39:36 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.500 252257 DEBUG nova.virt.libvirt.guest [None req-ca46de6a-78a2-4f6f-b71b-02e12eb782cd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.500 252257 DEBUG nova.compute.manager [None req-ca46de6a-78a2-4f6f-b71b-02e12eb782cd 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.530 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:39:36 np0005539563 nova_compute[252253]: 2025-11-29 08:39:36.532 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:39:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 3.0 MiB/s wr, 106 op/s
Nov 29 03:39:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:36.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:37 np0005539563 nova_compute[252253]: 2025-11-29 08:39:37.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Nov 29 03:39:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:38.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:39 np0005539563 nova_compute[252253]: 2025-11-29 08:39:39.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:39 np0005539563 nova_compute[252253]: 2025-11-29 08:39:39.779 252257 INFO nova.compute.manager [None req-6963e09c-7bda-4143-b58a-47057af0049d 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Get console output
Nov 29 03:39:39 np0005539563 nova_compute[252253]: 2025-11-29 08:39:39.782 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.827 252257 DEBUG nova.compute.manager [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-changed-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.827 252257 DEBUG nova.compute.manager [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Refreshing instance network info cache due to event network-changed-42ad2c69-185b-46ac-ba56-15bd589c9c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.828 252257 DEBUG oslo_concurrency.lockutils [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.828 252257 DEBUG oslo_concurrency.lockutils [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.828 252257 DEBUG nova.network.neutron [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Refreshing network info cache for port 42ad2c69-185b-46ac-ba56-15bd589c9c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.887 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.888 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.888 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.888 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.889 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.889 252257 INFO nova.compute.manager [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Terminating instance
Nov 29 03:39:40 np0005539563 nova_compute[252253]: 2025-11-29 08:39:40.890 252257 DEBUG nova.compute.manager [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:39:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 214 KiB/s rd, 2.2 MiB/s wr, 61 op/s
Nov 29 03:39:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:40.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:41 np0005539563 kernel: tap42ad2c69-18 (unregistering): left promiscuous mode
Nov 29 03:39:41 np0005539563 NetworkManager[48981]: <info>  [1764405581.0378] device (tap42ad2c69-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:39:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:41Z|00778|binding|INFO|Releasing lport 42ad2c69-185b-46ac-ba56-15bd589c9c41 from this chassis (sb_readonly=0)
Nov 29 03:39:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:41Z|00779|binding|INFO|Setting lport 42ad2c69-185b-46ac-ba56-15bd589c9c41 down in Southbound
Nov 29 03:39:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:39:41Z|00780|binding|INFO|Removing iface tap42ad2c69-18 ovn-installed in OVS
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.049 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.056 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:95:81 10.100.0.12'], port_security=['fa:16:3e:2e:95:81 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'db4eb6f4-94a3-48cf-bbfe-2e542e978551', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f3c02391-33c5-4dc6-9e30-a8ea4f1f06d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=42ad2c69-185b-46ac-ba56-15bd589c9c41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.058 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 42ad2c69-185b-46ac-ba56-15bd589c9c41 in datapath 394ebb6b-ea36-4f10-a9a9-83350ba9a0ee unbound from our chassis
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.059 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:39:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.061 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[afa0e946-2f4c-4c37-b4d7-e51b95e517ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.061 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee namespace which is not needed anymore
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:39:41 np0005539563 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Nov 29 03:39:41 np0005539563 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000b5.scope: Consumed 14.337s CPU time.
Nov 29 03:39:41 np0005539563 systemd-machined[213024]: Machine qemu-89-instance-000000b5 terminated.
Nov 29 03:39:41 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [NOTICE]   (369713) : haproxy version is 2.8.14-c23fe91
Nov 29 03:39:41 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [NOTICE]   (369713) : path to executable is /usr/sbin/haproxy
Nov 29 03:39:41 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [WARNING]  (369713) : Exiting Master process...
Nov 29 03:39:41 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [WARNING]  (369713) : Exiting Master process...
Nov 29 03:39:41 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [ALERT]    (369713) : Current worker (369715) exited with code 143 (Terminated)
Nov 29 03:39:41 np0005539563 neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee[369709]: [WARNING]  (369713) : All workers exited. Exiting... (0)
Nov 29 03:39:41 np0005539563 systemd[1]: libpod-88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21.scope: Deactivated successfully.
Nov 29 03:39:41 np0005539563 podman[370839]: 2025-11-29 08:39:41.310032381 +0000 UTC m=+0.164098624 container died 88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.326 252257 INFO nova.virt.libvirt.driver [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Instance destroyed successfully.
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.327 252257 DEBUG nova.objects.instance [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.343 252257 DEBUG nova.virt.libvirt.vif [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:38:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-114419784',display_name='tempest-TestNetworkAdvancedServerOps-server-114419784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-114419784',id=181,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM86dRwr+/hL3YQDqfUe1N2xX+b9qjNeI304LzU46zce1gqAq9EJxKzKVBw43NzawtddoKkC4CmhvrbCNUf4DGEf47ZvlBIG3NSViUSjF5j3n4f59ugj/zCtdXPRWz37UQ==',key_name='tempest-TestNetworkAdvancedServerOps-1033424177',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-0dob1ar8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:36Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.344 252257 DEBUG nova.network.os_vif_util [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.345 252257 DEBUG nova.network.os_vif_util [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.345 252257 DEBUG os_vif [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:39:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21-userdata-shm.mount: Deactivated successfully.
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.349 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42ad2c69-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0307175e8143f4a37ef80335960f2c411d320caca6aa6248a7b36b0c84cc07ff-merged.mount: Deactivated successfully.
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.351 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.353 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.355 252257 INFO os_vif [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:95:81,bridge_name='br-int',has_traffic_filtering=True,id=42ad2c69-185b-46ac-ba56-15bd589c9c41,network=Network(394ebb6b-ea36-4f10-a9a9-83350ba9a0ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42ad2c69-18')#033[00m
Nov 29 03:39:41 np0005539563 podman[370839]: 2025-11-29 08:39:41.464537452 +0000 UTC m=+0.318603665 container cleanup 88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:39:41 np0005539563 systemd[1]: libpod-conmon-88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21.scope: Deactivated successfully.
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.500 252257 DEBUG nova.compute.manager [req-5677e20c-0e01-43c6-b87b-2219bd319e0e req-65ac4a09-601c-4a97-9e1b-680b5c8a3b3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-vif-unplugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.500 252257 DEBUG oslo_concurrency.lockutils [req-5677e20c-0e01-43c6-b87b-2219bd319e0e req-65ac4a09-601c-4a97-9e1b-680b5c8a3b3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.500 252257 DEBUG oslo_concurrency.lockutils [req-5677e20c-0e01-43c6-b87b-2219bd319e0e req-65ac4a09-601c-4a97-9e1b-680b5c8a3b3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.501 252257 DEBUG oslo_concurrency.lockutils [req-5677e20c-0e01-43c6-b87b-2219bd319e0e req-65ac4a09-601c-4a97-9e1b-680b5c8a3b3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.501 252257 DEBUG nova.compute.manager [req-5677e20c-0e01-43c6-b87b-2219bd319e0e req-65ac4a09-601c-4a97-9e1b-680b5c8a3b3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] No waiting events found dispatching network-vif-unplugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.501 252257 DEBUG nova.compute.manager [req-5677e20c-0e01-43c6-b87b-2219bd319e0e req-65ac4a09-601c-4a97-9e1b-680b5c8a3b3c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-vif-unplugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:39:41 np0005539563 podman[370895]: 2025-11-29 08:39:41.609097142 +0000 UTC m=+0.116247460 container remove 88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.618 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4c42b2ec-751a-4de0-a4ed-dc84a3660f8e]: (4, ('Sat Nov 29 08:39:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee (88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21)\n88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21\nSat Nov 29 08:39:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee (88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21)\n88d644b9129078b0039b7c56a738a6f9d31dc256ce836fa06fb32467a8422f21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.619 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff37e5d-0e4b-4d21-815e-662df1fb6d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.620 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap394ebb6b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.622 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:41 np0005539563 kernel: tap394ebb6b-e0: left promiscuous mode
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.635 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.642 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[90b1b8cb-3769-43d6-8c92-f4d75e35673c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.664 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8271b5-3b6c-45cb-a061-2ebefc567d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.665 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cc042643-b397-4d06-a9b8-fef24bb6353a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:41 np0005539563 nova_compute[252253]: 2025-11-29 08:39:41.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.684 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2baa9c-36c0-4dd4-82b5-351ded79da0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851156, 'reachable_time': 31596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370910, 'error': None, 'target': 'ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:41 np0005539563 systemd[1]: run-netns-ovnmeta\x2d394ebb6b\x2dea36\x2d4f10\x2da9a9\x2d83350ba9a0ee.mount: Deactivated successfully.
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.688 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-394ebb6b-ea36-4f10-a9a9-83350ba9a0ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:39:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:39:41.689 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[ae25fd9e-bd1c-49ad-a883-521e6294b280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.051 252257 INFO nova.virt.libvirt.driver [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Deleting instance files /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_del#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.052 252257 INFO nova.virt.libvirt.driver [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Deletion of /var/lib/nova/instances/744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee_del complete#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.181 252257 INFO nova.compute.manager [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Took 1.29 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.181 252257 DEBUG oslo.service.loopingcall [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.182 252257 DEBUG nova.compute.manager [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.182 252257 DEBUG nova.network.neutron [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.613 252257 DEBUG nova.network.neutron [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updated VIF entry in instance network info cache for port 42ad2c69-185b-46ac-ba56-15bd589c9c41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.613 252257 DEBUG nova.network.neutron [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updating instance_info_cache with network_info: [{"id": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "address": "fa:16:3e:2e:95:81", "network": {"id": "394ebb6b-ea36-4f10-a9a9-83350ba9a0ee", "bridge": "br-int", "label": "tempest-network-smoke--875570004", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42ad2c69-18", "ovs_interfaceid": "42ad2c69-185b-46ac-ba56-15bd589c9c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.640 252257 DEBUG oslo_concurrency.lockutils [req-9dace910-add3-4241-be27-b08474221e4f req-aaec964b-a1d0-4238-ae30-f6d9d03ca073 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.826 252257 DEBUG nova.network.neutron [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.857 252257 INFO nova.compute.manager [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.874 252257 DEBUG nova.compute.manager [req-946a689d-9bcb-4e2b-ae62-3992b66b100e req-3cb8f2b6-e04b-4998-a7ad-d1d5ee4d8663 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-vif-deleted-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.917 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.918 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 213 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Nov 29 03:39:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:42 np0005539563 nova_compute[252253]: 2025-11-29 08:39:42.983 252257 DEBUG oslo_concurrency.processutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838854134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.432 252257 DEBUG oslo_concurrency.processutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.440 252257 DEBUG nova.compute.provider_tree [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.459 252257 DEBUG nova.scheduler.client.report [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.484 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.510 252257 INFO nova.scheduler.client.report [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.580 252257 DEBUG oslo_concurrency.lockutils [None req-1f38508d-078e-4c5a-b997-51f75df133d7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.614 252257 DEBUG nova.compute.manager [req-9c6d022a-4494-4eb7-89fa-0ad791f4256c req-f35ddf40-49e9-4ef6-8036-74ba5624efe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.615 252257 DEBUG oslo_concurrency.lockutils [req-9c6d022a-4494-4eb7-89fa-0ad791f4256c req-f35ddf40-49e9-4ef6-8036-74ba5624efe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.616 252257 DEBUG oslo_concurrency.lockutils [req-9c6d022a-4494-4eb7-89fa-0ad791f4256c req-f35ddf40-49e9-4ef6-8036-74ba5624efe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.616 252257 DEBUG oslo_concurrency.lockutils [req-9c6d022a-4494-4eb7-89fa-0ad791f4256c req-f35ddf40-49e9-4ef6-8036-74ba5624efe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.616 252257 DEBUG nova.compute.manager [req-9c6d022a-4494-4eb7-89fa-0ad791f4256c req-f35ddf40-49e9-4ef6-8036-74ba5624efe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] No waiting events found dispatching network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:43 np0005539563 nova_compute[252253]: 2025-11-29 08:39:43.617 252257 WARNING nova.compute.manager [req-9c6d022a-4494-4eb7-89fa-0ad791f4256c req-f35ddf40-49e9-4ef6-8036-74ba5624efe2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Received unexpected event network-vif-plugged-42ad2c69-185b-46ac-ba56-15bd589c9c41 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:39:44 np0005539563 nova_compute[252253]: 2025-11-29 08:39:44.539 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 314 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Nov 29 03:39:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:44.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:46 np0005539563 nova_compute[252253]: 2025-11-29 08:39:46.352 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:46 np0005539563 nova_compute[252253]: 2025-11-29 08:39:46.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 761 KiB/s wr, 64 op/s
Nov 29 03:39:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:46.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:46 np0005539563 nova_compute[252253]: 2025-11-29 08:39:46.975 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 233 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 43 op/s
Nov 29 03:39:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:48.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:49 np0005539563 nova_compute[252253]: 2025-11-29 08:39:49.581 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Nov 29 03:39:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:50.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:51 np0005539563 nova_compute[252253]: 2025-11-29 08:39:51.355 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Nov 29 03:39:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:52.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:54 np0005539563 nova_compute[252253]: 2025-11-29 08:39:54.632 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Nov 29 03:39:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:54.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:55 np0005539563 nova_compute[252253]: 2025-11-29 08:39:55.694 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:56 np0005539563 nova_compute[252253]: 2025-11-29 08:39:56.322 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405581.321394, 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:56 np0005539563 nova_compute[252253]: 2025-11-29 08:39:56.323 252257 INFO nova.compute.manager [-] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:39:56 np0005539563 nova_compute[252253]: 2025-11-29 08:39:56.359 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:56 np0005539563 nova_compute[252253]: 2025-11-29 08:39:56.369 252257 DEBUG nova.compute.manager [None req-114ddfd4-cea2-4c01-87e4-b0d3781bb8fb - - - - - -] [instance: 744b3ec0-59a2-45b0-a1bb-3ad7d4d2acee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 3.5 KiB/s wr, 38 op/s
Nov 29 03:39:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:56.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Nov 29 03:39:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:58.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:39:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:39:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:59.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:39:59 np0005539563 nova_compute[252253]: 2025-11-29 08:39:59.635 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:40:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 29 03:40:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:01.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:01 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:40:01 np0005539563 nova_compute[252253]: 2025-11-29 08:40:01.362 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:01 np0005539563 podman[370996]: 2025-11-29 08:40:01.514673424 +0000 UTC m=+0.064759105 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:01 np0005539563 podman[370995]: 2025-11-29 08:40:01.516002151 +0000 UTC m=+0.066387110 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:40:01 np0005539563 podman[370997]: 2025-11-29 08:40:01.59853796 +0000 UTC m=+0.142838353 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 597 B/s wr, 17 op/s
Nov 29 03:40:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:02.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:04 np0005539563 nova_compute[252253]: 2025-11-29 08:40:04.637 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:40:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:04.945 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:04.945 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:04.946 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:05.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539563 nova_compute[252253]: 2025-11-29 08:40:06.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:40:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:06.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:07.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:07 np0005539563 nova_compute[252253]: 2025-11-29 08:40:07.971 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:07 np0005539563 nova_compute[252253]: 2025-11-29 08:40:07.972 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:08 np0005539563 nova_compute[252253]: 2025-11-29 08:40:08.006 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:40:08 np0005539563 nova_compute[252253]: 2025-11-29 08:40:08.086 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:08 np0005539563 nova_compute[252253]: 2025-11-29 08:40:08.087 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:08 np0005539563 nova_compute[252253]: 2025-11-29 08:40:08.094 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:40:08 np0005539563 nova_compute[252253]: 2025-11-29 08:40:08.095 252257 INFO nova.compute.claims [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:40:08 np0005539563 nova_compute[252253]: 2025-11-29 08:40:08.505 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:40:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:08.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859808057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.009 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.015 252257 DEBUG nova.compute.provider_tree [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:09.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.124 252257 DEBUG nova.scheduler.client.report [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.259 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.260 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.417 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.418 252257 DEBUG nova.network.neutron [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.453 252257 INFO nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.639 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.667 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:40:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.800 252257 DEBUG nova.policy [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.975 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.976 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:40:09 np0005539563 nova_compute[252253]: 2025-11-29 08:40:09.977 252257 INFO nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Creating image(s)#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.003 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.030 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.058 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.062 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.130 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.131 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.131 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.132 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.159 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.164 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 568bc9d5-f3a0-4da3-8498-39190619b609_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.676 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 568bc9d5-f3a0-4da3-8498-39190619b609_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.776 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:40:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 128 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 37 KiB/s wr, 24 op/s
Nov 29 03:40:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:10 np0005539563 nova_compute[252253]: 2025-11-29 08:40:10.982 252257 DEBUG nova.objects.instance [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 568bc9d5-f3a0-4da3-8498-39190619b609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.035 252257 DEBUG nova.network.neutron [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Successfully created port: 70963606-b079-4b58-9a76-ac862f1d57d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.053 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.053 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Ensure instance console log exists: /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.054 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.054 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.055 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:11.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:11 np0005539563 nova_compute[252253]: 2025-11-29 08:40:11.413 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539563 nova_compute[252253]: 2025-11-29 08:40:12.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:12 np0005539563 nova_compute[252253]: 2025-11-29 08:40:12.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:40:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 128 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 37 KiB/s wr, 22 op/s
Nov 29 03:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:40:12
Nov 29 03:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 03:40:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:40:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.013 252257 DEBUG nova.network.neutron [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Successfully updated port: 70963606-b079-4b58-9a76-ac862f1d57d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:40:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:13.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.117 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.118 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.118 252257 DEBUG nova.network.neutron [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.254 252257 DEBUG nova.compute.manager [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-changed-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.254 252257 DEBUG nova.compute.manager [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Refreshing instance network info cache due to event network-changed-70963606-b079-4b58-9a76-ac862f1d57d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:13 np0005539563 nova_compute[252253]: 2025-11-29 08:40:13.255 252257 DEBUG oslo_concurrency.lockutils [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.366 252257 DEBUG nova.network.neutron [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.680 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.809 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.809 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:40:14 np0005539563 nova_compute[252253]: 2025-11-29 08:40:14.809 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:40:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:15 np0005539563 nova_compute[252253]: 2025-11-29 08:40:15.987 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:15.987 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:15.988 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:40:16 np0005539563 nova_compute[252253]: 2025-11-29 08:40:16.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:40:16 np0005539563 nova_compute[252253]: 2025-11-29 08:40:16.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:40:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.605 252257 DEBUG nova.network.neutron [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updating instance_info_cache with network_info: [{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.690 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.690 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance network_info: |[{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.690 252257 DEBUG oslo_concurrency.lockutils [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.691 252257 DEBUG nova.network.neutron [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Refreshing network info cache for port 70963606-b079-4b58-9a76-ac862f1d57d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.693 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Start _get_guest_xml network_info=[{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.698 252257 WARNING nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.703 252257 DEBUG nova.virt.libvirt.host [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.704 252257 DEBUG nova.virt.libvirt.host [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.707 252257 DEBUG nova.virt.libvirt.host [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.707 252257 DEBUG nova.virt.libvirt.host [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.708 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.709 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.709 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.709 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.709 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.710 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.710 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.710 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.710 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.710 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.711 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.711 252257 DEBUG nova.virt.hardware [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.713 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.746 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.746 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.747 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.747 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:40:17 np0005539563 nova_compute[252253]: 2025-11-29 08:40:17.747 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196310453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254585785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.201 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.230 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.234 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.262 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.448 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.450 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4200MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.451 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.451 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.607 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 568bc9d5-f3a0-4da3-8498-39190619b609 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.607 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.608 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.670 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402171271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.727 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.732 252257 DEBUG nova.virt.libvirt.vif [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-629095894',display_name='tempest-TestNetworkAdvancedServerOps-server-629095894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-629095894',id=183,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH2yP/2QU6PTYj1t66d+HPSX+KV0oocmhp2xC8ea863goF8EdRO0kHNCQ8hPmkVgIieSkTxwq6yQ9k0Cd46h1ezo/BXcbLG1N4NFNsHkbuaGTtR0shkVSQPRC7Lr/Iy1hQ==',key_name='tempest-TestNetworkAdvancedServerOps-1090220151',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-dfmzmoms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:09Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=568bc9d5-f3a0-4da3-8498-39190619b609,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.733 252257 DEBUG nova.network.os_vif_util [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.736 252257 DEBUG nova.network.os_vif_util [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.739 252257 DEBUG nova.objects.instance [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 568bc9d5-f3a0-4da3-8498-39190619b609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.802 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <uuid>568bc9d5-f3a0-4da3-8498-39190619b609</uuid>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <name>instance-000000b7</name>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-629095894</nova:name>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:40:17</nova:creationTime>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <nova:port uuid="70963606-b079-4b58-9a76-ac862f1d57d3">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <entry name="serial">568bc9d5-f3a0-4da3-8498-39190619b609</entry>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <entry name="uuid">568bc9d5-f3a0-4da3-8498-39190619b609</entry>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/568bc9d5-f3a0-4da3-8498-39190619b609_disk">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/568bc9d5-f3a0-4da3-8498-39190619b609_disk.config">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:23:44:3a"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <target dev="tap70963606-b0"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/console.log" append="off"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:40:18 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:40:18 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:40:18 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:40:18 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.804 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Preparing to wait for external event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.805 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.805 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.805 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.806 252257 DEBUG nova.virt.libvirt.vif [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-629095894',display_name='tempest-TestNetworkAdvancedServerOps-server-629095894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-629095894',id=183,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH2yP/2QU6PTYj1t66d+HPSX+KV0oocmhp2xC8ea863goF8EdRO0kHNCQ8hPmkVgIieSkTxwq6yQ9k0Cd46h1ezo/BXcbLG1N4NFNsHkbuaGTtR0shkVSQPRC7Lr/Iy1hQ==',key_name='tempest-TestNetworkAdvancedServerOps-1090220151',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-dfmzmoms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:09Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=568bc9d5-f3a0-4da3-8498-39190619b609,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.807 252257 DEBUG nova.network.os_vif_util [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.808 252257 DEBUG nova.network.os_vif_util [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.808 252257 DEBUG os_vif [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.810 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.810 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.811 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.814 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.815 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70963606-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.815 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70963606-b0, col_values=(('external_ids', {'iface-id': '70963606-b079-4b58-9a76-ac862f1d57d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:44:3a', 'vm-uuid': '568bc9d5-f3a0-4da3-8498-39190619b609'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:18 np0005539563 NetworkManager[48981]: <info>  [1764405618.8181] manager: (tap70963606-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.820 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.824 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:18 np0005539563 nova_compute[252253]: 2025-11-29 08:40:18.825 252257 INFO os_vif [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0')#033[00m
Nov 29 03:40:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:40:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.090 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.091 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.091 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:23:44:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.091 252257 INFO nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Using config drive#033[00m
Nov 29 03:40:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042749142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.124 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.140 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.154 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.306 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.397 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.398 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:19 np0005539563 nova_compute[252253]: 2025-11-29 08:40:19.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:19.990 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:40:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:20 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d6a3649a-f42d-408c-9309-68fa5b87fc44 does not exist
Nov 29 03:40:20 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 35523cfe-9887-4a86-bf3c-b04db4e66f2d does not exist
Nov 29 03:40:20 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d69aa1ba-43da-4b85-9020-337b499c6931 does not exist
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:40:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.324 252257 INFO nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Creating config drive at /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/disk.config#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.338 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplc1_m_5f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.399 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.483 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplc1_m_5f" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.513 252257 DEBUG nova.storage.rbd_utils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 568bc9d5-f3a0-4da3-8498-39190619b609_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.517 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/disk.config 568bc9d5-f3a0-4da3-8498-39190619b609_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.641 252257 DEBUG nova.network.neutron [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updated VIF entry in instance network info cache for port 70963606-b079-4b58-9a76-ac862f1d57d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.642 252257 DEBUG nova.network.neutron [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updating instance_info_cache with network_info: [{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.663389789 +0000 UTC m=+0.037409581 container create d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:40:20 np0005539563 systemd[1]: Started libpod-conmon-d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6.scope.
Nov 29 03:40:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.737862638 +0000 UTC m=+0.111882450 container init d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.646471998 +0000 UTC m=+0.020491810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.745818775 +0000 UTC m=+0.119838577 container start d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.74891707 +0000 UTC m=+0.122936862 container attach d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:40:20 np0005539563 nova_compute[252253]: 2025-11-29 08:40:20.749 252257 DEBUG oslo_concurrency.lockutils [req-99280956-9005-470c-b9ae-38884491ce73 req-e10cc4a6-e7c5-4bd0-970a-2cfb904fafd9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:20 np0005539563 pedantic_roentgen[371754]: 167 167
Nov 29 03:40:20 np0005539563 systemd[1]: libpod-d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6.scope: Deactivated successfully.
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.75148932 +0000 UTC m=+0.125509112 container died d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:40:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fc67f0a78a890eed0dd6a5642ea110f4a3b6220ad957915d329bfffdcc7aec51-merged.mount: Deactivated successfully.
Nov 29 03:40:20 np0005539563 podman[371737]: 2025-11-29 08:40:20.793063943 +0000 UTC m=+0.167083745 container remove d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:20 np0005539563 systemd[1]: libpod-conmon-d9c952746ebeea624b4b4ad47b96a7b673ddb23b9d878bcb551390858ec619a6.scope: Deactivated successfully.
Nov 29 03:40:20 np0005539563 podman[371778]: 2025-11-29 08:40:20.939204786 +0000 UTC m=+0.041545394 container create 762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jepsen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:40:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:40:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:20.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:20 np0005539563 systemd[1]: Started libpod-conmon-762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181.scope.
Nov 29 03:40:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:40:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e86bcc5cd6bb47608c24ea33d0cef989531df2011829b7ce0e01f8d279ed1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:21 np0005539563 podman[371778]: 2025-11-29 08:40:20.916637151 +0000 UTC m=+0.018977739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e86bcc5cd6bb47608c24ea33d0cef989531df2011829b7ce0e01f8d279ed1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e86bcc5cd6bb47608c24ea33d0cef989531df2011829b7ce0e01f8d279ed1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e86bcc5cd6bb47608c24ea33d0cef989531df2011829b7ce0e01f8d279ed1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e86bcc5cd6bb47608c24ea33d0cef989531df2011829b7ce0e01f8d279ed1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:21 np0005539563 podman[371778]: 2025-11-29 08:40:21.028404187 +0000 UTC m=+0.130744755 container init 762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:40:21 np0005539563 podman[371778]: 2025-11-29 08:40:21.036184389 +0000 UTC m=+0.138524957 container start 762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jepsen, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:40:21 np0005539563 podman[371778]: 2025-11-29 08:40:21.041358571 +0000 UTC m=+0.143699169 container attach 762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:40:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.173 252257 DEBUG oslo_concurrency.processutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/disk.config 568bc9d5-f3a0-4da3-8498-39190619b609_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.174 252257 INFO nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Deleting local config drive /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609/disk.config because it was imported into RBD.#033[00m
Nov 29 03:40:21 np0005539563 kernel: tap70963606-b0: entered promiscuous mode
Nov 29 03:40:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:21Z|00781|binding|INFO|Claiming lport 70963606-b079-4b58-9a76-ac862f1d57d3 for this chassis.
Nov 29 03:40:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:21Z|00782|binding|INFO|70963606-b079-4b58-9a76-ac862f1d57d3: Claiming fa:16:3e:23:44:3a 10.100.0.6
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.242 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 NetworkManager[48981]: <info>  [1764405621.2445] manager: (tap70963606-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Nov 29 03:40:21 np0005539563 systemd-udevd[371815]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:40:21 np0005539563 NetworkManager[48981]: <info>  [1764405621.2927] device (tap70963606-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:40:21 np0005539563 NetworkManager[48981]: <info>  [1764405621.2940] device (tap70963606-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:40:21 np0005539563 systemd-machined[213024]: New machine qemu-90-instance-000000b7.
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.312 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.320 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:21Z|00783|binding|INFO|Setting lport 70963606-b079-4b58-9a76-ac862f1d57d3 ovn-installed in OVS
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.323 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 systemd[1]: Started Virtual Machine qemu-90-instance-000000b7.
Nov 29 03:40:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:21Z|00784|binding|INFO|Setting lport 70963606-b079-4b58-9a76-ac862f1d57d3 up in Southbound
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.454 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:44:3a 10.100.0.6'], port_security=['fa:16:3e:23:44:3a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '568bc9d5-f3a0-4da3-8498-39190619b609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa203723-7c73-4546-9d5a-2fa3edd54128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e27b4ec5-16ce-4129-9a2c-16195985164b, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=70963606-b079-4b58-9a76-ac862f1d57d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.455 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 70963606-b079-4b58-9a76-ac862f1d57d3 in datapath db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b bound to our chassis#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.456 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.468 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[67a385bd-3942-4be2-9dc8-909d6d7f9186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.469 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb8f1b92-c1 in ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.471 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb8f1b92-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.471 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b00d36-02ad-4c28-a818-9fb1002e037b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.471 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[52f10e30-acd1-4317-871e-e4fb2557ab65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.484 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[54f765b2-194b-44b9-bd35-58207400b83f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.497 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[edfa008a-368c-4f0d-9579-bed3ce5460fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.525 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[da191129-e5c8-4c48-90dd-69a95b1c4af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.531 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[06d3cbb0-37a2-46b3-bdf1-0c1f82267cd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 NetworkManager[48981]: <info>  [1764405621.5319] manager: (tapdb8f1b92-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/341)
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.563 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[aa058b59-0d77-46ca-a6c3-5c0bca73f60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.565 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[bd323207-9319-4d4c-8ee7-527356cdccf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 NetworkManager[48981]: <info>  [1764405621.5886] device (tapdb8f1b92-c0): carrier: link connected
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.594 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[352981f3-9372-471f-aa79-c26e2d08a509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.610 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d13e083a-a16d-4004-8750-3636269b9f84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb8f1b92-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:56:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858936, 'reachable_time': 16358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371850, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.625 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2206ba0b-a4ae-4cb0-9d9f-8f10efc86e9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:5684'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858936, 'tstamp': 858936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371851, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.641 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[376d4543-ef15-407c-8753-1f32a7f84f15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb8f1b92-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:56:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858936, 'reachable_time': 16358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371852, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.673 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[265a4a6a-72db-42fb-bea7-f281b6263dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.709 252257 DEBUG nova.compute.manager [req-969441ec-0c19-4b32-9bb4-bea3c407e932 req-746ab87a-e7bd-405a-bb82-d53fc74c2231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.709 252257 DEBUG oslo_concurrency.lockutils [req-969441ec-0c19-4b32-9bb4-bea3c407e932 req-746ab87a-e7bd-405a-bb82-d53fc74c2231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.709 252257 DEBUG oslo_concurrency.lockutils [req-969441ec-0c19-4b32-9bb4-bea3c407e932 req-746ab87a-e7bd-405a-bb82-d53fc74c2231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.710 252257 DEBUG oslo_concurrency.lockutils [req-969441ec-0c19-4b32-9bb4-bea3c407e932 req-746ab87a-e7bd-405a-bb82-d53fc74c2231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.710 252257 DEBUG nova.compute.manager [req-969441ec-0c19-4b32-9bb4-bea3c407e932 req-746ab87a-e7bd-405a-bb82-d53fc74c2231 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Processing event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.730 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed7fe2f-f215-4691-8ed2-af3bf4c8e996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.731 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb8f1b92-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.732 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.732 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb8f1b92-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.733 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 NetworkManager[48981]: <info>  [1764405621.7344] manager: (tapdb8f1b92-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Nov 29 03:40:21 np0005539563 kernel: tapdb8f1b92-c0: entered promiscuous mode
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.737 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.738 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb8f1b92-c0, col_values=(('external_ids', {'iface-id': '1d3f61ec-40d0-4855-994b-dbfc2de17cb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:21Z|00785|binding|INFO|Releasing lport 1d3f61ec-40d0-4855-994b-dbfc2de17cb0 from this chassis (sb_readonly=0)
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.752 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.754 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.755 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bdf89d-bce6-48d0-98a7-9b7e8625b8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.756 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.pid.haproxy
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:40:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:21.756 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'env', 'PROCESS_TAG=haproxy-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:40:21 np0005539563 determined_jepsen[371798]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:40:21 np0005539563 determined_jepsen[371798]: --> relative data size: 1.0
Nov 29 03:40:21 np0005539563 determined_jepsen[371798]: --> All data devices are unavailable
Nov 29 03:40:21 np0005539563 systemd[1]: libpod-762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181.scope: Deactivated successfully.
Nov 29 03:40:21 np0005539563 podman[371778]: 2025-11-29 08:40:21.86209398 +0000 UTC m=+0.964434558 container died 762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:40:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b3e86bcc5cd6bb47608c24ea33d0cef989531df2011829b7ce0e01f8d279ed1b-merged.mount: Deactivated successfully.
Nov 29 03:40:21 np0005539563 podman[371778]: 2025-11-29 08:40:21.925393424 +0000 UTC m=+1.027733992 container remove 762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jepsen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:21 np0005539563 systemd[1]: libpod-conmon-762438cda4a665f23a2e958e6b5141c06ec9f94a6799cfd52b378b8b526ac181.scope: Deactivated successfully.
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.966 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405621.965821, 568bc9d5-f3a0-4da3-8498-39190619b609 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.967 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] VM Started (Lifecycle Event)#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.969 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.973 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.978 252257 INFO nova.virt.libvirt.driver [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance spawned successfully.#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.978 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:40:21 np0005539563 nova_compute[252253]: 2025-11-29 08:40:21.996 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.002 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.005 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.005 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.006 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.006 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.006 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.007 252257 DEBUG nova.virt.libvirt.driver [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.031 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.032 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405621.967042, 568bc9d5-f3a0-4da3-8498-39190619b609 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.033 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.067 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.070 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405621.9720132, 568bc9d5-f3a0-4da3-8498-39190619b609 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.070 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.078 252257 INFO nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Took 12.10 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.079 252257 DEBUG nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.087 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.095 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.122 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.146 252257 INFO nova.compute.manager [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Took 14.09 seconds to build instance.#033[00m
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.164 252257 DEBUG oslo_concurrency.lockutils [None req-ce5ee0a0-096a-4df3-afd8-6cf759898357 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:22 np0005539563 podman[372001]: 2025-11-29 08:40:22.166542577 +0000 UTC m=+0.060410377 container create 037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:40:22 np0005539563 systemd[1]: Started libpod-conmon-037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9.scope.
Nov 29 03:40:22 np0005539563 podman[372001]: 2025-11-29 08:40:22.134322369 +0000 UTC m=+0.028190179 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:40:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c98b23d439e6375fbc655791e85eea89dc44439078e951474d7cc6247c0837f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:22 np0005539563 podman[372001]: 2025-11-29 08:40:22.256653203 +0000 UTC m=+0.150521023 container init 037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:40:22 np0005539563 podman[372001]: 2025-11-29 08:40:22.263729136 +0000 UTC m=+0.157596936 container start 037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:40:22 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [NOTICE]   (372066) : New worker (372068) forked
Nov 29 03:40:22 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [NOTICE]   (372066) : Loading success.
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.564526664 +0000 UTC m=+0.051372341 container create 6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_archimedes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:40:22 np0005539563 systemd[1]: Started libpod-conmon-6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be.scope.
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.545868406 +0000 UTC m=+0.032714103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.660843629 +0000 UTC m=+0.147689326 container init 6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_archimedes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.668763245 +0000 UTC m=+0.155608922 container start 6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.671690005 +0000 UTC m=+0.158535682 container attach 6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:40:22 np0005539563 heuristic_archimedes[372131]: 167 167
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.677109183 +0000 UTC m=+0.163954860 container died 6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:40:22 np0005539563 systemd[1]: libpod-6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be.scope: Deactivated successfully.
Nov 29 03:40:22 np0005539563 nova_compute[252253]: 2025-11-29 08:40:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-91cca176232b553fbd11ff8ab9aa60b81869febc343243ed2ce1ea41504eee1c-merged.mount: Deactivated successfully.
Nov 29 03:40:22 np0005539563 podman[372115]: 2025-11-29 08:40:22.712820395 +0000 UTC m=+0.199666062 container remove 6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_archimedes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:40:22 np0005539563 systemd[1]: libpod-conmon-6e334eb536557e9cfe5c1b10600492452d0d561791a20e5068fdfbab1273f0be.scope: Deactivated successfully.
Nov 29 03:40:22 np0005539563 podman[372155]: 2025-11-29 08:40:22.909135176 +0000 UTC m=+0.049953112 container create 8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:40:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Nov 29 03:40:22 np0005539563 systemd[1]: Started libpod-conmon-8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933.scope.
Nov 29 03:40:22 np0005539563 podman[372155]: 2025-11-29 08:40:22.884176276 +0000 UTC m=+0.024994242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986f1f301fba4725d7c540a59c1a2e389e7637f8429f23f95c21f5d89900d08b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986f1f301fba4725d7c540a59c1a2e389e7637f8429f23f95c21f5d89900d08b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986f1f301fba4725d7c540a59c1a2e389e7637f8429f23f95c21f5d89900d08b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986f1f301fba4725d7c540a59c1a2e389e7637f8429f23f95c21f5d89900d08b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:23 np0005539563 podman[372155]: 2025-11-29 08:40:23.008981467 +0000 UTC m=+0.149799423 container init 8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:40:23 np0005539563 podman[372155]: 2025-11-29 08:40:23.017761486 +0000 UTC m=+0.158579422 container start 8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:40:23 np0005539563 podman[372155]: 2025-11-29 08:40:23.022459815 +0000 UTC m=+0.163277781 container attach 8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:40:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]: {
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:    "0": [
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:        {
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "devices": [
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "/dev/loop3"
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            ],
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "lv_name": "ceph_lv0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "lv_size": "7511998464",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "name": "ceph_lv0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "tags": {
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.cluster_name": "ceph",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.crush_device_class": "",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.encrypted": "0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.osd_id": "0",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.type": "block",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:                "ceph.vdo": "0"
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            },
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "type": "block",
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:            "vg_name": "ceph_vg0"
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:        }
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]:    ]
Nov 29 03:40:23 np0005539563 vigilant_tesla[372172]: }
Nov 29 03:40:23 np0005539563 systemd[1]: libpod-8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933.scope: Deactivated successfully.
Nov 29 03:40:23 np0005539563 podman[372155]: 2025-11-29 08:40:23.807930422 +0000 UTC m=+0.948748358 container died 8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.874 252257 DEBUG nova.compute.manager [req-fe522256-c743-46b6-a25d-de40a227dc1c req-c16fd550-1178-43fb-8f82-aeb06cee7355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.876 252257 DEBUG oslo_concurrency.lockutils [req-fe522256-c743-46b6-a25d-de40a227dc1c req-c16fd550-1178-43fb-8f82-aeb06cee7355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.876 252257 DEBUG oslo_concurrency.lockutils [req-fe522256-c743-46b6-a25d-de40a227dc1c req-c16fd550-1178-43fb-8f82-aeb06cee7355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.876 252257 DEBUG oslo_concurrency.lockutils [req-fe522256-c743-46b6-a25d-de40a227dc1c req-c16fd550-1178-43fb-8f82-aeb06cee7355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.876 252257 DEBUG nova.compute.manager [req-fe522256-c743-46b6-a25d-de40a227dc1c req-c16fd550-1178-43fb-8f82-aeb06cee7355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.877 252257 WARNING nova.compute.manager [req-fe522256-c743-46b6-a25d-de40a227dc1c req-c16fd550-1178-43fb-8f82-aeb06cee7355 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received unexpected event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:23 np0005539563 nova_compute[252253]: 2025-11-29 08:40:23.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:40:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-986f1f301fba4725d7c540a59c1a2e389e7637f8429f23f95c21f5d89900d08b-merged.mount: Deactivated successfully.
Nov 29 03:40:23 np0005539563 podman[372155]: 2025-11-29 08:40:23.909246322 +0000 UTC m=+1.050064258 container remove 8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:40:23 np0005539563 systemd[1]: libpod-conmon-8835c1bca61a459d0850fbcdf0354f6c7dd8a827fdcf5e4df744fbb2013a2933.scope: Deactivated successfully.
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.488127011 +0000 UTC m=+0.032006033 container create 8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:40:24 np0005539563 systemd[1]: Started libpod-conmon-8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601.scope.
Nov 29 03:40:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.55851381 +0000 UTC m=+0.102392852 container init 8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.565151991 +0000 UTC m=+0.109031013 container start 8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shockley, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.568813941 +0000 UTC m=+0.112692993 container attach 8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shockley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:40:24 np0005539563 goofy_shockley[372349]: 167 167
Nov 29 03:40:24 np0005539563 systemd[1]: libpod-8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601.scope: Deactivated successfully.
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.570127066 +0000 UTC m=+0.114006088 container died 8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.474375436 +0000 UTC m=+0.018254478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1216c6456ac758db51f9211d4207234b11371807827b2db5213d5a47b6c93813-merged.mount: Deactivated successfully.
Nov 29 03:40:24 np0005539563 podman[372334]: 2025-11-29 08:40:24.615419791 +0000 UTC m=+0.159298813 container remove 8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:40:24 np0005539563 systemd[1]: libpod-conmon-8c61f72df6822ea274dc240173f3e0514b9778d974a75afa45c1f079a036a601.scope: Deactivated successfully.
Nov 29 03:40:24 np0005539563 nova_compute[252253]: 2025-11-29 08:40:24.684 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:24 np0005539563 podman[372373]: 2025-11-29 08:40:24.769074678 +0000 UTC m=+0.035935959 container create b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:40:24 np0005539563 systemd[1]: Started libpod-conmon-b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b.scope.
Nov 29 03:40:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb5f1c6bb2830cb558b19c43d13c126e1c2e2aaf584d0159c57aac0a3da476/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb5f1c6bb2830cb558b19c43d13c126e1c2e2aaf584d0159c57aac0a3da476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb5f1c6bb2830cb558b19c43d13c126e1c2e2aaf584d0159c57aac0a3da476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb5f1c6bb2830cb558b19c43d13c126e1c2e2aaf584d0159c57aac0a3da476/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:24 np0005539563 podman[372373]: 2025-11-29 08:40:24.75262674 +0000 UTC m=+0.019488031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:24 np0005539563 podman[372373]: 2025-11-29 08:40:24.862775672 +0000 UTC m=+0.129636953 container init b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:40:24 np0005539563 podman[372373]: 2025-11-29 08:40:24.871627994 +0000 UTC m=+0.138489275 container start b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:40:24 np0005539563 podman[372373]: 2025-11-29 08:40:24.876274341 +0000 UTC m=+0.143135642 container attach b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:40:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1012 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Nov 29 03:40:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:25 np0005539563 nova_compute[252253]: 2025-11-29 08:40:25.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:25 np0005539563 silly_curie[372389]: {
Nov 29 03:40:25 np0005539563 silly_curie[372389]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:40:25 np0005539563 silly_curie[372389]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:40:25 np0005539563 silly_curie[372389]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:40:25 np0005539563 silly_curie[372389]:        "osd_id": 0,
Nov 29 03:40:25 np0005539563 silly_curie[372389]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:40:25 np0005539563 silly_curie[372389]:        "type": "bluestore"
Nov 29 03:40:25 np0005539563 silly_curie[372389]:    }
Nov 29 03:40:25 np0005539563 silly_curie[372389]: }
Nov 29 03:40:25 np0005539563 systemd[1]: libpod-b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b.scope: Deactivated successfully.
Nov 29 03:40:25 np0005539563 podman[372373]: 2025-11-29 08:40:25.721534347 +0000 UTC m=+0.988395628 container died b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:40:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-eedb5f1c6bb2830cb558b19c43d13c126e1c2e2aaf584d0159c57aac0a3da476-merged.mount: Deactivated successfully.
Nov 29 03:40:25 np0005539563 podman[372373]: 2025-11-29 08:40:25.775802467 +0000 UTC m=+1.042663748 container remove b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:40:25 np0005539563 systemd[1]: libpod-conmon-b8d63d479f11214ce9633cc2bbd38c5ec517f2cf891a742dc91f519116f1634b.scope: Deactivated successfully.
Nov 29 03:40:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:40:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:40:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:25 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 79f94a92-fc90-4b03-9274-34a0de9ef1cc does not exist
Nov 29 03:40:25 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9fd77483-53a0-4edb-8cf6-559a0be71035 does not exist
Nov 29 03:40:25 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2ada55f6-d0a1-4b41-9fc4-1c0bb0fcf006 does not exist
Nov 29 03:40:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:40:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 56 op/s
Nov 29 03:40:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:26.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:27.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:27 np0005539563 NetworkManager[48981]: <info>  [1764405627.7770] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Nov 29 03:40:27 np0005539563 nova_compute[252253]: 2025-11-29 08:40:27.776 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:27 np0005539563 NetworkManager[48981]: <info>  [1764405627.7780] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Nov 29 03:40:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:27Z|00786|binding|INFO|Releasing lport 1d3f61ec-40d0-4855-994b-dbfc2de17cb0 from this chassis (sb_readonly=0)
Nov 29 03:40:27 np0005539563 nova_compute[252253]: 2025-11-29 08:40:27.860 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:27 np0005539563 nova_compute[252253]: 2025-11-29 08:40:27.871 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1871173452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1871173452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:28 np0005539563 nova_compute[252253]: 2025-11-29 08:40:28.570 252257 DEBUG nova.compute.manager [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-changed-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:28 np0005539563 nova_compute[252253]: 2025-11-29 08:40:28.570 252257 DEBUG nova.compute.manager [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Refreshing instance network info cache due to event network-changed-70963606-b079-4b58-9a76-ac862f1d57d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:28 np0005539563 nova_compute[252253]: 2025-11-29 08:40:28.571 252257 DEBUG oslo_concurrency.lockutils [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:28 np0005539563 nova_compute[252253]: 2025-11-29 08:40:28.571 252257 DEBUG oslo_concurrency.lockutils [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:28 np0005539563 nova_compute[252253]: 2025-11-29 08:40:28.571 252257 DEBUG nova.network.neutron [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Refreshing network info cache for port 70963606-b079-4b58-9a76-ac862f1d57d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:28 np0005539563 nova_compute[252253]: 2025-11-29 08:40:28.880 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:40:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:29.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:29 np0005539563 nova_compute[252253]: 2025-11-29 08:40:29.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:30 np0005539563 nova_compute[252253]: 2025-11-29 08:40:30.849 252257 DEBUG nova.network.neutron [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updated VIF entry in instance network info cache for port 70963606-b079-4b58-9a76-ac862f1d57d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:30 np0005539563 nova_compute[252253]: 2025-11-29 08:40:30.850 252257 DEBUG nova.network.neutron [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updating instance_info_cache with network_info: [{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:30 np0005539563 nova_compute[252253]: 2025-11-29 08:40:30.881 252257 DEBUG oslo_concurrency.lockutils [req-d0111faf-314a-444e-b8aa-d571e6265640 req-e86ad4f2-cbf2-44c3-846c-32cb962ef138 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:40:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:31.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:32 np0005539563 podman[372530]: 2025-11-29 08:40:32.528655354 +0000 UTC m=+0.075447308 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:40:32 np0005539563 podman[372529]: 2025-11-29 08:40:32.5431895 +0000 UTC m=+0.089260903 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:32 np0005539563 podman[372531]: 2025-11-29 08:40:32.544927997 +0000 UTC m=+0.084671908 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:40:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:40:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:32.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:33.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:33 np0005539563 nova_compute[252253]: 2025-11-29 08:40:33.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:34 np0005539563 nova_compute[252253]: 2025-11-29 08:40:34.742 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 186 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 967 KiB/s wr, 88 op/s
Nov 29 03:40:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:34.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:35.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:35Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:44:3a 10.100.0.6
Nov 29 03:40:35 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:35Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:44:3a 10.100.0.6
Nov 29 03:40:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1022 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Nov 29 03:40:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:36.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:37.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:38 np0005539563 nova_compute[252253]: 2025-11-29 08:40:38.885 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Nov 29 03:40:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:39.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:39.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:39 np0005539563 nova_compute[252253]: 2025-11-29 08:40:39.809 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 29 03:40:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:41.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:41.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.145 252257 INFO nova.compute.manager [None req-e26f7577-2f22-45b4-9bb9-76716417dff7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Get console output#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.151 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.495 252257 DEBUG oslo_concurrency.lockutils [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.495 252257 DEBUG oslo_concurrency.lockutils [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.496 252257 INFO nova.compute.manager [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Rebooting instance#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.515 252257 DEBUG oslo_concurrency.lockutils [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.515 252257 DEBUG oslo_concurrency.lockutils [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:42 np0005539563 nova_compute[252253]: 2025-11-29 08:40:42.515 252257 DEBUG nova.network.neutron [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:40:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 29 03:40:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:43.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:43 np0005539563 nova_compute[252253]: 2025-11-29 08:40:43.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:40:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 51K writes, 197K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 51K writes, 18K syncs, 2.75 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4693 writes, 18K keys, 4693 commit groups, 1.0 writes per commit group, ingest: 17.24 MB, 0.03 MB/s#012Interval WAL: 4694 writes, 1909 syncs, 2.46 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:40:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:44 np0005539563 nova_compute[252253]: 2025-11-29 08:40:44.812 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Nov 29 03:40:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:45.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 03:40:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:45.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 03:40:45 np0005539563 nova_compute[252253]: 2025-11-29 08:40:45.476 252257 DEBUG nova.network.neutron [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updating instance_info_cache with network_info: [{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:45 np0005539563 nova_compute[252253]: 2025-11-29 08:40:45.500 252257 DEBUG oslo_concurrency.lockutils [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:45 np0005539563 nova_compute[252253]: 2025-11-29 08:40:45.502 252257 DEBUG nova.compute.manager [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 148 op/s
Nov 29 03:40:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:47.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:47.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:47 np0005539563 kernel: tap70963606-b0 (unregistering): left promiscuous mode
Nov 29 03:40:47 np0005539563 NetworkManager[48981]: <info>  [1764405647.8333] device (tap70963606-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:40:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:47Z|00787|binding|INFO|Releasing lport 70963606-b079-4b58-9a76-ac862f1d57d3 from this chassis (sb_readonly=0)
Nov 29 03:40:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:47Z|00788|binding|INFO|Setting lport 70963606-b079-4b58-9a76-ac862f1d57d3 down in Southbound
Nov 29 03:40:47 np0005539563 nova_compute[252253]: 2025-11-29 08:40:47.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:47 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:47Z|00789|binding|INFO|Removing iface tap70963606-b0 ovn-installed in OVS
Nov 29 03:40:47 np0005539563 nova_compute[252253]: 2025-11-29 08:40:47.844 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:47 np0005539563 nova_compute[252253]: 2025-11-29 08:40:47.845 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:47.857 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:44:3a 10.100.0.6'], port_security=['fa:16:3e:23:44:3a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '568bc9d5-f3a0-4da3-8498-39190619b609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa203723-7c73-4546-9d5a-2fa3edd54128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e27b4ec5-16ce-4129-9a2c-16195985164b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=70963606-b079-4b58-9a76-ac862f1d57d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:47.859 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 70963606-b079-4b58-9a76-ac862f1d57d3 in datapath db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b unbound from our chassis#033[00m
Nov 29 03:40:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:47.860 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:40:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:47.862 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[87d27003-930c-43d0-87eb-1eeeccb9b492]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:47.863 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b namespace which is not needed anymore#033[00m
Nov 29 03:40:47 np0005539563 nova_compute[252253]: 2025-11-29 08:40:47.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:47 np0005539563 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Nov 29 03:40:47 np0005539563 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b7.scope: Consumed 14.567s CPU time.
Nov 29 03:40:47 np0005539563 systemd-machined[213024]: Machine qemu-90-instance-000000b7 terminated.
Nov 29 03:40:48 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [NOTICE]   (372066) : haproxy version is 2.8.14-c23fe91
Nov 29 03:40:48 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [NOTICE]   (372066) : path to executable is /usr/sbin/haproxy
Nov 29 03:40:48 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [WARNING]  (372066) : Exiting Master process...
Nov 29 03:40:48 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [ALERT]    (372066) : Current worker (372068) exited with code 143 (Terminated)
Nov 29 03:40:48 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372061]: [WARNING]  (372066) : All workers exited. Exiting... (0)
Nov 29 03:40:48 np0005539563 systemd[1]: libpod-037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539563 podman[372667]: 2025-11-29 08:40:48.021050726 +0000 UTC m=+0.060963543 container died 037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:40:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3c98b23d439e6375fbc655791e85eea89dc44439078e951474d7cc6247c0837f-merged.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539563 podman[372667]: 2025-11-29 08:40:48.058973169 +0000 UTC m=+0.098886026 container cleanup 037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:40:48 np0005539563 systemd[1]: libpod-conmon-037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539563 podman[372707]: 2025-11-29 08:40:48.15291582 +0000 UTC m=+0.059690499 container remove 037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.159 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d806526a-52da-4b26-944a-8bdd3dcc8c08]: (4, ('Sat Nov 29 08:40:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b (037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9)\n037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9\nSat Nov 29 08:40:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b (037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9)\n037ffb43d7a2051738bce0964246b245bb2c887a8b97630a870ce4ef0c4a73c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.161 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fe428e90-482d-4a69-878a-32192eaa6d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.162 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb8f1b92-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539563 kernel: tapdb8f1b92-c0: left promiscuous mode
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.191 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa080dc2-3d7b-46fe-bd77-2b9d84cc3627]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.211 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[84ce3aa0-b6e2-4a1c-abda-2a9a43af3339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.213 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9c366d-3593-4aec-ab25-d3cdcd0091b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.229 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[68a7c050-009b-41f9-9c60-c929d46fd011]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858929, 'reachable_time': 42912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372727, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 systemd[1]: run-netns-ovnmeta\x2ddb8f1b92\x2dc9f5\x2d4b99\x2db81d\x2d74e2e10c3f6b.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.233 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.233 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[d22fbeeb-ace6-47e3-9026-125d5c88e0be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.540 252257 DEBUG nova.compute.manager [req-6c2c66c0-ead3-4b35-a7ce-79136926e933 req-ba633a9c-7d11-40c9-b830-0e69ac973c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-unplugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.540 252257 DEBUG oslo_concurrency.lockutils [req-6c2c66c0-ead3-4b35-a7ce-79136926e933 req-ba633a9c-7d11-40c9-b830-0e69ac973c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.541 252257 DEBUG oslo_concurrency.lockutils [req-6c2c66c0-ead3-4b35-a7ce-79136926e933 req-ba633a9c-7d11-40c9-b830-0e69ac973c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.541 252257 DEBUG oslo_concurrency.lockutils [req-6c2c66c0-ead3-4b35-a7ce-79136926e933 req-ba633a9c-7d11-40c9-b830-0e69ac973c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.541 252257 DEBUG nova.compute.manager [req-6c2c66c0-ead3-4b35-a7ce-79136926e933 req-ba633a9c-7d11-40c9-b830-0e69ac973c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-unplugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.541 252257 WARNING nova.compute.manager [req-6c2c66c0-ead3-4b35-a7ce-79136926e933 req-ba633a9c-7d11-40c9-b830-0e69ac973c57 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received unexpected event network-vif-unplugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with vm_state active and task_state reboot_started.#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.608 252257 INFO nova.virt.libvirt.driver [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance shutdown successfully.#033[00m
Nov 29 03:40:48 np0005539563 kernel: tap70963606-b0: entered promiscuous mode
Nov 29 03:40:48 np0005539563 systemd-udevd[372646]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:40:48 np0005539563 NetworkManager[48981]: <info>  [1764405648.6744] manager: (tap70963606-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Nov 29 03:40:48 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:48Z|00790|binding|INFO|Claiming lport 70963606-b079-4b58-9a76-ac862f1d57d3 for this chassis.
Nov 29 03:40:48 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:48Z|00791|binding|INFO|70963606-b079-4b58-9a76-ac862f1d57d3: Claiming fa:16:3e:23:44:3a 10.100.0.6
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.675 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.684 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:44:3a 10.100.0.6'], port_security=['fa:16:3e:23:44:3a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '568bc9d5-f3a0-4da3-8498-39190619b609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'aa203723-7c73-4546-9d5a-2fa3edd54128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e27b4ec5-16ce-4129-9a2c-16195985164b, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=70963606-b079-4b58-9a76-ac862f1d57d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.686 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 70963606-b079-4b58-9a76-ac862f1d57d3 in datapath db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b bound to our chassis#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.687 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b#033[00m
Nov 29 03:40:48 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:48Z|00792|binding|INFO|Setting lport 70963606-b079-4b58-9a76-ac862f1d57d3 ovn-installed in OVS
Nov 29 03:40:48 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:48Z|00793|binding|INFO|Setting lport 70963606-b079-4b58-9a76-ac862f1d57d3 up in Southbound
Nov 29 03:40:48 np0005539563 NetworkManager[48981]: <info>  [1764405648.6922] device (tap70963606-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539563 NetworkManager[48981]: <info>  [1764405648.6953] device (tap70963606-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.695 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.698 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d62abc88-d88e-4f80-aa7c-4fd0d12fcb28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.699 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb8f1b92-c1 in ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.702 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb8f1b92-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.702 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b27af4-8ac5-4841-88c1-397c9ca6c154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.702 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8571d87c-fc98-4234-a7f6-1a1f378dbb6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.715 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd5ee23-e53d-4db2-852b-a560a39b4b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 systemd-machined[213024]: New machine qemu-91-instance-000000b7.
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.729 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[90248aa8-6b0c-4c96-a6ab-8d20c8c9cd4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 systemd[1]: Started Virtual Machine qemu-91-instance-000000b7.
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.764 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd8fc1e-8a64-4067-83dc-09244d123fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.770 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ddba597d-aec7-460e-aec2-4ea495a84fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 NetworkManager[48981]: <info>  [1764405648.7722] manager: (tapdb8f1b92-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/346)
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.806 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2438ba-57af-461c-915a-93cc881c054e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.809 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3a69cec2-33c4-477d-8d59-c878586c1d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 NetworkManager[48981]: <info>  [1764405648.8413] device (tapdb8f1b92-c0): carrier: link connected
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.848 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc636f4-9ef2-4860-a3f0-bad0bd2584bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.869 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c9afb7-4a12-4136-8bd7-eaaed2e71af3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb8f1b92-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:56:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861661, 'reachable_time': 32192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372772, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.889 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfe7115-f96e-4dcc-bdc9-62fef4c190ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:5684'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861661, 'tstamp': 861661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372773, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 nova_compute[252253]: 2025-11-29 08:40:48.891 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.909 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[93c76720-d3eb-478e-8d77-79b8f5606e5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb8f1b92-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:56:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861661, 'reachable_time': 32192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372774, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:48.940 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b91d4277-8fc2-4683-bddd-234fa3b5e490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 119 op/s
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.007 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed59415-9f6e-4805-85aa-83648bb982b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.009 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb8f1b92-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.009 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.010 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb8f1b92-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:49 np0005539563 kernel: tapdb8f1b92-c0: entered promiscuous mode
Nov 29 03:40:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:49.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:49 np0005539563 NetworkManager[48981]: <info>  [1764405649.0145] manager: (tapdb8f1b92-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.015 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb8f1b92-c0, col_values=(('external_ids', {'iface-id': '1d3f61ec-40d0-4855-994b-dbfc2de17cb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.016 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:49 np0005539563 ovn_controller[148841]: 2025-11-29T08:40:49Z|00794|binding|INFO|Releasing lport 1d3f61ec-40d0-4855-994b-dbfc2de17cb0 from this chassis (sb_readonly=0)
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.031 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.031 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.032 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1abaff-9fce-4bd5-960a-95f4d1f5adbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.033 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.pid.haproxy
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:40:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:40:49.033 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'env', 'PROCESS_TAG=haproxy-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:40:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:49.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:49 np0005539563 podman[372841]: 2025-11-29 08:40:49.375951053 +0000 UTC m=+0.044466673 container create d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:40:49 np0005539563 systemd[1]: Started libpod-conmon-d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065.scope.
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.431 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 568bc9d5-f3a0-4da3-8498-39190619b609 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.432 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405649.431351, 568bc9d5-f3a0-4da3-8498-39190619b609 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.433 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:40:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.438 252257 INFO nova.virt.libvirt.driver [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance running successfully.#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.438 252257 INFO nova.virt.libvirt.driver [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance soft rebooted successfully.#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.439 252257 DEBUG nova.compute.manager [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d26e13ce04e37a7fe9e8fd1e830fcc0c1a484a597d929e85aa678729563dbd4a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:49 np0005539563 podman[372841]: 2025-11-29 08:40:49.353819739 +0000 UTC m=+0.022335380 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:40:49 np0005539563 podman[372841]: 2025-11-29 08:40:49.45803689 +0000 UTC m=+0.126552540 container init d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:40:49 np0005539563 podman[372841]: 2025-11-29 08:40:49.463313354 +0000 UTC m=+0.131828974 container start d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.477 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.482 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:49 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [NOTICE]   (372866) : New worker (372868) forked
Nov 29 03:40:49 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [NOTICE]   (372866) : Loading success.
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.510 252257 DEBUG oslo_concurrency.lockutils [None req-4bb647e2-c12d-4855-b487-ff12d66df8de 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.512 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405649.4328513, 568bc9d5-f3a0-4da3-8498-39190619b609 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.512 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] VM Started (Lifecycle Event)#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.551 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.555 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:49 np0005539563 nova_compute[252253]: 2025-11-29 08:40:49.815 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.667 252257 DEBUG nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.667 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.668 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.668 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.669 252257 DEBUG nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.669 252257 WARNING nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received unexpected event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.669 252257 DEBUG nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.669 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.670 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.670 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.670 252257 DEBUG nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.671 252257 WARNING nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received unexpected event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.671 252257 DEBUG nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.671 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.671 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.672 252257 DEBUG oslo_concurrency.lockutils [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.672 252257 DEBUG nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:50 np0005539563 nova_compute[252253]: 2025-11-29 08:40:50.672 252257 WARNING nova.compute.manager [req-9eecafa1-2508-4168-af37-0bf3116ee8e6 req-2da553c1-eb80-4096-a6b4-752ee4ee23f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received unexpected event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 68 KiB/s wr, 93 op/s
Nov 29 03:40:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:51.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:51.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 84 op/s
Nov 29 03:40:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:53.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:53.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:53 np0005539563 nova_compute[252253]: 2025-11-29 08:40:53.894 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:54 np0005539563 nova_compute[252253]: 2025-11-29 08:40:54.861 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 256 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 616 KiB/s wr, 157 op/s
Nov 29 03:40:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:55.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:40:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:55.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:40:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 03:40:56 np0005539563 nova_compute[252253]: 2025-11-29 08:40:56.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 146 op/s
Nov 29 03:40:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:57.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:58 np0005539563 nova_compute[252253]: 2025-11-29 08:40:58.897 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 269 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Nov 29 03:40:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:40:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:59 np0005539563 nova_compute[252253]: 2025-11-29 08:40:59.864 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Nov 29 03:41:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:02 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:02Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:44:3a 10.100.0.6
Nov 29 03:41:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 29 03:41:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:03 np0005539563 podman[372884]: 2025-11-29 08:41:03.489796932 +0000 UTC m=+0.048005529 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:41:03 np0005539563 podman[372885]: 2025-11-29 08:41:03.522490784 +0000 UTC m=+0.078216753 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 03:41:03 np0005539563 podman[372886]: 2025-11-29 08:41:03.527368287 +0000 UTC m=+0.079331144 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:41:03 np0005539563 nova_compute[252253]: 2025-11-29 08:41:03.941 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:04 np0005539563 nova_compute[252253]: 2025-11-29 08:41:04.865 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:04.946 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:04.946 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:04.947 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Nov 29 03:41:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:05.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 1.6 MiB/s wr, 97 op/s
Nov 29 03:41:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:08 np0005539563 nova_compute[252253]: 2025-11-29 08:41:08.944 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 620 KiB/s wr, 68 op/s
Nov 29 03:41:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:09 np0005539563 nova_compute[252253]: 2025-11-29 08:41:09.305 252257 INFO nova.compute.manager [None req-e956eb03-3b36-4814-bc98-779f39f32e12 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Get console output#033[00m
Nov 29 03:41:09 np0005539563 nova_compute[252253]: 2025-11-29 08:41:09.309 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:41:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:09 np0005539563 nova_compute[252253]: 2025-11-29 08:41:09.867 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 671 KiB/s wr, 69 op/s
Nov 29 03:41:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:11.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:41:12
Nov 29 03:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root']
Nov 29 03:41:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:41:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 532 KiB/s rd, 577 KiB/s wr, 48 op/s
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.021 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.021 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.022 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:13.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.553 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.553 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.554 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.554 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.554 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.556 252257 INFO nova.compute.manager [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Terminating instance#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.557 252257 DEBUG nova.compute.manager [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:41:13 np0005539563 kernel: tap70963606-b0 (unregistering): left promiscuous mode
Nov 29 03:41:13 np0005539563 NetworkManager[48981]: <info>  [1764405673.6067] device (tap70963606-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:13 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:13Z|00795|binding|INFO|Releasing lport 70963606-b079-4b58-9a76-ac862f1d57d3 from this chassis (sb_readonly=0)
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.616 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:13Z|00796|binding|INFO|Setting lport 70963606-b079-4b58-9a76-ac862f1d57d3 down in Southbound
Nov 29 03:41:13 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:13Z|00797|binding|INFO|Removing iface tap70963606-b0 ovn-installed in OVS
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.623 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:44:3a 10.100.0.6'], port_security=['fa:16:3e:23:44:3a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '568bc9d5-f3a0-4da3-8498-39190619b609', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'aa203723-7c73-4546-9d5a-2fa3edd54128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e27b4ec5-16ce-4129-9a2c-16195985164b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=70963606-b079-4b58-9a76-ac862f1d57d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.624 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 70963606-b079-4b58-9a76-ac862f1d57d3 in datapath db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b unbound from our chassis#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.626 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.627 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[906efe2a-3004-408f-8a27-244db69b201c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.627 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b namespace which is not needed anymore#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.637 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Nov 29 03:41:13 np0005539563 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b7.scope: Consumed 14.198s CPU time.
Nov 29 03:41:13 np0005539563 systemd-machined[213024]: Machine qemu-91-instance-000000b7 terminated.
Nov 29 03:41:13 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [NOTICE]   (372866) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:13 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [NOTICE]   (372866) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:13 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [WARNING]  (372866) : Exiting Master process...
Nov 29 03:41:13 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [ALERT]    (372866) : Current worker (372868) exited with code 143 (Terminated)
Nov 29 03:41:13 np0005539563 neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b[372862]: [WARNING]  (372866) : All workers exited. Exiting... (0)
Nov 29 03:41:13 np0005539563 systemd[1]: libpod-d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065.scope: Deactivated successfully.
Nov 29 03:41:13 np0005539563 podman[373028]: 2025-11-29 08:41:13.757031531 +0000 UTC m=+0.044264226 container died d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.776 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.781 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d26e13ce04e37a7fe9e8fd1e830fcc0c1a484a597d929e85aa678729563dbd4a-merged.mount: Deactivated successfully.
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.793 252257 INFO nova.virt.libvirt.driver [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Instance destroyed successfully.#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.794 252257 DEBUG nova.objects.instance [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 568bc9d5-f3a0-4da3-8498-39190619b609 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:13 np0005539563 podman[373028]: 2025-11-29 08:41:13.796047055 +0000 UTC m=+0.083279740 container cleanup d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.816 252257 DEBUG nova.virt.libvirt.vif [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-629095894',display_name='tempest-TestNetworkAdvancedServerOps-server-629095894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-629095894',id=183,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH2yP/2QU6PTYj1t66d+HPSX+KV0oocmhp2xC8ea863goF8EdRO0kHNCQ8hPmkVgIieSkTxwq6yQ9k0Cd46h1ezo/BXcbLG1N4NFNsHkbuaGTtR0shkVSQPRC7Lr/Iy1hQ==',key_name='tempest-TestNetworkAdvancedServerOps-1090220151',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:40:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-dfmzmoms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:40:49Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=568bc9d5-f3a0-4da3-8498-39190619b609,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.816 252257 DEBUG nova.network.os_vif_util [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.817 252257 DEBUG nova.network.os_vif_util [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.818 252257 DEBUG os_vif [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:13 np0005539563 systemd[1]: libpod-conmon-d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065.scope: Deactivated successfully.
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.820 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.820 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70963606-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.825 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.828 252257 INFO os_vif [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:44:3a,bridge_name='br-int',has_traffic_filtering=True,id=70963606-b079-4b58-9a76-ac862f1d57d3,network=Network(db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70963606-b0')#033[00m
Nov 29 03:41:13 np0005539563 podman[373067]: 2025-11-29 08:41:13.870868624 +0000 UTC m=+0.046579621 container remove d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.876 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5d2f58-0913-4d21-9920-ed83b6f054e4]: (4, ('Sat Nov 29 08:41:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b (d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065)\nd9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065\nSat Nov 29 08:41:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b (d9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065)\nd9188e986d576db4b4afb4fe96564bd2fc2016a233ba24fbf453e57f79559065\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.877 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2486b59d-7c57-4864-a07d-ada324ec9bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.878 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb8f1b92-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.879 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 kernel: tapdb8f1b92-c0: left promiscuous mode
Nov 29 03:41:13 np0005539563 nova_compute[252253]: 2025-11-29 08:41:13.898 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.901 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5c93865b-93a3-4984-b971-d2f523f17418]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.915 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5cb446-21c1-4955-b1f6-434f86274408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.916 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac9f156-77a1-44d5-86a9-33a28e808d13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.931 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4f1925-42f3-4666-910a-9494ba4282eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861653, 'reachable_time': 33445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373105, 'error': None, 'target': 'ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.933 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:13 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:13.933 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[16027a1f-7b74-4e61-8afc-74c74ad2747d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:13 np0005539563 systemd[1]: run-netns-ovnmeta\x2ddb8f1b92\x2dc9f5\x2d4b99\x2db81d\x2d74e2e10c3f6b.mount: Deactivated successfully.
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.156 252257 DEBUG nova.compute.manager [req-c1057124-6867-435d-aff4-8b8ed8adb30e req-9b4bac15-cf5d-4a0f-a5d2-780239b68899 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-unplugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.157 252257 DEBUG oslo_concurrency.lockutils [req-c1057124-6867-435d-aff4-8b8ed8adb30e req-9b4bac15-cf5d-4a0f-a5d2-780239b68899 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.158 252257 DEBUG oslo_concurrency.lockutils [req-c1057124-6867-435d-aff4-8b8ed8adb30e req-9b4bac15-cf5d-4a0f-a5d2-780239b68899 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.158 252257 DEBUG oslo_concurrency.lockutils [req-c1057124-6867-435d-aff4-8b8ed8adb30e req-9b4bac15-cf5d-4a0f-a5d2-780239b68899 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.158 252257 DEBUG nova.compute.manager [req-c1057124-6867-435d-aff4-8b8ed8adb30e req-9b4bac15-cf5d-4a0f-a5d2-780239b68899 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-unplugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.158 252257 DEBUG nova.compute.manager [req-c1057124-6867-435d-aff4-8b8ed8adb30e req-9b4bac15-cf5d-4a0f-a5d2-780239b68899 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-unplugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.179 252257 DEBUG nova.compute.manager [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-changed-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.180 252257 DEBUG nova.compute.manager [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Refreshing instance network info cache due to event network-changed-70963606-b079-4b58-9a76-ac862f1d57d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.181 252257 DEBUG oslo_concurrency.lockutils [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.181 252257 DEBUG oslo_concurrency.lockutils [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.182 252257 DEBUG nova.network.neutron [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Refreshing network info cache for port 70963606-b079-4b58-9a76-ac862f1d57d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.264 252257 INFO nova.virt.libvirt.driver [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Deleting instance files /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609_del
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.265 252257 INFO nova.virt.libvirt.driver [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Deletion of /var/lib/nova/instances/568bc9d5-f3a0-4da3-8498-39190619b609_del complete
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.313 252257 INFO nova.compute.manager [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.314 252257 DEBUG oslo.service.loopingcall [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.314 252257 DEBUG nova.compute.manager [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.314 252257 DEBUG nova.network.neutron [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:41:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:14 np0005539563 nova_compute[252253]: 2025-11-29 08:41:14.869 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 1.3 MiB/s wr, 66 op/s
Nov 29 03:41:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:15.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:15.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:15 np0005539563 nova_compute[252253]: 2025-11-29 08:41:15.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:15 np0005539563 nova_compute[252253]: 2025-11-29 08:41:15.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:41:15 np0005539563 nova_compute[252253]: 2025-11-29 08:41:15.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:41:15 np0005539563 nova_compute[252253]: 2025-11-29 08:41:15.698 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 03:41:15 np0005539563 nova_compute[252253]: 2025-11-29 08:41:15.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.060 252257 DEBUG nova.network.neutron [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.093 252257 INFO nova.compute.manager [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Took 1.78 seconds to deallocate network for instance.
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.176 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.176 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.211 252257 DEBUG nova.compute.manager [req-87858925-de70-478e-8ebb-9b106fa02f2f req-316c19db-39e5-4ddb-91d3-62817be25aea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-deleted-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.233 252257 DEBUG oslo_concurrency.processutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.314 252257 DEBUG nova.compute.manager [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.314 252257 DEBUG oslo_concurrency.lockutils [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.314 252257 DEBUG oslo_concurrency.lockutils [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.315 252257 DEBUG oslo_concurrency.lockutils [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.315 252257 DEBUG nova.compute.manager [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] No waiting events found dispatching network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.315 252257 WARNING nova.compute.manager [req-3d05e24e-8d77-4c91-8516-5042c0c13bfe req-40fbc96e-e7dd-4516-af2a-85101c5b8462 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Received unexpected event network-vif-plugged-70963606-b079-4b58-9a76-ac862f1d57d3 for instance with vm_state deleted and task_state None.
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:41:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:41:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2888382568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.667 252257 DEBUG oslo_concurrency.processutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.675 252257 DEBUG nova.compute.provider_tree [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.694 252257 DEBUG nova.scheduler.client.report [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.724 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.770 252257 INFO nova.scheduler.client.report [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 568bc9d5-f3a0-4da3-8498-39190619b609
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.842 252257 DEBUG nova.network.neutron [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updated VIF entry in instance network info cache for port 70963606-b079-4b58-9a76-ac862f1d57d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.844 252257 DEBUG nova.network.neutron [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Updating instance_info_cache with network_info: [{"id": "70963606-b079-4b58-9a76-ac862f1d57d3", "address": "fa:16:3e:23:44:3a", "network": {"id": "db8f1b92-c9f5-4b99-b81d-74e2e10c3f6b", "bridge": "br-int", "label": "tempest-network-smoke--973157119", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70963606-b0", "ovs_interfaceid": "70963606-b079-4b58-9a76-ac862f1d57d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.855 252257 DEBUG oslo_concurrency.lockutils [None req-27369b4b-4b94-4055-bc4f-cc61d452ed93 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "568bc9d5-f3a0-4da3-8498-39190619b609" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:16 np0005539563 nova_compute[252253]: 2025-11-29 08:41:16.865 252257 DEBUG oslo_concurrency.lockutils [req-98d9d760-8bba-496a-9b50-fb55a01fcb3c req-6a8cc3ee-0a60-4859-b9f6-ad50d461c98c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-568bc9d5-f3a0-4da3-8498-39190619b609" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:41:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 285 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 03:41:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3270536942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:18 np0005539563 nova_compute[252253]: 2025-11-29 08:41:18.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:18 np0005539563 nova_compute[252253]: 2025-11-29 08:41:18.824 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 03:41:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:19.024 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:41:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:19.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.704 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.704 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:19 np0005539563 nova_compute[252253]: 2025-11-29 08:41:19.871 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768020349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.149 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.381 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.383 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4181MB free_disk=20.921977996826172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.383 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.384 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.467 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.467 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.490 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:41:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279126069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.924 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.931 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.950 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:41:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.976 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:41:20 np0005539563 nova_compute[252253]: 2025-11-29 08:41:20.976 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:21 np0005539563 nova_compute[252253]: 2025-11-29 08:41:21.977 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.2 MiB/s wr, 52 op/s
Nov 29 03:41:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:23 np0005539563 nova_compute[252253]: 2025-11-29 08:41:23.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:23 np0005539563 nova_compute[252253]: 2025-11-29 08:41:23.828 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00315906063884236 of space, bias 1.0, pg target 0.947718191652708 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:41:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:24 np0005539563 nova_compute[252253]: 2025-11-29 08:41:24.873 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 1.2 MiB/s wr, 87 op/s
Nov 29 03:41:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:25.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:25.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:25 np0005539563 nova_compute[252253]: 2025-11-29 08:41:25.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:25 np0005539563 nova_compute[252253]: 2025-11-29 08:41:25.777 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 108 op/s
Nov 29 03:41:27 np0005539563 podman[373355]: 2025-11-29 08:41:27.011214778 +0000 UTC m=+0.080015622 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:41:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:27 np0005539563 podman[373355]: 2025-11-29 08:41:27.124123965 +0000 UTC m=+0.192924799 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:41:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:27 np0005539563 nova_compute[252253]: 2025-11-29 08:41:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:41:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:41:27 np0005539563 podman[373510]: 2025-11-29 08:41:27.753126229 +0000 UTC m=+0.053473079 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:41:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:41:27 np0005539563 podman[373510]: 2025-11-29 08:41:27.766304818 +0000 UTC m=+0.066651678 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:41:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 podman[373587]: 2025-11-29 08:41:28.031862055 +0000 UTC m=+0.057312112 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 03:41:28 np0005539563 podman[373587]: 2025-11-29 08:41:28.039415832 +0000 UTC m=+0.064865879 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4)
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:28 np0005539563 nova_compute[252253]: 2025-11-29 08:41:28.791 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405673.7894495, 568bc9d5-f3a0-4da3-8498-39190619b609 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:41:28 np0005539563 nova_compute[252253]: 2025-11-29 08:41:28.792 252257 INFO nova.compute.manager [-] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] VM Stopped (Lifecycle Event)
Nov 29 03:41:28 np0005539563 nova_compute[252253]: 2025-11-29 08:41:28.819 252257 DEBUG nova.compute.manager [None req-5736762a-e038-4252-836f-0ec5c301253b - - - - - -] [instance: 568bc9d5-f3a0-4da3-8498-39190619b609] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:41:28 np0005539563 nova_compute[252253]: 2025-11-29 08:41:28.830 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 03:41:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:41:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:29.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e76823be-8e9f-4723-b947-0f6ba4997cec does not exist
Nov 29 03:41:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0827fadd-0fe8-497c-89fc-3f3792567101 does not exist
Nov 29 03:41:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 08853304-3441-4bf6-8f87-022e0310a1db does not exist
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:41:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.744636246 +0000 UTC m=+0.038380396 container create 84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:41:29 np0005539563 systemd[1]: Started libpod-conmon-84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4.scope.
Nov 29 03:41:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.816073504 +0000 UTC m=+0.109817664 container init 84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.72751138 +0000 UTC m=+0.021255540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.823526897 +0000 UTC m=+0.117271047 container start 84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.827520486 +0000 UTC m=+0.121264656 container attach 84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:41:29 np0005539563 vibrant_colden[374063]: 167 167
Nov 29 03:41:29 np0005539563 systemd[1]: libpod-84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4.scope: Deactivated successfully.
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.829257253 +0000 UTC m=+0.123001403 container died 84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:41:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f8c3f867df840f7e061d930a4ef1a9c3482c1cfebb0c77d91da11bcc82d87311-merged.mount: Deactivated successfully.
Nov 29 03:41:29 np0005539563 podman[374047]: 2025-11-29 08:41:29.864474143 +0000 UTC m=+0.158218293 container remove 84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:41:29 np0005539563 nova_compute[252253]: 2025-11-29 08:41:29.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:29 np0005539563 systemd[1]: libpod-conmon-84ee3541cf18a618504781ed02453e97cbca89d39c171bdc9d3c18a151dfd7c4.scope: Deactivated successfully.
Nov 29 03:41:30 np0005539563 podman[374087]: 2025-11-29 08:41:30.020776003 +0000 UTC m=+0.041500592 container create 17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:41:30 np0005539563 systemd[1]: Started libpod-conmon-17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1.scope.
Nov 29 03:41:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb74c210b73d2f75b6c6f0318a238f640ac9f719c6959c4e4595e7c0700ce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb74c210b73d2f75b6c6f0318a238f640ac9f719c6959c4e4595e7c0700ce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb74c210b73d2f75b6c6f0318a238f640ac9f719c6959c4e4595e7c0700ce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb74c210b73d2f75b6c6f0318a238f640ac9f719c6959c4e4595e7c0700ce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ccb74c210b73d2f75b6c6f0318a238f640ac9f719c6959c4e4595e7c0700ce4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:30 np0005539563 podman[374087]: 2025-11-29 08:41:29.997951591 +0000 UTC m=+0.018676200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:30 np0005539563 podman[374087]: 2025-11-29 08:41:30.1035653 +0000 UTC m=+0.124289899 container init 17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:30 np0005539563 podman[374087]: 2025-11-29 08:41:30.110685623 +0000 UTC m=+0.131410212 container start 17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:41:30 np0005539563 podman[374087]: 2025-11-29 08:41:30.114057645 +0000 UTC m=+0.134782234 container attach 17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:41:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:30 np0005539563 amazing_austin[374104]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:41:30 np0005539563 amazing_austin[374104]: --> relative data size: 1.0
Nov 29 03:41:30 np0005539563 amazing_austin[374104]: --> All data devices are unavailable
Nov 29 03:41:30 np0005539563 systemd[1]: libpod-17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1.scope: Deactivated successfully.
Nov 29 03:41:30 np0005539563 podman[374119]: 2025-11-29 08:41:30.960565166 +0000 UTC m=+0.022622387 container died 17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 29 03:41:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5ccb74c210b73d2f75b6c6f0318a238f640ac9f719c6959c4e4595e7c0700ce4-merged.mount: Deactivated successfully.
Nov 29 03:41:31 np0005539563 podman[374119]: 2025-11-29 08:41:31.018477585 +0000 UTC m=+0.080534806 container remove 17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:41:31 np0005539563 systemd[1]: libpod-conmon-17dc5c1f51ac2bc7978140389c95b79c46c07eb4c88d0e48eded2c854d30d1b1.scope: Deactivated successfully.
Nov 29 03:41:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:31.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.629971611 +0000 UTC m=+0.037585065 container create abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:41:31 np0005539563 systemd[1]: Started libpod-conmon-abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b.scope.
Nov 29 03:41:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.705212251 +0000 UTC m=+0.112825715 container init abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.613903183 +0000 UTC m=+0.021516657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.711795921 +0000 UTC m=+0.119409375 container start abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.714462424 +0000 UTC m=+0.122075898 container attach abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:41:31 np0005539563 keen_tu[374291]: 167 167
Nov 29 03:41:31 np0005539563 systemd[1]: libpod-abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b.scope: Deactivated successfully.
Nov 29 03:41:31 np0005539563 conmon[374291]: conmon abee4e7f1139e56370f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b.scope/container/memory.events
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.718550515 +0000 UTC m=+0.126163969 container died abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:41:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9a274c2d3d4a93b3ce0df898276eb952ae7ae149b30154c6d496f4718d5d0d2b-merged.mount: Deactivated successfully.
Nov 29 03:41:31 np0005539563 podman[374275]: 2025-11-29 08:41:31.756933042 +0000 UTC m=+0.164546496 container remove abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:41:31 np0005539563 systemd[1]: libpod-conmon-abee4e7f1139e56370f5728261450a8bb362d1f7eb0ef0bb17e8337d2bdaad1b.scope: Deactivated successfully.
Nov 29 03:41:31 np0005539563 podman[374313]: 2025-11-29 08:41:31.919822421 +0000 UTC m=+0.041155773 container create 0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:31 np0005539563 systemd[1]: Started libpod-conmon-0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1.scope.
Nov 29 03:41:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58108c1d82a5aa4ee8b558d24f150800bf87c108e8fb355170270af35f7df7a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58108c1d82a5aa4ee8b558d24f150800bf87c108e8fb355170270af35f7df7a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58108c1d82a5aa4ee8b558d24f150800bf87c108e8fb355170270af35f7df7a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:31 np0005539563 podman[374313]: 2025-11-29 08:41:31.902099088 +0000 UTC m=+0.023432470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58108c1d82a5aa4ee8b558d24f150800bf87c108e8fb355170270af35f7df7a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:32 np0005539563 podman[374313]: 2025-11-29 08:41:32.006177875 +0000 UTC m=+0.127511247 container init 0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:41:32 np0005539563 podman[374313]: 2025-11-29 08:41:32.017687828 +0000 UTC m=+0.139021180 container start 0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:32 np0005539563 podman[374313]: 2025-11-29 08:41:32.020655929 +0000 UTC m=+0.141989281 container attach 0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]: {
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:    "0": [
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:        {
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "devices": [
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "/dev/loop3"
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            ],
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "lv_name": "ceph_lv0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "lv_size": "7511998464",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "name": "ceph_lv0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "tags": {
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.cluster_name": "ceph",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.crush_device_class": "",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.encrypted": "0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.osd_id": "0",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.type": "block",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:                "ceph.vdo": "0"
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            },
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "type": "block",
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:            "vg_name": "ceph_vg0"
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:        }
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]:    ]
Nov 29 03:41:32 np0005539563 crazy_joliot[374330]: }
Nov 29 03:41:32 np0005539563 systemd[1]: libpod-0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1.scope: Deactivated successfully.
Nov 29 03:41:32 np0005539563 podman[374313]: 2025-11-29 08:41:32.837683357 +0000 UTC m=+0.959016709 container died 0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:41:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-58108c1d82a5aa4ee8b558d24f150800bf87c108e8fb355170270af35f7df7a6-merged.mount: Deactivated successfully.
Nov 29 03:41:32 np0005539563 podman[374313]: 2025-11-29 08:41:32.891068332 +0000 UTC m=+1.012401684 container remove 0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_joliot, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:41:32 np0005539563 systemd[1]: libpod-conmon-0ab84a0ed0751d6d793512144cb8f8242303b542bd03488a8a57fc4624106dc1.scope: Deactivated successfully.
Nov 29 03:41:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 29 03:41:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:33.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.489101791 +0000 UTC m=+0.037876213 container create 52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:41:33 np0005539563 systemd[1]: Started libpod-conmon-52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2.scope.
Nov 29 03:41:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.55801857 +0000 UTC m=+0.106793022 container init 52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.56757457 +0000 UTC m=+0.116348992 container start 52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.47399946 +0000 UTC m=+0.022773902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:33 np0005539563 happy_euclid[374508]: 167 167
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.572840914 +0000 UTC m=+0.121615356 container attach 52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:41:33 np0005539563 systemd[1]: libpod-52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2.scope: Deactivated successfully.
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.573776089 +0000 UTC m=+0.122550511 container died 52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:33 np0005539563 podman[374509]: 2025-11-29 08:41:33.6002242 +0000 UTC m=+0.060195672 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:41:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6d75a3edd69883be1085fded4b8905fb6818887940cadd46ea99951eadbe9271-merged.mount: Deactivated successfully.
Nov 29 03:41:33 np0005539563 podman[374492]: 2025-11-29 08:41:33.61783001 +0000 UTC m=+0.166604432 container remove 52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:41:33 np0005539563 systemd[1]: libpod-conmon-52d04f4a92aa1dfc41ab41ef804c8f4664e92b733ee10cdc2b1ce1ec49ecf6f2.scope: Deactivated successfully.
Nov 29 03:41:33 np0005539563 podman[374517]: 2025-11-29 08:41:33.65854871 +0000 UTC m=+0.105418155 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:41:33 np0005539563 podman[374518]: 2025-11-29 08:41:33.737929333 +0000 UTC m=+0.182320160 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:33 np0005539563 podman[374595]: 2025-11-29 08:41:33.78624624 +0000 UTC m=+0.039847887 container create b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:41:33 np0005539563 systemd[1]: Started libpod-conmon-b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34.scope.
Nov 29 03:41:33 np0005539563 nova_compute[252253]: 2025-11-29 08:41:33.831 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf53e4abb38bc487457f7d72487de6bae73f8b648b28f6f039606a5e52f7681/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf53e4abb38bc487457f7d72487de6bae73f8b648b28f6f039606a5e52f7681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf53e4abb38bc487457f7d72487de6bae73f8b648b28f6f039606a5e52f7681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf53e4abb38bc487457f7d72487de6bae73f8b648b28f6f039606a5e52f7681/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:33 np0005539563 podman[374595]: 2025-11-29 08:41:33.860577706 +0000 UTC m=+0.114179373 container init b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:41:33 np0005539563 podman[374595]: 2025-11-29 08:41:33.768587089 +0000 UTC m=+0.022188756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:33 np0005539563 podman[374595]: 2025-11-29 08:41:33.867890665 +0000 UTC m=+0.121492312 container start b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:41:33 np0005539563 podman[374595]: 2025-11-29 08:41:33.870870126 +0000 UTC m=+0.124471793 container attach b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]: {
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:        "osd_id": 0,
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:        "type": "bluestore"
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]:    }
Nov 29 03:41:34 np0005539563 thirsty_vaughan[374612]: }
Nov 29 03:41:34 np0005539563 systemd[1]: libpod-b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34.scope: Deactivated successfully.
Nov 29 03:41:34 np0005539563 podman[374595]: 2025-11-29 08:41:34.688573752 +0000 UTC m=+0.942175409 container died b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:41:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4cf53e4abb38bc487457f7d72487de6bae73f8b648b28f6f039606a5e52f7681-merged.mount: Deactivated successfully.
Nov 29 03:41:34 np0005539563 podman[374595]: 2025-11-29 08:41:34.747871418 +0000 UTC m=+1.001473065 container remove b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:41:34 np0005539563 systemd[1]: libpod-conmon-b5e3b65e81e6326df4dd527deaf2f6ed9dc22f8c2117c1e2c65b98f83c8c0a34.scope: Deactivated successfully.
Nov 29 03:41:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:41:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:41:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8f014e10-f66e-47f2-8946-82715ac19a07 does not exist
Nov 29 03:41:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c7e5b5bc-e224-4585-b070-307c6ae33836 does not exist
Nov 29 03:41:34 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d20f6e24-597d-4c1f-99bc-600ecfb8d438 does not exist
Nov 29 03:41:34 np0005539563 nova_compute[252253]: 2025-11-29 08:41:34.876 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 252 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 626 KiB/s wr, 87 op/s
Nov 29 03:41:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:35.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:35 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:41:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 268 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 71 op/s
Nov 29 03:41:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:38 np0005539563 nova_compute[252253]: 2025-11-29 08:41:38.834 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 274 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Nov 29 03:41:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:39.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:39.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:39 np0005539563 nova_compute[252253]: 2025-11-29 08:41:39.879 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 03:41:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.573 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.574 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.589 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.679 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.680 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.692 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.693 252257 INFO nova.compute.claims [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:41:42 np0005539563 nova_compute[252253]: 2025-11-29 08:41:42.832 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:41:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657979487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.274 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.282 252257 DEBUG nova.compute.provider_tree [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.304 252257 DEBUG nova.scheduler.client.report [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.345 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.346 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.424 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.424 252257 DEBUG nova.network.neutron [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.459 252257 INFO nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.483 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.577 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.579 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.579 252257 INFO nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Creating image(s)#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.605 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.630 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.657 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.663 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.693 252257 DEBUG nova.policy [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '686f527a5723407b85ed34c8a312583f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.731 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.732 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.733 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.734 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.763 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.768 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:43 np0005539563 nova_compute[252253]: 2025-11-29 08:41:43.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.077 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.158 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.277 252257 DEBUG nova.objects.instance [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.291 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.292 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Ensure instance console log exists: /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.292 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.293 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.293 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.880 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:44 np0005539563 nova_compute[252253]: 2025-11-29 08:41:44.915 252257 DEBUG nova.network.neutron [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Successfully created port: d7976101-fb6f-4e03-be4e-6b60c73979e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:41:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 242 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 2.5 MiB/s wr, 80 op/s
Nov 29 03:41:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:45.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:45 np0005539563 nova_compute[252253]: 2025-11-29 08:41:45.876 252257 DEBUG nova.network.neutron [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Successfully updated port: d7976101-fb6f-4e03-be4e-6b60c73979e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:41:45 np0005539563 nova_compute[252253]: 2025-11-29 08:41:45.896 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:45 np0005539563 nova_compute[252253]: 2025-11-29 08:41:45.897 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:45 np0005539563 nova_compute[252253]: 2025-11-29 08:41:45.898 252257 DEBUG nova.network.neutron [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:41:46 np0005539563 nova_compute[252253]: 2025-11-29 08:41:46.315 252257 DEBUG nova.compute.manager [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-changed-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:46 np0005539563 nova_compute[252253]: 2025-11-29 08:41:46.315 252257 DEBUG nova.compute.manager [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Refreshing instance network info cache due to event network-changed-d7976101-fb6f-4e03-be4e-6b60c73979e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:46 np0005539563 nova_compute[252253]: 2025-11-29 08:41:46.316 252257 DEBUG oslo_concurrency.lockutils [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:46 np0005539563 nova_compute[252253]: 2025-11-29 08:41:46.522 252257 DEBUG nova.network.neutron [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:41:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 242 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 3.0 MiB/s wr, 95 op/s
Nov 29 03:41:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:41:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:47.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.056 252257 DEBUG nova.network.neutron [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updating instance_info_cache with network_info: [{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.095 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.095 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance network_info: |[{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.096 252257 DEBUG oslo_concurrency.lockutils [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.096 252257 DEBUG nova.network.neutron [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Refreshing network info cache for port d7976101-fb6f-4e03-be4e-6b60c73979e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.100 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Start _get_guest_xml network_info=[{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.104 252257 WARNING nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.119 252257 DEBUG nova.virt.libvirt.host [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.120 252257 DEBUG nova.virt.libvirt.host [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.124 252257 DEBUG nova.virt.libvirt.host [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.125 252257 DEBUG nova.virt.libvirt.host [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.126 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.127 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.127 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.128 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.128 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.128 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.128 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.129 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.129 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.129 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.130 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.130 252257 DEBUG nova.virt.hardware [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.134 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163031373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.580 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.607 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.611 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:48 np0005539563 nova_compute[252253]: 2025-11-29 08:41:48.840 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 233 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Nov 29 03:41:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958087801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.064 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.066 252257 DEBUG nova.virt.libvirt.vif [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1930617470',display_name='tempest-TestNetworkAdvancedServerOps-server-1930617470',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1930617470',id=186,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKY1Sg2xL/aJz8wIL7cf5MzcV52yp5R0mXmIsEO1TSV5wOnaXa6t112hZJc+/UBiVqxk5rRlpEmVgzJvgpc06h1m1EAduPYs3GDyvBwnX5qP0GCBg7T1VF1J1Nh92LK9xA==',key_name='tempest-TestNetworkAdvancedServerOps-482800446',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-x7e57fd4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:43Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=8b3377f3-920a-41d9-bfc4-2b727546ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.066 252257 DEBUG nova.network.os_vif_util [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.067 252257 DEBUG nova.network.os_vif_util [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.068 252257 DEBUG nova.objects.instance [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:49.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.085 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <uuid>8b3377f3-920a-41d9-bfc4-2b727546ab6f</uuid>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <name>instance-000000ba</name>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1930617470</nova:name>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:41:48</nova:creationTime>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <nova:port uuid="d7976101-fb6f-4e03-be4e-6b60c73979e4">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <entry name="serial">8b3377f3-920a-41d9-bfc4-2b727546ab6f</entry>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <entry name="uuid">8b3377f3-920a-41d9-bfc4-2b727546ab6f</entry>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:80:8f:9a"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <target dev="tapd7976101-fb"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/console.log" append="off"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:41:49 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:41:49 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:41:49 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:41:49 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.087 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Preparing to wait for external event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.088 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.088 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.088 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.089 252257 DEBUG nova.virt.libvirt.vif [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1930617470',display_name='tempest-TestNetworkAdvancedServerOps-server-1930617470',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1930617470',id=186,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKY1Sg2xL/aJz8wIL7cf5MzcV52yp5R0mXmIsEO1TSV5wOnaXa6t112hZJc+/UBiVqxk5rRlpEmVgzJvgpc06h1m1EAduPYs3GDyvBwnX5qP0GCBg7T1VF1J1Nh92LK9xA==',key_name='tempest-TestNetworkAdvancedServerOps-482800446',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-x7e57fd4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:43Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=8b3377f3-920a-41d9-bfc4-2b727546ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.089 252257 DEBUG nova.network.os_vif_util [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.090 252257 DEBUG nova.network.os_vif_util [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.090 252257 DEBUG os_vif [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.091 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.093 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.093 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.097 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.097 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7976101-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.097 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7976101-fb, col_values=(('external_ids', {'iface-id': 'd7976101-fb6f-4e03-be4e-6b60c73979e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:8f:9a', 'vm-uuid': '8b3377f3-920a-41d9-bfc4-2b727546ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.099 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539563 NetworkManager[48981]: <info>  [1764405709.1005] manager: (tapd7976101-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.103 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.106 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.107 252257 INFO os_vif [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb')#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.155 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.155 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.156 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:80:8f:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.156 252257 INFO nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Using config drive#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.179 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:49.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.677 252257 INFO nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Creating config drive at /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.683 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw26h_a0w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.714 252257 DEBUG nova.network.neutron [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updated VIF entry in instance network info cache for port d7976101-fb6f-4e03-be4e-6b60c73979e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.715 252257 DEBUG nova.network.neutron [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updating instance_info_cache with network_info: [{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.732 252257 DEBUG oslo_concurrency.lockutils [req-05919a72-3b7a-4b3d-b358-86302141aaed req-a99a1857-3df2-4641-ae56-4448061557f1 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.821 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw26h_a0w" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.850 252257 DEBUG nova.storage.rbd_utils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.854 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:49 np0005539563 nova_compute[252253]: 2025-11-29 08:41:49.894 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.052 252257 DEBUG oslo_concurrency.processutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.053 252257 INFO nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deleting local config drive /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:41:50 np0005539563 kernel: tapd7976101-fb: entered promiscuous mode
Nov 29 03:41:50 np0005539563 NetworkManager[48981]: <info>  [1764405710.1273] manager: (tapd7976101-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:50Z|00798|binding|INFO|Claiming lport d7976101-fb6f-4e03-be4e-6b60c73979e4 for this chassis.
Nov 29 03:41:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:50Z|00799|binding|INFO|d7976101-fb6f-4e03-be4e-6b60c73979e4: Claiming fa:16:3e:80:8f:9a 10.100.0.11
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.135 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.150 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:8f:9a 10.100.0.11'], port_security=['fa:16:3e:80:8f:9a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8b3377f3-920a-41d9-bfc4-2b727546ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb2c517f-e973-4398-be77-628f13500e1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1042471a-4e26-498d-88fd-cdaba7fe83fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6798f55c-ab9d-407e-ac43-81155e9f9232, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d7976101-fb6f-4e03-be4e-6b60c73979e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.152 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d7976101-fb6f-4e03-be4e-6b60c73979e4 in datapath eb2c517f-e973-4398-be77-628f13500e1a bound to our chassis#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.154 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb2c517f-e973-4398-be77-628f13500e1a#033[00m
Nov 29 03:41:50 np0005539563 systemd-udevd[375077]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.178 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[31c9373c-bb6e-401d-9d20-82657263bb68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.180 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb2c517f-e1 in ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.182 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb2c517f-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.182 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7285f6-dd82-4022-bab9-8de099f4032a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 NetworkManager[48981]: <info>  [1764405710.1840] device (tapd7976101-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:41:50 np0005539563 NetworkManager[48981]: <info>  [1764405710.1846] device (tapd7976101-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:41:50 np0005539563 systemd-machined[213024]: New machine qemu-92-instance-000000ba.
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.184 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbef72e-c357-48ef-b8e2-628a93261050]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 systemd[1]: Started Virtual Machine qemu-92-instance-000000ba.
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.197 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[26615c5f-a143-4449-a1ae-0cde61a9528c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.201 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:50Z|00800|binding|INFO|Setting lport d7976101-fb6f-4e03-be4e-6b60c73979e4 ovn-installed in OVS
Nov 29 03:41:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:50Z|00801|binding|INFO|Setting lport d7976101-fb6f-4e03-be4e-6b60c73979e4 up in Southbound
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.207 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.227 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[66d268e8-b15a-430a-b6ce-265f5af455eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.256 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[176c5600-50d9-4271-aebe-486feb630e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 NetworkManager[48981]: <info>  [1764405710.2653] manager: (tapeb2c517f-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.265 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d367fe11-21de-471c-9eeb-ad75539a4852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.302 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f660df9d-5b02-4bc7-b82c-1954b8a34cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.306 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[90b90e25-9cc0-443e-968d-e0f8ca7e1ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 NetworkManager[48981]: <info>  [1764405710.3324] device (tapeb2c517f-e0): carrier: link connected
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.342 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[917cbd5b-ea5a-40db-af82-84123210999d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.362 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5848f8fc-6e14-4185-8813-b201905e84f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb2c517f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:3f:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867810, 'reachable_time': 34842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375112, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.380 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b337cfec-8690-4628-84cc-122d26b7b5c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:3f23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 867810, 'tstamp': 867810}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375113, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.407 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[44f09247-34a6-4fdb-9b47-d4c5dc75c132]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb2c517f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:3f:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867810, 'reachable_time': 34842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375114, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.444 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[85f50c10-0959-47cb-acff-2897eff20a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.528 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[50d2bb00-ae74-48da-a6c7-1445f170a048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.529 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb2c517f-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.529 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.530 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb2c517f-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.531 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539563 NetworkManager[48981]: <info>  [1764405710.5320] manager: (tapeb2c517f-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 29 03:41:50 np0005539563 kernel: tapeb2c517f-e0: entered promiscuous mode
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.533 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb2c517f-e0, col_values=(('external_ids', {'iface-id': '60f1e0af-3a11-4413-80bc-17ee4b21dd0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.534 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:50 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:50Z|00802|binding|INFO|Releasing lport 60f1e0af-3a11-4413-80bc-17ee4b21dd0a from this chassis (sb_readonly=0)
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.550 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb2c517f-e973-4398-be77-628f13500e1a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb2c517f-e973-4398-be77-628f13500e1a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.552 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[aa01a1c6-e41c-41d5-bf7e-a3429dd93753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.552 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-eb2c517f-e973-4398-be77-628f13500e1a
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/eb2c517f-e973-4398-be77-628f13500e1a.pid.haproxy
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID eb2c517f-e973-4398-be77-628f13500e1a
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:41:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:41:50.553 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'env', 'PROCESS_TAG=haproxy-eb2c517f-e973-4398-be77-628f13500e1a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb2c517f-e973-4398-be77-628f13500e1a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.636 252257 DEBUG nova.compute.manager [req-1c1935cd-19bd-445b-a271-6d6379bb86bc req-b48fe775-0a25-4c6b-89eb-9352b12090e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.637 252257 DEBUG oslo_concurrency.lockutils [req-1c1935cd-19bd-445b-a271-6d6379bb86bc req-b48fe775-0a25-4c6b-89eb-9352b12090e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.637 252257 DEBUG oslo_concurrency.lockutils [req-1c1935cd-19bd-445b-a271-6d6379bb86bc req-b48fe775-0a25-4c6b-89eb-9352b12090e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.637 252257 DEBUG oslo_concurrency.lockutils [req-1c1935cd-19bd-445b-a271-6d6379bb86bc req-b48fe775-0a25-4c6b-89eb-9352b12090e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.638 252257 DEBUG nova.compute.manager [req-1c1935cd-19bd-445b-a271-6d6379bb86bc req-b48fe775-0a25-4c6b-89eb-9352b12090e4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Processing event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.709 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.710 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405710.7088945, 8b3377f3-920a-41d9-bfc4-2b727546ab6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.710 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] VM Started (Lifecycle Event)
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.714 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.718 252257 INFO nova.virt.libvirt.driver [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance spawned successfully.
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.719 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.751 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.755 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.763 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.763 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.763 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.764 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.764 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.765 252257 DEBUG nova.virt.libvirt.driver [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.799 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.800 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405710.7101922, 8b3377f3-920a-41d9-bfc4-2b727546ab6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.801 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] VM Paused (Lifecycle Event)
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.826 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.832 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405710.713218, 8b3377f3-920a-41d9-bfc4-2b727546ab6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.832 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] VM Resumed (Lifecycle Event)
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.840 252257 INFO nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Took 7.26 seconds to spawn the instance on the hypervisor.
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.841 252257 DEBUG nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.852 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.857 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.893 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.918 252257 INFO nova.compute.manager [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Took 8.28 seconds to build instance.
Nov 29 03:41:50 np0005539563 nova_compute[252253]: 2025-11-29 08:41:50.936 252257 DEBUG oslo_concurrency.lockutils [None req-d19e2635-a1b3-4b77-9e29-a67658f7d40c 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:50 np0005539563 podman[375186]: 2025-11-29 08:41:50.98403701 +0000 UTC m=+0.083756624 container create 8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:41:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 1.9 MiB/s wr, 107 op/s
Nov 29 03:41:51 np0005539563 systemd[1]: Started libpod-conmon-8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d.scope.
Nov 29 03:41:51 np0005539563 podman[375186]: 2025-11-29 08:41:50.943061814 +0000 UTC m=+0.042781518 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:41:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:41:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3b83f0fa4359618eefc7b117fdc4f95885ffb416a986c04077c35b6d4c8b83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:51 np0005539563 podman[375186]: 2025-11-29 08:41:51.077073346 +0000 UTC m=+0.176792960 container init 8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:41:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:51 np0005539563 podman[375186]: 2025-11-29 08:41:51.085126725 +0000 UTC m=+0.184846339 container start 8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:51 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [NOTICE]   (375206) : New worker (375208) forked
Nov 29 03:41:51 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [NOTICE]   (375206) : Loading success.
Nov 29 03:41:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:52 np0005539563 nova_compute[252253]: 2025-11-29 08:41:52.741 252257 DEBUG nova.compute.manager [req-fd0577eb-8dc9-47b2-b2bb-200103cf8bcb req-78cd2fe8-349e-4a05-90d3-092df06428c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:52 np0005539563 nova_compute[252253]: 2025-11-29 08:41:52.742 252257 DEBUG oslo_concurrency.lockutils [req-fd0577eb-8dc9-47b2-b2bb-200103cf8bcb req-78cd2fe8-349e-4a05-90d3-092df06428c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:41:52 np0005539563 nova_compute[252253]: 2025-11-29 08:41:52.742 252257 DEBUG oslo_concurrency.lockutils [req-fd0577eb-8dc9-47b2-b2bb-200103cf8bcb req-78cd2fe8-349e-4a05-90d3-092df06428c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:41:52 np0005539563 nova_compute[252253]: 2025-11-29 08:41:52.742 252257 DEBUG oslo_concurrency.lockutils [req-fd0577eb-8dc9-47b2-b2bb-200103cf8bcb req-78cd2fe8-349e-4a05-90d3-092df06428c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:41:52 np0005539563 nova_compute[252253]: 2025-11-29 08:41:52.743 252257 DEBUG nova.compute.manager [req-fd0577eb-8dc9-47b2-b2bb-200103cf8bcb req-78cd2fe8-349e-4a05-90d3-092df06428c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:41:52 np0005539563 nova_compute[252253]: 2025-11-29 08:41:52.743 252257 WARNING nova.compute.manager [req-fd0577eb-8dc9-47b2-b2bb-200103cf8bcb req-78cd2fe8-349e-4a05-90d3-092df06428c5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received unexpected event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with vm_state active and task_state None.
Nov 29 03:41:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 29 03:41:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:53.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:53.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:54 np0005539563 nova_compute[252253]: 2025-11-29 08:41:54.101 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:54Z|00803|binding|INFO|Releasing lport 60f1e0af-3a11-4413-80bc-17ee4b21dd0a from this chassis (sb_readonly=0)
Nov 29 03:41:54 np0005539563 nova_compute[252253]: 2025-11-29 08:41:54.637 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:54 np0005539563 NetworkManager[48981]: <info>  [1764405714.6382] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 29 03:41:54 np0005539563 NetworkManager[48981]: <info>  [1764405714.6394] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 29 03:41:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:41:54Z|00804|binding|INFO|Releasing lport 60f1e0af-3a11-4413-80bc-17ee4b21dd0a from this chassis (sb_readonly=0)
Nov 29 03:41:54 np0005539563 nova_compute[252253]: 2025-11-29 08:41:54.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:54 np0005539563 nova_compute[252253]: 2025-11-29 08:41:54.686 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:54 np0005539563 nova_compute[252253]: 2025-11-29 08:41:54.884 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 29 03:41:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:55 np0005539563 nova_compute[252253]: 2025-11-29 08:41:55.107 252257 DEBUG nova.compute.manager [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-changed-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:41:55 np0005539563 nova_compute[252253]: 2025-11-29 08:41:55.108 252257 DEBUG nova.compute.manager [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Refreshing instance network info cache due to event network-changed-d7976101-fb6f-4e03-be4e-6b60c73979e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:41:55 np0005539563 nova_compute[252253]: 2025-11-29 08:41:55.108 252257 DEBUG oslo_concurrency.lockutils [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:41:55 np0005539563 nova_compute[252253]: 2025-11-29 08:41:55.108 252257 DEBUG oslo_concurrency.lockutils [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:41:55 np0005539563 nova_compute[252253]: 2025-11-29 08:41:55.109 252257 DEBUG nova.network.neutron [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Refreshing network info cache for port d7976101-fb6f-4e03-be4e-6b60c73979e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:41:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:55.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:55 np0005539563 nova_compute[252253]: 2025-11-29 08:41:55.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:41:56 np0005539563 nova_compute[252253]: 2025-11-29 08:41:56.571 252257 DEBUG nova.network.neutron [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updated VIF entry in instance network info cache for port d7976101-fb6f-4e03-be4e-6b60c73979e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:41:56 np0005539563 nova_compute[252253]: 2025-11-29 08:41:56.572 252257 DEBUG nova.network.neutron [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updating instance_info_cache with network_info: [{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:41:56 np0005539563 nova_compute[252253]: 2025-11-29 08:41:56.623 252257 DEBUG oslo_concurrency.lockutils [req-74ca6372-ee77-4023-a5ad-59efc3060eb2 req-0b585372-3809-4b00-bbfb-4a92f15d11c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:41:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 140 op/s
Nov 29 03:41:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:57.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:57 np0005539563 nova_compute[252253]: 2025-11-29 08:41:57.727 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 337 KiB/s wr, 112 op/s
Nov 29 03:41:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:59.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:59 np0005539563 nova_compute[252253]: 2025-11-29 08:41:59.107 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:41:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:59.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:59 np0005539563 nova_compute[252253]: 2025-11-29 08:41:59.888 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:00 np0005539563 nova_compute[252253]: 2025-11-29 08:42:00.853 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 93 op/s
Nov 29 03:42:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:01.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 70 op/s
Nov 29 03:42:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:03.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:04 np0005539563 nova_compute[252253]: 2025-11-29 08:42:04.149 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:04 np0005539563 podman[375224]: 2025-11-29 08:42:04.541929605 +0000 UTC m=+0.094466016 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:42:04 np0005539563 podman[375225]: 2025-11-29 08:42:04.551874686 +0000 UTC m=+0.091845414 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:42:04 np0005539563 podman[375231]: 2025-11-29 08:42:04.556875742 +0000 UTC m=+0.094181887 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:42:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:04 np0005539563 nova_compute[252253]: 2025-11-29 08:42:04.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:04Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:8f:9a 10.100.0.11
Nov 29 03:42:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:04.947 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:04.948 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:04.948 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:04Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:8f:9a 10.100.0.11
Nov 29 03:42:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 256 KiB/s wr, 75 op/s
Nov 29 03:42:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:05.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 193 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 97 op/s
Nov 29 03:42:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:07.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:07.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:07 np0005539563 nova_compute[252253]: 2025-11-29 08:42:07.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:07 np0005539563 nova_compute[252253]: 2025-11-29 08:42:07.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:42:07 np0005539563 nova_compute[252253]: 2025-11-29 08:42:07.707 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:42:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 197 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 29 03:42:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:09 np0005539563 nova_compute[252253]: 2025-11-29 08:42:09.153 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:09 np0005539563 nova_compute[252253]: 2025-11-29 08:42:09.892 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:10 np0005539563 nova_compute[252253]: 2025-11-29 08:42:10.523 252257 INFO nova.compute.manager [None req-f88fc415-89db-45a8-9224-9b99c4a76799 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Get console output#033[00m
Nov 29 03:42:10 np0005539563 nova_compute[252253]: 2025-11-29 08:42:10.529 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:42:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 232 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 3.4 MiB/s wr, 88 op/s
Nov 29 03:42:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:11 np0005539563 nova_compute[252253]: 2025-11-29 08:42:11.630 252257 INFO nova.compute.manager [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Rebuilding instance#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.224 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.255 252257 DEBUG nova.compute.manager [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.339 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.372 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.402 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.418 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.436 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:42:12 np0005539563 nova_compute[252253]: 2025-11-29 08:42:12.441 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:42:12
Nov 29 03:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 29 03:42:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:42:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 232 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 3.4 MiB/s wr, 88 op/s
Nov 29 03:42:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:13.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:14 np0005539563 nova_compute[252253]: 2025-11-29 08:42:14.156 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539563 nova_compute[252253]: 2025-11-29 08:42:14.707 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:14 np0005539563 kernel: tapd7976101-fb (unregistering): left promiscuous mode
Nov 29 03:42:14 np0005539563 NetworkManager[48981]: <info>  [1764405734.7448] device (tapd7976101-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:14Z|00805|binding|INFO|Releasing lport d7976101-fb6f-4e03-be4e-6b60c73979e4 from this chassis (sb_readonly=0)
Nov 29 03:42:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:14Z|00806|binding|INFO|Setting lport d7976101-fb6f-4e03-be4e-6b60c73979e4 down in Southbound
Nov 29 03:42:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:14Z|00807|binding|INFO|Removing iface tapd7976101-fb ovn-installed in OVS
Nov 29 03:42:14 np0005539563 nova_compute[252253]: 2025-11-29 08:42:14.752 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:14.763 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:8f:9a 10.100.0.11'], port_security=['fa:16:3e:80:8f:9a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8b3377f3-920a-41d9-bfc4-2b727546ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb2c517f-e973-4398-be77-628f13500e1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1042471a-4e26-498d-88fd-cdaba7fe83fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6798f55c-ab9d-407e-ac43-81155e9f9232, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d7976101-fb6f-4e03-be4e-6b60c73979e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:14.764 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d7976101-fb6f-4e03-be4e-6b60c73979e4 in datapath eb2c517f-e973-4398-be77-628f13500e1a unbound from our chassis#033[00m
Nov 29 03:42:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:14.765 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb2c517f-e973-4398-be77-628f13500e1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:14.766 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f53c54-dda6-46d3-b962-48d5e616dc33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:14.767 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a namespace which is not needed anymore#033[00m
Nov 29 03:42:14 np0005539563 nova_compute[252253]: 2025-11-29 08:42:14.774 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539563 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Nov 29 03:42:14 np0005539563 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000ba.scope: Consumed 14.557s CPU time.
Nov 29 03:42:14 np0005539563 systemd-machined[213024]: Machine qemu-92-instance-000000ba terminated.
Nov 29 03:42:14 np0005539563 nova_compute[252253]: 2025-11-29 08:42:14.894 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [NOTICE]   (375206) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:14 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [NOTICE]   (375206) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:14 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [WARNING]  (375206) : Exiting Master process...
Nov 29 03:42:14 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [WARNING]  (375206) : Exiting Master process...
Nov 29 03:42:14 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [ALERT]    (375206) : Current worker (375208) exited with code 143 (Terminated)
Nov 29 03:42:14 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375202]: [WARNING]  (375206) : All workers exited. Exiting... (0)
Nov 29 03:42:14 np0005539563 systemd[1]: libpod-8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d.scope: Deactivated successfully.
Nov 29 03:42:14 np0005539563 podman[375369]: 2025-11-29 08:42:14.906073707 +0000 UTC m=+0.050057406 container died 8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:42:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cd3b83f0fa4359618eefc7b117fdc4f95885ffb416a986c04077c35b6d4c8b83-merged.mount: Deactivated successfully.
Nov 29 03:42:14 np0005539563 podman[375369]: 2025-11-29 08:42:14.943881707 +0000 UTC m=+0.087865406 container cleanup 8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:42:14 np0005539563 systemd[1]: libpod-conmon-8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d.scope: Deactivated successfully.
Nov 29 03:42:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 03:42:15 np0005539563 podman[375399]: 2025-11-29 08:42:15.007179452 +0000 UTC m=+0.044127744 container remove 8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.013 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3006c72e-5819-4f0a-afb1-51d6fbdba84f]: (4, ('Sat Nov 29 08:42:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a (8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d)\n8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d\nSat Nov 29 08:42:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a (8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d)\n8501a41c335cc211ac3efe3ccfef3d06ddb33eb390c1c91b5a3e0dcfcd6f907d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.015 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9b862e93-358d-42b4-9462-b9bd438a3aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.016 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb2c517f-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.018 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539563 kernel: tapeb2c517f-e0: left promiscuous mode
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.034 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.037 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[06cbe7a6-0249-46ca-a488-3af5cc7ec63f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.056 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5864dd-13f4-4666-bf21-08774f3cb478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.057 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3569909a-a510-4496-9024-7657633c8ae1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.076 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d9149291-7f59-4386-9eb0-beeaf5ec0055]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867802, 'reachable_time': 16162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375429, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.079 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.079 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[abfccf37-ed69-48b3-855e-b3d26b49c27c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:15 np0005539563 systemd[1]: run-netns-ovnmeta\x2deb2c517f\x2de973\x2d4398\x2dbe77\x2d628f13500e1a.mount: Deactivated successfully.
Nov 29 03:42:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:15.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.465 252257 DEBUG nova.compute.manager [req-6cccff3b-cfb5-49f4-89e2-2709e3eda677 req-bc267d9d-ef67-4327-b1cb-a9e7a7bd45b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-unplugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.466 252257 DEBUG oslo_concurrency.lockutils [req-6cccff3b-cfb5-49f4-89e2-2709e3eda677 req-bc267d9d-ef67-4327-b1cb-a9e7a7bd45b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.466 252257 DEBUG oslo_concurrency.lockutils [req-6cccff3b-cfb5-49f4-89e2-2709e3eda677 req-bc267d9d-ef67-4327-b1cb-a9e7a7bd45b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.466 252257 DEBUG oslo_concurrency.lockutils [req-6cccff3b-cfb5-49f4-89e2-2709e3eda677 req-bc267d9d-ef67-4327-b1cb-a9e7a7bd45b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.466 252257 DEBUG nova.compute.manager [req-6cccff3b-cfb5-49f4-89e2-2709e3eda677 req-bc267d9d-ef67-4327-b1cb-a9e7a7bd45b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-unplugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.466 252257 WARNING nova.compute.manager [req-6cccff3b-cfb5-49f4-89e2-2709e3eda677 req-bc267d9d-ef67-4327-b1cb-a9e7a7bd45b9 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received unexpected event network-vif-unplugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.467 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.473 252257 INFO nova.virt.libvirt.driver [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance destroyed successfully.#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.478 252257 INFO nova.virt.libvirt.driver [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance destroyed successfully.#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.479 252257 DEBUG nova.virt.libvirt.vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1930617470',display_name='tempest-TestNetworkAdvancedServerOps-server-1930617470',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1930617470',id=186,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKY1Sg2xL/aJz8wIL7cf5MzcV52yp5R0mXmIsEO1TSV5wOnaXa6t112hZJc+/UBiVqxk5rRlpEmVgzJvgpc06h1m1EAduPYs3GDyvBwnX5qP0GCBg7T1VF1J1Nh92LK9xA==',key_name='tempest-TestNetworkAdvancedServerOps-482800446',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-x7e57fd4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:11Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=8b3377f3-920a-41d9-bfc4-2b727546ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.479 252257 DEBUG nova.network.os_vif_util [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.480 252257 DEBUG nova.network.os_vif_util [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.481 252257 DEBUG os_vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.482 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.483 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7976101-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.486 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.487 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.491 252257 INFO os_vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb')#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.554 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.554 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.558 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:42:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:15.562 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.963 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deleting instance files /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f_del#033[00m
Nov 29 03:42:15 np0005539563 nova_compute[252253]: 2025-11-29 08:42:15.965 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deletion of /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f_del complete#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.128 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.129 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Creating image(s)#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.159 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.189 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.230 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.236 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.331 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.333 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.334 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.334 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "40c26ed0fe4534cf021820db0c9b5c605a52a242" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.370 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.377 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.705 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.800 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] resizing rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.908 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.910 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Ensure instance console log exists: /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.912 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.913 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.914 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.919 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Start _get_guest_xml network_info=[{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.924 252257 WARNING nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.930 252257 DEBUG nova.virt.libvirt.host [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.931 252257 DEBUG nova.virt.libvirt.host [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.934 252257 DEBUG nova.virt.libvirt.host [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.934 252257 DEBUG nova.virt.libvirt.host [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.936 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.936 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:40:38Z,direct_url=<?>,disk_format='qcow2',id=ed489666-5fa2-4ea4-8005-7a7505ac1b78,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:58Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.937 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.937 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.937 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.938 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.938 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.938 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.938 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.939 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.939 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.940 252257 DEBUG nova.virt.hardware [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.940 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:16 np0005539563 nova_compute[252253]: 2025-11-29 08:42:16.957 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 277 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Nov 29 03:42:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:17.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419823308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.380 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.409 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.414 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.555 252257 DEBUG nova.compute.manager [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.557 252257 DEBUG oslo_concurrency.lockutils [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.557 252257 DEBUG oslo_concurrency.lockutils [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.557 252257 DEBUG oslo_concurrency.lockutils [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.557 252257 DEBUG nova.compute.manager [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.558 252257 WARNING nova.compute.manager [req-fe2c6796-30f7-4cc6-8693-5c5e995e0b5c req-1c2e773b-2104-42d1-924f-673432290760 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received unexpected event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.681 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.682 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.711 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.712 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2380355446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.862 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.864 252257 DEBUG nova.virt.libvirt.vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1930617470',display_name='tempest-TestNetworkAdvancedServerOps-server-1930617470',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1930617470',id=186,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKY1Sg2xL/aJz8wIL7cf5MzcV52yp5R0mXmIsEO1TSV5wOnaXa6t112hZJc+/UBiVqxk5rRlpEmVgzJvgpc06h1m1EAduPYs3GDyvBwnX5qP0GCBg7T1VF1J1Nh92LK9xA==',key_name='tempest-TestNetworkAdvancedServerOps-482800446',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-x7e57fd4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:16Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=8b3377f3-920a-41d9-bfc4-2b727546ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.864 252257 DEBUG nova.network.os_vif_util [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.865 252257 DEBUG nova.network.os_vif_util [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.867 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <uuid>8b3377f3-920a-41d9-bfc4-2b727546ab6f</uuid>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <name>instance-000000ba</name>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1930617470</nova:name>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:42:16</nova:creationTime>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="ed489666-5fa2-4ea4-8005-7a7505ac1b78"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <nova:port uuid="d7976101-fb6f-4e03-be4e-6b60c73979e4">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <entry name="serial">8b3377f3-920a-41d9-bfc4-2b727546ab6f</entry>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <entry name="uuid">8b3377f3-920a-41d9-bfc4-2b727546ab6f</entry>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:80:8f:9a"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <target dev="tapd7976101-fb"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/console.log" append="off"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:42:17 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:42:17 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:42:17 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:42:17 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.870 252257 DEBUG nova.virt.libvirt.vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1930617470',display_name='tempest-TestNetworkAdvancedServerOps-server-1930617470',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1930617470',id=186,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKY1Sg2xL/aJz8wIL7cf5MzcV52yp5R0mXmIsEO1TSV5wOnaXa6t112hZJc+/UBiVqxk5rRlpEmVgzJvgpc06h1m1EAduPYs3GDyvBwnX5qP0GCBg7T1VF1J1Nh92LK9xA==',key_name='tempest-TestNetworkAdvancedServerOps-482800446',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-x7e57fd4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:16Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=8b3377f3-920a-41d9-bfc4-2b727546ab6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.871 252257 DEBUG nova.network.os_vif_util [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.872 252257 DEBUG nova.network.os_vif_util [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.873 252257 DEBUG os_vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.874 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.875 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.876 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.879 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.879 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7976101-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.880 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7976101-fb, col_values=(('external_ids', {'iface-id': 'd7976101-fb6f-4e03-be4e-6b60c73979e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:8f:9a', 'vm-uuid': '8b3377f3-920a-41d9-bfc4-2b727546ab6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.883 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:17 np0005539563 NetworkManager[48981]: <info>  [1764405737.8839] manager: (tapd7976101-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.886 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.887 252257 INFO os_vif [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb')#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.989 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.990 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.990 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:80:8f:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:42:17 np0005539563 nova_compute[252253]: 2025-11-29 08:42:17.991 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Using config drive#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.024 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.045 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.075 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'keypairs' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.490 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Creating config drive at /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.495 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyhsg8p0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.639 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyhsg8p0z" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.672 252257 DEBUG nova.storage.rbd_utils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] rbd image 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.676 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.885 252257 DEBUG oslo_concurrency.processutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config 8b3377f3-920a-41d9-bfc4-2b727546ab6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.886 252257 INFO nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deleting local config drive /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f/disk.config because it was imported into RBD.#033[00m
Nov 29 03:42:18 np0005539563 kernel: tapd7976101-fb: entered promiscuous mode
Nov 29 03:42:18 np0005539563 NetworkManager[48981]: <info>  [1764405738.9450] manager: (tapd7976101-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Nov 29 03:42:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:18Z|00808|binding|INFO|Claiming lport d7976101-fb6f-4e03-be4e-6b60c73979e4 for this chassis.
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.945 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:18Z|00809|binding|INFO|d7976101-fb6f-4e03-be4e-6b60c73979e4: Claiming fa:16:3e:80:8f:9a 10.100.0.11
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.956 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:8f:9a 10.100.0.11'], port_security=['fa:16:3e:80:8f:9a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8b3377f3-920a-41d9-bfc4-2b727546ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb2c517f-e973-4398-be77-628f13500e1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1042471a-4e26-498d-88fd-cdaba7fe83fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6798f55c-ab9d-407e-ac43-81155e9f9232, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d7976101-fb6f-4e03-be4e-6b60c73979e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.957 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d7976101-fb6f-4e03-be4e-6b60c73979e4 in datapath eb2c517f-e973-4398-be77-628f13500e1a bound to our chassis#033[00m
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.959 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb2c517f-e973-4398-be77-628f13500e1a#033[00m
Nov 29 03:42:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:18Z|00810|binding|INFO|Setting lport d7976101-fb6f-4e03-be4e-6b60c73979e4 ovn-installed in OVS
Nov 29 03:42:18 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:18Z|00811|binding|INFO|Setting lport d7976101-fb6f-4e03-be4e-6b60c73979e4 up in Southbound
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.964 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539563 nova_compute[252253]: 2025-11-29 08:42:18.968 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:18 np0005539563 systemd-udevd[375751]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.973 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c93ef638-4b75-4ae1-8954-5a93eee2c00f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.975 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb2c517f-e1 in ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.977 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb2c517f-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.977 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d93c8b7-37f7-423e-9c18-c93d320b97d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.980 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[56975039-8673-4cae-be96-26f2d591583b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:18 np0005539563 NetworkManager[48981]: <info>  [1764405738.9853] device (tapd7976101-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:42:18 np0005539563 NetworkManager[48981]: <info>  [1764405738.9863] device (tapd7976101-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:42:18 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:18.992 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[f370bb33-f295-4a64-a536-4fa703fa9071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:18 np0005539563 systemd-machined[213024]: New machine qemu-93-instance-000000ba.
Nov 29 03:42:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 252 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Nov 29 03:42:19 np0005539563 systemd[1]: Started Virtual Machine qemu-93-instance-000000ba.
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.023 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[00044936-39d9-46ea-a483-f72000d3def8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.057 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[ea205d9a-06e7-40d0-a886-aa2d5e9a4ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 NetworkManager[48981]: <info>  [1764405739.0664] manager: (tapeb2c517f-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/356)
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.065 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bee8514e-9c46-4324-aad3-8f79745b5e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 systemd-udevd[375756]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.100 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf3d503-9d2a-40a8-b770-e9e6d327e415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.104 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f331bccc-d4dd-490c-890e-a061905005dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:19.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:19 np0005539563 NetworkManager[48981]: <info>  [1764405739.1304] device (tapeb2c517f-e0): carrier: link connected
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.138 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[464d4dce-7885-4d14-b719-bcb897755db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.168 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[53d02153-a38e-44d6-acc1-739a7e9677bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb2c517f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:3f:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870690, 'reachable_time': 33913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375788, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.184 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0f208c86-4cb2-4174-91fe-5a093d22ec93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:3f23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870690, 'tstamp': 870690}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375796, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.202 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[81089be0-5ca7-4379-8fd7-6b368c35ed7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb2c517f-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:3f:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870690, 'reachable_time': 33913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375805, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.233 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8c5743-f020-4034-8259-48782d900613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.287 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbd2510-04d8-4284-b262-8c9f6298d83f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.290 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb2c517f-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.290 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.291 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb2c517f-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.294 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539563 NetworkManager[48981]: <info>  [1764405739.2948] manager: (tapeb2c517f-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 29 03:42:19 np0005539563 kernel: tapeb2c517f-e0: entered promiscuous mode
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.296 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.298 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb2c517f-e0, col_values=(('external_ids', {'iface-id': '60f1e0af-3a11-4413-80bc-17ee4b21dd0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:19Z|00812|binding|INFO|Releasing lport 60f1e0af-3a11-4413-80bc-17ee4b21dd0a from this chassis (sb_readonly=0)
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.300 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.314 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.315 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb2c517f-e973-4398-be77-628f13500e1a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb2c517f-e973-4398-be77-628f13500e1a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.316 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9696593b-58c7-4b50-b42b-65e102f14c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.317 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-eb2c517f-e973-4398-be77-628f13500e1a
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/eb2c517f-e973-4398-be77-628f13500e1a.pid.haproxy
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID eb2c517f-e973-4398-be77-628f13500e1a
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:42:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:19.318 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'env', 'PROCESS_TAG=haproxy-eb2c517f-e973-4398-be77-628f13500e1a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb2c517f-e973-4398-be77-628f13500e1a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.380 252257 DEBUG nova.virt.libvirt.host [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Removed pending event for 8b3377f3-920a-41d9-bfc4-2b727546ab6f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.380 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405739.3795896, 8b3377f3-920a-41d9-bfc4-2b727546ab6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.380 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.383 252257 DEBUG nova.compute.manager [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.383 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.388 252257 INFO nova.virt.libvirt.driver [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance spawned successfully.#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.389 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.406 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.412 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.417 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.418 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.418 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.419 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.419 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.420 252257 DEBUG nova.virt.libvirt.driver [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.457 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.458 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405739.3816807, 8b3377f3-920a-41d9-bfc4-2b727546ab6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.458 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.483 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.487 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.503 252257 DEBUG nova.compute.manager [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.556 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.598 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.599 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.599 252257 DEBUG nova.objects.instance [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.668 252257 DEBUG oslo_concurrency.lockutils [None req-f532d357-9456-4f9f-9491-746f292b73e0 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:19 np0005539563 podman[375860]: 2025-11-29 08:42:19.701401012 +0000 UTC m=+0.059681648 container create 544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:42:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:19 np0005539563 systemd[1]: Started libpod-conmon-544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55.scope.
Nov 29 03:42:19 np0005539563 podman[375860]: 2025-11-29 08:42:19.669448471 +0000 UTC m=+0.027729137 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:42:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9b0a81a0c91edcb2d02ea8a34deca1b3ff4126298feddac876cf4e0f34564a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:19 np0005539563 podman[375860]: 2025-11-29 08:42:19.789232606 +0000 UTC m=+0.147513262 container init 544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:42:19 np0005539563 podman[375860]: 2025-11-29 08:42:19.794423428 +0000 UTC m=+0.152704064 container start 544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:42:19 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [NOTICE]   (375879) : New worker (375881) forked
Nov 29 03:42:19 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [NOTICE]   (375879) : Loading success.
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.873 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updating instance_info_cache with network_info: [{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.896 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.900 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.900 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:42:19 np0005539563 nova_compute[252253]: 2025-11-29 08:42:19.901 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:20 np0005539563 nova_compute[252253]: 2025-11-29 08:42:20.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:20 np0005539563 nova_compute[252253]: 2025-11-29 08:42:20.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539563 nova_compute[252253]: 2025-11-29 08:42:20.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539563 nova_compute[252253]: 2025-11-29 08:42:20.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539563 nova_compute[252253]: 2025-11-29 08:42:20.700 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:42:20 np0005539563 nova_compute[252253]: 2025-11-29 08:42:20.700 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 29 03:42:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:21.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413093287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.181 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.268 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.269 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ba as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:42:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.417 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.419 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4016MB free_disk=20.92298126220703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.419 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.419 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.548 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 8b3377f3-920a-41d9-bfc4-2b727546ab6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.549 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.549 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.604 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.800 252257 DEBUG nova.compute.manager [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.801 252257 DEBUG oslo_concurrency.lockutils [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.801 252257 DEBUG oslo_concurrency.lockutils [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.801 252257 DEBUG oslo_concurrency.lockutils [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.802 252257 DEBUG nova.compute.manager [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.802 252257 WARNING nova.compute.manager [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received unexpected event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.802 252257 DEBUG nova.compute.manager [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.803 252257 DEBUG oslo_concurrency.lockutils [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.803 252257 DEBUG oslo_concurrency.lockutils [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.803 252257 DEBUG oslo_concurrency.lockutils [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.804 252257 DEBUG nova.compute.manager [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:21 np0005539563 nova_compute[252253]: 2025-11-29 08:42:21.804 252257 WARNING nova.compute.manager [req-6d9926d7-756a-438f-aa80-7a6d9fa94f34 req-dd5319ed-4b30-4ebb-8b4b-ed2e6cff3c91 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received unexpected event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:42:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927432958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:22 np0005539563 nova_compute[252253]: 2025-11-29 08:42:22.068 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:22 np0005539563 nova_compute[252253]: 2025-11-29 08:42:22.073 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:22 np0005539563 nova_compute[252253]: 2025-11-29 08:42:22.093 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:22 np0005539563 nova_compute[252253]: 2025-11-29 08:42:22.124 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:42:22 np0005539563 nova_compute[252253]: 2025-11-29 08:42:22.125 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:22 np0005539563 nova_compute[252253]: 2025-11-29 08:42:22.882 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Nov 29 03:42:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:23.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:23 np0005539563 nova_compute[252253]: 2025-11-29 08:42:23.126 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:23 np0005539563 nova_compute[252253]: 2025-11-29 08:42:23.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019914750255909173 of space, bias 1.0, pg target 0.5974425076772752 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:42:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:24 np0005539563 nova_compute[252253]: 2025-11-29 08:42:24.898 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.4 MiB/s wr, 201 op/s
Nov 29 03:42:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:25.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.332210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746332316, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 3843898, "memory_usage": 3907968, "flush_reason": "Manual Compaction"}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746362143, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3754644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62587, "largest_seqno": 64708, "table_properties": {"data_size": 3745151, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19677, "raw_average_key_size": 20, "raw_value_size": 3726216, "raw_average_value_size": 3857, "num_data_blocks": 262, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405532, "oldest_key_time": 1764405532, "file_creation_time": 1764405746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 29999 microseconds, and 9783 cpu microseconds.
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.362244) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3754644 bytes OK
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.362285) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.364852) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.364891) EVENT_LOG_v1 {"time_micros": 1764405746364883, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.364911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3835333, prev total WAL file size 3835333, number of live WAL files 2.
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.366138) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3666KB)], [140(10MB)]
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746366250, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 15257862, "oldest_snapshot_seqno": -1}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9788 keys, 13330620 bytes, temperature: kUnknown
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746476976, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 13330620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13266080, "index_size": 38955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 257789, "raw_average_key_size": 26, "raw_value_size": 13093087, "raw_average_value_size": 1337, "num_data_blocks": 1489, "num_entries": 9788, "num_filter_entries": 9788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.477238) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 13330620 bytes
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.478752) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.7 rd, 120.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.0 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 10305, records dropped: 517 output_compression: NoCompression
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.478769) EVENT_LOG_v1 {"time_micros": 1764405746478761, "job": 86, "event": "compaction_finished", "compaction_time_micros": 110799, "compaction_time_cpu_micros": 31821, "output_level": 6, "num_output_files": 1, "total_output_size": 13330620, "num_input_records": 10305, "num_output_records": 9788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746479449, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405746481105, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.365918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.481197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.481204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.481206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.481208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:26 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:42:26.481209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 205 op/s
Nov 29 03:42:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:27 np0005539563 nova_compute[252253]: 2025-11-29 08:42:27.885 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Nov 29 03:42:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:29.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:29 np0005539563 nova_compute[252253]: 2025-11-29 08:42:29.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:29 np0005539563 nova_compute[252253]: 2025-11-29 08:42:29.900 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539563 nova_compute[252253]: 2025-11-29 08:42:30.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:30 np0005539563 nova_compute[252253]: 2025-11-29 08:42:30.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:42:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 223 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 202 op/s
Nov 29 03:42:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:32 np0005539563 nova_compute[252253]: 2025-11-29 08:42:32.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:32 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 03:42:32 np0005539563 nova_compute[252253]: 2025-11-29 08:42:32.887 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 223 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 920 KiB/s wr, 73 op/s
Nov 29 03:42:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:33.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:33Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:8f:9a 10.100.0.11
Nov 29 03:42:33 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:33Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:8f:9a 10.100.0.11
Nov 29 03:42:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:34 np0005539563 nova_compute[252253]: 2025-11-29 08:42:34.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 158 op/s
Nov 29 03:42:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:35.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:35 np0005539563 podman[376010]: 2025-11-29 08:42:35.510609452 +0000 UTC m=+0.059317439 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:42:35 np0005539563 podman[376017]: 2025-11-29 08:42:35.511776993 +0000 UTC m=+0.063698977 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:42:35 np0005539563 podman[376018]: 2025-11-29 08:42:35.537658168 +0000 UTC m=+0.085592993 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:42:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:42:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:42:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 812 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 29 03:42:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ca33b5e4-364f-439d-ad90-fbd9465b593e does not exist
Nov 29 03:42:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev caafcbab-1718-4cfd-ac56-5b54e50062a6 does not exist
Nov 29 03:42:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 459b5c7e-ebb2-4c3d-bf38-f99917260ceb does not exist
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:42:37 np0005539563 nova_compute[252253]: 2025-11-29 08:42:37.889 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.371247897 +0000 UTC m=+0.062564486 container create a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:42:38 np0005539563 systemd[1]: Started libpod-conmon-a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016.scope.
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.335886623 +0000 UTC m=+0.027203292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:42:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.473597396 +0000 UTC m=+0.164913995 container init a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hellman, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.48326463 +0000 UTC m=+0.174581219 container start a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.48728184 +0000 UTC m=+0.178598469 container attach a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hellman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:42:38 np0005539563 nifty_hellman[376343]: 167 167
Nov 29 03:42:38 np0005539563 systemd[1]: libpod-a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016.scope: Deactivated successfully.
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.493964122 +0000 UTC m=+0.185280711 container died a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hellman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:42:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e5588fcc76c09fd87c7e299b088a2ab54f6cc53cba2d6f7e55f502642eee68ab-merged.mount: Deactivated successfully.
Nov 29 03:42:38 np0005539563 podman[376326]: 2025-11-29 08:42:38.533001176 +0000 UTC m=+0.224317755 container remove a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:42:38 np0005539563 systemd[1]: libpod-conmon-a05324088aac9942dc757557fc59c9a86e31270dbfa14e5fabfe4df95da98016.scope: Deactivated successfully.
Nov 29 03:42:38 np0005539563 podman[376365]: 2025-11-29 08:42:38.735981388 +0000 UTC m=+0.047409953 container create 41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:42:38 np0005539563 systemd[1]: Started libpod-conmon-41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01.scope.
Nov 29 03:42:38 np0005539563 podman[376365]: 2025-11-29 08:42:38.71622098 +0000 UTC m=+0.027649565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:42:38 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39de46b9720d413dc0da54e46e4c3cbf9b8e5e9f189624fa2a153bc9740db70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39de46b9720d413dc0da54e46e4c3cbf9b8e5e9f189624fa2a153bc9740db70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39de46b9720d413dc0da54e46e4c3cbf9b8e5e9f189624fa2a153bc9740db70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39de46b9720d413dc0da54e46e4c3cbf9b8e5e9f189624fa2a153bc9740db70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:38 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39de46b9720d413dc0da54e46e4c3cbf9b8e5e9f189624fa2a153bc9740db70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:38 np0005539563 podman[376365]: 2025-11-29 08:42:38.839284134 +0000 UTC m=+0.150712719 container init 41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:42:38 np0005539563 podman[376365]: 2025-11-29 08:42:38.84685773 +0000 UTC m=+0.158286295 container start 41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:42:38 np0005539563 podman[376365]: 2025-11-29 08:42:38.850774167 +0000 UTC m=+0.162202752 container attach 41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:42:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Nov 29 03:42:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:39.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:39 np0005539563 blissful_yonath[376383]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:42:39 np0005539563 blissful_yonath[376383]: --> relative data size: 1.0
Nov 29 03:42:39 np0005539563 blissful_yonath[376383]: --> All data devices are unavailable
Nov 29 03:42:39 np0005539563 systemd[1]: libpod-41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01.scope: Deactivated successfully.
Nov 29 03:42:39 np0005539563 podman[376365]: 2025-11-29 08:42:39.701018959 +0000 UTC m=+1.012447564 container died 41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:42:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c39de46b9720d413dc0da54e46e4c3cbf9b8e5e9f189624fa2a153bc9740db70-merged.mount: Deactivated successfully.
Nov 29 03:42:39 np0005539563 podman[376365]: 2025-11-29 08:42:39.76782529 +0000 UTC m=+1.079253855 container remove 41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:42:39 np0005539563 systemd[1]: libpod-conmon-41063f0b58c018fcce3e5d14d38194a9f42ac6a2aa632c03d2165e9ea5546b01.scope: Deactivated successfully.
Nov 29 03:42:39 np0005539563 nova_compute[252253]: 2025-11-29 08:42:39.955 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.394972403 +0000 UTC m=+0.036369342 container create a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:42:40 np0005539563 systemd[1]: Started libpod-conmon-a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9.scope.
Nov 29 03:42:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.37799235 +0000 UTC m=+0.019389309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.488847292 +0000 UTC m=+0.130244251 container init a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.496455039 +0000 UTC m=+0.137852028 container start a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.500535911 +0000 UTC m=+0.141932860 container attach a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:42:40 np0005539563 vibrant_wright[376567]: 167 167
Nov 29 03:42:40 np0005539563 systemd[1]: libpod-a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9.scope: Deactivated successfully.
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.501842946 +0000 UTC m=+0.143239895 container died a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:42:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-51c1b29029994918149584882e1da6ea275729f41e8d404a28a3fffb3221b4ed-merged.mount: Deactivated successfully.
Nov 29 03:42:40 np0005539563 podman[376551]: 2025-11-29 08:42:40.541369853 +0000 UTC m=+0.182766792 container remove a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:40 np0005539563 systemd[1]: libpod-conmon-a50320339ad38b04b636f8ed8f1958b480fb1d5d8b0907ee7a899b68d5a65ef9.scope: Deactivated successfully.
Nov 29 03:42:40 np0005539563 nova_compute[252253]: 2025-11-29 08:42:40.659 252257 INFO nova.compute.manager [None req-78007b4a-7008-4ddc-afd1-db7847539939 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Get console output#033[00m
Nov 29 03:42:40 np0005539563 nova_compute[252253]: 2025-11-29 08:42:40.671 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:42:40 np0005539563 podman[376589]: 2025-11-29 08:42:40.744097969 +0000 UTC m=+0.067069769 container create 16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_benz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:42:40 np0005539563 systemd[1]: Started libpod-conmon-16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba.scope.
Nov 29 03:42:40 np0005539563 podman[376589]: 2025-11-29 08:42:40.724374401 +0000 UTC m=+0.047346301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:42:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab4294934cfa9ffd4cd842e8b915fccd345108636e31acdbe27c188bd013201/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab4294934cfa9ffd4cd842e8b915fccd345108636e31acdbe27c188bd013201/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab4294934cfa9ffd4cd842e8b915fccd345108636e31acdbe27c188bd013201/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab4294934cfa9ffd4cd842e8b915fccd345108636e31acdbe27c188bd013201/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:40 np0005539563 podman[376589]: 2025-11-29 08:42:40.856285497 +0000 UTC m=+0.179257347 container init 16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_benz, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:42:40 np0005539563 podman[376589]: 2025-11-29 08:42:40.865783525 +0000 UTC m=+0.188755335 container start 16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:42:40 np0005539563 podman[376589]: 2025-11-29 08:42:40.869795845 +0000 UTC m=+0.192767675 container attach 16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_benz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:42:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Nov 29 03:42:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:41.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:41 np0005539563 interesting_benz[376606]: {
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:    "0": [
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:        {
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "devices": [
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "/dev/loop3"
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            ],
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "lv_name": "ceph_lv0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "lv_size": "7511998464",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "name": "ceph_lv0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "tags": {
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.cluster_name": "ceph",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.crush_device_class": "",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.encrypted": "0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.osd_id": "0",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.type": "block",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:                "ceph.vdo": "0"
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            },
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "type": "block",
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:            "vg_name": "ceph_vg0"
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:        }
Nov 29 03:42:41 np0005539563 interesting_benz[376606]:    ]
Nov 29 03:42:41 np0005539563 interesting_benz[376606]: }
Nov 29 03:42:41 np0005539563 systemd[1]: libpod-16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba.scope: Deactivated successfully.
Nov 29 03:42:41 np0005539563 podman[376589]: 2025-11-29 08:42:41.7149739 +0000 UTC m=+1.037945720 container died 16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:42:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2ab4294934cfa9ffd4cd842e8b915fccd345108636e31acdbe27c188bd013201-merged.mount: Deactivated successfully.
Nov 29 03:42:41 np0005539563 podman[376589]: 2025-11-29 08:42:41.775285954 +0000 UTC m=+1.098257764 container remove 16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:42:41 np0005539563 systemd[1]: libpod-conmon-16f37cd4f2dfdf2e1024c1827a7a3047401e4b4a8016e7ea56b7c76aeffc79ba.scope: Deactivated successfully.
Nov 29 03:42:41 np0005539563 nova_compute[252253]: 2025-11-29 08:42:41.997 252257 DEBUG nova.compute.manager [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-changed-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:41 np0005539563 nova_compute[252253]: 2025-11-29 08:42:41.997 252257 DEBUG nova.compute.manager [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Refreshing instance network info cache due to event network-changed-d7976101-fb6f-4e03-be4e-6b60c73979e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:41 np0005539563 nova_compute[252253]: 2025-11-29 08:42:41.998 252257 DEBUG oslo_concurrency.lockutils [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:41 np0005539563 nova_compute[252253]: 2025-11-29 08:42:41.998 252257 DEBUG oslo_concurrency.lockutils [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:41 np0005539563 nova_compute[252253]: 2025-11-29 08:42:41.998 252257 DEBUG nova.network.neutron [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Refreshing network info cache for port d7976101-fb6f-4e03-be4e-6b60c73979e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.105 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.107 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.108 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.108 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.109 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.111 252257 INFO nova.compute.manager [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Terminating instance#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.112 252257 DEBUG nova.compute.manager [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:42 np0005539563 kernel: tapd7976101-fb (unregistering): left promiscuous mode
Nov 29 03:42:42 np0005539563 NetworkManager[48981]: <info>  [1764405762.1766] device (tapd7976101-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:42Z|00813|binding|INFO|Releasing lport d7976101-fb6f-4e03-be4e-6b60c73979e4 from this chassis (sb_readonly=0)
Nov 29 03:42:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:42Z|00814|binding|INFO|Setting lport d7976101-fb6f-4e03-be4e-6b60c73979e4 down in Southbound
Nov 29 03:42:42 np0005539563 ovn_controller[148841]: 2025-11-29T08:42:42Z|00815|binding|INFO|Removing iface tapd7976101-fb ovn-installed in OVS
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.195 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:8f:9a 10.100.0.11'], port_security=['fa:16:3e:80:8f:9a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8b3377f3-920a-41d9-bfc4-2b727546ab6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb2c517f-e973-4398-be77-628f13500e1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1042471a-4e26-498d-88fd-cdaba7fe83fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6798f55c-ab9d-407e-ac43-81155e9f9232, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=d7976101-fb6f-4e03-be4e-6b60c73979e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.197 158990 INFO neutron.agent.ovn.metadata.agent [-] Port d7976101-fb6f-4e03-be4e-6b60c73979e4 in datapath eb2c517f-e973-4398-be77-628f13500e1a unbound from our chassis#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.198 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb2c517f-e973-4398-be77-628f13500e1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.199 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3beb2220-a9e7-45a1-9251-883325faa47b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.199 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a namespace which is not needed anymore#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.203 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000ba.scope: Deactivated successfully.
Nov 29 03:42:42 np0005539563 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000ba.scope: Consumed 13.968s CPU time.
Nov 29 03:42:42 np0005539563 systemd-machined[213024]: Machine qemu-93-instance-000000ba terminated.
Nov 29 03:42:42 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [NOTICE]   (375879) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:42 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [NOTICE]   (375879) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:42 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [WARNING]  (375879) : Exiting Master process...
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.361 252257 INFO nova.virt.libvirt.driver [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Instance destroyed successfully.#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.362 252257 DEBUG nova.objects.instance [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 8b3377f3-920a-41d9-bfc4-2b727546ab6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:42 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [ALERT]    (375879) : Current worker (375881) exited with code 143 (Terminated)
Nov 29 03:42:42 np0005539563 neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a[375875]: [WARNING]  (375879) : All workers exited. Exiting... (0)
Nov 29 03:42:42 np0005539563 systemd[1]: libpod-544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55.scope: Deactivated successfully.
Nov 29 03:42:42 np0005539563 podman[376775]: 2025-11-29 08:42:42.375263306 +0000 UTC m=+0.058222798 container died 544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.376 252257 DEBUG nova.virt.libvirt.vif [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1930617470',display_name='tempest-TestNetworkAdvancedServerOps-server-1930617470',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1930617470',id=186,image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKY1Sg2xL/aJz8wIL7cf5MzcV52yp5R0mXmIsEO1TSV5wOnaXa6t112hZJc+/UBiVqxk5rRlpEmVgzJvgpc06h1m1EAduPYs3GDyvBwnX5qP0GCBg7T1VF1J1Nh92LK9xA==',key_name='tempest-TestNetworkAdvancedServerOps-482800446',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-x7e57fd4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='ed489666-5fa2-4ea4-8005-7a7505ac1b78',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:19Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=8b3377f3-920a-41d9-bfc4-2b727546ab6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.377 252257 DEBUG nova.network.os_vif_util [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.378 252257 DEBUG nova.network.os_vif_util [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.379 252257 DEBUG os_vif [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.381 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.381 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7976101-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.383 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.383 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.388 252257 INFO os_vif [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:8f:9a,bridge_name='br-int',has_traffic_filtering=True,id=d7976101-fb6f-4e03-be4e-6b60c73979e4,network=Network(eb2c517f-e973-4398-be77-628f13500e1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7976101-fb')#033[00m
Nov 29 03:42:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8c9b0a81a0c91edcb2d02ea8a34deca1b3ff4126298feddac876cf4e0f34564a-merged.mount: Deactivated successfully.
Nov 29 03:42:42 np0005539563 podman[376775]: 2025-11-29 08:42:42.426525643 +0000 UTC m=+0.109485095 container cleanup 544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:42 np0005539563 systemd[1]: libpod-conmon-544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55.scope: Deactivated successfully.
Nov 29 03:42:42 np0005539563 podman[376841]: 2025-11-29 08:42:42.492962433 +0000 UTC m=+0.042992832 container remove 544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.498 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8da744e4-9d02-4e87-9350-22e6096a8db9]: (4, ('Sat Nov 29 08:42:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a (544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55)\n544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55\nSat Nov 29 08:42:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a (544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55)\n544c78f92bc39efc97d7b280f7928db33c5dae5557d2f91857392ba0ce88ca55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.502 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[57a8c4d1-324f-4590-956b-3b88f35e3e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.504 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb2c517f-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.506 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 kernel: tapeb2c517f-e0: left promiscuous mode
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.531470733 +0000 UTC m=+0.064060407 container create d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.534 252257 DEBUG nova.compute.manager [req-16726210-cbea-49e7-b28a-9c7528bec440 req-d7f14f1b-ca82-42a5-bf22-e9541bc18494 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-unplugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.534 252257 DEBUG oslo_concurrency.lockutils [req-16726210-cbea-49e7-b28a-9c7528bec440 req-d7f14f1b-ca82-42a5-bf22-e9541bc18494 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.534 252257 DEBUG oslo_concurrency.lockutils [req-16726210-cbea-49e7-b28a-9c7528bec440 req-d7f14f1b-ca82-42a5-bf22-e9541bc18494 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.534 252257 DEBUG oslo_concurrency.lockutils [req-16726210-cbea-49e7-b28a-9c7528bec440 req-d7f14f1b-ca82-42a5-bf22-e9541bc18494 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.535 252257 DEBUG nova.compute.manager [req-16726210-cbea-49e7-b28a-9c7528bec440 req-d7f14f1b-ca82-42a5-bf22-e9541bc18494 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-unplugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.535 252257 DEBUG nova.compute.manager [req-16726210-cbea-49e7-b28a-9c7528bec440 req-d7f14f1b-ca82-42a5-bf22-e9541bc18494 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-unplugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.536 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac5fbbe-fee5-42b6-ab52-57e0f72e4098]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.546 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[da39aa61-c041-4384-8aae-6eae82d10467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.547 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d975cd-1ba6-46de-a8c6-4b12be46060f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.566 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9819020e-2e6c-479d-9dcc-a7b157065b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870682, 'reachable_time': 44672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376868, 'error': None, 'target': 'ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.570 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb2c517f-e973-4398-be77-628f13500e1a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:42:42.570 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[baf08c9d-b37a-421f-a2a0-9f8a895af402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:42 np0005539563 systemd[1]: Started libpod-conmon-d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34.scope.
Nov 29 03:42:42 np0005539563 systemd[1]: run-netns-ovnmeta\x2deb2c517f\x2de973\x2d4398\x2dbe77\x2d628f13500e1a.mount: Deactivated successfully.
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.505429614 +0000 UTC m=+0.038019308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:42:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.627880811 +0000 UTC m=+0.160470505 container init d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.635613832 +0000 UTC m=+0.168203506 container start d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.638934662 +0000 UTC m=+0.171524446 container attach d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:42:42 np0005539563 eager_kalam[376872]: 167 167
Nov 29 03:42:42 np0005539563 systemd[1]: libpod-d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34.scope: Deactivated successfully.
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.642464068 +0000 UTC m=+0.175053742 container died d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:42:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b5b103d42847069375f6b22bf4afb6634c76492897fcff34f9c693b6757a1714-merged.mount: Deactivated successfully.
Nov 29 03:42:42 np0005539563 podman[376844]: 2025-11-29 08:42:42.681566074 +0000 UTC m=+0.214155738 container remove d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:42:42 np0005539563 systemd[1]: libpod-conmon-d7d71164b6ff00e33996ee3765d48e938b03dac8909f307d457c437898b7ed34.scope: Deactivated successfully.
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.844 252257 INFO nova.virt.libvirt.driver [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deleting instance files /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f_del#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.846 252257 INFO nova.virt.libvirt.driver [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deletion of /var/lib/nova/instances/8b3377f3-920a-41d9-bfc4-2b727546ab6f_del complete#033[00m
Nov 29 03:42:42 np0005539563 podman[376896]: 2025-11-29 08:42:42.906278669 +0000 UTC m=+0.058326351 container create cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.906 252257 INFO nova.compute.manager [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.907 252257 DEBUG oslo.service.loopingcall [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.907 252257 DEBUG nova.compute.manager [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:42 np0005539563 nova_compute[252253]: 2025-11-29 08:42:42.907 252257 DEBUG nova.network.neutron [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:42:42 np0005539563 systemd[1]: Started libpod-conmon-cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384.scope.
Nov 29 03:42:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:42:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c61136736d3a26cb86cc7d438ca59f6896566b663beec874058e3b25b1117/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c61136736d3a26cb86cc7d438ca59f6896566b663beec874058e3b25b1117/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c61136736d3a26cb86cc7d438ca59f6896566b663beec874058e3b25b1117/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36c61136736d3a26cb86cc7d438ca59f6896566b663beec874058e3b25b1117/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:42 np0005539563 podman[376896]: 2025-11-29 08:42:42.879385986 +0000 UTC m=+0.031433728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:42:42 np0005539563 podman[376896]: 2025-11-29 08:42:42.977306984 +0000 UTC m=+0.129354626 container init cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:42:42 np0005539563 podman[376896]: 2025-11-29 08:42:42.985111307 +0000 UTC m=+0.137158949 container start cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:42 np0005539563 podman[376896]: 2025-11-29 08:42:42.988790837 +0000 UTC m=+0.140838479 container attach cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noyce, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 3.4 MiB/s wr, 114 op/s
Nov 29 03:42:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:43.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]: {
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:        "osd_id": 0,
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:        "type": "bluestore"
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]:    }
Nov 29 03:42:43 np0005539563 vibrant_noyce[376912]: }
Nov 29 03:42:43 np0005539563 systemd[1]: libpod-cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384.scope: Deactivated successfully.
Nov 29 03:42:43 np0005539563 podman[376934]: 2025-11-29 08:42:43.915231487 +0000 UTC m=+0.021649121 container died cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noyce, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:42:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c36c61136736d3a26cb86cc7d438ca59f6896566b663beec874058e3b25b1117-merged.mount: Deactivated successfully.
Nov 29 03:42:43 np0005539563 podman[376934]: 2025-11-29 08:42:43.963221995 +0000 UTC m=+0.069639599 container remove cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:42:43 np0005539563 systemd[1]: libpod-conmon-cd800e7b14a485071e93ad5c56c8ae494b2a9c3626201c8d2c3c8a6cdf04a384.scope: Deactivated successfully.
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 24c44bbb-657f-46f4-bca9-3c0bcb31c2d1 does not exist
Nov 29 03:42:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2b0fc8e3-a36e-41d0-80bb-61b9f9e66154 does not exist
Nov 29 03:42:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 950bec7f-9a11-482e-b904-d5bd745ab51a does not exist
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.543 252257 DEBUG nova.network.neutron [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.575 252257 INFO nova.compute.manager [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Took 1.67 seconds to deallocate network for instance.#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.620 252257 DEBUG nova.compute.manager [req-b01a224f-8f2d-4035-9fe4-ba31229b3ad2 req-da8b9378-44e4-4a99-93e4-2d98a74c01fa 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-deleted-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.625 252257 DEBUG nova.network.neutron [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updated VIF entry in instance network info cache for port d7976101-fb6f-4e03-be4e-6b60c73979e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.626 252257 DEBUG nova.network.neutron [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Updating instance_info_cache with network_info: [{"id": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "address": "fa:16:3e:80:8f:9a", "network": {"id": "eb2c517f-e973-4398-be77-628f13500e1a", "bridge": "br-int", "label": "tempest-network-smoke--1465831756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7976101-fb", "ovs_interfaceid": "d7976101-fb6f-4e03-be4e-6b60c73979e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.647 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.648 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.658 252257 DEBUG oslo_concurrency.lockutils [req-7761fd35-fab5-44cd-bc47-f01a8447a9db req-4269c479-4997-4410-85c0-c663f4db2cad 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8b3377f3-920a-41d9-bfc4-2b727546ab6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.661 252257 DEBUG nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.661 252257 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.662 252257 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.662 252257 DEBUG oslo_concurrency.lockutils [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.662 252257 DEBUG nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] No waiting events found dispatching network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.662 252257 WARNING nova.compute.manager [req-5f44c35b-3146-406f-a697-e3de0fb7f228 req-8103f47f-eeef-4dd2-9f94-c670289ef1e3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Received unexpected event network-vif-plugged-d7976101-fb6f-4e03-be4e-6b60c73979e4 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.708 252257 DEBUG oslo_concurrency.processutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:44 np0005539563 nova_compute[252253]: 2025-11-29 08:42:44.957 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 3.4 MiB/s wr, 169 op/s
Nov 29 03:42:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4164448671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:45.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:45 np0005539563 nova_compute[252253]: 2025-11-29 08:42:45.136 252257 DEBUG oslo_concurrency.processutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:45 np0005539563 nova_compute[252253]: 2025-11-29 08:42:45.143 252257 DEBUG nova.compute.provider_tree [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:45 np0005539563 nova_compute[252253]: 2025-11-29 08:42:45.165 252257 DEBUG nova.scheduler.client.report [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:45 np0005539563 nova_compute[252253]: 2025-11-29 08:42:45.194 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:45 np0005539563 nova_compute[252253]: 2025-11-29 08:42:45.230 252257 INFO nova.scheduler.client.report [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 8b3377f3-920a-41d9-bfc4-2b727546ab6f#033[00m
Nov 29 03:42:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:45 np0005539563 nova_compute[252253]: 2025-11-29 08:42:45.341 252257 DEBUG oslo_concurrency.lockutils [None req-1ee40eec-b25c-4b67-9170-b6b298160f58 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "8b3377f3-920a-41d9-bfc4-2b727546ab6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 243 KiB/s rd, 605 KiB/s wr, 84 op/s
Nov 29 03:42:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:47.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:47 np0005539563 nova_compute[252253]: 2025-11-29 08:42:47.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:47 np0005539563 nova_compute[252253]: 2025-11-29 08:42:47.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:47 np0005539563 nova_compute[252253]: 2025-11-29 08:42:47.881 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 56 op/s
Nov 29 03:42:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:49.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:49 np0005539563 nova_compute[252253]: 2025-11-29 08:42:49.959 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 56 op/s
Nov 29 03:42:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:52 np0005539563 nova_compute[252253]: 2025-11-29 08:42:52.389 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 03:42:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:42:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Nov 29 03:42:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:53.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:54 np0005539563 nova_compute[252253]: 2025-11-29 08:42:54.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 2.3 KiB/s wr, 147 op/s
Nov 29 03:42:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:55.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:55.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Nov 29 03:42:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:42:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:57.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:42:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:57 np0005539563 nova_compute[252253]: 2025-11-29 08:42:57.356 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405762.3551028, 8b3377f3-920a-41d9-bfc4-2b727546ab6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:57 np0005539563 nova_compute[252253]: 2025-11-29 08:42:57.357 252257 INFO nova.compute.manager [-] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:57.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:57 np0005539563 nova_compute[252253]: 2025-11-29 08:42:57.393 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:57 np0005539563 nova_compute[252253]: 2025-11-29 08:42:57.413 252257 DEBUG nova.compute.manager [None req-1408af82-3e18-4574-80a2-176ef05505ff - - - - - -] [instance: 8b3377f3-920a-41d9-bfc4-2b727546ab6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:58 np0005539563 nova_compute[252253]: 2025-11-29 08:42:58.708 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 0 B/s wr, 160 op/s
Nov 29 03:42:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:42:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:59.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:00 np0005539563 nova_compute[252253]: 2025-11-29 08:43:00.000 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Nov 29 03:43:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.003000081s ======
Nov 29 03:43:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Nov 29 03:43:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:01.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:02 np0005539563 nova_compute[252253]: 2025-11-29 08:43:02.397 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Nov 29 03:43:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:03.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:03.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:04.948 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:04.948 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:04.948 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:05 np0005539563 nova_compute[252253]: 2025-11-29 08:43:05.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 554 KiB/s wr, 196 op/s
Nov 29 03:43:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:05.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:05.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:06 np0005539563 podman[377084]: 2025-11-29 08:43:06.505545782 +0000 UTC m=+0.055917105 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:43:06 np0005539563 podman[377085]: 2025-11-29 08:43:06.540069783 +0000 UTC m=+0.091079203 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:43:06 np0005539563 podman[377086]: 2025-11-29 08:43:06.542632423 +0000 UTC m=+0.088780441 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:43:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 554 KiB/s wr, 105 op/s
Nov 29 03:43:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:07.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:07 np0005539563 nova_compute[252253]: 2025-11-29 08:43:07.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 147 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 1.0 MiB/s wr, 106 op/s
Nov 29 03:43:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:09.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:10 np0005539563 nova_compute[252253]: 2025-11-29 08:43:10.006 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.1 MiB/s wr, 63 op/s
Nov 29 03:43:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:11.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:11.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:12 np0005539563 nova_compute[252253]: 2025-11-29 08:43:12.403 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:43:12
Nov 29 03:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta']
Nov 29 03:43:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.1 MiB/s wr, 51 op/s
Nov 29 03:43:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:13.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:14 np0005539563 nova_compute[252253]: 2025-11-29 08:43:14.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:15 np0005539563 nova_compute[252253]: 2025-11-29 08:43:15.010 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 29 03:43:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:15.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:15.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:43:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:43:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 103 op/s
Nov 29 03:43:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:17.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:17 np0005539563 nova_compute[252253]: 2025-11-29 08:43:17.406 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:18 np0005539563 nova_compute[252253]: 2025-11-29 08:43:18.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:18 np0005539563 nova_compute[252253]: 2025-11-29 08:43:18.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:43:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 108 op/s
Nov 29 03:43:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:19.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:19 np0005539563 nova_compute[252253]: 2025-11-29 08:43:19.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:19 np0005539563 nova_compute[252253]: 2025-11-29 08:43:19.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:43:19 np0005539563 nova_compute[252253]: 2025-11-29 08:43:19.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:43:19 np0005539563 nova_compute[252253]: 2025-11-29 08:43:19.707 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:43:19 np0005539563 nova_compute[252253]: 2025-11-29 08:43:19.707 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:20 np0005539563 nova_compute[252253]: 2025-11-29 08:43:20.010 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 121 op/s
Nov 29 03:43:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:21.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.711 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.711 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:22 np0005539563 nova_compute[252253]: 2025-11-29 08:43:22.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:22.850 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:22.851 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 505 KiB/s wr, 94 op/s
Nov 29 03:43:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495934207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.143 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:23.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.324 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.325 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4215MB free_disk=20.946483612060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.325 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.325 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:23.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.418 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.418 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.441 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.457 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.458 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.472 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.491 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.507 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019914750255909173 of space, bias 1.0, pg target 0.5974425076772752 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:43:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984546322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:23 np0005539563 nova_compute[252253]: 2025-11-29 08:43:23.996 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:24 np0005539563 nova_compute[252253]: 2025-11-29 08:43:24.002 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:24 np0005539563 nova_compute[252253]: 2025-11-29 08:43:24.018 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:24 np0005539563 nova_compute[252253]: 2025-11-29 08:43:24.042 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:43:24 np0005539563 nova_compute[252253]: 2025-11-29 08:43:24.042 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:25 np0005539563 nova_compute[252253]: 2025-11-29 08:43:25.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 186 op/s
Nov 29 03:43:25 np0005539563 nova_compute[252253]: 2025-11-29 08:43:25.042 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:25 np0005539563 nova_compute[252253]: 2025-11-29 08:43:25.042 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:25.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 109 op/s
Nov 29 03:43:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:43:27Z|00816|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 03:43:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:27.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:27.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:27 np0005539563 nova_compute[252253]: 2025-11-29 08:43:27.414 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 238 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Nov 29 03:43:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:29.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:29.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:30 np0005539563 nova_compute[252253]: 2025-11-29 08:43:30.013 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 29 03:43:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:31.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:31.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:31 np0005539563 nova_compute[252253]: 2025-11-29 08:43:31.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:31 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:31.853 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:32 np0005539563 nova_compute[252253]: 2025-11-29 08:43:32.418 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 29 03:43:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:33.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.755115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814755356, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 870, "num_deletes": 252, "total_data_size": 1304531, "memory_usage": 1328360, "flush_reason": "Manual Compaction"}
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814764535, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 851253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64709, "largest_seqno": 65578, "table_properties": {"data_size": 847495, "index_size": 1473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9909, "raw_average_key_size": 21, "raw_value_size": 839594, "raw_average_value_size": 1782, "num_data_blocks": 62, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405747, "oldest_key_time": 1764405747, "file_creation_time": 1764405814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 9464 microseconds, and 4407 cpu microseconds.
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.764641) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 851253 bytes OK
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.764672) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767051) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767071) EVENT_LOG_v1 {"time_micros": 1764405814767066, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767088) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1300349, prev total WAL file size 1300349, number of live WAL files 2.
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767875) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323635' seq:72057594037927935, type:22 .. '6D6772737461740032353138' seq:0, type:0; will stop at (end)
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(831KB)], [143(12MB)]
Nov 29 03:43:34 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405814767973, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 14181873, "oldest_snapshot_seqno": -1}
Nov 29 03:43:35 np0005539563 nova_compute[252253]: 2025-11-29 08:43:35.015 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 176 op/s
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 9766 keys, 10719946 bytes, temperature: kUnknown
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405815119275, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10719946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10659488, "index_size": 34937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257534, "raw_average_key_size": 26, "raw_value_size": 10490585, "raw_average_value_size": 1074, "num_data_blocks": 1322, "num_entries": 9766, "num_filter_entries": 9766, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:43:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:35.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.120456) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10719946 bytes
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.263221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.3 rd, 30.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(29.3) write-amplify(12.6) OK, records in: 10259, records dropped: 493 output_compression: NoCompression
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.263271) EVENT_LOG_v1 {"time_micros": 1764405815263253, "job": 88, "event": "compaction_finished", "compaction_time_micros": 352193, "compaction_time_cpu_micros": 31766, "output_level": 6, "num_output_files": 1, "total_output_size": 10719946, "num_input_records": 10259, "num_output_records": 9766, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405815263721, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405815266430, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:34.767684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.266694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.266711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.266713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.266715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:43:35.266717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:43:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:35.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.583 252257 DEBUG nova.compute.manager [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.711 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.712 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.742 252257 DEBUG nova.objects.instance [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_requests' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.757 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.757 252257 INFO nova.compute.claims [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.758 252257 DEBUG nova.objects.instance [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.799 252257 DEBUG nova.objects.instance [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.851 252257 INFO nova.compute.resource_tracker [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating resource usage from migration e291ecb1-4385-42e6-9bb8-6d2095710e8e#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.851 252257 DEBUG nova.compute.resource_tracker [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Starting to track incoming migration e291ecb1-4385-42e6-9bb8-6d2095710e8e with flavor a3833334-6e3e-4b1c-bf74-bdd1055a9e9b _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:43:36 np0005539563 nova_compute[252253]: 2025-11-29 08:43:36.898 252257 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Nov 29 03:43:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:37.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817634460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:37 np0005539563 nova_compute[252253]: 2025-11-29 08:43:37.359 252257 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:37 np0005539563 nova_compute[252253]: 2025-11-29 08:43:37.365 252257 DEBUG nova.compute.provider_tree [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:37 np0005539563 nova_compute[252253]: 2025-11-29 08:43:37.394 252257 DEBUG nova.scheduler.client.report [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:37 np0005539563 nova_compute[252253]: 2025-11-29 08:43:37.421 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:37 np0005539563 nova_compute[252253]: 2025-11-29 08:43:37.431 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:37 np0005539563 nova_compute[252253]: 2025-11-29 08:43:37.431 252257 INFO nova.compute.manager [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Migrating#033[00m
Nov 29 03:43:37 np0005539563 podman[377334]: 2025-11-29 08:43:37.496392777 +0000 UTC m=+0.048790680 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:43:37 np0005539563 podman[377335]: 2025-11-29 08:43:37.51005942 +0000 UTC m=+0.059460511 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:43:37 np0005539563 podman[377336]: 2025-11-29 08:43:37.531594007 +0000 UTC m=+0.079344213 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller)
Nov 29 03:43:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 2.7 MiB/s wr, 86 op/s
Nov 29 03:43:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:39.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:40 np0005539563 nova_compute[252253]: 2025-11-29 08:43:40.015 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539563 systemd-logind[785]: New session 67 of user nova.
Nov 29 03:43:40 np0005539563 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:43:40 np0005539563 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:43:40 np0005539563 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:43:40 np0005539563 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:43:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Nov 29 03:43:41 np0005539563 systemd[377399]: Queued start job for default target Main User Target.
Nov 29 03:43:41 np0005539563 systemd[377399]: Created slice User Application Slice.
Nov 29 03:43:41 np0005539563 systemd[377399]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:43:41 np0005539563 systemd[377399]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:43:41 np0005539563 systemd[377399]: Reached target Paths.
Nov 29 03:43:41 np0005539563 systemd[377399]: Reached target Timers.
Nov 29 03:43:41 np0005539563 systemd[377399]: Starting D-Bus User Message Bus Socket...
Nov 29 03:43:41 np0005539563 systemd[377399]: Starting Create User's Volatile Files and Directories...
Nov 29 03:43:41 np0005539563 systemd[377399]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:43:41 np0005539563 systemd[377399]: Finished Create User's Volatile Files and Directories.
Nov 29 03:43:41 np0005539563 systemd[377399]: Reached target Sockets.
Nov 29 03:43:41 np0005539563 systemd[377399]: Reached target Basic System.
Nov 29 03:43:41 np0005539563 systemd[377399]: Reached target Main User Target.
Nov 29 03:43:41 np0005539563 systemd[377399]: Startup finished in 147ms.
Nov 29 03:43:41 np0005539563 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:43:41 np0005539563 systemd[1]: Started Session 67 of User nova.
Nov 29 03:43:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:41.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:41 np0005539563 systemd[1]: session-67.scope: Deactivated successfully.
Nov 29 03:43:41 np0005539563 systemd-logind[785]: Session 67 logged out. Waiting for processes to exit.
Nov 29 03:43:41 np0005539563 systemd-logind[785]: Removed session 67.
Nov 29 03:43:41 np0005539563 systemd-logind[785]: New session 69 of user nova.
Nov 29 03:43:41 np0005539563 systemd[1]: Started Session 69 of User nova.
Nov 29 03:43:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:41 np0005539563 systemd[1]: session-69.scope: Deactivated successfully.
Nov 29 03:43:41 np0005539563 systemd-logind[785]: Session 69 logged out. Waiting for processes to exit.
Nov 29 03:43:41 np0005539563 systemd-logind[785]: Removed session 69.
Nov 29 03:43:42 np0005539563 nova_compute[252253]: 2025-11-29 08:43:42.426 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 03:43:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:43.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:43 np0005539563 nova_compute[252253]: 2025-11-29 08:43:43.981 252257 DEBUG nova.compute.manager [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:43 np0005539563 nova_compute[252253]: 2025-11-29 08:43:43.982 252257 DEBUG oslo_concurrency.lockutils [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:43 np0005539563 nova_compute[252253]: 2025-11-29 08:43:43.982 252257 DEBUG oslo_concurrency.lockutils [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:43 np0005539563 nova_compute[252253]: 2025-11-29 08:43:43.982 252257 DEBUG oslo_concurrency.lockutils [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:43 np0005539563 nova_compute[252253]: 2025-11-29 08:43:43.982 252257 DEBUG nova.compute.manager [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:43 np0005539563 nova_compute[252253]: 2025-11-29 08:43:43.982 252257 WARNING nova.compute.manager [req-a5c35955-8be6-45b3-9905-fc4305fc6e14 req-66fcd32b-9daf-46f1-8055-2c1f0a12c32f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:43:44 np0005539563 nova_compute[252253]: 2025-11-29 08:43:44.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:45 np0005539563 nova_compute[252253]: 2025-11-29 08:43:45.018 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Nov 29 03:43:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:45.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:45 np0005539563 nova_compute[252253]: 2025-11-29 08:43:45.581 252257 INFO nova.network.neutron [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating port c30634d5-981b-440c-aaed-815b2591a3d4 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:43:45 np0005539563 podman[377694]: 2025-11-29 08:43:45.853487998 +0000 UTC m=+0.021365654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:46 np0005539563 nova_compute[252253]: 2025-11-29 08:43:46.181 252257 DEBUG nova.compute.manager [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:46 np0005539563 nova_compute[252253]: 2025-11-29 08:43:46.181 252257 DEBUG oslo_concurrency.lockutils [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:46 np0005539563 nova_compute[252253]: 2025-11-29 08:43:46.182 252257 DEBUG oslo_concurrency.lockutils [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:46 np0005539563 nova_compute[252253]: 2025-11-29 08:43:46.182 252257 DEBUG oslo_concurrency.lockutils [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:46 np0005539563 nova_compute[252253]: 2025-11-29 08:43:46.182 252257 DEBUG nova.compute.manager [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:46 np0005539563 nova_compute[252253]: 2025-11-29 08:43:46.182 252257 WARNING nova.compute.manager [req-54fa01b8-3b99-486c-9c7b-aa99338b64e6 req-4424d16b-abc0-4006-9c17-e705467155b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:43:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:46 np0005539563 podman[377694]: 2025-11-29 08:43:46.690014196 +0000 UTC m=+0.857891842 container create 76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:43:46 np0005539563 systemd[1]: Started libpod-conmon-76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e.scope.
Nov 29 03:43:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:46 np0005539563 podman[377694]: 2025-11-29 08:43:46.948489071 +0000 UTC m=+1.116366737 container init 76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:46 np0005539563 podman[377694]: 2025-11-29 08:43:46.956810308 +0000 UTC m=+1.124687944 container start 76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:46 np0005539563 elated_taussig[377710]: 167 167
Nov 29 03:43:46 np0005539563 systemd[1]: libpod-76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e.scope: Deactivated successfully.
Nov 29 03:43:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 41 KiB/s wr, 12 op/s
Nov 29 03:43:47 np0005539563 podman[377694]: 2025-11-29 08:43:47.050729268 +0000 UTC m=+1.218606924 container attach 76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:43:47 np0005539563 podman[377694]: 2025-11-29 08:43:47.05225148 +0000 UTC m=+1.220129126 container died 76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.064 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.065 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.066 252257 DEBUG nova.network.neutron [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:43:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c04f09da793f454b090323180f5d75200d4fc7a9c9c3d0554f3043ce7e2b5e60-merged.mount: Deactivated successfully.
Nov 29 03:43:47 np0005539563 podman[377694]: 2025-11-29 08:43:47.096199337 +0000 UTC m=+1.264076973 container remove 76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_taussig, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:47 np0005539563 systemd[1]: libpod-conmon-76921750aacec33995a284b76ebce5b0fbd81e5146607bde9666ea7e52483d4e.scope: Deactivated successfully.
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.164 252257 DEBUG nova.compute.manager [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.166 252257 DEBUG nova.compute.manager [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing instance network info cache due to event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.166 252257 DEBUG oslo_concurrency.lockutils [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:47.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:47 np0005539563 podman[377734]: 2025-11-29 08:43:47.261144512 +0000 UTC m=+0.039531858 container create 1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:47 np0005539563 systemd[1]: Started libpod-conmon-1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49.scope.
Nov 29 03:43:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8631aa73e1653fdfe23636104bfb0dbc80d6cafc3aaf7d751f5f6507e2b0cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8631aa73e1653fdfe23636104bfb0dbc80d6cafc3aaf7d751f5f6507e2b0cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8631aa73e1653fdfe23636104bfb0dbc80d6cafc3aaf7d751f5f6507e2b0cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8631aa73e1653fdfe23636104bfb0dbc80d6cafc3aaf7d751f5f6507e2b0cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:47 np0005539563 podman[377734]: 2025-11-29 08:43:47.245428275 +0000 UTC m=+0.023815641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:47 np0005539563 podman[377734]: 2025-11-29 08:43:47.345114711 +0000 UTC m=+0.123502087 container init 1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:47 np0005539563 podman[377734]: 2025-11-29 08:43:47.351774753 +0000 UTC m=+0.130162099 container start 1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:43:47 np0005539563 podman[377734]: 2025-11-29 08:43:47.356643235 +0000 UTC m=+0.135030601 container attach 1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:43:47 np0005539563 nova_compute[252253]: 2025-11-29 08:43:47.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.299 252257 DEBUG nova.network.neutron [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.341 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.345 252257 DEBUG oslo_concurrency.lockutils [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.345 252257 DEBUG nova.network.neutron [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:48 np0005539563 modest_gates[377751]: [
Nov 29 03:43:48 np0005539563 modest_gates[377751]:    {
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "available": false,
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "ceph_device": false,
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "lsm_data": {},
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "lvs": [],
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "path": "/dev/sr0",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "rejected_reasons": [
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "Has a FileSystem",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "Insufficient space (<5GB)"
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        ],
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        "sys_api": {
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "actuators": null,
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "device_nodes": "sr0",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "devname": "sr0",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "human_readable_size": "482.00 KB",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "id_bus": "ata",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "model": "QEMU DVD-ROM",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "nr_requests": "2",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "parent": "/dev/sr0",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "partitions": {},
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "path": "/dev/sr0",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "removable": "1",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "rev": "2.5+",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "ro": "0",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "rotational": "1",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "sas_address": "",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "sas_device_handle": "",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "scheduler_mode": "mq-deadline",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "sectors": 0,
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "sectorsize": "2048",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "size": 493568.0,
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "support_discard": "2048",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "type": "disk",
Nov 29 03:43:48 np0005539563 modest_gates[377751]:            "vendor": "QEMU"
Nov 29 03:43:48 np0005539563 modest_gates[377751]:        }
Nov 29 03:43:48 np0005539563 modest_gates[377751]:    }
Nov 29 03:43:48 np0005539563 modest_gates[377751]: ]
Nov 29 03:43:48 np0005539563 systemd[1]: libpod-1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49.scope: Deactivated successfully.
Nov 29 03:43:48 np0005539563 systemd[1]: libpod-1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49.scope: Consumed 1.094s CPU time.
Nov 29 03:43:48 np0005539563 podman[377734]: 2025-11-29 08:43:48.4465359 +0000 UTC m=+1.224923246 container died 1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.451 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.452 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.452 252257 INFO nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Creating image(s)#033[00m
Nov 29 03:43:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:43:48 np0005539563 nova_compute[252253]: 2025-11-29 08:43:48.506 252257 DEBUG nova.storage.rbd_utils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] creating snapshot(nova-resize) on rbd image(52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:43:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Nov 29 03:43:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 286 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 263 KiB/s wr, 35 op/s
Nov 29 03:43:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:49.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:49.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.021 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e8631aa73e1653fdfe23636104bfb0dbc80d6cafc3aaf7d751f5f6507e2b0cf1-merged.mount: Deactivated successfully.
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:43:50 np0005539563 podman[377734]: 2025-11-29 08:43:50.083219678 +0000 UTC m=+2.861607024 container remove 1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.088 252257 DEBUG nova.objects.instance [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:50 np0005539563 systemd[1]: libpod-conmon-1c5610e370d6d8f47876873e0e0b08ab147f6022eb4478ac4f102ff018ff9b49.scope: Deactivated successfully.
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 24378957-cfa6-4030-b747-3b71a5a99981 does not exist
Nov 29 03:43:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9ad78bc0-d0c9-42d6-8263-84051f647d6d does not exist
Nov 29 03:43:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4f55f24e-17fb-4fd1-be45-35cad0768959 does not exist
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.347 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.347 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Ensure instance console log exists: /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.348 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.348 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.348 252257 DEBUG oslo_concurrency.lockutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.351 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Start _get_guest_xml network_info=[{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.357 252257 WARNING nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.363 252257 DEBUG nova.virt.libvirt.host [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.364 252257 DEBUG nova.virt.libvirt.host [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.368 252257 DEBUG nova.virt.libvirt.host [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.369 252257 DEBUG nova.virt.libvirt.host [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.370 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.370 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a3833334-6e3e-4b1c-bf74-bdd1055a9e9b',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.370 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.370 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.371 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.372 252257 DEBUG nova.virt.hardware [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.372 252257 DEBUG nova.objects.instance [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.411 252257 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700537020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.855 252257 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:50 np0005539563 podman[379160]: 2025-11-29 08:43:50.885785831 +0000 UTC m=+0.071679845 container create 99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:50 np0005539563 nova_compute[252253]: 2025-11-29 08:43:50.895 252257 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:50 np0005539563 systemd[1]: Started libpod-conmon-99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d.scope.
Nov 29 03:43:50 np0005539563 podman[379160]: 2025-11-29 08:43:50.840567689 +0000 UTC m=+0.026461723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 29 03:43:51 np0005539563 podman[379160]: 2025-11-29 08:43:51.19536912 +0000 UTC m=+0.381263224 container init 99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:51 np0005539563 podman[379160]: 2025-11-29 08:43:51.207434738 +0000 UTC m=+0.393328752 container start 99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:51 np0005539563 podman[379160]: 2025-11-29 08:43:51.211989102 +0000 UTC m=+0.397883146 container attach 99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:43:51 np0005539563 cranky_hodgkin[379197]: 167 167
Nov 29 03:43:51 np0005539563 systemd[1]: libpod-99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d.scope: Deactivated successfully.
Nov 29 03:43:51 np0005539563 podman[379160]: 2025-11-29 08:43:51.213335839 +0000 UTC m=+0.399229893 container died 99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:43:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c69052af27eb1d4e522e500d9147d951297b64601ae4c11ffd3046d4f7963789-merged.mount: Deactivated successfully.
Nov 29 03:43:51 np0005539563 podman[379160]: 2025-11-29 08:43:51.266281332 +0000 UTC m=+0.452175376 container remove 99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:51 np0005539563 systemd[1]: libpod-conmon-99ffc10998081376cff73c4d022ae3412ea61424d1333cddb78d3e7969969c6d.scope: Deactivated successfully.
Nov 29 03:43:51 np0005539563 podman[379240]: 2025-11-29 08:43:51.420975868 +0000 UTC m=+0.044489034 container create e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:51.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:51 np0005539563 systemd[1]: Started libpod-conmon-e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f.scope.
Nov 29 03:43:51 np0005539563 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:43:51 np0005539563 systemd[377399]: Activating special unit Exit the Session...
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped target Main User Target.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped target Basic System.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped target Paths.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped target Sockets.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped target Timers.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:43:51 np0005539563 systemd[377399]: Closed D-Bus User Message Bus Socket.
Nov 29 03:43:51 np0005539563 systemd[377399]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:43:51 np0005539563 systemd[377399]: Removed slice User Application Slice.
Nov 29 03:43:51 np0005539563 systemd[377399]: Reached target Shutdown.
Nov 29 03:43:51 np0005539563 systemd[377399]: Finished Exit the Session.
Nov 29 03:43:51 np0005539563 systemd[377399]: Reached target Exit the Session.
Nov 29 03:43:51 np0005539563 podman[379240]: 2025-11-29 08:43:51.40013341 +0000 UTC m=+0.023646586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:51 np0005539563 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001391614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:51 np0005539563 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:43:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4891f45865553b1d2da76b5fb728929e1018020656bc4b42dfd0cf3ba6d2cfe6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:51 np0005539563 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:43:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4891f45865553b1d2da76b5fb728929e1018020656bc4b42dfd0cf3ba6d2cfe6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4891f45865553b1d2da76b5fb728929e1018020656bc4b42dfd0cf3ba6d2cfe6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4891f45865553b1d2da76b5fb728929e1018020656bc4b42dfd0cf3ba6d2cfe6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4891f45865553b1d2da76b5fb728929e1018020656bc4b42dfd0cf3ba6d2cfe6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:51 np0005539563 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:43:51 np0005539563 podman[379240]: 2025-11-29 08:43:51.533167596 +0000 UTC m=+0.156680732 container init e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:51 np0005539563 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:43:51 np0005539563 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:43:51 np0005539563 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.539 252257 DEBUG oslo_concurrency.processutils [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.541 252257 DEBUG nova.virt.libvirt.vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:45Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.541 252257 DEBUG nova.network.os_vif_util [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.542 252257 DEBUG nova.network.os_vif_util [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:51 np0005539563 podman[379240]: 2025-11-29 08:43:51.545565164 +0000 UTC m=+0.169078290 container start e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.545 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <uuid>52de3669-ccbb-4d2c-948b-abc4aae3b8e4</uuid>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <name>instance-000000bc</name>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <memory>196608</memory>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1344969912</nova:name>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:43:50</nova:creationTime>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.micro">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:memory>192</nova:memory>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:user uuid="686f527a5723407b85ed34c8a312583f">tempest-TestNetworkAdvancedServerOps-382266774-project-member</nova:user>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:project uuid="c4ca87a38a19497f84b6d2c170c4fe75">tempest-TestNetworkAdvancedServerOps-382266774</nova:project>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <nova:port uuid="c30634d5-981b-440c-aaed-815b2591a3d4">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <entry name="serial">52de3669-ccbb-4d2c-948b-abc4aae3b8e4</entry>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <entry name="uuid">52de3669-ccbb-4d2c-948b-abc4aae3b8e4</entry>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/52de3669-ccbb-4d2c-948b-abc4aae3b8e4_disk.config">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:2f:d2:47"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <target dev="tapc30634d5-98"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4/console.log" append="off"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:43:51 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:43:51 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:43:51 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:43:51 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.546 252257 DEBUG nova.virt.libvirt.vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:45Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.546 252257 DEBUG nova.network.os_vif_util [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--475297760", "vif_mac": "fa:16:3e:2f:d2:47"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.547 252257 DEBUG nova.network.os_vif_util [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.547 252257 DEBUG os_vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.548 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.548 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.549 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:51 np0005539563 podman[379240]: 2025-11-29 08:43:51.549582383 +0000 UTC m=+0.173095559 container attach e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.553 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.553 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc30634d5-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.553 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc30634d5-98, col_values=(('external_ids', {'iface-id': 'c30634d5-981b-440c-aaed-815b2591a3d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:d2:47', 'vm-uuid': '52de3669-ccbb-4d2c-948b-abc4aae3b8e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.555 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539563 NetworkManager[48981]: <info>  [1764405831.5561] manager: (tapc30634d5-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.559 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.564 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.565 252257 INFO os_vif [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98')#033[00m
Nov 29 03:43:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.699 252257 DEBUG nova.network.neutron [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updated VIF entry in instance network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.700 252257 DEBUG nova.network.neutron [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.719 252257 DEBUG oslo_concurrency.lockutils [req-f256c372-2e3c-4eb4-998b-6ff6b6adde39 req-98fe8c6e-d88c-4081-95f6-0f983f8c8213 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.916 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.918 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.918 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] No VIF found with MAC fa:16:3e:2f:d2:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:43:51 np0005539563 nova_compute[252253]: 2025-11-29 08:43:51.920 252257 INFO nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Using config drive#033[00m
Nov 29 03:43:52 np0005539563 kernel: tapc30634d5-98: entered promiscuous mode
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.0345] manager: (tapc30634d5-98): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Nov 29 03:43:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:43:52Z|00817|binding|INFO|Claiming lport c30634d5-981b-440c-aaed-815b2591a3d4 for this chassis.
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.034 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:43:52Z|00818|binding|INFO|c30634d5-981b-440c-aaed-815b2591a3d4: Claiming fa:16:3e:2f:d2:47 10.100.0.9
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.041 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.046 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.050 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.0515] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.0523] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.056 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d2:47 10.100.0.9'], port_security=['fa:16:3e:2f:d2:47 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52de3669-ccbb-4d2c-948b-abc4aae3b8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3aa44838-3538-48cc-aa78-ee7437a5a87d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f970837-a742-427c-bfc6-c51a824e5eec, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c30634d5-981b-440c-aaed-815b2591a3d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.058 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c30634d5-981b-440c-aaed-815b2591a3d4 in datapath 16035279-ee66-4ba0-b73b-de24bec8a7fe bound to our chassis#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.059 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16035279-ee66-4ba0-b73b-de24bec8a7fe#033[00m
Nov 29 03:43:52 np0005539563 systemd-machined[213024]: New machine qemu-94-instance-000000bc.
Nov 29 03:43:52 np0005539563 systemd-udevd[379297]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.076 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[86b7ec9f-c3c4-4327-9275-82da294da9ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.077 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16035279-e1 in ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.079 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16035279-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.079 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8e02317a-a7b2-4b96-9010-68901339b722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.081 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0bef1aa8-43f6-4ac9-aa44-1e72fa9fd9b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 systemd[1]: Started Virtual Machine qemu-94-instance-000000bc.
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.0939] device (tapc30634d5-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.0949] device (tapc30634d5-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.095 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[401b75c6-e03f-48f9-8cdb-9dbfe7e9db2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.127 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[58557782-7945-4542-8c3a-61ccc205d02c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.163 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[457755a4-4b01-441d-8155-e46fc2d15546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.170 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13cdafe1-7bc2-44db-972a-48d3de0ee50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 systemd-udevd[379300]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.1721] manager: (tap16035279-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/362)
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.204 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.208 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[21d3d2d6-9cd9-4c00-b3a0-f478535e70c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.217 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[242fe1ba-7d3a-455b-91cb-cd1bc6357966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.219 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:43:52Z|00819|binding|INFO|Setting lport c30634d5-981b-440c-aaed-815b2591a3d4 ovn-installed in OVS
Nov 29 03:43:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:43:52Z|00820|binding|INFO|Setting lport c30634d5-981b-440c-aaed-815b2591a3d4 up in Southbound
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.230 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.2428] device (tap16035279-e0): carrier: link connected
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.249 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6224a4-76a0-4d23-91dd-63809fd77a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.267 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[75a3235f-f679-4dc5-bebc-349ababbd76e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16035279-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:09:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880001, 'reachable_time': 36523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379331, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.284 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b844992b-71f4-4968-844e-c746c94d2746]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:9a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 880001, 'tstamp': 880001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379333, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.302 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a118d9cf-a0e5-408c-a4b5-db0a81d509c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16035279-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:09:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880001, 'reachable_time': 36523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379336, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.332 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc843d8-e48c-4e8d-bd28-cafa249f48af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 gallant_boyd[379256]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:43:52 np0005539563 gallant_boyd[379256]: --> relative data size: 1.0
Nov 29 03:43:52 np0005539563 gallant_boyd[379256]: --> All data devices are unavailable
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.393 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[421d89f6-d82d-462b-a252-038ef5b8b759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.394 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16035279-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.394 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.394 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16035279-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.396 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 NetworkManager[48981]: <info>  [1764405832.3972] manager: (tap16035279-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Nov 29 03:43:52 np0005539563 kernel: tap16035279-e0: entered promiscuous mode
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.404 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16035279-e0, col_values=(('external_ids', {'iface-id': '6d144850-0aa2-4d79-ba3f-ed60c65ed2f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:43:52Z|00821|binding|INFO|Releasing lport 6d144850-0aa2-4d79-ba3f-ed60c65ed2f3 from this chassis (sb_readonly=0)
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.405 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 systemd[1]: libpod-e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f.scope: Deactivated successfully.
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.408 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16035279-ee66-4ba0-b73b-de24bec8a7fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16035279-ee66-4ba0-b73b-de24bec8a7fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.412 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d0460bbf-59b2-47e6-92df-da9c6899a304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.413 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-16035279-ee66-4ba0-b73b-de24bec8a7fe
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/16035279-ee66-4ba0-b73b-de24bec8a7fe.pid.haproxy
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 16035279-ee66-4ba0-b73b-de24bec8a7fe
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:43:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:43:52.413 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'env', 'PROCESS_TAG=haproxy-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16035279-ee66-4ba0-b73b-de24bec8a7fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:52 np0005539563 conmon[379256]: conmon e9f646385324be64db3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f.scope/container/memory.events
Nov 29 03:43:52 np0005539563 nova_compute[252253]: 2025-11-29 08:43:52.419 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:52 np0005539563 podman[379240]: 2025-11-29 08:43:52.421184538 +0000 UTC m=+1.044697684 container died e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:43:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4891f45865553b1d2da76b5fb728929e1018020656bc4b42dfd0cf3ba6d2cfe6-merged.mount: Deactivated successfully.
Nov 29 03:43:52 np0005539563 podman[379240]: 2025-11-29 08:43:52.479163249 +0000 UTC m=+1.102676365 container remove e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_boyd, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:52 np0005539563 systemd[1]: libpod-conmon-e9f646385324be64db3cc7627b5805d1dae88f0d6911fdc687fa92ec71068b6f.scope: Deactivated successfully.
Nov 29 03:43:52 np0005539563 podman[379471]: 2025-11-29 08:43:52.760298991 +0000 UTC m=+0.048411860 container create 6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:52 np0005539563 systemd[1]: Started libpod-conmon-6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620.scope.
Nov 29 03:43:52 np0005539563 podman[379471]: 2025-11-29 08:43:52.733385298 +0000 UTC m=+0.021498187 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f124a620f20fde5ef6650123ba9af81abb3c8f90d36e5052651d44bf336ac209/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:52 np0005539563 podman[379471]: 2025-11-29 08:43:52.867334828 +0000 UTC m=+0.155447717 container init 6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:43:52 np0005539563 podman[379471]: 2025-11-29 08:43:52.873203928 +0000 UTC m=+0.161316797 container start 6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:43:52 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [NOTICE]   (379538) : New worker (379553) forked
Nov 29 03:43:52 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [NOTICE]   (379538) : Loading success.
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.014 252257 DEBUG nova.compute.manager [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.014 252257 DEBUG oslo_concurrency.lockutils [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.014 252257 DEBUG oslo_concurrency.lockutils [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.014 252257 DEBUG oslo_concurrency.lockutils [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.014 252257 DEBUG nova.compute.manager [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.015 252257 WARNING nova.compute.manager [req-cc3e1f7c-b007-480c-b669-08164d3ac07e req-d48ec334-f088-4c4b-bff8-27e18bea9212 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.028 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405833.0283, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.028 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.031 252257 DEBUG nova.compute.manager [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.034 252257 INFO nova.virt.libvirt.driver [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance running successfully.#033[00m
Nov 29 03:43:53 np0005539563 virtqemud[251807]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.037 252257 DEBUG nova.virt.libvirt.guest [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.038 252257 DEBUG nova.virt.libvirt.driver [None req-cdeccdf8-5015-4334-b973-42db40c7ba37 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:43:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.043 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.047 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.090148891 +0000 UTC m=+0.040657040 container create d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_buck, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.089 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.090 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405833.0315661, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.090 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.119 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:53 np0005539563 nova_compute[252253]: 2025-11-29 08:43:53.122 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:53 np0005539563 systemd[1]: Started libpod-conmon-d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188.scope.
Nov 29 03:43:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.072683615 +0000 UTC m=+0.023191794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.184815781 +0000 UTC m=+0.135323980 container init d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.194668839 +0000 UTC m=+0.145176988 container start d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_buck, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.198882765 +0000 UTC m=+0.149390954 container attach d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:53 np0005539563 serene_buck[379610]: 167 167
Nov 29 03:43:53 np0005539563 conmon[379610]: conmon d5eb8f8b96cfd7226a21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188.scope/container/memory.events
Nov 29 03:43:53 np0005539563 systemd[1]: libpod-d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188.scope: Deactivated successfully.
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.20351953 +0000 UTC m=+0.154027679 container died d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-12d576144a9abdf8067c50967a3b8956141bb097a9b152de2a35d0a96cec55c9-merged.mount: Deactivated successfully.
Nov 29 03:43:53 np0005539563 podman[379594]: 2025-11-29 08:43:53.389305215 +0000 UTC m=+0.339813374 container remove d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_buck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:43:53 np0005539563 systemd[1]: libpod-conmon-d5eb8f8b96cfd7226a21835deff424f6bad298767248b2293dab45a9354eb188.scope: Deactivated successfully.
Nov 29 03:43:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:53.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:53 np0005539563 podman[379635]: 2025-11-29 08:43:53.552911603 +0000 UTC m=+0.049971563 container create 975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:43:53 np0005539563 systemd[1]: Started libpod-conmon-975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9.scope.
Nov 29 03:43:53 np0005539563 podman[379635]: 2025-11-29 08:43:53.525144566 +0000 UTC m=+0.022204546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f3e68d2abedbb9c901e2f3c5842745d79a3e02fb42b2452ba96670e77188b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f3e68d2abedbb9c901e2f3c5842745d79a3e02fb42b2452ba96670e77188b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f3e68d2abedbb9c901e2f3c5842745d79a3e02fb42b2452ba96670e77188b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f3e68d2abedbb9c901e2f3c5842745d79a3e02fb42b2452ba96670e77188b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:53 np0005539563 podman[379635]: 2025-11-29 08:43:53.648808867 +0000 UTC m=+0.145868857 container init 975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:53 np0005539563 podman[379635]: 2025-11-29 08:43:53.658130181 +0000 UTC m=+0.155190141 container start 975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:53 np0005539563 podman[379635]: 2025-11-29 08:43:53.662940692 +0000 UTC m=+0.160000662 container attach 975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]: {
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:    "0": [
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:        {
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "devices": [
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "/dev/loop3"
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            ],
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "lv_name": "ceph_lv0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "lv_size": "7511998464",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "name": "ceph_lv0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "tags": {
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.cluster_name": "ceph",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.crush_device_class": "",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.encrypted": "0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.osd_id": "0",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.type": "block",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:                "ceph.vdo": "0"
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            },
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "type": "block",
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:            "vg_name": "ceph_vg0"
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:        }
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]:    ]
Nov 29 03:43:54 np0005539563 infallible_brattain[379651]: }
Nov 29 03:43:54 np0005539563 systemd[1]: libpod-975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9.scope: Deactivated successfully.
Nov 29 03:43:54 np0005539563 podman[379635]: 2025-11-29 08:43:54.441104171 +0000 UTC m=+0.938164131 container died 975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:43:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9d1f3e68d2abedbb9c901e2f3c5842745d79a3e02fb42b2452ba96670e77188b-merged.mount: Deactivated successfully.
Nov 29 03:43:54 np0005539563 podman[379635]: 2025-11-29 08:43:54.496676915 +0000 UTC m=+0.993736875 container remove 975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 03:43:54 np0005539563 systemd[1]: libpod-conmon-975cd208dc23a8b49cf41801aaf9e06bf5b72b45b5d0da52783fb446dcd755a9.scope: Deactivated successfully.
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.024 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 194 op/s
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.090286174 +0000 UTC m=+0.046810697 container create 5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.103 252257 DEBUG nova.compute.manager [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.103 252257 DEBUG oslo_concurrency.lockutils [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.104 252257 DEBUG oslo_concurrency.lockutils [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.104 252257 DEBUG oslo_concurrency.lockutils [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.104 252257 DEBUG nova.compute.manager [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:55 np0005539563 nova_compute[252253]: 2025-11-29 08:43:55.104 252257 WARNING nova.compute.manager [req-3d09136b-e371-494e-b5f9-b0e39953ad48 req-a275744b-1a5c-4e93-b78a-224acec2ae76 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:43:55 np0005539563 systemd[1]: Started libpod-conmon-5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51.scope.
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.063982298 +0000 UTC m=+0.020506921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.183123354 +0000 UTC m=+0.139647987 container init 5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.191994586 +0000 UTC m=+0.148519139 container start 5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.196250732 +0000 UTC m=+0.152775295 container attach 5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:55 np0005539563 hardcore_greider[379829]: 167 167
Nov 29 03:43:55 np0005539563 systemd[1]: libpod-5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51.scope: Deactivated successfully.
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.201466645 +0000 UTC m=+0.157991168 container died 5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-88665f8834df96e0e62a0a2d00fd93b6f418b8c06f45746a31960c62232dd615-merged.mount: Deactivated successfully.
Nov 29 03:43:55 np0005539563 podman[379813]: 2025-11-29 08:43:55.244671902 +0000 UTC m=+0.201196425 container remove 5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:55 np0005539563 systemd[1]: libpod-conmon-5b3f14d97d409842ada623be679beb380bb52bc53201e9ded14f80c9230c3b51.scope: Deactivated successfully.
Nov 29 03:43:55 np0005539563 podman[379850]: 2025-11-29 08:43:55.434201338 +0000 UTC m=+0.058871986 container create b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:55 np0005539563 systemd[1]: Started libpod-conmon-b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755.scope.
Nov 29 03:43:55 np0005539563 podman[379850]: 2025-11-29 08:43:55.41300757 +0000 UTC m=+0.037678228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:43:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2e67395765ab9317094d75813b8ad77e7c3d60318ab111a04971be45dfedab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2e67395765ab9317094d75813b8ad77e7c3d60318ab111a04971be45dfedab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2e67395765ab9317094d75813b8ad77e7c3d60318ab111a04971be45dfedab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2e67395765ab9317094d75813b8ad77e7c3d60318ab111a04971be45dfedab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:55 np0005539563 podman[379850]: 2025-11-29 08:43:55.535470328 +0000 UTC m=+0.160140976 container init b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:43:55 np0005539563 podman[379850]: 2025-11-29 08:43:55.54218457 +0000 UTC m=+0.166855188 container start b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:55 np0005539563 podman[379850]: 2025-11-29 08:43:55.546242771 +0000 UTC m=+0.170913409 container attach b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]: {
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:        "osd_id": 0,
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:        "type": "bluestore"
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]:    }
Nov 29 03:43:56 np0005539563 youthful_roentgen[379868]: }
Nov 29 03:43:56 np0005539563 systemd[1]: libpod-b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755.scope: Deactivated successfully.
Nov 29 03:43:56 np0005539563 podman[379889]: 2025-11-29 08:43:56.478502779 +0000 UTC m=+0.026945475 container died b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_roentgen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:43:56 np0005539563 nova_compute[252253]: 2025-11-29 08:43:56.623 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 194 op/s
Nov 29 03:43:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ac2e67395765ab9317094d75813b8ad77e7c3d60318ab111a04971be45dfedab-merged.mount: Deactivated successfully.
Nov 29 03:43:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:43:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:43:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Nov 29 03:43:58 np0005539563 podman[379889]: 2025-11-29 08:43:58.103567601 +0000 UTC m=+1.652010317 container remove b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_roentgen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:58 np0005539563 systemd[1]: libpod-conmon-b483397d618417741a246189db6a5aa6dc55c9f5795ff350092fd502f7d09755.scope: Deactivated successfully.
Nov 29 03:43:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:43:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 03:43:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 17 KiB/s wr, 200 op/s
Nov 29 03:43:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:43:59 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Nov 29 03:43:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:59.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:43:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:43:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:59.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:00 np0005539563 nova_compute[252253]: 2025-11-29 08:44:00.024 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:44:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2df65d3a-737f-448d-ade9-f64fba0e1bab does not exist
Nov 29 03:44:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1d89675d-d28b-4446-9a7b-7b9d0b19a7d9 does not exist
Nov 29 03:44:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b67be59e-07e9-40d7-a00e-4b0f15eeefdd does not exist
Nov 29 03:44:00 np0005539563 nova_compute[252253]: 2025-11-29 08:44:00.704 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:00 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:44:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Nov 29 03:44:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:01.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:01 np0005539563 nova_compute[252253]: 2025-11-29 08:44:01.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:44:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Nov 29 03:44:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:03.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:04.949 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:04.950 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:04.950 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:05 np0005539563 nova_compute[252253]: 2025-11-29 08:44:05.026 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 3.3 KiB/s wr, 48 op/s
Nov 29 03:44:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:05.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:05.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:06 np0005539563 nova_compute[252253]: 2025-11-29 08:44:06.629 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Nov 29 03:44:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 3.3 KiB/s wr, 48 op/s
Nov 29 03:44:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:07.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:07.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 03:44:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Nov 29 03:44:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:08Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:d2:47 10.100.0.9
Nov 29 03:44:08 np0005539563 podman[379961]: 2025-11-29 08:44:08.521965252 +0000 UTC m=+0.066298148 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:44:08 np0005539563 podman[379960]: 2025-11-29 08:44:08.544568108 +0000 UTC m=+0.088515424 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:44:08 np0005539563 podman[379962]: 2025-11-29 08:44:08.603216196 +0000 UTC m=+0.147101770 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:44:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 582 KiB/s wr, 40 op/s
Nov 29 03:44:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:09.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:10 np0005539563 nova_compute[252253]: 2025-11-29 08:44:10.027 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 354 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 968 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 29 03:44:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:11 np0005539563 nova_compute[252253]: 2025-11-29 08:44:11.632 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:44:12
Nov 29 03:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'vms', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Nov 29 03:44:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 354 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 968 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:13.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:13.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 978 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Nov 29 03:44:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:15.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.250 252257 INFO nova.compute.manager [None req-bf718be3-45eb-45d6-bf07-09dedbbad2db 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Get console output#033[00m
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.255 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:44:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:15.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.938 252257 DEBUG nova.compute.manager [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.939 252257 DEBUG nova.compute.manager [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing instance network info cache due to event network-changed-c30634d5-981b-440c-aaed-815b2591a3d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.939 252257 DEBUG oslo_concurrency.lockutils [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.939 252257 DEBUG oslo_concurrency.lockutils [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:15 np0005539563 nova_compute[252253]: 2025-11-29 08:44:15.939 252257 DEBUG nova.network.neutron [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Refreshing network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.014 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.014 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.015 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.015 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.015 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.016 252257 INFO nova.compute.manager [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Terminating instance#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.017 252257 DEBUG nova.compute.manager [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:44:16 np0005539563 kernel: tapc30634d5-98 (unregistering): left promiscuous mode
Nov 29 03:44:16 np0005539563 NetworkManager[48981]: <info>  [1764405856.0824] device (tapc30634d5-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:44:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:16Z|00822|binding|INFO|Releasing lport c30634d5-981b-440c-aaed-815b2591a3d4 from this chassis (sb_readonly=0)
Nov 29 03:44:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:16Z|00823|binding|INFO|Setting lport c30634d5-981b-440c-aaed-815b2591a3d4 down in Southbound
Nov 29 03:44:16 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:16Z|00824|binding|INFO|Removing iface tapc30634d5-98 ovn-installed in OVS
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.093 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.096 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.103 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d2:47 10.100.0.9'], port_security=['fa:16:3e:2f:d2:47 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52de3669-ccbb-4d2c-948b-abc4aae3b8e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4ca87a38a19497f84b6d2c170c4fe75', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3aa44838-3538-48cc-aa78-ee7437a5a87d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f970837-a742-427c-bfc6-c51a824e5eec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=c30634d5-981b-440c-aaed-815b2591a3d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.104 158990 INFO neutron.agent.ovn.metadata.agent [-] Port c30634d5-981b-440c-aaed-815b2591a3d4 in datapath 16035279-ee66-4ba0-b73b-de24bec8a7fe unbound from our chassis#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.106 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16035279-ee66-4ba0-b73b-de24bec8a7fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.107 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[278e4d86-b5c4-4ce5-8939-263c67428482]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.108 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe namespace which is not needed anymore#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.118 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539563 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Nov 29 03:44:16 np0005539563 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000bc.scope: Consumed 14.356s CPU time.
Nov 29 03:44:16 np0005539563 systemd-machined[213024]: Machine qemu-94-instance-000000bc terminated.
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.255 252257 INFO nova.virt.libvirt.driver [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Instance destroyed successfully.#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.255 252257 DEBUG nova.objects.instance [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lazy-loading 'resources' on Instance uuid 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.270 252257 DEBUG nova.virt.libvirt.vif [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1344969912',display_name='tempest-TestNetworkAdvancedServerOps-server-1344969912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1344969912',id=188,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmm4ltmj+g7/icCjtJr0SHjXTHtxI2929fCkjN+rZCkOcGA5uAJypuYXHDfNxCJPF4dK0M+sqiJNNL/Fk73SGlWsRBT1NFSICYmkpJ84SJ0IFGfF3uz8ZC1rBZd82HRJw==',key_name='tempest-TestNetworkAdvancedServerOps-1709788421',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4ca87a38a19497f84b6d2c170c4fe75',ramdisk_id='',reservation_id='r-uextbsm0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-382266774',owner_user_name='tempest-TestNetworkAdvancedServerOps-382266774-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:44:00Z,user_data=None,user_id='686f527a5723407b85ed34c8a312583f',uuid=52de3669-ccbb-4d2c-948b-abc4aae3b8e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.271 252257 DEBUG nova.network.os_vif_util [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converting VIF {"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.271 252257 DEBUG nova.network.os_vif_util [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.272 252257 DEBUG os_vif [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.273 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc30634d5-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.276 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.278 252257 INFO os_vif [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:d2:47,bridge_name='br-int',has_traffic_filtering=True,id=c30634d5-981b-440c-aaed-815b2591a3d4,network=Network(16035279-ee66-4ba0-b73b-de24bec8a7fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc30634d5-98')#033[00m
Nov 29 03:44:16 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [NOTICE]   (379538) : haproxy version is 2.8.14-c23fe91
Nov 29 03:44:16 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [NOTICE]   (379538) : path to executable is /usr/sbin/haproxy
Nov 29 03:44:16 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [WARNING]  (379538) : Exiting Master process...
Nov 29 03:44:16 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [ALERT]    (379538) : Current worker (379553) exited with code 143 (Terminated)
Nov 29 03:44:16 np0005539563 neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe[379513]: [WARNING]  (379538) : All workers exited. Exiting... (0)
Nov 29 03:44:16 np0005539563 systemd[1]: libpod-6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620.scope: Deactivated successfully.
Nov 29 03:44:16 np0005539563 podman[380103]: 2025-11-29 08:44:16.374904993 +0000 UTC m=+0.179224545 container died 6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.431 252257 DEBUG nova.compute.manager [req-fd81ba11-fe2a-4c0c-bf11-1eb92d2452c1 req-898ad8cc-b3aa-455f-bdee-1352763146c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.431 252257 DEBUG oslo_concurrency.lockutils [req-fd81ba11-fe2a-4c0c-bf11-1eb92d2452c1 req-898ad8cc-b3aa-455f-bdee-1352763146c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.432 252257 DEBUG oslo_concurrency.lockutils [req-fd81ba11-fe2a-4c0c-bf11-1eb92d2452c1 req-898ad8cc-b3aa-455f-bdee-1352763146c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.432 252257 DEBUG oslo_concurrency.lockutils [req-fd81ba11-fe2a-4c0c-bf11-1eb92d2452c1 req-898ad8cc-b3aa-455f-bdee-1352763146c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.432 252257 DEBUG nova.compute.manager [req-fd81ba11-fe2a-4c0c-bf11-1eb92d2452c1 req-898ad8cc-b3aa-455f-bdee-1352763146c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.432 252257 DEBUG nova.compute.manager [req-fd81ba11-fe2a-4c0c-bf11-1eb92d2452c1 req-898ad8cc-b3aa-455f-bdee-1352763146c0 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-unplugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:44:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620-userdata-shm.mount: Deactivated successfully.
Nov 29 03:44:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f124a620f20fde5ef6650123ba9af81abb3c8f90d36e5052651d44bf336ac209-merged.mount: Deactivated successfully.
Nov 29 03:44:16 np0005539563 podman[380103]: 2025-11-29 08:44:16.486715001 +0000 UTC m=+0.291034533 container cleanup 6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:44:16 np0005539563 systemd[1]: libpod-conmon-6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620.scope: Deactivated successfully.
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:44:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:16 np0005539563 podman[380162]: 2025-11-29 08:44:16.852726636 +0000 UTC m=+0.343490722 container remove 6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.858 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5c1fab-4aa7-4e25-b174-01365268b862]: (4, ('Sat Nov 29 08:44:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe (6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620)\n6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620\nSat Nov 29 08:44:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe (6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620)\n6f019d241f7b84feb09b5445375a2a52102bad85e78970831c2a21d705b3f620\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.859 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaf2efc-9ce0-40f7-aba4-7e1b39e2377c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.861 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16035279-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.863 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539563 kernel: tap16035279-e0: left promiscuous mode
Nov 29 03:44:16 np0005539563 nova_compute[252253]: 2025-11-29 08:44:16.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.879 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cc065cd7-90c5-41de-8e13-c80efdaa55c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.896 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[677f6603-6e51-48d0-835e-b28d6cf2c8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.897 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[327840ce-3dc6-42af-8599-60007f61ed56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.913 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ac871015-4cfd-4e78-88ee-79f5c2d74aca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879993, 'reachable_time': 29140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380179, 'error': None, 'target': 'ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.915 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16035279-ee66-4ba0-b73b-de24bec8a7fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:44:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:16.915 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fcd3a8-bf7f-4901-9108-2ad4c60c8f19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:16 np0005539563 systemd[1]: run-netns-ovnmeta\x2d16035279\x2dee66\x2d4ba0\x2db73b\x2dde24bec8a7fe.mount: Deactivated successfully.
Nov 29 03:44:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 978 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.109 252257 INFO nova.virt.libvirt.driver [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Deleting instance files /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4_del#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.109 252257 INFO nova.virt.libvirt.driver [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Deletion of /var/lib/nova/instances/52de3669-ccbb-4d2c-948b-abc4aae3b8e4_del complete#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.194 252257 INFO nova.compute.manager [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.195 252257 DEBUG oslo.service.loopingcall [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.196 252257 DEBUG nova.compute.manager [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.196 252257 DEBUG nova.network.neutron [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:44:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:17.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:17.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.728 252257 DEBUG nova.network.neutron [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updated VIF entry in instance network info cache for port c30634d5-981b-440c-aaed-815b2591a3d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.729 252257 DEBUG nova.network.neutron [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [{"id": "c30634d5-981b-440c-aaed-815b2591a3d4", "address": "fa:16:3e:2f:d2:47", "network": {"id": "16035279-ee66-4ba0-b73b-de24bec8a7fe", "bridge": "br-int", "label": "tempest-network-smoke--475297760", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4ca87a38a19497f84b6d2c170c4fe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc30634d5-98", "ovs_interfaceid": "c30634d5-981b-440c-aaed-815b2591a3d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:17 np0005539563 nova_compute[252253]: 2025-11-29 08:44:17.771 252257 DEBUG oslo_concurrency.lockutils [req-f862c61b-7a54-4c91-8156-38d7eb40d159 req-341f2c48-a410-417f-ab8d-dc2712ee5592 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-52de3669-ccbb-4d2c-948b-abc4aae3b8e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.455 252257 DEBUG nova.network.neutron [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.485 252257 INFO nova.compute.manager [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Took 1.29 seconds to deallocate network for instance.#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.522 252257 DEBUG nova.compute.manager [req-95dc91bf-7430-4266-9112-9ef7b91c2121 req-8701a455-457f-457c-86b6-7379147347ea 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-deleted-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.526 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.526 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.540 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.545 252257 DEBUG nova.compute.manager [req-a4c8cf28-3d68-4634-b350-ce24ad6824f7 req-1d7faacb-e737-4457-8514-b76212ab23eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.545 252257 DEBUG oslo_concurrency.lockutils [req-a4c8cf28-3d68-4634-b350-ce24ad6824f7 req-1d7faacb-e737-4457-8514-b76212ab23eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.546 252257 DEBUG oslo_concurrency.lockutils [req-a4c8cf28-3d68-4634-b350-ce24ad6824f7 req-1d7faacb-e737-4457-8514-b76212ab23eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.546 252257 DEBUG oslo_concurrency.lockutils [req-a4c8cf28-3d68-4634-b350-ce24ad6824f7 req-1d7faacb-e737-4457-8514-b76212ab23eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.546 252257 DEBUG nova.compute.manager [req-a4c8cf28-3d68-4634-b350-ce24ad6824f7 req-1d7faacb-e737-4457-8514-b76212ab23eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] No waiting events found dispatching network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.546 252257 WARNING nova.compute.manager [req-a4c8cf28-3d68-4634-b350-ce24ad6824f7 req-1d7faacb-e737-4457-8514-b76212ab23eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Received unexpected event network-vif-plugged-c30634d5-981b-440c-aaed-815b2591a3d4 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.615 252257 INFO nova.scheduler.client.report [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Deleted allocations for instance 52de3669-ccbb-4d2c-948b-abc4aae3b8e4#033[00m
Nov 29 03:44:18 np0005539563 nova_compute[252253]: 2025-11-29 08:44:18.694 252257 DEBUG oslo_concurrency.lockutils [None req-58acd303-adff-49c6-9dfc-5fca47ffc8a7 686f527a5723407b85ed34c8a312583f c4ca87a38a19497f84b6d2c170c4fe75 - - default default] Lock "52de3669-ccbb-4d2c-948b-abc4aae3b8e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 872 KiB/s rd, 2.3 MiB/s wr, 119 op/s
Nov 29 03:44:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:19.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.817 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.818 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.818 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:19 np0005539563 nova_compute[252253]: 2025-11-29 08:44:19.818 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:44:20 np0005539563 nova_compute[252253]: 2025-11-29 08:44:20.031 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 775 KiB/s rd, 1.7 MiB/s wr, 143 op/s
Nov 29 03:44:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:21 np0005539563 nova_compute[252253]: 2025-11-29 08:44:21.275 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:21.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:22 np0005539563 nova_compute[252253]: 2025-11-29 08:44:22.669 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:22 np0005539563 nova_compute[252253]: 2025-11-29 08:44:22.787 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 38 KiB/s wr, 66 op/s
Nov 29 03:44:23 np0005539563 nova_compute[252253]: 2025-11-29 08:44:23.093 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:23.094 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:23.096 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:44:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:23.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:23.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:23 np0005539563 nova_compute[252253]: 2025-11-29 08:44:23.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002172865194025684 of space, bias 1.0, pg target 0.6518595582077051 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.700 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:44:24 np0005539563 nova_compute[252253]: 2025-11-29 08:44:24.701 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.033 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 40 KiB/s wr, 94 op/s
Nov 29 03:44:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017921705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.146 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:25.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.379 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.381 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4205MB free_disk=20.942676544189453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.381 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.381 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.536 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.537 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:44:25 np0005539563 nova_compute[252253]: 2025-11-29 08:44:25.596 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528115922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:26 np0005539563 nova_compute[252253]: 2025-11-29 08:44:26.016 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:26 np0005539563 nova_compute[252253]: 2025-11-29 08:44:26.022 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:26 np0005539563 nova_compute[252253]: 2025-11-29 08:44:26.047 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:26 np0005539563 nova_compute[252253]: 2025-11-29 08:44:26.085 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:44:26 np0005539563 nova_compute[252253]: 2025-11-29 08:44:26.085 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:26 np0005539563 nova_compute[252253]: 2025-11-29 08:44:26.304 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 13 KiB/s wr, 85 op/s
Nov 29 03:44:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:27.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 13 KiB/s wr, 85 op/s
Nov 29 03:44:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:29.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:29.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:30 np0005539563 nova_compute[252253]: 2025-11-29 08:44:30.035 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 72 op/s
Nov 29 03:44:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:31.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:31 np0005539563 nova_compute[252253]: 2025-11-29 08:44:31.254 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405856.252283, 52de3669-ccbb-4d2c-948b-abc4aae3b8e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:31 np0005539563 nova_compute[252253]: 2025-11-29 08:44:31.254 252257 INFO nova.compute.manager [-] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:44:31 np0005539563 nova_compute[252253]: 2025-11-29 08:44:31.279 252257 DEBUG nova.compute.manager [None req-1761c689-bd95-4816-a9ac-462ed3f70170 - - - - - -] [instance: 52de3669-ccbb-4d2c-948b-abc4aae3b8e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:31 np0005539563 nova_compute[252253]: 2025-11-29 08:44:31.307 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:31.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 03:44:33 np0005539563 nova_compute[252253]: 2025-11-29 08:44:33.086 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:44:33 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:33.098 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:33.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:35 np0005539563 nova_compute[252253]: 2025-11-29 08:44:35.036 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 03:44:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:36 np0005539563 nova_compute[252253]: 2025-11-29 08:44:36.310 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:44:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:37.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:37.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:44:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:39.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:39 np0005539563 podman[380288]: 2025-11-29 08:44:39.504548584 +0000 UTC m=+0.056272064 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:44:39 np0005539563 podman[380289]: 2025-11-29 08:44:39.515708539 +0000 UTC m=+0.065017913 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:44:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:39 np0005539563 podman[380290]: 2025-11-29 08:44:39.536546417 +0000 UTC m=+0.082610943 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 03:44:40 np0005539563 nova_compute[252253]: 2025-11-29 08:44:40.038 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:44:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:41.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539563 nova_compute[252253]: 2025-11-29 08:44:41.313 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:41.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:43.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:43.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:45 np0005539563 nova_compute[252253]: 2025-11-29 08:44:45.039 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 139 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 617 KiB/s wr, 3 op/s
Nov 29 03:44:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:45.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:45.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:46 np0005539563 nova_compute[252253]: 2025-11-29 08:44:46.316 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.954556) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886954581, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 914, "num_deletes": 258, "total_data_size": 1439613, "memory_usage": 1457840, "flush_reason": "Manual Compaction"}
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886964528, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 1414103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65579, "largest_seqno": 66492, "table_properties": {"data_size": 1409327, "index_size": 2363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10474, "raw_average_key_size": 19, "raw_value_size": 1399693, "raw_average_value_size": 2650, "num_data_blocks": 102, "num_entries": 528, "num_filter_entries": 528, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405815, "oldest_key_time": 1764405815, "file_creation_time": 1764405886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 10094 microseconds, and 4342 cpu microseconds.
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.964644) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 1414103 bytes OK
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.964699) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.976060) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.976097) EVENT_LOG_v1 {"time_micros": 1764405886976087, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.976118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1435153, prev total WAL file size 1435153, number of live WAL files 2.
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977324) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373639' seq:0, type:0; will stop at (end)
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(1380KB)], [146(10MB)]
Nov 29 03:44:46 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405886977397, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12134049, "oldest_snapshot_seqno": -1}
Nov 29 03:44:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 139 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 617 KiB/s wr, 3 op/s
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 9757 keys, 11969332 bytes, temperature: kUnknown
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887071426, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11969332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11907232, "index_size": 36579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 258377, "raw_average_key_size": 26, "raw_value_size": 11737057, "raw_average_value_size": 1202, "num_data_blocks": 1389, "num_entries": 9757, "num_filter_entries": 9757, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.071805) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11969332 bytes
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.120104) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 127.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.2 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(17.0) write-amplify(8.5) OK, records in: 10294, records dropped: 537 output_compression: NoCompression
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.120148) EVENT_LOG_v1 {"time_micros": 1764405887120132, "job": 90, "event": "compaction_finished", "compaction_time_micros": 94125, "compaction_time_cpu_micros": 27095, "output_level": 6, "num_output_files": 1, "total_output_size": 11969332, "num_input_records": 10294, "num_output_records": 9757, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887120789, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405887123534, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:46.977105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.123628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.123635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.123636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.123638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:44:47.123639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:47.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.613 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.613 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.632 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.722 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.722 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.728 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.729 252257 INFO nova.compute.claims [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:44:47 np0005539563 nova_compute[252253]: 2025-11-29 08:44:47.859 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494466256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.293 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.301 252257 DEBUG nova.compute.provider_tree [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.372 252257 DEBUG nova.scheduler.client.report [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.526 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.528 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.791 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:44:48 np0005539563 nova_compute[252253]: 2025-11-29 08:44:48.792 252257 DEBUG nova.network.neutron [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.010 252257 DEBUG nova.policy [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a45da8ed818144f8bd6e00d233fcb5d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '03858b11000d4b57bd3659c3083eed47', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.013 252257 INFO nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:44:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 148 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 970 KiB/s wr, 14 op/s
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.095 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.228 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.229 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.229 252257 INFO nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Creating image(s)#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.258 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:49.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.286 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.315 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.318 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.384 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.385 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.386 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.386 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.416 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.420 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.712 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.750 252257 DEBUG nova.network.neutron [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Successfully created port: f17d30e4-2f9e-4bd8-8232-ecc3302c7824 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.793 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] resizing rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.899 252257 DEBUG nova.objects.instance [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'migration_context' on Instance uuid ea4fd34f-5c9e-4fc3-a21d-11eee5732097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.917 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.918 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Ensure instance console log exists: /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.918 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.919 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:49 np0005539563 nova_compute[252253]: 2025-11-29 08:44:49.919 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.068 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.624 252257 DEBUG nova.network.neutron [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Successfully updated port: f17d30e4-2f9e-4bd8-8232-ecc3302c7824 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.641 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.641 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquired lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.641 252257 DEBUG nova.network.neutron [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.712 252257 DEBUG nova.compute.manager [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-changed-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.712 252257 DEBUG nova.compute.manager [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Refreshing instance network info cache due to event network-changed-f17d30e4-2f9e-4bd8-8232-ecc3302c7824. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.713 252257 DEBUG oslo_concurrency.lockutils [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:50 np0005539563 nova_compute[252253]: 2025-11-29 08:44:50.789 252257 DEBUG nova.network.neutron [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:44:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 29 03:44:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:51.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.317 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.522 252257 DEBUG nova.network.neutron [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updating instance_info_cache with network_info: [{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.547 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Releasing lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.547 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Instance network_info: |[{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.548 252257 DEBUG oslo_concurrency.lockutils [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.548 252257 DEBUG nova.network.neutron [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Refreshing network info cache for port f17d30e4-2f9e-4bd8-8232-ecc3302c7824 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.550 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Start _get_guest_xml network_info=[{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.555 252257 WARNING nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.560 252257 DEBUG nova.virt.libvirt.host [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.560 252257 DEBUG nova.virt.libvirt.host [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.565 252257 DEBUG nova.virt.libvirt.host [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.566 252257 DEBUG nova.virt.libvirt.host [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.567 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.567 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.567 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.568 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.568 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.568 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.568 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.569 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.569 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.570 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.570 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.570 252257 DEBUG nova.virt.hardware [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:44:51 np0005539563 nova_compute[252253]: 2025-11-29 08:44:51.573 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233684586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.068 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.103 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.109 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655060916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.553 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.555 252257 DEBUG nova.virt.libvirt.vif [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=192,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbN5b32r72JhE7OXuYcMuqpQgiMdY2+BbNFCdwmdC+KNNVkj/UkovXMGv4H0wFMw66XdJWz6gHQFWuL4IxqlXtnDVqoyPJrtUDp+2zsXRX6OPpYRO3gSrTYZqROcMoftQ==',key_name='tempest-TestSecurityGroupsBasicOps-116189028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-i2vyp27y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:49Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=ea4fd34f-5c9e-4fc3-a21d-11eee5732097,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.555 252257 DEBUG nova.network.os_vif_util [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.556 252257 DEBUG nova.network.os_vif_util [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.557 252257 DEBUG nova.objects.instance [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea4fd34f-5c9e-4fc3-a21d-11eee5732097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.590 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <uuid>ea4fd34f-5c9e-4fc3-a21d-11eee5732097</uuid>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <name>instance-000000c0</name>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457</nova:name>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:44:51</nova:creationTime>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:user uuid="a45da8ed818144f8bd6e00d233fcb5d2">tempest-TestSecurityGroupsBasicOps-1086021155-project-member</nova:user>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:project uuid="03858b11000d4b57bd3659c3083eed47">tempest-TestSecurityGroupsBasicOps-1086021155</nova:project>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <nova:port uuid="f17d30e4-2f9e-4bd8-8232-ecc3302c7824">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <entry name="serial">ea4fd34f-5c9e-4fc3-a21d-11eee5732097</entry>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <entry name="uuid">ea4fd34f-5c9e-4fc3-a21d-11eee5732097</entry>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk.config">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:12:22:09"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <target dev="tapf17d30e4-2f"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/console.log" append="off"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:44:52 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:44:52 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:44:52 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:44:52 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.592 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Preparing to wait for external event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.592 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.593 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.593 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.594 252257 DEBUG nova.virt.libvirt.vif [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=192,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbN5b32r72JhE7OXuYcMuqpQgiMdY2+BbNFCdwmdC+KNNVkj/UkovXMGv4H0wFMw66XdJWz6gHQFWuL4IxqlXtnDVqoyPJrtUDp+2zsXRX6OPpYRO3gSrTYZqROcMoftQ==',key_name='tempest-TestSecurityGroupsBasicOps-116189028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-i2vyp27y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:49Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=ea4fd34f-5c9e-4fc3-a21d-11eee5732097,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.594 252257 DEBUG nova.network.os_vif_util [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.595 252257 DEBUG nova.network.os_vif_util [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.596 252257 DEBUG os_vif [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.597 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.597 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.598 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.601 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.601 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf17d30e4-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.602 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf17d30e4-2f, col_values=(('external_ids', {'iface-id': 'f17d30e4-2f9e-4bd8-8232-ecc3302c7824', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:22:09', 'vm-uuid': 'ea4fd34f-5c9e-4fc3-a21d-11eee5732097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:52 np0005539563 NetworkManager[48981]: <info>  [1764405892.6042] manager: (tapf17d30e4-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.612 252257 INFO os_vif [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f')#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.666 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.667 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.667 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] No VIF found with MAC fa:16:3e:12:22:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.667 252257 INFO nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Using config drive#033[00m
Nov 29 03:44:52 np0005539563 nova_compute[252253]: 2025-11-29 08:44:52.692 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Nov 29 03:44:53 np0005539563 nova_compute[252253]: 2025-11-29 08:44:53.090 252257 INFO nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Creating config drive at /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/disk.config#033[00m
Nov 29 03:44:53 np0005539563 nova_compute[252253]: 2025-11-29 08:44:53.095 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjjdmdsk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:53 np0005539563 nova_compute[252253]: 2025-11-29 08:44:53.229 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjjdmdsk" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:53.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:53.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.135 252257 DEBUG nova.storage.rbd_utils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] rbd image ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.140 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/disk.config ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.299 252257 DEBUG oslo_concurrency.processutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/disk.config ea4fd34f-5c9e-4fc3-a21d-11eee5732097_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.299 252257 INFO nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Deleting local config drive /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097/disk.config because it was imported into RBD.#033[00m
Nov 29 03:44:54 np0005539563 kernel: tapf17d30e4-2f: entered promiscuous mode
Nov 29 03:44:54 np0005539563 NetworkManager[48981]: <info>  [1764405894.3567] manager: (tapf17d30e4-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Nov 29 03:44:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:54Z|00825|binding|INFO|Claiming lport f17d30e4-2f9e-4bd8-8232-ecc3302c7824 for this chassis.
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.357 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:54Z|00826|binding|INFO|f17d30e4-2f9e-4bd8-8232-ecc3302c7824: Claiming fa:16:3e:12:22:09 10.100.0.7
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.364 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 systemd-udevd[380726]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:54 np0005539563 NetworkManager[48981]: <info>  [1764405894.3973] device (tapf17d30e4-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:44:54 np0005539563 NetworkManager[48981]: <info>  [1764405894.3986] device (tapf17d30e4-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:44:54 np0005539563 systemd-machined[213024]: New machine qemu-95-instance-000000c0.
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:54Z|00827|binding|INFO|Setting lport f17d30e4-2f9e-4bd8-8232-ecc3302c7824 ovn-installed in OVS
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.452 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 systemd[1]: Started Virtual Machine qemu-95-instance-000000c0.
Nov 29 03:44:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:54Z|00828|binding|INFO|Setting lport f17d30e4-2f9e-4bd8-8232-ecc3302c7824 up in Southbound
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.577 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:22:09 10.100.0.7'], port_security=['fa:16:3e:12:22:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ea4fd34f-5c9e-4fc3-a21d-11eee5732097', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '2', 'neutron:security_group_ids': '747d2b6c-68e4-4f8b-89b3-15bbb589ad69 f2cc13f0-62d2-4aa9-9fdf-13ddbaf0f2cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41d2fa50-52e6-41f0-8017-13145416317d, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f17d30e4-2f9e-4bd8-8232-ecc3302c7824) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.578 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f17d30e4-2f9e-4bd8-8232-ecc3302c7824 in datapath d7daafb1-8347-4bc3-b00b-9a558f101e51 bound to our chassis#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.579 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7daafb1-8347-4bc3-b00b-9a558f101e51#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.591 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5a37ac-c652-4f9f-95fb-e5643e18948e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.592 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7daafb1-81 in ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.597 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7daafb1-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.597 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d0402fc2-16bd-4660-bfb9-316968b13957]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.598 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a8473ecd-32ad-4693-8379-7b1823450f1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.610 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[58951553-bbdd-4f09-874a-61085ff64092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.635 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[28112a6e-5f48-46aa-9039-4a35002dbcc7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.665 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fd845fd1-4223-47f7-9cd1-638b8efaa49c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.670 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c56cbbb1-1b8b-483d-b2e5-acf4ae9c326f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 NetworkManager[48981]: <info>  [1764405894.6714] manager: (tapd7daafb1-80): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.707 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fc26c9b7-d79f-45a5-8b55-779534c003ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.710 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0c817c2e-2b6d-4fc9-b27e-28b5d85731a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 NetworkManager[48981]: <info>  [1764405894.7310] device (tapd7daafb1-80): carrier: link connected
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.737 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c90b1a70-2ca1-4d55-8ef8-2dc0209bdcb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.754 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2b840329-da3e-42b7-9df3-a9649e6d1a25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7daafb1-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:cf:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886250, 'reachable_time': 32673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380762, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.770 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[07c2373c-7f8e-416c-ab28-feb24cce41c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:cfb8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886250, 'tstamp': 886250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380763, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.789 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[27c17bd0-8453-4bf2-bb19-207bc06eee34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7daafb1-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:cf:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886250, 'reachable_time': 32673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380764, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.843 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[58adf7f1-0f44-43ea-9dc1-8e62e9f61f10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.905 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[54f0b3f5-9c77-4121-bb19-0c11ab458c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.907 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7daafb1-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.908 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.908 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7daafb1-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.910 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 NetworkManager[48981]: <info>  [1764405894.9115] manager: (tapd7daafb1-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Nov 29 03:44:54 np0005539563 kernel: tapd7daafb1-80: entered promiscuous mode
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.913 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.916 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7daafb1-80, col_values=(('external_ids', {'iface-id': '40b4f56a-1cc5-423e-999b-a20c0e423329'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.917 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:44:54Z|00829|binding|INFO|Releasing lport 40b4f56a-1cc5-423e-999b-a20c0e423329 from this chassis (sb_readonly=0)
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.931 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7daafb1-8347-4bc3-b00b-9a558f101e51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7daafb1-8347-4bc3-b00b-9a558f101e51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.932 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[751383f5-5c6a-46d2-922f-36a29a481ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.933 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-d7daafb1-8347-4bc3-b00b-9a558f101e51
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/d7daafb1-8347-4bc3-b00b-9a558f101e51.pid.haproxy
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID d7daafb1-8347-4bc3-b00b-9a558f101e51
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:44:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:44:54.934 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'env', 'PROCESS_TAG=haproxy-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7daafb1-8347-4bc3-b00b-9a558f101e51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.983 252257 DEBUG nova.network.neutron [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updated VIF entry in instance network info cache for port f17d30e4-2f9e-4bd8-8232-ecc3302c7824. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:54 np0005539563 nova_compute[252253]: 2025-11-29 08:44:54.984 252257 DEBUG nova.network.neutron [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updating instance_info_cache with network_info: [{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.002 252257 DEBUG oslo_concurrency.lockutils [req-bd52fe47-570e-42a7-a1d5-33555092ca83 req-b89c8b3f-0bfd-4083-9c83-c3ed40f8578f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.019 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405895.0191205, ea4fd34f-5c9e-4fc3-a21d-11eee5732097 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.020 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] VM Started (Lifecycle Event)#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.041 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.045 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405895.0204425, ea4fd34f-5c9e-4fc3-a21d-11eee5732097 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.045 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:44:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.073 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.102 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.106 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.267 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:55.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:55 np0005539563 podman[380839]: 2025-11-29 08:44:55.308628873 +0000 UTC m=+0.058922347 container create f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:44:55 np0005539563 systemd[1]: Started libpod-conmon-f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194.scope.
Nov 29 03:44:55 np0005539563 podman[380839]: 2025-11-29 08:44:55.274842702 +0000 UTC m=+0.025136206 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:44:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:44:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/124f8e2aeb9502ab219cc59f9ba935a665141f39e0ce7d1a56daddd51d0950a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:55 np0005539563 podman[380839]: 2025-11-29 08:44:55.391596613 +0000 UTC m=+0.141890097 container init f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:44:55 np0005539563 podman[380839]: 2025-11-29 08:44:55.397610888 +0000 UTC m=+0.147904352 container start f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:44:55 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [NOTICE]   (380858) : New worker (380860) forked
Nov 29 03:44:55 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [NOTICE]   (380858) : Loading success.
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.459 252257 DEBUG nova.compute.manager [req-71dddf15-0186-4043-9bf9-0c13cc927d8b req-e3939280-ebf2-4d03-aa95-6b480b976eed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.459 252257 DEBUG oslo_concurrency.lockutils [req-71dddf15-0186-4043-9bf9-0c13cc927d8b req-e3939280-ebf2-4d03-aa95-6b480b976eed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.460 252257 DEBUG oslo_concurrency.lockutils [req-71dddf15-0186-4043-9bf9-0c13cc927d8b req-e3939280-ebf2-4d03-aa95-6b480b976eed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.460 252257 DEBUG oslo_concurrency.lockutils [req-71dddf15-0186-4043-9bf9-0c13cc927d8b req-e3939280-ebf2-4d03-aa95-6b480b976eed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.460 252257 DEBUG nova.compute.manager [req-71dddf15-0186-4043-9bf9-0c13cc927d8b req-e3939280-ebf2-4d03-aa95-6b480b976eed 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Processing event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.461 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.466 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764405895.4662724, ea4fd34f-5c9e-4fc3-a21d-11eee5732097 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.466 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.468 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.471 252257 INFO nova.virt.libvirt.driver [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Instance spawned successfully.#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.471 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.496 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.502 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.505 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.505 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.505 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.506 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.506 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.506 252257 DEBUG nova.virt.libvirt.driver [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:44:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:44:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:55.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.547 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.652 252257 INFO nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Took 6.42 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.653 252257 DEBUG nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.790 252257 INFO nova.compute.manager [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Took 8.10 seconds to build instance.#033[00m
Nov 29 03:44:55 np0005539563 nova_compute[252253]: 2025-11-29 08:44:55.887 252257 DEBUG oslo_concurrency.lockutils [None req-07e07afe-13f5-4369-8ba5-7dc530678200 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 130 op/s
Nov 29 03:44:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:57.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:57.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.566 252257 DEBUG nova.compute.manager [req-eff33d65-4763-429a-8fed-8ad20aed571b req-2b37f8ab-27e9-40c8-94c7-118a5536450f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.567 252257 DEBUG oslo_concurrency.lockutils [req-eff33d65-4763-429a-8fed-8ad20aed571b req-2b37f8ab-27e9-40c8-94c7-118a5536450f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.567 252257 DEBUG oslo_concurrency.lockutils [req-eff33d65-4763-429a-8fed-8ad20aed571b req-2b37f8ab-27e9-40c8-94c7-118a5536450f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.567 252257 DEBUG oslo_concurrency.lockutils [req-eff33d65-4763-429a-8fed-8ad20aed571b req-2b37f8ab-27e9-40c8-94c7-118a5536450f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.568 252257 DEBUG nova.compute.manager [req-eff33d65-4763-429a-8fed-8ad20aed571b req-2b37f8ab-27e9-40c8-94c7-118a5536450f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] No waiting events found dispatching network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.568 252257 WARNING nova.compute.manager [req-eff33d65-4763-429a-8fed-8ad20aed571b req-2b37f8ab-27e9-40c8-94c7-118a5536450f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received unexpected event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:44:57 np0005539563 nova_compute[252253]: 2025-11-29 08:44:57.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 157 op/s
Nov 29 03:44:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:44:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:59.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:00 np0005539563 nova_compute[252253]: 2025-11-29 08:45:00.073 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:00 np0005539563 nova_compute[252253]: 2025-11-29 08:45:00.631 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:00 np0005539563 NetworkManager[48981]: <info>  [1764405900.6319] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 29 03:45:00 np0005539563 NetworkManager[48981]: <info>  [1764405900.6330] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 29 03:45:00 np0005539563 nova_compute[252253]: 2025-11-29 08:45:00.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:00 np0005539563 nova_compute[252253]: 2025-11-29 08:45:00.728 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:00 np0005539563 ovn_controller[148841]: 2025-11-29T08:45:00Z|00830|binding|INFO|Releasing lport 40b4f56a-1cc5-423e-999b-a20c0e423329 from this chassis (sb_readonly=0)
Nov 29 03:45:00 np0005539563 nova_compute[252253]: 2025-11-29 08:45:00.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 187 op/s
Nov 29 03:45:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:01.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:01.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:01 np0005539563 nova_compute[252253]: 2025-11-29 08:45:01.795 252257 DEBUG nova.compute.manager [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-changed-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:01 np0005539563 nova_compute[252253]: 2025-11-29 08:45:01.795 252257 DEBUG nova.compute.manager [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Refreshing instance network info cache due to event network-changed-f17d30e4-2f9e-4bd8-8232-ecc3302c7824. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:01 np0005539563 nova_compute[252253]: 2025-11-29 08:45:01.796 252257 DEBUG oslo_concurrency.lockutils [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:01 np0005539563 nova_compute[252253]: 2025-11-29 08:45:01.796 252257 DEBUG oslo_concurrency.lockutils [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:01 np0005539563 nova_compute[252253]: 2025-11-29 08:45:01.797 252257 DEBUG nova.network.neutron [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Refreshing network info cache for port f17d30e4-2f9e-4bd8-8232-ecc3302c7824 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:02 np0005539563 nova_compute[252253]: 2025-11-29 08:45:02.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Nov 29 03:45:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:03.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:45:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0e2ca660-a9a7-4db5-a596-71627608870d does not exist
Nov 29 03:45:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 069ebb0d-3ee0-4626-b04b-c421e5e37698 does not exist
Nov 29 03:45:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 720b8c08-e693-4224-90c3-909f8f1a8f2d does not exist
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.580209126 +0000 UTC m=+0.037548234 container create 0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jones, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:45:04 np0005539563 systemd[1]: Started libpod-conmon-0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952.scope.
Nov 29 03:45:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.563619684 +0000 UTC m=+0.020958812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.679013569 +0000 UTC m=+0.136352697 container init 0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jones, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.689458673 +0000 UTC m=+0.146797781 container start 0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.693386461 +0000 UTC m=+0.150725599 container attach 0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:45:04 np0005539563 systemd[1]: libpod-0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952.scope: Deactivated successfully.
Nov 29 03:45:04 np0005539563 practical_jones[381281]: 167 167
Nov 29 03:45:04 np0005539563 conmon[381281]: conmon 0c7e63573491700236d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952.scope/container/memory.events
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.700562446 +0000 UTC m=+0.157901554 container died 0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-02b53bc0b488291279c93768d67f40da314a1e75872f5b12504da754217e3b49-merged.mount: Deactivated successfully.
Nov 29 03:45:04 np0005539563 podman[381264]: 2025-11-29 08:45:04.761791855 +0000 UTC m=+0.219130963 container remove 0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:45:04 np0005539563 systemd[1]: libpod-conmon-0c7e63573491700236d0e9cc9748287c9d7b6da4074988f7c6ddccdcf61b5952.scope: Deactivated successfully.
Nov 29 03:45:04 np0005539563 podman[381306]: 2025-11-29 08:45:04.918363012 +0000 UTC m=+0.035275432 container create e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sanderson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:45:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:45:04.950 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:45:04.952 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:45:04.953 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:04 np0005539563 systemd[1]: Started libpod-conmon-e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b.scope.
Nov 29 03:45:04 np0005539563 nova_compute[252253]: 2025-11-29 08:45:04.980 252257 DEBUG nova.network.neutron [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updated VIF entry in instance network info cache for port f17d30e4-2f9e-4bd8-8232-ecc3302c7824. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:45:04 np0005539563 nova_compute[252253]: 2025-11-29 08:45:04.981 252257 DEBUG nova.network.neutron [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updating instance_info_cache with network_info: [{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:45:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:45:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089aa791e8a509cb04251f292b573a5c0e3ace72a4dafe42de4ee6ed9601b37d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089aa791e8a509cb04251f292b573a5c0e3ace72a4dafe42de4ee6ed9601b37d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089aa791e8a509cb04251f292b573a5c0e3ace72a4dafe42de4ee6ed9601b37d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089aa791e8a509cb04251f292b573a5c0e3ace72a4dafe42de4ee6ed9601b37d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/089aa791e8a509cb04251f292b573a5c0e3ace72a4dafe42de4ee6ed9601b37d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:05 np0005539563 podman[381306]: 2025-11-29 08:45:04.902601753 +0000 UTC m=+0.019514193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:05 np0005539563 podman[381306]: 2025-11-29 08:45:05.010631737 +0000 UTC m=+0.127544177 container init e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sanderson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:45:05 np0005539563 podman[381306]: 2025-11-29 08:45:05.017511715 +0000 UTC m=+0.134424135 container start e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:05 np0005539563 nova_compute[252253]: 2025-11-29 08:45:05.019 252257 DEBUG oslo_concurrency.lockutils [req-165efeae-6deb-4adc-acc2-3098533bf4bf req-fff7aed1-6b12-48fb-b21b-faf7459925eb 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:45:05 np0005539563 podman[381306]: 2025-11-29 08:45:05.022370307 +0000 UTC m=+0.139282757 container attach e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:45:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 200 op/s
Nov 29 03:45:05 np0005539563 nova_compute[252253]: 2025-11-29 08:45:05.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:05.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:05.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:05 np0005539563 practical_sanderson[381323]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:45:05 np0005539563 practical_sanderson[381323]: --> relative data size: 1.0
Nov 29 03:45:05 np0005539563 practical_sanderson[381323]: --> All data devices are unavailable
Nov 29 03:45:05 np0005539563 systemd[1]: libpod-e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b.scope: Deactivated successfully.
Nov 29 03:45:05 np0005539563 podman[381340]: 2025-11-29 08:45:05.991324635 +0000 UTC m=+0.038459109 container died e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-089aa791e8a509cb04251f292b573a5c0e3ace72a4dafe42de4ee6ed9601b37d-merged.mount: Deactivated successfully.
Nov 29 03:45:06 np0005539563 podman[381340]: 2025-11-29 08:45:06.082926472 +0000 UTC m=+0.130060896 container remove e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:45:06 np0005539563 systemd[1]: libpod-conmon-e058f270ef1e944c222d461cf1b32b2bfbe7153f56204fdba96231532c2ad13b.scope: Deactivated successfully.
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.821754909 +0000 UTC m=+0.046152159 container create 9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:45:06 np0005539563 systemd[1]: Started libpod-conmon-9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38.scope.
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.797870368 +0000 UTC m=+0.022267608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.924501279 +0000 UTC m=+0.148898539 container init 9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.932510077 +0000 UTC m=+0.156907317 container start 9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.937333659 +0000 UTC m=+0.161730919 container attach 9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:45:06 np0005539563 systemd[1]: libpod-9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38.scope: Deactivated successfully.
Nov 29 03:45:06 np0005539563 intelligent_ritchie[381511]: 167 167
Nov 29 03:45:06 np0005539563 conmon[381511]: conmon 9ea3f992a2481db15d08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38.scope/container/memory.events
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.940817204 +0000 UTC m=+0.165214444 container died 9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:45:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8fd8dd8cbf28b9c8f22ad810fcf017883076d7559e637eed179e98854a83d62f-merged.mount: Deactivated successfully.
Nov 29 03:45:06 np0005539563 podman[381495]: 2025-11-29 08:45:06.981871853 +0000 UTC m=+0.206269093 container remove 9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:45:07 np0005539563 systemd[1]: libpod-conmon-9ea3f992a2481db15d085cf39f1840182919551ee3b862a2ea4b410002fb5e38.scope: Deactivated successfully.
Nov 29 03:45:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 109 op/s
Nov 29 03:45:07 np0005539563 podman[381536]: 2025-11-29 08:45:07.17035859 +0000 UTC m=+0.060051457 container create 784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:45:07 np0005539563 systemd[1]: Started libpod-conmon-784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601.scope.
Nov 29 03:45:07 np0005539563 podman[381536]: 2025-11-29 08:45:07.137937526 +0000 UTC m=+0.027630453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:45:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942036b63e15b934b1ed59088b775e09da99c1aa512ba2bacd307bfb7ee359e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942036b63e15b934b1ed59088b775e09da99c1aa512ba2bacd307bfb7ee359e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942036b63e15b934b1ed59088b775e09da99c1aa512ba2bacd307bfb7ee359e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3942036b63e15b934b1ed59088b775e09da99c1aa512ba2bacd307bfb7ee359e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:07.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:07 np0005539563 podman[381536]: 2025-11-29 08:45:07.300031494 +0000 UTC m=+0.189724451 container init 784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:07 np0005539563 podman[381536]: 2025-11-29 08:45:07.30941537 +0000 UTC m=+0.199108207 container start 784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 03:45:07 np0005539563 podman[381536]: 2025-11-29 08:45:07.313231054 +0000 UTC m=+0.202923971 container attach 784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:07 np0005539563 nova_compute[252253]: 2025-11-29 08:45:07.607 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:08 np0005539563 silly_diffie[381552]: {
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:    "0": [
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:        {
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "devices": [
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "/dev/loop3"
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            ],
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "lv_name": "ceph_lv0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "lv_size": "7511998464",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "name": "ceph_lv0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "tags": {
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.cluster_name": "ceph",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.crush_device_class": "",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.encrypted": "0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.osd_id": "0",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.type": "block",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:                "ceph.vdo": "0"
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            },
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "type": "block",
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:            "vg_name": "ceph_vg0"
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:        }
Nov 29 03:45:08 np0005539563 silly_diffie[381552]:    ]
Nov 29 03:45:08 np0005539563 silly_diffie[381552]: }
Nov 29 03:45:08 np0005539563 systemd[1]: libpod-784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601.scope: Deactivated successfully.
Nov 29 03:45:08 np0005539563 podman[381536]: 2025-11-29 08:45:08.131023513 +0000 UTC m=+1.020716340 container died 784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:45:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3942036b63e15b934b1ed59088b775e09da99c1aa512ba2bacd307bfb7ee359e-merged.mount: Deactivated successfully.
Nov 29 03:45:08 np0005539563 podman[381536]: 2025-11-29 08:45:08.193992348 +0000 UTC m=+1.083685185 container remove 784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:08 np0005539563 systemd[1]: libpod-conmon-784e37832126065984e607d4645b29e11371a7f262aac376ff039fde10e8e601.scope: Deactivated successfully.
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.823410103 +0000 UTC m=+0.035106797 container create 2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:45:08 np0005539563 systemd[1]: Started libpod-conmon-2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc.scope.
Nov 29 03:45:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.90141874 +0000 UTC m=+0.113115444 container init 2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brahmagupta, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.808681102 +0000 UTC m=+0.020377816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.906851488 +0000 UTC m=+0.118548182 container start 2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.909794138 +0000 UTC m=+0.121490852 container attach 2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:45:08 np0005539563 mystifying_brahmagupta[381727]: 167 167
Nov 29 03:45:08 np0005539563 systemd[1]: libpod-2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc.scope: Deactivated successfully.
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.911759632 +0000 UTC m=+0.123456326 container died 2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:45:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9b30f49294cc41859b8781151d42a3f8c08c776624d3e7e0990bbe67be12af46-merged.mount: Deactivated successfully.
Nov 29 03:45:08 np0005539563 podman[381711]: 2025-11-29 08:45:08.945995675 +0000 UTC m=+0.157692369 container remove 2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:45:08 np0005539563 systemd[1]: libpod-conmon-2b4930395bf76ffd9f577da41bce27a2a6262b40b8a67da4de38ca224b898ccc.scope: Deactivated successfully.
Nov 29 03:45:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 245 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Nov 29 03:45:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:45:09Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:22:09 10.100.0.7
Nov 29 03:45:09 np0005539563 ovn_controller[148841]: 2025-11-29T08:45:09Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:22:09 10.100.0.7
Nov 29 03:45:09 np0005539563 podman[381752]: 2025-11-29 08:45:09.115120354 +0000 UTC m=+0.043389673 container create c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:45:09 np0005539563 systemd[1]: Started libpod-conmon-c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510.scope.
Nov 29 03:45:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:45:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf4bd2f7cb79e1938b8310971c2283f4453344f528c7510c60fb91091c00720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:09 np0005539563 podman[381752]: 2025-11-29 08:45:09.097630257 +0000 UTC m=+0.025899596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf4bd2f7cb79e1938b8310971c2283f4453344f528c7510c60fb91091c00720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf4bd2f7cb79e1938b8310971c2283f4453344f528c7510c60fb91091c00720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf4bd2f7cb79e1938b8310971c2283f4453344f528c7510c60fb91091c00720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:09 np0005539563 podman[381752]: 2025-11-29 08:45:09.215909101 +0000 UTC m=+0.144178470 container init c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:45:09 np0005539563 podman[381752]: 2025-11-29 08:45:09.225823591 +0000 UTC m=+0.154092920 container start c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:45:09 np0005539563 podman[381752]: 2025-11-29 08:45:09.228942576 +0000 UTC m=+0.157211945 container attach c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:45:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:09.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:10 np0005539563 nova_compute[252253]: 2025-11-29 08:45:10.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]: {
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:        "osd_id": 0,
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:        "type": "bluestore"
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]:    }
Nov 29 03:45:10 np0005539563 frosty_merkle[381769]: }
Nov 29 03:45:10 np0005539563 systemd[1]: libpod-c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510.scope: Deactivated successfully.
Nov 29 03:45:10 np0005539563 podman[381752]: 2025-11-29 08:45:10.198400278 +0000 UTC m=+1.126669617 container died c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-caf4bd2f7cb79e1938b8310971c2283f4453344f528c7510c60fb91091c00720-merged.mount: Deactivated successfully.
Nov 29 03:45:10 np0005539563 podman[381752]: 2025-11-29 08:45:10.269340922 +0000 UTC m=+1.197610241 container remove c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:45:10 np0005539563 systemd[1]: libpod-conmon-c1ae9e4b4daf26f4ec662273b4a5a03c74c95b7c1371a0b8fd00c65eac98d510.scope: Deactivated successfully.
Nov 29 03:45:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:45:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:45:10 np0005539563 podman[381790]: 2025-11-29 08:45:10.321693649 +0000 UTC m=+0.083103756 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:45:10 np0005539563 podman[381794]: 2025-11-29 08:45:10.327841226 +0000 UTC m=+0.091181695 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:45:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a6a840e8-0916-4447-9d90-c1c2a9a0d6b1 does not exist
Nov 29 03:45:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e0a8d268-7faf-4037-9a6a-5c3f596a347d does not exist
Nov 29 03:45:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d8b5a1af-62b7-4f56-ae06-32a7efabb3e4 does not exist
Nov 29 03:45:10 np0005539563 podman[381800]: 2025-11-29 08:45:10.381044356 +0000 UTC m=+0.144412006 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:45:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 144 op/s
Nov 29 03:45:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:11.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:45:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:12 np0005539563 nova_compute[252253]: 2025-11-29 08:45:12.609 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:45:12
Nov 29 03:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups']
Nov 29 03:45:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 437 KiB/s rd, 3.7 MiB/s wr, 103 op/s
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:13.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:13.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 652 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Nov 29 03:45:15 np0005539563 nova_compute[252253]: 2025-11-29 08:45:15.080 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:45:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:15.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:45:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:15.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:45:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:45:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 2.5 MiB/s wr, 86 op/s
Nov 29 03:45:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:17.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:17 np0005539563 nova_compute[252253]: 2025-11-29 08:45:17.613 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:17 np0005539563 nova_compute[252253]: 2025-11-29 08:45:17.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 2.6 MiB/s wr, 89 op/s
Nov 29 03:45:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:19.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:19.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:19 np0005539563 nova_compute[252253]: 2025-11-29 08:45:19.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:19 np0005539563 nova_compute[252253]: 2025-11-29 08:45:19.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:45:19 np0005539563 nova_compute[252253]: 2025-11-29 08:45:19.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:45:20 np0005539563 nova_compute[252253]: 2025-11-29 08:45:20.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:20 np0005539563 nova_compute[252253]: 2025-11-29 08:45:20.638 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:20 np0005539563 nova_compute[252253]: 2025-11-29 08:45:20.639 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:20 np0005539563 nova_compute[252253]: 2025-11-29 08:45:20.639 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:45:20 np0005539563 nova_compute[252253]: 2025-11-29 08:45:20.639 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea4fd34f-5c9e-4fc3-a21d-11eee5732097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:45:20.775 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:20 np0005539563 nova_compute[252253]: 2025-11-29 08:45:20.775 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:45:20.776 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:45:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:45:20.776 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 03:45:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:22 np0005539563 nova_compute[252253]: 2025-11-29 08:45:22.617 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 215 KiB/s rd, 609 KiB/s wr, 27 op/s
Nov 29 03:45:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:23.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004342095314535641 of space, bias 1.0, pg target 1.3026285943606923 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6462629990228922 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:45:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 609 KiB/s wr, 27 op/s
Nov 29 03:45:25 np0005539563 nova_compute[252253]: 2025-11-29 08:45:25.132 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 853 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 03:45:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:27.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:27 np0005539563 nova_compute[252253]: 2025-11-29 08:45:27.620 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:45:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2858280769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:45:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:45:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2858280769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:45:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 853 B/s rd, 24 KiB/s wr, 2 op/s
Nov 29 03:45:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.135 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.548 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updating instance_info_cache with network_info: [{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:30 np0005539563 ovn_controller[148841]: 2025-11-29T08:45:30Z|00831|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.821 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.821 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.823 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.823 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.823 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.823 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.823 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.824 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.911 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.911 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.911 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.911 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:45:30 np0005539563 nova_compute[252253]: 2025-11-29 08:45:30.911 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 2.0 KiB/s wr, 3 op/s
Nov 29 03:45:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217594026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.361 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.670 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.670 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000c0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.857 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.858 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3985MB free_disk=20.897125244140625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.859 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.859 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.956 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance ea4fd34f-5c9e-4fc3-a21d-11eee5732097 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.957 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.957 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:45:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:31 np0005539563 nova_compute[252253]: 2025-11-29 08:45:31.994 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2284728337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:32 np0005539563 nova_compute[252253]: 2025-11-29 08:45:32.440 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:32 np0005539563 nova_compute[252253]: 2025-11-29 08:45:32.446 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:32 np0005539563 nova_compute[252253]: 2025-11-29 08:45:32.650 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 2.0 KiB/s wr, 2 op/s
Nov 29 03:45:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:34 np0005539563 nova_compute[252253]: 2025-11-29 08:45:34.479 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:34 np0005539563 nova_compute[252253]: 2025-11-29 08:45:34.669 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:45:34 np0005539563 nova_compute[252253]: 2025-11-29 08:45:34.670 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 71 op/s
Nov 29 03:45:35 np0005539563 nova_compute[252253]: 2025-11-29 08:45:35.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:36 np0005539563 nova_compute[252253]: 2025-11-29 08:45:36.524 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 71 op/s
Nov 29 03:45:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:37.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:37 np0005539563 nova_compute[252253]: 2025-11-29 08:45:37.653 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 71 op/s
Nov 29 03:45:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:39.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:40 np0005539563 nova_compute[252253]: 2025-11-29 08:45:40.139 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:40 np0005539563 podman[382074]: 2025-11-29 08:45:40.528198849 +0000 UTC m=+0.076647331 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 03:45:40 np0005539563 podman[382073]: 2025-11-29 08:45:40.553449217 +0000 UTC m=+0.094871587 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:40 np0005539563 podman[382075]: 2025-11-29 08:45:40.575623271 +0000 UTC m=+0.122596692 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 03:45:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Nov 29 03:45:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:42 np0005539563 nova_compute[252253]: 2025-11-29 08:45:42.655 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:43.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:43.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Nov 29 03:45:45 np0005539563 nova_compute[252253]: 2025-11-29 08:45:45.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 507 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 03:45:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:47.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:47 np0005539563 nova_compute[252253]: 2025-11-29 08:45:47.658 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Nov 29 03:45:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:49.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539563 nova_compute[252253]: 2025-11-29 08:45:49.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:50 np0005539563 nova_compute[252253]: 2025-11-29 08:45:50.141 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Nov 29 03:45:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:51.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:52 np0005539563 nova_compute[252253]: 2025-11-29 08:45:52.661 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 29 03:45:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 265 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Nov 29 03:45:55 np0005539563 nova_compute[252253]: 2025-11-29 08:45:55.143 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.222718) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955222875, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 882, "num_deletes": 251, "total_data_size": 1318915, "memory_usage": 1347040, "flush_reason": "Manual Compaction"}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955234428, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1293643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66493, "largest_seqno": 67374, "table_properties": {"data_size": 1289175, "index_size": 2119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10019, "raw_average_key_size": 19, "raw_value_size": 1280227, "raw_average_value_size": 2550, "num_data_blocks": 92, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405887, "oldest_key_time": 1764405887, "file_creation_time": 1764405955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 11706 microseconds, and 5877 cpu microseconds.
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.234507) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1293643 bytes OK
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.234540) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.238061) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.238088) EVENT_LOG_v1 {"time_micros": 1764405955238082, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.238107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1314686, prev total WAL file size 1314686, number of live WAL files 2.
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.238960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1263KB)], [149(11MB)]
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955239042, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 13262975, "oldest_snapshot_seqno": -1}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 9741 keys, 11306341 bytes, temperature: kUnknown
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955324167, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 11306341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11244980, "index_size": 35904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 258742, "raw_average_key_size": 26, "raw_value_size": 11075697, "raw_average_value_size": 1137, "num_data_blocks": 1355, "num_entries": 9741, "num_filter_entries": 9741, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764405955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.324448) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 11306341 bytes
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.325808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 132.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(19.0) write-amplify(8.7) OK, records in: 10259, records dropped: 518 output_compression: NoCompression
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.325825) EVENT_LOG_v1 {"time_micros": 1764405955325818, "job": 92, "event": "compaction_finished", "compaction_time_micros": 85208, "compaction_time_cpu_micros": 26460, "output_level": 6, "num_output_files": 1, "total_output_size": 11306341, "num_input_records": 10259, "num_output_records": 9741, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955326156, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405955328125, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.238798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:45:55.328186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:45:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:55.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 265 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 98 op/s
Nov 29 03:45:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:57.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:45:57 np0005539563 nova_compute[252253]: 2025-11-29 08:45:57.663 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 106 op/s
Nov 29 03:45:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:59.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:45:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:45:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:00 np0005539563 nova_compute[252253]: 2025-11-29 08:46:00.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 253 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 889 KiB/s wr, 113 op/s
Nov 29 03:46:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:02 np0005539563 nova_compute[252253]: 2025-11-29 08:46:02.668 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:02 np0005539563 nova_compute[252253]: 2025-11-29 08:46:02.846 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 261 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 286 KiB/s rd, 1.5 MiB/s wr, 64 op/s
Nov 29 03:46:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:03.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:04.951 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:04.952 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:04.952 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 29 03:46:05 np0005539563 nova_compute[252253]: 2025-11-29 08:46:05.149 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:05.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:05.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:46:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:07.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:07.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:07 np0005539563 nova_compute[252253]: 2025-11-29 08:46:07.672 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:46:08Z|00832|binding|INFO|Releasing lport 40b4f56a-1cc5-423e-999b-a20c0e423329 from this chassis (sb_readonly=0)
Nov 29 03:46:08 np0005539563 nova_compute[252253]: 2025-11-29 08:46:08.366 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 278 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 29 03:46:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:09.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:10 np0005539563 nova_compute[252253]: 2025-11-29 08:46:10.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:10.218 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:10 np0005539563 nova_compute[252253]: 2025-11-29 08:46:10.219 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:10.220 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:46:10 np0005539563 podman[382249]: 2025-11-29 08:46:10.99271506 +0000 UTC m=+0.062655948 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 03:46:10 np0005539563 podman[382247]: 2025-11-29 08:46:10.993556614 +0000 UTC m=+0.070456681 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Nov 29 03:46:11 np0005539563 podman[382250]: 2025-11-29 08:46:11.035890287 +0000 UTC m=+0.104644132 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:46:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 222 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Nov 29 03:46:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:11.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:11.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 98efc1b8-6809-46a1-9d2e-2b96a26a654b does not exist
Nov 29 03:46:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8463507a-e60a-4ad6-9efa-65bf4939bd49 does not exist
Nov 29 03:46:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d5a57bde-d09c-419f-bd04-4e799f07ffe1 does not exist
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:46:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.345059818 +0000 UTC m=+0.035005684 container create 8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 03:46:12 np0005539563 systemd[1]: Started libpod-conmon-8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8.scope.
Nov 29 03:46:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:46:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.329189436 +0000 UTC m=+0.019135322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.434602199 +0000 UTC m=+0.124548085 container init 8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.443867822 +0000 UTC m=+0.133813688 container start 8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.447653285 +0000 UTC m=+0.137599181 container attach 8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:46:12 np0005539563 infallible_perlman[382600]: 167 167
Nov 29 03:46:12 np0005539563 systemd[1]: libpod-8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8.scope: Deactivated successfully.
Nov 29 03:46:12 np0005539563 conmon[382600]: conmon 8be227da2dd2ce0042a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8.scope/container/memory.events
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.453913965 +0000 UTC m=+0.143859851 container died 8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:46:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-94cae495971ca614f21871d7b38a4c4c28aa64e27b0e222165657c423cf28440-merged.mount: Deactivated successfully.
Nov 29 03:46:12 np0005539563 podman[382583]: 2025-11-29 08:46:12.496473005 +0000 UTC m=+0.186418881 container remove 8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:12 np0005539563 systemd[1]: libpod-conmon-8be227da2dd2ce0042a5bdcc43fa86864b08e005a2005ccdcc614180dbc01fa8.scope: Deactivated successfully.
Nov 29 03:46:12 np0005539563 nova_compute[252253]: 2025-11-29 08:46:12.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:12 np0005539563 podman[382623]: 2025-11-29 08:46:12.690977427 +0000 UTC m=+0.050057146 container create 7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:12 np0005539563 systemd[1]: Started libpod-conmon-7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e.scope.
Nov 29 03:46:12 np0005539563 podman[382623]: 2025-11-29 08:46:12.670538389 +0000 UTC m=+0.029618128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:46:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064e177857de28f6cfb69c2fe56e0a723901438bc0ad16ccbb12b0611411740b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064e177857de28f6cfb69c2fe56e0a723901438bc0ad16ccbb12b0611411740b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064e177857de28f6cfb69c2fe56e0a723901438bc0ad16ccbb12b0611411740b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064e177857de28f6cfb69c2fe56e0a723901438bc0ad16ccbb12b0611411740b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064e177857de28f6cfb69c2fe56e0a723901438bc0ad16ccbb12b0611411740b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:12 np0005539563 podman[382623]: 2025-11-29 08:46:12.799752651 +0000 UTC m=+0.158832390 container init 7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:12 np0005539563 podman[382623]: 2025-11-29 08:46:12.809077835 +0000 UTC m=+0.168157584 container start 7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:12 np0005539563 podman[382623]: 2025-11-29 08:46:12.813630949 +0000 UTC m=+0.172710708 container attach 7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:46:12
Nov 29 03:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups', 'vms']
Nov 29 03:46:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 337 KiB/s rd, 1.3 MiB/s wr, 80 op/s
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:13 np0005539563 angry_colden[382640]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:46:13 np0005539563 angry_colden[382640]: --> relative data size: 1.0
Nov 29 03:46:13 np0005539563 angry_colden[382640]: --> All data devices are unavailable
Nov 29 03:46:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:13.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:13 np0005539563 systemd[1]: libpod-7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e.scope: Deactivated successfully.
Nov 29 03:46:13 np0005539563 podman[382623]: 2025-11-29 08:46:13.69753875 +0000 UTC m=+1.056618499 container died 7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-064e177857de28f6cfb69c2fe56e0a723901438bc0ad16ccbb12b0611411740b-merged.mount: Deactivated successfully.
Nov 29 03:46:13 np0005539563 podman[382623]: 2025-11-29 08:46:13.76582973 +0000 UTC m=+1.124909439 container remove 7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:46:13 np0005539563 systemd[1]: libpod-conmon-7e0608d91e49eaef4e2dd5b961f4d8422bffdd861df8beaf932502659c87b66e.scope: Deactivated successfully.
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.521904427 +0000 UTC m=+0.062197936 container create faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_robinson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:14 np0005539563 systemd[1]: Started libpod-conmon-faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe.scope.
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.49229734 +0000 UTC m=+0.032590939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.610173153 +0000 UTC m=+0.150466692 container init faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.621769289 +0000 UTC m=+0.162062828 container start faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_robinson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.626637582 +0000 UTC m=+0.166931091 container attach faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:46:14 np0005539563 systemd[1]: libpod-faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe.scope: Deactivated successfully.
Nov 29 03:46:14 np0005539563 affectionate_robinson[382825]: 167 167
Nov 29 03:46:14 np0005539563 conmon[382825]: conmon faf8f19cfd2b8d6d28bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe.scope/container/memory.events
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.630594829 +0000 UTC m=+0.170888408 container died faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0553f9bac2bf7ab63b776e6a39771290747508d49cc3b533634f1050447762f8-merged.mount: Deactivated successfully.
Nov 29 03:46:14 np0005539563 podman[382809]: 2025-11-29 08:46:14.687887371 +0000 UTC m=+0.228180880 container remove faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_robinson, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:14 np0005539563 systemd[1]: libpod-conmon-faf8f19cfd2b8d6d28bf113c0c4b40fc57dc5391045b4df1b961b71f1dfe7bbe.scope: Deactivated successfully.
Nov 29 03:46:14 np0005539563 podman[382851]: 2025-11-29 08:46:14.913071418 +0000 UTC m=+0.072345053 container create 85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:46:14 np0005539563 systemd[1]: Started libpod-conmon-85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606.scope.
Nov 29 03:46:14 np0005539563 podman[382851]: 2025-11-29 08:46:14.881448917 +0000 UTC m=+0.040722552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:46:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0365e904315c37a3e6096d91f2cd6f47b676722c4039d27a31ea6d3825bae88a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0365e904315c37a3e6096d91f2cd6f47b676722c4039d27a31ea6d3825bae88a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0365e904315c37a3e6096d91f2cd6f47b676722c4039d27a31ea6d3825bae88a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0365e904315c37a3e6096d91f2cd6f47b676722c4039d27a31ea6d3825bae88a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:15 np0005539563 podman[382851]: 2025-11-29 08:46:15.026016976 +0000 UTC m=+0.185290621 container init 85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:46:15 np0005539563 podman[382851]: 2025-11-29 08:46:15.033047678 +0000 UTC m=+0.192321253 container start 85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:46:15 np0005539563 podman[382851]: 2025-11-29 08:46:15.036361198 +0000 UTC m=+0.195634833 container attach 85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hellman, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 700 KiB/s wr, 61 op/s
Nov 29 03:46:15 np0005539563 nova_compute[252253]: 2025-11-29 08:46:15.154 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:15.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]: {
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:    "0": [
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:        {
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "devices": [
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "/dev/loop3"
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            ],
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "lv_name": "ceph_lv0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "lv_size": "7511998464",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "name": "ceph_lv0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "tags": {
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.cluster_name": "ceph",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.crush_device_class": "",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.encrypted": "0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.osd_id": "0",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.type": "block",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:                "ceph.vdo": "0"
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            },
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "type": "block",
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:            "vg_name": "ceph_vg0"
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:        }
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]:    ]
Nov 29 03:46:15 np0005539563 recursing_hellman[382867]: }
Nov 29 03:46:15 np0005539563 systemd[1]: libpod-85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606.scope: Deactivated successfully.
Nov 29 03:46:15 np0005539563 podman[382851]: 2025-11-29 08:46:15.903599875 +0000 UTC m=+1.062873500 container died 85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hellman, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0365e904315c37a3e6096d91f2cd6f47b676722c4039d27a31ea6d3825bae88a-merged.mount: Deactivated successfully.
Nov 29 03:46:15 np0005539563 podman[382851]: 2025-11-29 08:46:15.98855374 +0000 UTC m=+1.147827365 container remove 85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hellman, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:46:15 np0005539563 systemd[1]: libpod-conmon-85e3c7c8feb2a1721a10b230086c66d0ab48b15c6b7f5dd7b799b6a056d60606.scope: Deactivated successfully.
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.687459308 +0000 UTC m=+0.047315901 container create ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:46:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:46:16 np0005539563 systemd[1]: Started libpod-conmon-ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250.scope.
Nov 29 03:46:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.669471408 +0000 UTC m=+0.029328041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.768882287 +0000 UTC m=+0.128738920 container init ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.775469727 +0000 UTC m=+0.135326330 container start ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.778622263 +0000 UTC m=+0.138478856 container attach ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:46:16 np0005539563 vigorous_joliot[383049]: 167 167
Nov 29 03:46:16 np0005539563 systemd[1]: libpod-ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250.scope: Deactivated successfully.
Nov 29 03:46:16 np0005539563 conmon[383049]: conmon ce393d564151fd9f788c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250.scope/container/memory.events
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.782373695 +0000 UTC m=+0.142230338 container died ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:46:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-251be0df283b2f98d7bf3dc735c32314e6b27fd5ff9b2c2e4d5db6ecd7dbae14-merged.mount: Deactivated successfully.
Nov 29 03:46:16 np0005539563 podman[383033]: 2025-11-29 08:46:16.844063076 +0000 UTC m=+0.203919679 container remove ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_joliot, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:46:16 np0005539563 systemd[1]: libpod-conmon-ce393d564151fd9f788ce380fb6a3b8665a7d2a8c8f2f6249404d9b0f1e12250.scope: Deactivated successfully.
Nov 29 03:46:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 61 KiB/s wr, 37 op/s
Nov 29 03:46:17 np0005539563 podman[383074]: 2025-11-29 08:46:17.013291369 +0000 UTC m=+0.027207583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:17.222 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:17 np0005539563 podman[383074]: 2025-11-29 08:46:17.273949464 +0000 UTC m=+0.287865738 container create 520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:46:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:17 np0005539563 systemd[1]: Started libpod-conmon-520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e.scope.
Nov 29 03:46:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:46:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3cb7b2f352f849684ec089b856b017237724419e0625c1845601a5e4cf78b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3cb7b2f352f849684ec089b856b017237724419e0625c1845601a5e4cf78b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3cb7b2f352f849684ec089b856b017237724419e0625c1845601a5e4cf78b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3cb7b2f352f849684ec089b856b017237724419e0625c1845601a5e4cf78b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:17 np0005539563 podman[383074]: 2025-11-29 08:46:17.463925601 +0000 UTC m=+0.477841825 container init 520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:17 np0005539563 podman[383074]: 2025-11-29 08:46:17.473931794 +0000 UTC m=+0.487848018 container start 520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:17 np0005539563 podman[383074]: 2025-11-29 08:46:17.478765646 +0000 UTC m=+0.492681880 container attach 520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:46:17 np0005539563 nova_compute[252253]: 2025-11-29 08:46:17.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:17.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]: {
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:        "osd_id": 0,
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:        "type": "bluestore"
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]:    }
Nov 29 03:46:18 np0005539563 awesome_johnson[383092]: }
Nov 29 03:46:18 np0005539563 systemd[1]: libpod-520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e.scope: Deactivated successfully.
Nov 29 03:46:18 np0005539563 podman[383074]: 2025-11-29 08:46:18.398183224 +0000 UTC m=+1.412099418 container died 520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:46:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ec3cb7b2f352f849684ec089b856b017237724419e0625c1845601a5e4cf78b2-merged.mount: Deactivated successfully.
Nov 29 03:46:18 np0005539563 podman[383074]: 2025-11-29 08:46:18.456618867 +0000 UTC m=+1.470535051 container remove 520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_johnson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:18 np0005539563 systemd[1]: libpod-conmon-520b5996682fdf8f3d467b26f293a951f75af8c5174f691b84283c45a1eb608e.scope: Deactivated successfully.
Nov 29 03:46:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:46:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:46:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 55aa5ac8-e888-4def-a145-a3ff76053226 does not exist
Nov 29 03:46:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0b24fd1c-45b0-4741-8da3-9bc75458b2f6 does not exist
Nov 29 03:46:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 35aec431-1449-4b0b-97d8-694ce87830ba does not exist
Nov 29 03:46:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 62 KiB/s wr, 37 op/s
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.260 252257 DEBUG nova.compute.manager [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-changed-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.261 252257 DEBUG nova.compute.manager [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Refreshing instance network info cache due to event network-changed-f17d30e4-2f9e-4bd8-8232-ecc3302c7824. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.261 252257 DEBUG oslo_concurrency.lockutils [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.262 252257 DEBUG oslo_concurrency.lockutils [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.262 252257 DEBUG nova.network.neutron [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Refreshing network info cache for port f17d30e4-2f9e-4bd8-8232-ecc3302c7824 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:46:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:19.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:19 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.535 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.536 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.537 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.537 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.537 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.540 252257 INFO nova.compute.manager [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Terminating instance#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.543 252257 DEBUG nova.compute.manager [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:46:19 np0005539563 kernel: tapf17d30e4-2f (unregistering): left promiscuous mode
Nov 29 03:46:19 np0005539563 NetworkManager[48981]: <info>  [1764405979.6075] device (tapf17d30e4-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:46:19Z|00833|binding|INFO|Releasing lport f17d30e4-2f9e-4bd8-8232-ecc3302c7824 from this chassis (sb_readonly=0)
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:46:19Z|00834|binding|INFO|Setting lport f17d30e4-2f9e-4bd8-8232-ecc3302c7824 down in Southbound
Nov 29 03:46:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:46:19Z|00835|binding|INFO|Removing iface tapf17d30e4-2f ovn-installed in OVS
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.665 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:22:09 10.100.0.7'], port_security=['fa:16:3e:12:22:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ea4fd34f-5c9e-4fc3-a21d-11eee5732097', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '03858b11000d4b57bd3659c3083eed47', 'neutron:revision_number': '4', 'neutron:security_group_ids': '747d2b6c-68e4-4f8b-89b3-15bbb589ad69 f2cc13f0-62d2-4aa9-9fdf-13ddbaf0f2cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41d2fa50-52e6-41f0-8017-13145416317d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f17d30e4-2f9e-4bd8-8232-ecc3302c7824) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.666 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f17d30e4-2f9e-4bd8-8232-ecc3302c7824 in datapath d7daafb1-8347-4bc3-b00b-9a558f101e51 unbound from our chassis#033[00m
Nov 29 03:46:19 np0005539563 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000c0.scope: Consumed 17.039s CPU time.
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.667 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7daafb1-8347-4bc3-b00b-9a558f101e51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.668 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4fdf66-7d41-465f-8f57-73b789bb49a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.669 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 namespace which is not needed anymore#033[00m
Nov 29 03:46:19 np0005539563 systemd-machined[213024]: Machine qemu-95-instance-000000c0 terminated.
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:19.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.792 252257 INFO nova.virt.libvirt.driver [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Instance destroyed successfully.#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.793 252257 DEBUG nova.objects.instance [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lazy-loading 'resources' on Instance uuid ea4fd34f-5c9e-4fc3-a21d-11eee5732097 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:19 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [NOTICE]   (380858) : haproxy version is 2.8.14-c23fe91
Nov 29 03:46:19 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [NOTICE]   (380858) : path to executable is /usr/sbin/haproxy
Nov 29 03:46:19 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [WARNING]  (380858) : Exiting Master process...
Nov 29 03:46:19 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [ALERT]    (380858) : Current worker (380860) exited with code 143 (Terminated)
Nov 29 03:46:19 np0005539563 neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51[380854]: [WARNING]  (380858) : All workers exited. Exiting... (0)
Nov 29 03:46:19 np0005539563 systemd[1]: libpod-f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194.scope: Deactivated successfully.
Nov 29 03:46:19 np0005539563 podman[383209]: 2025-11-29 08:46:19.83959871 +0000 UTC m=+0.048371190 container died f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:46:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194-userdata-shm.mount: Deactivated successfully.
Nov 29 03:46:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-124f8e2aeb9502ab219cc59f9ba935a665141f39e0ce7d1a56daddd51d0950a5-merged.mount: Deactivated successfully.
Nov 29 03:46:19 np0005539563 podman[383209]: 2025-11-29 08:46:19.874372717 +0000 UTC m=+0.083145187 container cleanup f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:46:19 np0005539563 systemd[1]: libpod-conmon-f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194.scope: Deactivated successfully.
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.919 252257 DEBUG nova.virt.libvirt.vif [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1086021155-access_point-194846457',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1086021155-ac',id=192,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbN5b32r72JhE7OXuYcMuqpQgiMdY2+BbNFCdwmdC+KNNVkj/UkovXMGv4H0wFMw66XdJWz6gHQFWuL4IxqlXtnDVqoyPJrtUDp+2zsXRX6OPpYRO3gSrTYZqROcMoftQ==',key_name='tempest-TestSecurityGroupsBasicOps-116189028',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='03858b11000d4b57bd3659c3083eed47',ramdisk_id='',reservation_id='r-i2vyp27y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1086021155',owner_user_name='tempest-TestSecurityGroupsBasicOps-1086021155-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:44:55Z,user_data=None,user_id='a45da8ed818144f8bd6e00d233fcb5d2',uuid=ea4fd34f-5c9e-4fc3-a21d-11eee5732097,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.920 252257 DEBUG nova.network.os_vif_util [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converting VIF {"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.921 252257 DEBUG nova.network.os_vif_util [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.922 252257 DEBUG os_vif [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.924 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.925 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf17d30e4-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.926 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.932 252257 INFO os_vif [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:22:09,bridge_name='br-int',has_traffic_filtering=True,id=f17d30e4-2f9e-4bd8-8232-ecc3302c7824,network=Network(d7daafb1-8347-4bc3-b00b-9a558f101e51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf17d30e4-2f')#033[00m
Nov 29 03:46:19 np0005539563 podman[383240]: 2025-11-29 08:46:19.942528094 +0000 UTC m=+0.048197794 container remove f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.948 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd4281f-789d-4ddb-b68c-40f3b7eae738]: (4, ('Sat Nov 29 08:46:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 (f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194)\nf53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194\nSat Nov 29 08:46:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 (f53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194)\nf53ede9c4b4e0896a434007219a30ab323739dfddfc637424e87f03954940194\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.950 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4d1a9f-87fa-420c-9459-80838c55a654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.951 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7daafb1-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:19 np0005539563 kernel: tapd7daafb1-80: left promiscuous mode
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.954 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.958 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6149d9d3-614c-48e8-a968-73fa71939a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:19 np0005539563 nova_compute[252253]: 2025-11-29 08:46:19.967 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.979 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8e4a7e-a075-40e9-8250-ef784e3ecf4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:19.980 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d22e8fd6-adb4-4907-9bca-cb277fbcda8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:20.002 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c0385034-b9d4-48f9-aaee-e54743b7e8b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886243, 'reachable_time': 19227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383275, 'error': None, 'target': 'ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:20.006 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7daafb1-8347-4bc3-b00b-9a558f101e51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:46:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:20.006 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[118e73d0-a715-4e69-87e2-5c3652e32f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:20 np0005539563 systemd[1]: run-netns-ovnmeta\x2dd7daafb1\x2d8347\x2d4bc3\x2db00b\x2d9a558f101e51.mount: Deactivated successfully.
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.155 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.430 252257 INFO nova.virt.libvirt.driver [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Deleting instance files /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097_del#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.431 252257 INFO nova.virt.libvirt.driver [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Deletion of /var/lib/nova/instances/ea4fd34f-5c9e-4fc3-a21d-11eee5732097_del complete#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.516 252257 INFO nova.compute.manager [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.517 252257 DEBUG oslo.service.loopingcall [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.517 252257 DEBUG nova.compute.manager [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.517 252257 DEBUG nova.network.neutron [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.702 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.702 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.718 252257 DEBUG nova.compute.manager [req-b6300ebb-8e10-4c7c-b4b9-1f9f43b4c740 req-352abcb7-1835-425d-9db4-d0f2f5dc2420 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-vif-unplugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.719 252257 DEBUG oslo_concurrency.lockutils [req-b6300ebb-8e10-4c7c-b4b9-1f9f43b4c740 req-352abcb7-1835-425d-9db4-d0f2f5dc2420 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.719 252257 DEBUG oslo_concurrency.lockutils [req-b6300ebb-8e10-4c7c-b4b9-1f9f43b4c740 req-352abcb7-1835-425d-9db4-d0f2f5dc2420 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.719 252257 DEBUG oslo_concurrency.lockutils [req-b6300ebb-8e10-4c7c-b4b9-1f9f43b4c740 req-352abcb7-1835-425d-9db4-d0f2f5dc2420 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.719 252257 DEBUG nova.compute.manager [req-b6300ebb-8e10-4c7c-b4b9-1f9f43b4c740 req-352abcb7-1835-425d-9db4-d0f2f5dc2420 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] No waiting events found dispatching network-vif-unplugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:20 np0005539563 nova_compute[252253]: 2025-11-29 08:46:20.719 252257 DEBUG nova.compute.manager [req-b6300ebb-8e10-4c7c-b4b9-1f9f43b4c740 req-352abcb7-1835-425d-9db4-d0f2f5dc2420 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-vif-unplugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:46:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 37 KiB/s wr, 33 op/s
Nov 29 03:46:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:21.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:21 np0005539563 nova_compute[252253]: 2025-11-29 08:46:21.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:22 np0005539563 nova_compute[252253]: 2025-11-29 08:46:22.156 252257 DEBUG nova.network.neutron [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updated VIF entry in instance network info cache for port f17d30e4-2f9e-4bd8-8232-ecc3302c7824. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:46:22 np0005539563 nova_compute[252253]: 2025-11-29 08:46:22.156 252257 DEBUG nova.network.neutron [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updating instance_info_cache with network_info: [{"id": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "address": "fa:16:3e:12:22:09", "network": {"id": "d7daafb1-8347-4bc3-b00b-9a558f101e51", "bridge": "br-int", "label": "tempest-network-smoke--1067511852", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "03858b11000d4b57bd3659c3083eed47", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf17d30e4-2f", "ovs_interfaceid": "f17d30e4-2f9e-4bd8-8232-ecc3302c7824", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:22 np0005539563 nova_compute[252253]: 2025-11-29 08:46:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:22 np0005539563 nova_compute[252253]: 2025-11-29 08:46:22.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:46:22 np0005539563 nova_compute[252253]: 2025-11-29 08:46:22.890 252257 DEBUG oslo_concurrency.lockutils [req-d2467687-0538-4e23-815e-f7c9b45808de req-ae936738-9270-4258-aef2-3043b6bacd09 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ea4fd34f-5c9e-4fc3-a21d-11eee5732097" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 166 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 10 KiB/s wr, 6 op/s
Nov 29 03:46:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:23.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:23.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:23 np0005539563 nova_compute[252253]: 2025-11-29 08:46:23.811 252257 DEBUG nova.compute.manager [req-9ebd5773-b998-49ae-9362-f7abbb07a3d4 req-f89d9cd3-e246-4f59-bc32-222616097e84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:23 np0005539563 nova_compute[252253]: 2025-11-29 08:46:23.811 252257 DEBUG oslo_concurrency.lockutils [req-9ebd5773-b998-49ae-9362-f7abbb07a3d4 req-f89d9cd3-e246-4f59-bc32-222616097e84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:23 np0005539563 nova_compute[252253]: 2025-11-29 08:46:23.811 252257 DEBUG oslo_concurrency.lockutils [req-9ebd5773-b998-49ae-9362-f7abbb07a3d4 req-f89d9cd3-e246-4f59-bc32-222616097e84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:23 np0005539563 nova_compute[252253]: 2025-11-29 08:46:23.811 252257 DEBUG oslo_concurrency.lockutils [req-9ebd5773-b998-49ae-9362-f7abbb07a3d4 req-f89d9cd3-e246-4f59-bc32-222616097e84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:23 np0005539563 nova_compute[252253]: 2025-11-29 08:46:23.812 252257 DEBUG nova.compute.manager [req-9ebd5773-b998-49ae-9362-f7abbb07a3d4 req-f89d9cd3-e246-4f59-bc32-222616097e84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] No waiting events found dispatching network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:23 np0005539563 nova_compute[252253]: 2025-11-29 08:46:23.812 252257 WARNING nova.compute.manager [req-9ebd5773-b998-49ae-9362-f7abbb07a3d4 req-f89d9cd3-e246-4f59-bc32-222616097e84 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received unexpected event network-vif-plugged-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013209857156151125 of space, bias 1.0, pg target 0.39629571468453373 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.154 252257 DEBUG nova.network.neutron [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.180 252257 INFO nova.compute.manager [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Took 3.66 seconds to deallocate network for instance.#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.238 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.238 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.243 252257 DEBUG nova.compute.manager [req-8f36e1f8-6459-47db-ac5a-8a55e6281474 req-0954481c-3029-4055-ac50-ec27717050ca 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Received event network-vif-deleted-f17d30e4-2f9e-4bd8-8232-ecc3302c7824 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.360 252257 DEBUG oslo_concurrency.processutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802839166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.809 252257 DEBUG oslo_concurrency.processutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.815 252257 DEBUG nova.compute.provider_tree [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.850 252257 DEBUG nova.scheduler.client.report [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.886 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.909 252257 INFO nova.scheduler.client.report [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Deleted allocations for instance ea4fd34f-5c9e-4fc3-a21d-11eee5732097#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:24 np0005539563 nova_compute[252253]: 2025-11-29 08:46:24.992 252257 DEBUG oslo_concurrency.lockutils [None req-ce79dcbf-8d26-4e4b-9b49-bde5391464c3 a45da8ed818144f8bd6e00d233fcb5d2 03858b11000d4b57bd3659c3083eed47 - - default default] Lock "ea4fd34f-5c9e-4fc3-a21d-11eee5732097" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 2.4 KiB/s wr, 29 op/s
Nov 29 03:46:25 np0005539563 nova_compute[252253]: 2025-11-29 08:46:25.156 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:25.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:25 np0005539563 nova_compute[252253]: 2025-11-29 08:46:25.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:26 np0005539563 nova_compute[252253]: 2025-11-29 08:46:26.708 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 03:46:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:27.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:27 np0005539563 nova_compute[252253]: 2025-11-29 08:46:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:27 np0005539563 nova_compute[252253]: 2025-11-29 08:46:27.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539563 nova_compute[252253]: 2025-11-29 08:46:27.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:27 np0005539563 nova_compute[252253]: 2025-11-29 08:46:27.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:27 np0005539563 nova_compute[252253]: 2025-11-29 08:46:27.706 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:46:27 np0005539563 nova_compute[252253]: 2025-11-29 08:46:27.706 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712535892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.155 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.350 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.353 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4163MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.353 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.354 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.421 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.421 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.447 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041589643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.901 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.906 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.945 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.973 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:46:28 np0005539563 nova_compute[252253]: 2025-11-29 08:46:28.973 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 03:46:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:29.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:29 np0005539563 nova_compute[252253]: 2025-11-29 08:46:29.932 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:30 np0005539563 nova_compute[252253]: 2025-11-29 08:46:30.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:46:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:31.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:31 np0005539563 nova_compute[252253]: 2025-11-29 08:46:31.861 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:31 np0005539563 nova_compute[252253]: 2025-11-29 08:46:31.973 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 29 03:46:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:33.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:33.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:34 np0005539563 nova_compute[252253]: 2025-11-29 08:46:34.790 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405979.7885883, ea4fd34f-5c9e-4fc3-a21d-11eee5732097 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:34 np0005539563 nova_compute[252253]: 2025-11-29 08:46:34.791 252257 INFO nova.compute.manager [-] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:46:34 np0005539563 nova_compute[252253]: 2025-11-29 08:46:34.810 252257 DEBUG nova.compute.manager [None req-6145c668-9aa8-452c-bd02-d855aec6cf47 - - - - - -] [instance: ea4fd34f-5c9e-4fc3-a21d-11eee5732097] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:34 np0005539563 nova_compute[252253]: 2025-11-29 08:46:34.974 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:34 np0005539563 nova_compute[252253]: 2025-11-29 08:46:34.981 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 29 03:46:35 np0005539563 nova_compute[252253]: 2025-11-29 08:46:35.160 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:35.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:46:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:37.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:46:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:39.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:39 np0005539563 nova_compute[252253]: 2025-11-29 08:46:39.985 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:40 np0005539563 nova_compute[252253]: 2025-11-29 08:46:40.162 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:46:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:41 np0005539563 podman[383406]: 2025-11-29 08:46:41.50708727 +0000 UTC m=+0.063130611 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:46:41 np0005539563 podman[383407]: 2025-11-29 08:46:41.520975679 +0000 UTC m=+0.077484424 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:46:41 np0005539563 podman[383408]: 2025-11-29 08:46:41.543522773 +0000 UTC m=+0.097246232 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:46:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:41.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 136 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 441 KiB/s wr, 22 op/s
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:43.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:43.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:45 np0005539563 nova_compute[252253]: 2025-11-29 08:46:45.029 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:46:45 np0005539563 nova_compute[252253]: 2025-11-29 08:46:45.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:45.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:45.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:46:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:47.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:47.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:46:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:49.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:50 np0005539563 nova_compute[252253]: 2025-11-29 08:46:50.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:50 np0005539563 nova_compute[252253]: 2025-11-29 08:46:50.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:46:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:51.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:51.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:46:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:53.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:53.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:54.268 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:54.269 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:46:54 np0005539563 nova_compute[252253]: 2025-11-29 08:46:54.269 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:46:54.270 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:55 np0005539563 nova_compute[252253]: 2025-11-29 08:46:55.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.3 MiB/s wr, 17 op/s
Nov 29 03:46:55 np0005539563 nova_compute[252253]: 2025-11-29 08:46:55.171 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:46:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:55.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:46:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 511 B/s wr, 13 op/s
Nov 29 03:46:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:57.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 605 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 29 03:46:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:46:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:59.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:00 np0005539563 nova_compute[252253]: 2025-11-29 08:47:00.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:00 np0005539563 nova_compute[252253]: 2025-11-29 08:47:00.174 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:47:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:47:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:03.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:04 np0005539563 nova_compute[252253]: 2025-11-29 08:47:04.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:47:04.953 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:47:04.954 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:47:04.954 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:05 np0005539563 nova_compute[252253]: 2025-11-29 08:47:05.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:47:05 np0005539563 nova_compute[252253]: 2025-11-29 08:47:05.176 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 60 op/s
Nov 29 03:47:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 173 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 582 KiB/s wr, 84 op/s
Nov 29 03:47:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:10 np0005539563 nova_compute[252253]: 2025-11-29 08:47:10.083 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:10 np0005539563 nova_compute[252253]: 2025-11-29 08:47:10.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:10 np0005539563 nova_compute[252253]: 2025-11-29 08:47:10.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:10 np0005539563 nova_compute[252253]: 2025-11-29 08:47:10.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:47:10 np0005539563 nova_compute[252253]: 2025-11-29 08:47:10.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:47:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 29 03:47:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:11.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:11 np0005539563 podman[383579]: 2025-11-29 08:47:11.684100355 +0000 UTC m=+0.053861458 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:47:11 np0005539563 podman[383580]: 2025-11-29 08:47:11.718524614 +0000 UTC m=+0.076040784 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd)
Nov 29 03:47:11 np0005539563 podman[383581]: 2025-11-29 08:47:11.772790613 +0000 UTC m=+0.129466329 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:47:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:47:12
Nov 29 03:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 29 03:47:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:13.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:15 np0005539563 nova_compute[252253]: 2025-11-29 08:47:15.115 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:47:15 np0005539563 nova_compute[252253]: 2025-11-29 08:47:15.207 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:15.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:47:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:47:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:47:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:47:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:19.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:47:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:47:20 np0005539563 nova_compute[252253]: 2025-11-29 08:47:20.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:20 np0005539563 nova_compute[252253]: 2025-11-29 08:47:20.209 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:47:20 np0005539563 nova_compute[252253]: 2025-11-29 08:47:20.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:20 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2372955e-591c-4ccc-874c-24f067f99da6 does not exist
Nov 29 03:47:20 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ec21a21a-2bf3-4e31-9511-f8ff10971b08 does not exist
Nov 29 03:47:20 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6896a40a-5624-48c4-b11b-d9536c85ecff does not exist
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:47:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:47:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Nov 29 03:47:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:21 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.430413039 +0000 UTC m=+0.049684915 container create aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:47:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:21.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:21 np0005539563 systemd[1]: Started libpod-conmon-aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61.scope.
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.408117771 +0000 UTC m=+0.027389717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.543480231 +0000 UTC m=+0.162752117 container init aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.553880504 +0000 UTC m=+0.173152370 container start aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haibt, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.558536451 +0000 UTC m=+0.177808337 container attach aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:21 np0005539563 youthful_haibt[383943]: 167 167
Nov 29 03:47:21 np0005539563 systemd[1]: libpod-aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61.scope: Deactivated successfully.
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.56509491 +0000 UTC m=+0.184366816 container died aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haibt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:47:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-193ef181d68f9571daaffaa52507b38531193b02fdb930819a1aff9e122ae666-merged.mount: Deactivated successfully.
Nov 29 03:47:21 np0005539563 podman[383927]: 2025-11-29 08:47:21.624313604 +0000 UTC m=+0.243585510 container remove aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_haibt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:21 np0005539563 systemd[1]: libpod-conmon-aa0623d68201e7d25b2408829778e3cf34303fe87cde6209b622012b7746ff61.scope: Deactivated successfully.
Nov 29 03:47:21 np0005539563 nova_compute[252253]: 2025-11-29 08:47:21.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:21 np0005539563 podman[383967]: 2025-11-29 08:47:21.805834461 +0000 UTC m=+0.047564297 container create bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:21 np0005539563 systemd[1]: Started libpod-conmon-bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76.scope.
Nov 29 03:47:21 np0005539563 podman[383967]: 2025-11-29 08:47:21.787002358 +0000 UTC m=+0.028732204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:47:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbf919aa7f816da900225816bf51d86099caf3a3d5e2001b7a517543bf86466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbf919aa7f816da900225816bf51d86099caf3a3d5e2001b7a517543bf86466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbf919aa7f816da900225816bf51d86099caf3a3d5e2001b7a517543bf86466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbf919aa7f816da900225816bf51d86099caf3a3d5e2001b7a517543bf86466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbf919aa7f816da900225816bf51d86099caf3a3d5e2001b7a517543bf86466/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:21 np0005539563 podman[383967]: 2025-11-29 08:47:21.905791036 +0000 UTC m=+0.147520902 container init bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:21 np0005539563 podman[383967]: 2025-11-29 08:47:21.913945468 +0000 UTC m=+0.155675304 container start bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:47:21 np0005539563 podman[383967]: 2025-11-29 08:47:21.917428623 +0000 UTC m=+0.159158629 container attach bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:47:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:22 np0005539563 nova_compute[252253]: 2025-11-29 08:47:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:22 np0005539563 nova_compute[252253]: 2025-11-29 08:47:22.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:47:22 np0005539563 nova_compute[252253]: 2025-11-29 08:47:22.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:47:22 np0005539563 nova_compute[252253]: 2025-11-29 08:47:22.701 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:47:22 np0005539563 nova_compute[252253]: 2025-11-29 08:47:22.701 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:22 np0005539563 nova_compute[252253]: 2025-11-29 08:47:22.701 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:47:22 np0005539563 competent_hermann[383983]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:47:22 np0005539563 competent_hermann[383983]: --> relative data size: 1.0
Nov 29 03:47:22 np0005539563 competent_hermann[383983]: --> All data devices are unavailable
Nov 29 03:47:22 np0005539563 systemd[1]: libpod-bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76.scope: Deactivated successfully.
Nov 29 03:47:22 np0005539563 conmon[383983]: conmon bd519cc460d704189fbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76.scope/container/memory.events
Nov 29 03:47:22 np0005539563 podman[383967]: 2025-11-29 08:47:22.826755687 +0000 UTC m=+1.068485513 container died bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:47:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8dbf919aa7f816da900225816bf51d86099caf3a3d5e2001b7a517543bf86466-merged.mount: Deactivated successfully.
Nov 29 03:47:22 np0005539563 podman[383967]: 2025-11-29 08:47:22.883830342 +0000 UTC m=+1.125560168 container remove bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hermann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:47:22 np0005539563 systemd[1]: libpod-conmon-bd519cc460d704189fbee9443dd36a9cccc79855b3fd789871c963f7761bfd76.scope: Deactivated successfully.
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Nov 29 03:47:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:23.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.655757421 +0000 UTC m=+0.054343773 container create 5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:23 np0005539563 systemd[1]: Started libpod-conmon-5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297.scope.
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.633191876 +0000 UTC m=+0.031778258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.750101041 +0000 UTC m=+0.148687483 container init 5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.76105706 +0000 UTC m=+0.159643412 container start 5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.764844844 +0000 UTC m=+0.163431276 container attach 5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:47:23 np0005539563 frosty_ardinghelli[384171]: 167 167
Nov 29 03:47:23 np0005539563 systemd[1]: libpod-5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297.scope: Deactivated successfully.
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.770370775 +0000 UTC m=+0.168957187 container died 5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:47:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-42901e21d891670e336537ffd910e2699991bbbe5963923f2538fd6fcfa2ecc4-merged.mount: Deactivated successfully.
Nov 29 03:47:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:23.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:23 np0005539563 podman[384154]: 2025-11-29 08:47:23.810822447 +0000 UTC m=+0.209408789 container remove 5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:47:23 np0005539563 systemd[1]: libpod-conmon-5c40d2bbc9b8cce6a7cde5b1feda00b0e87a2e90c9f9600e3c18edb97d18d297.scope: Deactivated successfully.
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021694118741857433 of space, bias 1.0, pg target 0.650823562255723 quantized to 32 (current 32)
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:23 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:47:24 np0005539563 podman[384194]: 2025-11-29 08:47:24.028797528 +0000 UTC m=+0.063111132 container create 62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:47:24 np0005539563 systemd[1]: Started libpod-conmon-62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8.scope.
Nov 29 03:47:24 np0005539563 podman[384194]: 2025-11-29 08:47:23.996068485 +0000 UTC m=+0.030382180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:47:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5817844c3a33f7bbeb344ab8265c257383e530720a16a856806222735022da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5817844c3a33f7bbeb344ab8265c257383e530720a16a856806222735022da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5817844c3a33f7bbeb344ab8265c257383e530720a16a856806222735022da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5817844c3a33f7bbeb344ab8265c257383e530720a16a856806222735022da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:24 np0005539563 podman[384194]: 2025-11-29 08:47:24.132545225 +0000 UTC m=+0.166858849 container init 62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:47:24 np0005539563 podman[384194]: 2025-11-29 08:47:24.141470238 +0000 UTC m=+0.175783832 container start 62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pasteur, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:24 np0005539563 podman[384194]: 2025-11-29 08:47:24.144940713 +0000 UTC m=+0.179254357 container attach 62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]: {
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:    "0": [
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:        {
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "devices": [
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "/dev/loop3"
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            ],
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "lv_name": "ceph_lv0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "lv_size": "7511998464",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "name": "ceph_lv0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "tags": {
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.cluster_name": "ceph",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.crush_device_class": "",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.encrypted": "0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.osd_id": "0",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.type": "block",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:                "ceph.vdo": "0"
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            },
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "type": "block",
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:            "vg_name": "ceph_vg0"
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:        }
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]:    ]
Nov 29 03:47:24 np0005539563 magical_pasteur[384211]: }
Nov 29 03:47:24 np0005539563 systemd[1]: libpod-62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8.scope: Deactivated successfully.
Nov 29 03:47:24 np0005539563 podman[384194]: 2025-11-29 08:47:24.928585711 +0000 UTC m=+0.962899415 container died 62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8b5817844c3a33f7bbeb344ab8265c257383e530720a16a856806222735022da-merged.mount: Deactivated successfully.
Nov 29 03:47:25 np0005539563 nova_compute[252253]: 2025-11-29 08:47:25.123 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Nov 29 03:47:25 np0005539563 podman[384194]: 2025-11-29 08:47:25.206593698 +0000 UTC m=+1.240907302 container remove 62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pasteur, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:47:25 np0005539563 nova_compute[252253]: 2025-11-29 08:47:25.212 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:25 np0005539563 systemd[1]: libpod-conmon-62c6d4aa43aca0ded8af6cfd9b1dfc0804ef7dc017ad74c028cdd1f053df09f8.scope: Deactivated successfully.
Nov 29 03:47:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:25 np0005539563 nova_compute[252253]: 2025-11-29 08:47:25.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:25 np0005539563 podman[384375]: 2025-11-29 08:47:25.876945939 +0000 UTC m=+0.038522241 container create 767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:25 np0005539563 systemd[1]: Started libpod-conmon-767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb.scope.
Nov 29 03:47:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:47:25 np0005539563 podman[384375]: 2025-11-29 08:47:25.859948226 +0000 UTC m=+0.021524548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:25 np0005539563 podman[384375]: 2025-11-29 08:47:25.968186195 +0000 UTC m=+0.129762507 container init 767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:47:25 np0005539563 podman[384375]: 2025-11-29 08:47:25.976161133 +0000 UTC m=+0.137737435 container start 767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:25 np0005539563 podman[384375]: 2025-11-29 08:47:25.979976166 +0000 UTC m=+0.141552498 container attach 767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:47:25 np0005539563 fervent_dewdney[384392]: 167 167
Nov 29 03:47:25 np0005539563 systemd[1]: libpod-767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb.scope: Deactivated successfully.
Nov 29 03:47:25 np0005539563 podman[384375]: 2025-11-29 08:47:25.983341899 +0000 UTC m=+0.144918211 container died 767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dewdney, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:47:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3cfa3a2db0095cda39d032e52516f33f68aec8875bb44e64ed8c9aac019bc181-merged.mount: Deactivated successfully.
Nov 29 03:47:26 np0005539563 podman[384375]: 2025-11-29 08:47:26.02191425 +0000 UTC m=+0.183490552 container remove 767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dewdney, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:47:26 np0005539563 systemd[1]: libpod-conmon-767521b91a306c83ed9e226ed56dafb314e8d20b43684495cdfe402e7ca387bb.scope: Deactivated successfully.
Nov 29 03:47:26 np0005539563 podman[384416]: 2025-11-29 08:47:26.174773896 +0000 UTC m=+0.039406615 container create 182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:47:26 np0005539563 systemd[1]: Started libpod-conmon-182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3.scope.
Nov 29 03:47:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:47:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864c298ff01e62c49f911eadf5862bfe76aa5cef3e87fd48dd086c81979702df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:26 np0005539563 podman[384416]: 2025-11-29 08:47:26.158190014 +0000 UTC m=+0.022822753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864c298ff01e62c49f911eadf5862bfe76aa5cef3e87fd48dd086c81979702df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864c298ff01e62c49f911eadf5862bfe76aa5cef3e87fd48dd086c81979702df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864c298ff01e62c49f911eadf5862bfe76aa5cef3e87fd48dd086c81979702df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:26 np0005539563 podman[384416]: 2025-11-29 08:47:26.270104654 +0000 UTC m=+0.134737463 container init 182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:47:26 np0005539563 podman[384416]: 2025-11-29 08:47:26.28313384 +0000 UTC m=+0.147766559 container start 182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:26 np0005539563 podman[384416]: 2025-11-29 08:47:26.287500438 +0000 UTC m=+0.152133257 container attach 182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:47:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 03:47:27 np0005539563 competent_hertz[384433]: {
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:        "osd_id": 0,
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:        "type": "bluestore"
Nov 29 03:47:27 np0005539563 competent_hertz[384433]:    }
Nov 29 03:47:27 np0005539563 competent_hertz[384433]: }
Nov 29 03:47:27 np0005539563 systemd[1]: libpod-182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3.scope: Deactivated successfully.
Nov 29 03:47:27 np0005539563 podman[384455]: 2025-11-29 08:47:27.243952357 +0000 UTC m=+0.026623347 container died 182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-864c298ff01e62c49f911eadf5862bfe76aa5cef3e87fd48dd086c81979702df-merged.mount: Deactivated successfully.
Nov 29 03:47:27 np0005539563 podman[384455]: 2025-11-29 08:47:27.292848029 +0000 UTC m=+0.075519019 container remove 182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hertz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:27 np0005539563 systemd[1]: libpod-conmon-182a0164a3bdd8c5131628413292439697b796fc441b3f570fd1cc052ac1abb3.scope: Deactivated successfully.
Nov 29 03:47:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:47:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:47:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5c409afa-73fb-41df-936e-adb9557c5bc7 does not exist
Nov 29 03:47:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8da9f59b-176b-4243-be53-00377a180d93 does not exist
Nov 29 03:47:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 50034444-2b20-43ca-84a2-0eee57f3ff2c does not exist
Nov 29 03:47:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:27.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.704 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.704 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:47:27 np0005539563 nova_compute[252253]: 2025-11-29 08:47:27.705 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:47:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:47:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508676362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.196 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.403 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.405 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4158MB free_disk=20.9427490234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.405 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.405 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.507 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.507 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.537 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449081529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:28 np0005539563 nova_compute[252253]: 2025-11-29 08:47:28.995 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:29 np0005539563 nova_compute[252253]: 2025-11-29 08:47:29.003 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:29 np0005539563 nova_compute[252253]: 2025-11-29 08:47:29.024 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:29 np0005539563 nova_compute[252253]: 2025-11-29 08:47:29.026 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:47:29 np0005539563 nova_compute[252253]: 2025-11-29 08:47:29.026 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 215 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 455 KiB/s wr, 16 op/s
Nov 29 03:47:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:47:29Z|00836|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:47:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:29.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:30 np0005539563 nova_compute[252253]: 2025-11-29 08:47:30.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:30 np0005539563 nova_compute[252253]: 2025-11-29 08:47:30.256 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 29 03:47:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 29 03:47:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:33.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:35 np0005539563 nova_compute[252253]: 2025-11-29 08:47:35.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Nov 29 03:47:35 np0005539563 nova_compute[252253]: 2025-11-29 08:47:35.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:35.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:36 np0005539563 nova_compute[252253]: 2025-11-29 08:47:36.026 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Nov 29 03:47:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:37.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:37.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 29 03:47:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:39.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:40 np0005539563 nova_compute[252253]: 2025-11-29 08:47:40.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:40 np0005539563 nova_compute[252253]: 2025-11-29 08:47:40.261 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:40 np0005539563 nova_compute[252253]: 2025-11-29 08:47:40.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 116 op/s
Nov 29 03:47:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:41.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:41.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:42 np0005539563 podman[384621]: 2025-11-29 08:47:42.538595898 +0000 UTC m=+0.074395568 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 29 03:47:42 np0005539563 podman[384622]: 2025-11-29 08:47:42.56799275 +0000 UTC m=+0.104810529 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:42 np0005539563 podman[384623]: 2025-11-29 08:47:42.569280304 +0000 UTC m=+0.106361490 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:43.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:43.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:44 np0005539563 nova_compute[252253]: 2025-11-29 08:47:44.694 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:44 np0005539563 nova_compute[252253]: 2025-11-29 08:47:44.694 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:47:45 np0005539563 nova_compute[252253]: 2025-11-29 08:47:45.138 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Nov 29 03:47:45 np0005539563 nova_compute[252253]: 2025-11-29 08:47:45.263 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:45.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 03:47:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:47.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:47.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 623 KiB/s wr, 84 op/s
Nov 29 03:47:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:49.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:49.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:50 np0005539563 nova_compute[252253]: 2025-11-29 08:47:50.172 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:50 np0005539563 nova_compute[252253]: 2025-11-29 08:47:50.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 29 03:47:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:51.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:51 np0005539563 nova_compute[252253]: 2025-11-29 08:47:51.689 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 03:47:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:53.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:53.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:53 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:47:53 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:47:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 03:47:55 np0005539563 nova_compute[252253]: 2025-11-29 08:47:55.176 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:55 np0005539563 nova_compute[252253]: 2025-11-29 08:47:55.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:55.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 03:47:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:57.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:57.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 29 03:47:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:47:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:59.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:47:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:47:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:00 np0005539563 nova_compute[252253]: 2025-11-29 08:48:00.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:00 np0005539563 nova_compute[252253]: 2025-11-29 08:48:00.268 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Nov 29 03:48:01 np0005539563 nova_compute[252253]: 2025-11-29 08:48:01.424 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:01.423 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:01.426 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:01.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Nov 29 03:48:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 03:48:02 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 222 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 20 op/s
Nov 29 03:48:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Nov 29 03:48:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:03.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:03.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Nov 29 03:48:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:04Z|00837|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 03:48:04 np0005539563 nova_compute[252253]: 2025-11-29 08:48:04.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:04.955 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:04.955 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:04.955 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 319 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 180 op/s
Nov 29 03:48:05 np0005539563 nova_compute[252253]: 2025-11-29 08:48:05.195 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:05 np0005539563 nova_compute[252253]: 2025-11-29 08:48:05.270 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:05.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:05.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 319 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 173 op/s
Nov 29 03:48:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.008000215s ======
Nov 29 03:48:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.008000215s
Nov 29 03:48:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:07.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:08.428 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 9.9 MiB/s wr, 193 op/s
Nov 29 03:48:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:09.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:10 np0005539563 nova_compute[252253]: 2025-11-29 08:48:10.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539563 nova_compute[252253]: 2025-11-29 08:48:10.272 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.1 MiB/s wr, 165 op/s
Nov 29 03:48:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:11.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:11.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Nov 29 03:48:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 03:48:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Nov 29 03:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:48:12
Nov 29 03:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data']
Nov 29 03:48:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.5 MiB/s wr, 162 op/s
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:13.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:13 np0005539563 podman[384803]: 2025-11-29 08:48:13.504796331 +0000 UTC m=+0.056348417 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:48:13 np0005539563 podman[384804]: 2025-11-29 08:48:13.528786645 +0000 UTC m=+0.077164425 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:48:13 np0005539563 podman[384805]: 2025-11-29 08:48:13.559946954 +0000 UTC m=+0.103225384 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:48:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:13.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 181 KiB/s wr, 153 op/s
Nov 29 03:48:15 np0005539563 nova_compute[252253]: 2025-11-29 08:48:15.200 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:15 np0005539563 nova_compute[252253]: 2025-11-29 08:48:15.274 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:15.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:15.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:48:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.709 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.709 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.710 252257 INFO nova.compute.manager [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Unshelving#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.807 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.808 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.813 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.829 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.843 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.844 252257 INFO nova.compute.claims [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:48:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:48:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s#012Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1543 writes, 6590 keys, 1543 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s#012Interval WAL: 1543 writes, 1543 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     15.5      5.74              0.31        46    0.125       0      0       0.0       0.0#012  L6      1/0   10.78 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1     34.6     29.6     15.53              1.42        45    0.345    340K    24K       0.0       0.0#012 Sum      1/0   10.78 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     25.3     25.8     21.27              1.73        91    0.234    340K    24K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     79.2     78.3      0.82              0.19        10    0.082     50K   2580       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     34.6     29.6     15.53              1.42        45    0.345    340K    24K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     15.6      5.73              0.31        45    0.127       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.087, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.54 GB write, 0.09 MB/s write, 0.52 GB read, 0.09 MB/s read, 21.3 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 61.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000433 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3412,59.07 MB,19.4311%) FilterBlock(92,936.98 KB,0.300995%) IndexBlock(92,1.52 MB,0.499168%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 03:48:16 np0005539563 nova_compute[252253]: 2025-11-29 08:48:16.958 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 181 KiB/s wr, 153 op/s
Nov 29 03:48:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4206582262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:17 np0005539563 nova_compute[252253]: 2025-11-29 08:48:17.446 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:17 np0005539563 nova_compute[252253]: 2025-11-29 08:48:17.453 252257 DEBUG nova.compute.provider_tree [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:17 np0005539563 nova_compute[252253]: 2025-11-29 08:48:17.468 252257 DEBUG nova.scheduler.client.report [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:17 np0005539563 nova_compute[252253]: 2025-11-29 08:48:17.503 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:17.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.014 252257 INFO nova.network.neutron [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.629 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.630 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.630 252257 DEBUG nova.network.neutron [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.772 252257 DEBUG nova.compute.manager [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.772 252257 DEBUG nova.compute.manager [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing instance network info cache due to event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:18 np0005539563 nova_compute[252253]: 2025-11-29 08:48:18.773 252257 DEBUG oslo_concurrency.lockutils [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 123 op/s
Nov 29 03:48:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:19.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:19.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.051 252257 DEBUG nova.network.neutron [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.083 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.085 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.086 252257 INFO nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Creating image(s)#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.124 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.128 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.130 252257 DEBUG oslo_concurrency.lockutils [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.130 252257 DEBUG nova.network.neutron [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.179 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.206 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.211 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "3bed5c6abaeaeeb4526e60969704d7ac820eee7b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.212 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "3bed5c6abaeaeeb4526e60969704d7ac820eee7b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.276 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.559 252257 DEBUG nova.virt.libvirt.imagebackend [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/73ec6614-8649-4526-8040-59b3499a752c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/73ec6614-8649-4526-8040-59b3499a752c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.660 252257 DEBUG nova.virt.libvirt.imagebackend [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/73ec6614-8649-4526-8040-59b3499a752c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.660 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] cloning images/73ec6614-8649-4526-8040-59b3499a752c@snap to None/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.796 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "3bed5c6abaeaeeb4526e60969704d7ac820eee7b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:20 np0005539563 nova_compute[252253]: 2025-11-29 08:48:20.945 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'migration_context' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.032 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] flattening vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:48:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 KiB/s wr, 110 op/s
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.393 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Image rbd:vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.394 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.394 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Ensure instance console log exists: /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.394 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.395 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.395 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.397 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Start _get_guest_xml network_info=[{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:47:57Z,direct_url=<?>,disk_format='raw',id=73ec6614-8649-4526-8040-59b3499a752c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-74860485-shelved',owner='c5e836f8387a492c8119be72f1fb9980',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:48:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.400 252257 WARNING nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.403 252257 DEBUG nova.virt.libvirt.host [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.404 252257 DEBUG nova.virt.libvirt.host [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.408 252257 DEBUG nova.virt.libvirt.host [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.408 252257 DEBUG nova.virt.libvirt.host [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.409 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.409 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:47:57Z,direct_url=<?>,disk_format='raw',id=73ec6614-8649-4526-8040-59b3499a752c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-74860485-shelved',owner='c5e836f8387a492c8119be72f1fb9980',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:48:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.409 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.410 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.410 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.410 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.410 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.410 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.411 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.411 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.411 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.411 252257 DEBUG nova.virt.hardware [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.412 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.428 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.492 252257 DEBUG nova.network.neutron [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updated VIF entry in instance network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.493 252257 DEBUG nova.network.neutron [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:21.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.514 252257 DEBUG oslo_concurrency.lockutils [req-063a5b71-18ab-460c-8162-5d2bc30412f7 req-40a8e857-2e7b-49c8-bc20-312253b1e54b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69639738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.879 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:21.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.907 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:21 np0005539563 nova_compute[252253]: 2025-11-29 08:48:21.911 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507809565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.351 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.353 252257 DEBUG nova.virt.libvirt.vif [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:47:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-74860485',display_name='tempest-TestShelveInstance-server-74860485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-74860485',id=195,image_ref='73ec6614-8649-4526-8040-59b3499a752c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-876097080',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-4rgnqe84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member',shelved_at='2025-11-29T08:48:06.464838',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='73ec6614-8649-4526-8040-59b3499a752c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:16Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=6eda2a5e-bf92-4d34-b21e-ca4eaf01728b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.354 252257 DEBUG nova.network.os_vif_util [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.355 252257 DEBUG nova.network.os_vif_util [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.357 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.385 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <uuid>6eda2a5e-bf92-4d34-b21e-ca4eaf01728b</uuid>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <name>instance-000000c3</name>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestShelveInstance-server-74860485</nova:name>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:48:21</nova:creationTime>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:user uuid="5dbbf4fd34004538ad08aa4aa6ab8096">tempest-TestShelveInstance-1715482181-project-member</nova:user>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:project uuid="c5e836f8387a492c8119be72f1fb9980">tempest-TestShelveInstance-1715482181</nova:project>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="73ec6614-8649-4526-8040-59b3499a752c"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <nova:port uuid="3bfd1c43-1b2b-4fa1-8eb4-e366844ea174">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <entry name="serial">6eda2a5e-bf92-4d34-b21e-ca4eaf01728b</entry>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <entry name="uuid">6eda2a5e-bf92-4d34-b21e-ca4eaf01728b</entry>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:03:3b:02"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <target dev="tap3bfd1c43-1b"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/console.log" append="off"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:48:22 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:48:22 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:48:22 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:48:22 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.386 252257 DEBUG nova.compute.manager [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Preparing to wait for external event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.387 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.387 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.388 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.389 252257 DEBUG nova.virt.libvirt.vif [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:47:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-74860485',display_name='tempest-TestShelveInstance-server-74860485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-74860485',id=195,image_ref='73ec6614-8649-4526-8040-59b3499a752c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-876097080',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-4rgnqe84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member',shelved_at='2025-11-29T08:48:06.464838',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='73ec6614-8649-4526-8040-59b3499a752c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:16Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=6eda2a5e-bf92-4d34-b21e-ca4eaf01728b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.389 252257 DEBUG nova.network.os_vif_util [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.390 252257 DEBUG nova.network.os_vif_util [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.391 252257 DEBUG os_vif [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.392 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.393 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.394 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.400 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.401 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bfd1c43-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.402 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3bfd1c43-1b, col_values=(('external_ids', {'iface-id': '3bfd1c43-1b2b-4fa1-8eb4-e366844ea174', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:3b:02', 'vm-uuid': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.404 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:22 np0005539563 NetworkManager[48981]: <info>  [1764406102.4052] manager: (tap3bfd1c43-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.407 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.417 252257 INFO os_vif [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b')#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.482 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.482 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.482 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No VIF found with MAC fa:16:3e:03:3b:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.483 252257 INFO nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Using config drive#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.507 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.527 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.557 252257 DEBUG nova.objects.instance [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'keypairs' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.695 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.696 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.696 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:48:22 np0005539563 nova_compute[252253]: 2025-11-29 08:48:22.696 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.081 252257 INFO nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Creating config drive at /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.091 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa1m8xg4h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.3 MiB/s wr, 139 op/s
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.236 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa1m8xg4h" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.269 252257 DEBUG nova.storage.rbd_utils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.272 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.443 252257 DEBUG oslo_concurrency.processutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.444 252257 INFO nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deleting local config drive /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b/disk.config because it was imported into RBD.#033[00m
Nov 29 03:48:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:23.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:23 np0005539563 kernel: tap3bfd1c43-1b: entered promiscuous mode
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.516 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:23Z|00838|binding|INFO|Claiming lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for this chassis.
Nov 29 03:48:23 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:23Z|00839|binding|INFO|3bfd1c43-1b2b-4fa1-8eb4-e366844ea174: Claiming fa:16:3e:03:3b:02 10.100.0.10
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.5200] manager: (tap3bfd1c43-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.522 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.527 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.5340] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.5344] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.537 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:3b:02 10.100.0.10'], port_security=['fa:16:3e:03:3b:02 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '7', 'neutron:security_group_ids': '36d56553-7b52-4135-ab01-9fd93eb2713f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.538 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 bound to our chassis#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.539 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0636028a-96d5-4ad7-aa6e-9129edd44385#033[00m
Nov 29 03:48:23 np0005539563 systemd-machined[213024]: New machine qemu-96-instance-000000c3.
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.550 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b07c4ed6-9762-4503-920b-b784e969b4a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.551 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0636028a-91 in ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.553 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0636028a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.553 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2aecce14-ec64-4495-81d2-959529d7bb60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.553 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[17d38609-862f-4c15-a315-ad1547bd5d91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.563 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec4132d-2506-420f-bef1-c2fdd8fbea52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 systemd[1]: Started Virtual Machine qemu-96-instance-000000c3.
Nov 29 03:48:23 np0005539563 systemd-udevd[385245]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.6068] device (tap3bfd1c43-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.6077] device (tap3bfd1c43-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.604 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[28a36fb2-dc7b-409e-83cc-53ca29957a2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.635 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d3df03c3-8573-4ba4-8afe-888dd3e1d1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.641 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[32c4a86e-4c93-4c66-8912-b18d54ab66a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.6422] manager: (tap0636028a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.669 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f473d298-221b-4aa9-8c74-72744fcbec5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.672 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[577d38d3-2406-4ab0-9fdc-e785a0fd3596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.6971] device (tap0636028a-90): carrier: link connected
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.702 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[68247416-7ee9-4b1e-84b6-405dd4cac9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.714 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.719 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[48752754-5a96-4a6d-955e-a1c4cddfbfb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 907147, 'reachable_time': 37112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385275, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.731 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.733 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2bc79e-3d75-4f31-b698-f90c7388986d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:1119'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 907147, 'tstamp': 907147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385276, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:23Z|00840|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 ovn-installed in OVS
Nov 29 03:48:23 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:23Z|00841|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 up in Southbound
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.744 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.746 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[00ac8128-95b8-4f7d-a473-f7d5029afe21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 907147, 'reachable_time': 37112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385277, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.777 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7062ffc7-3e6a-4946-8f3c-a8ddac5d537b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.835 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[20123116-9110-4149-9087-f69ca87b9618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.836 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.837 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.837 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0636028a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:23 np0005539563 NetworkManager[48981]: <info>  [1764406103.8403] manager: (tap0636028a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.840 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 kernel: tap0636028a-90: entered promiscuous mode
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.841 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.843 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0636028a-90, col_values=(('external_ids', {'iface-id': '58043efe-c991-4914-9f0a-2bba8af4c408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.844 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:23Z|00842|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:48:23 np0005539563 nova_compute[252253]: 2025-11-29 08:48:23.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.858 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.859 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[914c0a7d-a038-4783-9eff-6b6bb5980f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.860 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:48:23 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:23.862 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'env', 'PROCESS_TAG=haproxy-0636028a-96d5-4ad7-aa6e-9129edd44385', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0636028a-96d5-4ad7-aa6e-9129edd44385.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:48:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:23.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016730425856132515 of space, bias 1.0, pg target 0.5019127756839754 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004071827598641355 of space, bias 1.0, pg target 1.2215482795924066 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.195 252257 DEBUG nova.compute.manager [req-ce9368d0-765e-4581-9de0-0c91bca34eac req-023c75a1-69cf-47fa-9674-0d0f558c6995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.196 252257 DEBUG oslo_concurrency.lockutils [req-ce9368d0-765e-4581-9de0-0c91bca34eac req-023c75a1-69cf-47fa-9674-0d0f558c6995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.196 252257 DEBUG oslo_concurrency.lockutils [req-ce9368d0-765e-4581-9de0-0c91bca34eac req-023c75a1-69cf-47fa-9674-0d0f558c6995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.196 252257 DEBUG oslo_concurrency.lockutils [req-ce9368d0-765e-4581-9de0-0c91bca34eac req-023c75a1-69cf-47fa-9674-0d0f558c6995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.196 252257 DEBUG nova.compute.manager [req-ce9368d0-765e-4581-9de0-0c91bca34eac req-023c75a1-69cf-47fa-9674-0d0f558c6995 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Processing event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:48:24 np0005539563 podman[385309]: 2025-11-29 08:48:24.281870067 +0000 UTC m=+0.057896398 container create d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:48:24 np0005539563 systemd[1]: Started libpod-conmon-d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef.scope.
Nov 29 03:48:24 np0005539563 podman[385309]: 2025-11-29 08:48:24.245875916 +0000 UTC m=+0.021902267 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:48:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9eda98899e9f0ad3a3d3175c042e9eb5879be1071e9cbcd5e6bf78a5549ef0a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:24 np0005539563 podman[385309]: 2025-11-29 08:48:24.370821972 +0000 UTC m=+0.146848313 container init d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:48:24 np0005539563 podman[385309]: 2025-11-29 08:48:24.378050989 +0000 UTC m=+0.154077320 container start d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:48:24 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [NOTICE]   (385353) : New worker (385364) forked
Nov 29 03:48:24 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [NOTICE]   (385353) : Loading success.
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.520 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406104.5205138, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.521 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.523 252257 DEBUG nova.compute.manager [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.526 252257 DEBUG nova.virt.libvirt.driver [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.530 252257 INFO nova.virt.libvirt.driver [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance spawned successfully.#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.540 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.543 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.563 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.564 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406104.5206769, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.564 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.580 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.583 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406104.5253267, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.583 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.605 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.608 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:24 np0005539563 nova_compute[252253]: 2025-11-29 08:48:24.625 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.009 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.026 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.026 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.027 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.027 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.028 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:48:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 223 op/s
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.278 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:25.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:25 np0005539563 nova_compute[252253]: 2025-11-29 08:48:25.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Nov 29 03:48:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 03:48:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Nov 29 03:48:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:25.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.130 252257 DEBUG nova.compute.manager [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.200 252257 DEBUG oslo_concurrency.lockutils [None req-a25cca3e-2c07-4132-aadd-bfdf3a397541 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.275 252257 DEBUG nova.compute.manager [req-0e64a6d9-f224-4f30-bb34-7cf475646766 req-0ec35aee-64a4-4164-846b-fc53b9feaa96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.276 252257 DEBUG oslo_concurrency.lockutils [req-0e64a6d9-f224-4f30-bb34-7cf475646766 req-0ec35aee-64a4-4164-846b-fc53b9feaa96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.276 252257 DEBUG oslo_concurrency.lockutils [req-0e64a6d9-f224-4f30-bb34-7cf475646766 req-0ec35aee-64a4-4164-846b-fc53b9feaa96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.276 252257 DEBUG oslo_concurrency.lockutils [req-0e64a6d9-f224-4f30-bb34-7cf475646766 req-0ec35aee-64a4-4164-846b-fc53b9feaa96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.277 252257 DEBUG nova.compute.manager [req-0e64a6d9-f224-4f30-bb34-7cf475646766 req-0ec35aee-64a4-4164-846b-fc53b9feaa96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:26 np0005539563 nova_compute[252253]: 2025-11-29 08:48:26.277 252257 WARNING nova.compute.manager [req-0e64a6d9-f224-4f30-bb34-7cf475646766 req-0ec35aee-64a4-4164-846b-fc53b9feaa96 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received unexpected event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:48:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 169 op/s
Nov 29 03:48:27 np0005539563 nova_compute[252253]: 2025-11-29 08:48:27.405 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:27.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:27 np0005539563 nova_compute[252253]: 2025-11-29 08:48:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:48:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:27.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:28 np0005539563 nova_compute[252253]: 2025-11-29 08:48:28.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:48:28 np0005539563 nova_compute[252253]: 2025-11-29 08:48:28.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:48:28 np0005539563 nova_compute[252253]: 2025-11-29 08:48:28.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:48:28 np0005539563 nova_compute[252253]: 2025-11-29 08:48:28.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:28 np0005539563 nova_compute[252253]: 2025-11-29 08:48:28.700 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:48:28 np0005539563 nova_compute[252253]: 2025-11-29 08:48:28.701 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:48:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:48:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:48:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:48:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:48:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d9e30582-f480-4891-97cf-f31af4aaa57d does not exist
Nov 29 03:48:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 059766b4-f297-48f0-b618-520722a0bb36 does not exist
Nov 29 03:48:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f7d6e64e-00f4-40d5-99ec-e513bcef91c4 does not exist
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121584574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.125 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:48:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 336 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.2 MiB/s wr, 212 op/s
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.217 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000c3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.217 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000c3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.385 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.386 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3985MB free_disk=20.897357940673828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.386 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.386 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.469 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.469 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.470 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.486 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.505 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.505 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:48:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.519 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:48:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:29.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.548 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:48:29 np0005539563 nova_compute[252253]: 2025-11-29 08:48:29.585 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.645148453 +0000 UTC m=+0.102287710 container create fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.578460495 +0000 UTC m=+0.035599772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:29 np0005539563 systemd[1]: Started libpod-conmon-fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4.scope.
Nov 29 03:48:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.881844013 +0000 UTC m=+0.338983300 container init fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.89051775 +0000 UTC m=+0.347657007 container start fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:48:29 np0005539563 adoring_chaplygin[385696]: 167 167
Nov 29 03:48:29 np0005539563 systemd[1]: libpod-fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4.scope: Deactivated successfully.
Nov 29 03:48:29 np0005539563 conmon[385696]: conmon fea4657b35adf647c65d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4.scope/container/memory.events
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.914164465 +0000 UTC m=+0.371303722 container attach fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.915438199 +0000 UTC m=+0.372577446 container died fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:48:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-28b1abf067892d5a827bafa34130a7a9fdcfc0d1b3df6d11ebdea75ddd7abd36-merged.mount: Deactivated successfully.
Nov 29 03:48:29 np0005539563 podman[385678]: 2025-11-29 08:48:29.958294688 +0000 UTC m=+0.415433945 container remove fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chaplygin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:48:29 np0005539563 systemd[1]: libpod-conmon-fea4657b35adf647c65d95b1a9924712f1d1a89f0030713c6f1c82054a861cc4.scope: Deactivated successfully.
Nov 29 03:48:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577900064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:30 np0005539563 nova_compute[252253]: 2025-11-29 08:48:30.018 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:48:30 np0005539563 nova_compute[252253]: 2025-11-29 08:48:30.026 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:48:30 np0005539563 nova_compute[252253]: 2025-11-29 08:48:30.041 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:48:30 np0005539563 nova_compute[252253]: 2025-11-29 08:48:30.067 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:48:30 np0005539563 nova_compute[252253]: 2025-11-29 08:48:30.068 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:30 np0005539563 podman[385741]: 2025-11-29 08:48:30.169157684 +0000 UTC m=+0.048549525 container create 33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:48:30 np0005539563 systemd[1]: Started libpod-conmon-33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac.scope.
Nov 29 03:48:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cac772f6ec95229524a17a2324874354b1ca0e381d300ad831f5b9f3813fe24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cac772f6ec95229524a17a2324874354b1ca0e381d300ad831f5b9f3813fe24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cac772f6ec95229524a17a2324874354b1ca0e381d300ad831f5b9f3813fe24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cac772f6ec95229524a17a2324874354b1ca0e381d300ad831f5b9f3813fe24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cac772f6ec95229524a17a2324874354b1ca0e381d300ad831f5b9f3813fe24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:30 np0005539563 podman[385741]: 2025-11-29 08:48:30.146479386 +0000 UTC m=+0.025871317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:30 np0005539563 podman[385741]: 2025-11-29 08:48:30.261480881 +0000 UTC m=+0.140872752 container init 33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:48:30 np0005539563 podman[385741]: 2025-11-29 08:48:30.269569941 +0000 UTC m=+0.148961782 container start 33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:48:30 np0005539563 podman[385741]: 2025-11-29 08:48:30.272523732 +0000 UTC m=+0.151915573 container attach 33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:48:30 np0005539563 nova_compute[252253]: 2025-11-29 08:48:30.280 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:31 np0005539563 determined_merkle[385757]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:48:31 np0005539563 determined_merkle[385757]: --> relative data size: 1.0
Nov 29 03:48:31 np0005539563 determined_merkle[385757]: --> All data devices are unavailable
Nov 29 03:48:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 280 op/s
Nov 29 03:48:31 np0005539563 systemd[1]: libpod-33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac.scope: Deactivated successfully.
Nov 29 03:48:31 np0005539563 podman[385741]: 2025-11-29 08:48:31.176985723 +0000 UTC m=+1.056377564 container died 33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0cac772f6ec95229524a17a2324874354b1ca0e381d300ad831f5b9f3813fe24-merged.mount: Deactivated successfully.
Nov 29 03:48:31 np0005539563 podman[385741]: 2025-11-29 08:48:31.231823677 +0000 UTC m=+1.111215518 container remove 33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:48:31 np0005539563 systemd[1]: libpod-conmon-33a28debe723b0e61b806ed8df74fbb3f70176891e1ccf41a5c97819feff11ac.scope: Deactivated successfully.
Nov 29 03:48:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:31.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.797696909 +0000 UTC m=+0.035053566 container create 2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_turing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:48:31 np0005539563 systemd[1]: Started libpod-conmon-2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c.scope.
Nov 29 03:48:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.783118412 +0000 UTC m=+0.020475099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.89529567 +0000 UTC m=+0.132652357 container init 2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_turing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.90336558 +0000 UTC m=+0.140722237 container start 2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.907091461 +0000 UTC m=+0.144448138 container attach 2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:31 np0005539563 wizardly_turing[385943]: 167 167
Nov 29 03:48:31 np0005539563 systemd[1]: libpod-2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c.scope: Deactivated successfully.
Nov 29 03:48:31 np0005539563 conmon[385943]: conmon 2700d34cd949b1d615d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c.scope/container/memory.events
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.909897287 +0000 UTC m=+0.147253944 container died 2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ae51e0be88a1c4d75387fc687ce120a96b7b75d04bb2144993d23db007e77416-merged.mount: Deactivated successfully.
Nov 29 03:48:31 np0005539563 podman[385927]: 2025-11-29 08:48:31.943456202 +0000 UTC m=+0.180812859 container remove 2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:31 np0005539563 systemd[1]: libpod-conmon-2700d34cd949b1d615d709696ca8715c3b613d02678a613671b82e59b8aac24c.scope: Deactivated successfully.
Nov 29 03:48:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:32 np0005539563 podman[385966]: 2025-11-29 08:48:32.103159815 +0000 UTC m=+0.037488713 container create 6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:48:32 np0005539563 systemd[1]: Started libpod-conmon-6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea.scope.
Nov 29 03:48:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2359dfbce60b9462a8190eb8edb62ea8c93d29eb129188790604a62167cf24e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2359dfbce60b9462a8190eb8edb62ea8c93d29eb129188790604a62167cf24e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2359dfbce60b9462a8190eb8edb62ea8c93d29eb129188790604a62167cf24e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2359dfbce60b9462a8190eb8edb62ea8c93d29eb129188790604a62167cf24e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:32 np0005539563 podman[385966]: 2025-11-29 08:48:32.086291735 +0000 UTC m=+0.020620663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:32 np0005539563 podman[385966]: 2025-11-29 08:48:32.18664381 +0000 UTC m=+0.120972728 container init 6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:48:32 np0005539563 podman[385966]: 2025-11-29 08:48:32.192620073 +0000 UTC m=+0.126948971 container start 6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:48:32 np0005539563 podman[385966]: 2025-11-29 08:48:32.196021695 +0000 UTC m=+0.130350583 container attach 6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:48:32 np0005539563 nova_compute[252253]: 2025-11-29 08:48:32.409 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:32 np0005539563 blissful_germain[385983]: {
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:    "0": [
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:        {
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "devices": [
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "/dev/loop3"
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            ],
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "lv_name": "ceph_lv0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "lv_size": "7511998464",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "name": "ceph_lv0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "tags": {
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.cluster_name": "ceph",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.crush_device_class": "",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.encrypted": "0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.osd_id": "0",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.type": "block",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:                "ceph.vdo": "0"
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            },
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "type": "block",
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:            "vg_name": "ceph_vg0"
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:        }
Nov 29 03:48:32 np0005539563 blissful_germain[385983]:    ]
Nov 29 03:48:32 np0005539563 blissful_germain[385983]: }
Nov 29 03:48:33 np0005539563 systemd[1]: libpod-6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea.scope: Deactivated successfully.
Nov 29 03:48:33 np0005539563 podman[386042]: 2025-11-29 08:48:33.060803325 +0000 UTC m=+0.035144319 container died 6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:48:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2359dfbce60b9462a8190eb8edb62ea8c93d29eb129188790604a62167cf24e2-merged.mount: Deactivated successfully.
Nov 29 03:48:33 np0005539563 podman[386042]: 2025-11-29 08:48:33.12592122 +0000 UTC m=+0.100262184 container remove 6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:48:33 np0005539563 systemd[1]: libpod-conmon-6ec4c845b11e2d385c79bb0df2eccbf19e0a4b686acd08d14aa26d04cc3479ea.scope: Deactivated successfully.
Nov 29 03:48:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 236 op/s
Nov 29 03:48:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.769883131 +0000 UTC m=+0.037816491 container create a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:48:33 np0005539563 systemd[1]: Started libpod-conmon-a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908.scope.
Nov 29 03:48:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.84653507 +0000 UTC m=+0.114468470 container init a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.752073686 +0000 UTC m=+0.020007086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.85462523 +0000 UTC m=+0.122558590 container start a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:48:33 np0005539563 elastic_kare[386218]: 167 167
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.86013446 +0000 UTC m=+0.128067860 container attach a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:33 np0005539563 systemd[1]: libpod-a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908.scope: Deactivated successfully.
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.860610054 +0000 UTC m=+0.128543414 container died a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:48:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4d5cd5829c445d64bba74b27469d0f9319164cd9e250ed4971337632d0b77935-merged.mount: Deactivated successfully.
Nov 29 03:48:33 np0005539563 podman[386200]: 2025-11-29 08:48:33.896407799 +0000 UTC m=+0.164341159 container remove a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:48:33 np0005539563 systemd[1]: libpod-conmon-a0b25abe2cef016b7fa05d82f6bb40835aa2824973877d12cb4514551ab27908.scope: Deactivated successfully.
Nov 29 03:48:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:34 np0005539563 podman[386240]: 2025-11-29 08:48:34.065623762 +0000 UTC m=+0.040772283 container create baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chaplygin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:48:34 np0005539563 systemd[1]: Started libpod-conmon-baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0.scope.
Nov 29 03:48:34 np0005539563 podman[386240]: 2025-11-29 08:48:34.046760147 +0000 UTC m=+0.021908678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:48:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d708afc8f6ee46b95e11c03cf44fa9e57eb994ee07aff2e0298de401e7d3ba90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d708afc8f6ee46b95e11c03cf44fa9e57eb994ee07aff2e0298de401e7d3ba90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d708afc8f6ee46b95e11c03cf44fa9e57eb994ee07aff2e0298de401e7d3ba90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d708afc8f6ee46b95e11c03cf44fa9e57eb994ee07aff2e0298de401e7d3ba90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:34 np0005539563 podman[386240]: 2025-11-29 08:48:34.175400833 +0000 UTC m=+0.150549354 container init baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:48:34 np0005539563 podman[386240]: 2025-11-29 08:48:34.182700942 +0000 UTC m=+0.157849453 container start baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:34 np0005539563 podman[386240]: 2025-11-29 08:48:34.186677141 +0000 UTC m=+0.161825732 container attach baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]: {
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:        "osd_id": 0,
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:        "type": "bluestore"
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]:    }
Nov 29 03:48:34 np0005539563 trusting_chaplygin[386256]: }
Nov 29 03:48:35 np0005539563 systemd[1]: libpod-baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0.scope: Deactivated successfully.
Nov 29 03:48:35 np0005539563 podman[386240]: 2025-11-29 08:48:35.004096569 +0000 UTC m=+0.979245070 container died baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d708afc8f6ee46b95e11c03cf44fa9e57eb994ee07aff2e0298de401e7d3ba90-merged.mount: Deactivated successfully.
Nov 29 03:48:35 np0005539563 podman[386240]: 2025-11-29 08:48:35.061883394 +0000 UTC m=+1.037031915 container remove baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:48:35 np0005539563 systemd[1]: libpod-conmon-baebbeaf6b06fe99d123bbde01424c79610352e4f1ec5da901286dbe263090c0.scope: Deactivated successfully.
Nov 29 03:48:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:48:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:48:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 867751ca-da65-48a5-aed6-5ce7a9e804f0 does not exist
Nov 29 03:48:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c5dd5919-68bf-47e7-8ccb-482e591e7fc9 does not exist
Nov 29 03:48:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 36cfe04f-3acc-4871-81f5-4bdc0fb4fbca does not exist
Nov 29 03:48:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 112 op/s
Nov 29 03:48:35 np0005539563 nova_compute[252253]: 2025-11-29 08:48:35.283 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:35.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:35.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:36 np0005539563 nova_compute[252253]: 2025-11-29 08:48:36.068 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:48:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Nov 29 03:48:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 03:48:37 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Nov 29 03:48:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 112 op/s
Nov 29 03:48:37 np0005539563 nova_compute[252253]: 2025-11-29 08:48:37.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:37.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:38Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:3b:02 10.100.0.10
Nov 29 03:48:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 20 KiB/s wr, 78 op/s
Nov 29 03:48:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:39.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:40 np0005539563 nova_compute[252253]: 2025-11-29 08:48:40.285 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 23 KiB/s wr, 52 op/s
Nov 29 03:48:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:41.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:41.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:42 np0005539563 nova_compute[252253]: 2025-11-29 08:48:42.416 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 22 KiB/s wr, 52 op/s
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:43.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:44 np0005539563 podman[386346]: 2025-11-29 08:48:44.520623151 +0000 UTC m=+0.072033655 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:48:44 np0005539563 podman[386347]: 2025-11-29 08:48:44.531660701 +0000 UTC m=+0.081653636 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 03:48:44 np0005539563 podman[386348]: 2025-11-29 08:48:44.595625235 +0000 UTC m=+0.144140860 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 03:48:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 822 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.806 252257 DEBUG nova.compute.manager [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.807 252257 DEBUG nova.compute.manager [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing instance network info cache due to event network-changed-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.808 252257 DEBUG oslo_concurrency.lockutils [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.809 252257 DEBUG oslo_concurrency.lockutils [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.809 252257 DEBUG nova.network.neutron [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Refreshing network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.898 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.899 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.899 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.899 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.900 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.901 252257 INFO nova.compute.manager [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Terminating instance#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.902 252257 DEBUG nova.compute.manager [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:48:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:45 np0005539563 kernel: tap3bfd1c43-1b (unregistering): left promiscuous mode
Nov 29 03:48:45 np0005539563 NetworkManager[48981]: <info>  [1764406125.9526] device (tap3bfd1c43-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:48:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:45Z|00843|binding|INFO|Releasing lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 from this chassis (sb_readonly=0)
Nov 29 03:48:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:45Z|00844|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 down in Southbound
Nov 29 03:48:45 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:45Z|00845|binding|INFO|Removing iface tap3bfd1c43-1b ovn-installed in OVS
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.974 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:45.984 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:3b:02 10.100.0.10'], port_security=['fa:16:3e:03:3b:02 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '9', 'neutron:security_group_ids': '36d56553-7b52-4135-ab01-9fd93eb2713f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:45.987 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 unbound from our chassis#033[00m
Nov 29 03:48:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:45.989 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0636028a-96d5-4ad7-aa6e-9129edd44385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:48:45 np0005539563 nova_compute[252253]: 2025-11-29 08:48:45.990 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:45.991 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d4975b1c-487d-4d31-90e1-e62e7b88eef8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:45.992 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace which is not needed anymore#033[00m
Nov 29 03:48:46 np0005539563 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Nov 29 03:48:46 np0005539563 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000c3.scope: Consumed 14.915s CPU time.
Nov 29 03:48:46 np0005539563 systemd-machined[213024]: Machine qemu-96-instance-000000c3 terminated.
Nov 29 03:48:46 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [NOTICE]   (385353) : haproxy version is 2.8.14-c23fe91
Nov 29 03:48:46 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [NOTICE]   (385353) : path to executable is /usr/sbin/haproxy
Nov 29 03:48:46 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [WARNING]  (385353) : Exiting Master process...
Nov 29 03:48:46 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [WARNING]  (385353) : Exiting Master process...
Nov 29 03:48:46 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [ALERT]    (385353) : Current worker (385364) exited with code 143 (Terminated)
Nov 29 03:48:46 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[385325]: [WARNING]  (385353) : All workers exited. Exiting... (0)
Nov 29 03:48:46 np0005539563 systemd[1]: libpod-d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef.scope: Deactivated successfully.
Nov 29 03:48:46 np0005539563 kernel: tap3bfd1c43-1b: entered promiscuous mode
Nov 29 03:48:46 np0005539563 podman[386432]: 2025-11-29 08:48:46.124276317 +0000 UTC m=+0.045724777 container died d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:48:46 np0005539563 NetworkManager[48981]: <info>  [1764406126.1257] manager: (tap3bfd1c43-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 kernel: tap3bfd1c43-1b (unregistering): left promiscuous mode
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00846|binding|INFO|Claiming lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for this chassis.
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00847|binding|INFO|3bfd1c43-1b2b-4fa1-8eb4-e366844ea174: Claiming fa:16:3e:03:3b:02 10.100.0.10
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.145 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:3b:02 10.100.0.10'], port_security=['fa:16:3e:03:3b:02 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '9', 'neutron:security_group_ids': '36d56553-7b52-4135-ab01-9fd93eb2713f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00848|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 ovn-installed in OVS
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00849|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 up in Southbound
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00850|binding|INFO|Releasing lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 from this chassis (sb_readonly=1)
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.151 252257 INFO nova.virt.libvirt.driver [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance destroyed successfully.#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.151 252257 DEBUG nova.objects.instance [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'resources' on Instance uuid 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00851|if_status|INFO|Dropped 2 log messages in last 1391 seconds (most recently, 1391 seconds ago) due to excessive rate
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00852|if_status|INFO|Not setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 down as sb is readonly
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00853|binding|INFO|Removing iface tap3bfd1c43-1b ovn-installed in OVS
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.156 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef-userdata-shm.mount: Deactivated successfully.
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00854|binding|INFO|Releasing lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 from this chassis (sb_readonly=0)
Nov 29 03:48:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:48:46Z|00855|binding|INFO|Setting lport 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 down in Southbound
Nov 29 03:48:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c9eda98899e9f0ad3a3d3175c042e9eb5879be1071e9cbcd5e6bf78a5549ef0a-merged.mount: Deactivated successfully.
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 podman[386432]: 2025-11-29 08:48:46.175169303 +0000 UTC m=+0.096617763 container cleanup d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.178 252257 DEBUG nova.virt.libvirt.vif [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:47:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-74860485',display_name='tempest-TestShelveInstance-server-74860485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-74860485',id=195,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKT08Tnkn1UqV0bF/J8wpnzMCF6Zkhpmk/usL6YnI+Le5YAvtauWosLF4Kvj259R/59WcHeLG4Cqd2MmjrgXGd9Nw0BxGgZcDldkgLq1Xl0jjL8yBMwXntpEhSzBHi8sNQ==',key_name='tempest-TestShelveInstance-876097080',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:48:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-4rgnqe84',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:26Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=6eda2a5e-bf92-4d34-b21e-ca4eaf01728b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.179 252257 DEBUG nova.network.os_vif_util [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.180 252257 DEBUG nova.network.os_vif_util [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.180 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:3b:02 10.100.0.10'], port_security=['fa:16:3e:03:3b:02 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6eda2a5e-bf92-4d34-b21e-ca4eaf01728b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '9', 'neutron:security_group_ids': '36d56553-7b52-4135-ab01-9fd93eb2713f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.181 252257 DEBUG os_vif [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:48:46 np0005539563 systemd[1]: libpod-conmon-d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef.scope: Deactivated successfully.
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.183 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.183 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bfd1c43-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.186 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.189 252257 INFO os_vif [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:3b:02,bridge_name='br-int',has_traffic_filtering=True,id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bfd1c43-1b')#033[00m
Nov 29 03:48:46 np0005539563 podman[386463]: 2025-11-29 08:48:46.233433982 +0000 UTC m=+0.039402125 container remove d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.239 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d5036421-e0ce-44e9-a613-e38ac8553a45]: (4, ('Sat Nov 29 08:48:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef)\nd243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef\nSat Nov 29 08:48:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (d243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef)\nd243c2def4cbad5a670236c103b062d3c8e9d415189f382f3d4985cba3727cef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.241 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[46ce8928-7fb0-4355-b7fa-d0180d452810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.242 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.243 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 kernel: tap0636028a-90: left promiscuous mode
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.256 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.258 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[727c94f6-716f-4b0c-aa07-2dd74c79ea61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.280 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0f09ab5c-f419-4c67-a3b0-827d8623ccf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.281 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9704f9f2-ecfe-473a-8f66-3783f035d532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.296 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d06bba21-ba83-43b7-babc-302dd47fae60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 907140, 'reachable_time': 26323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386496, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.298 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:48:46 np0005539563 systemd[1]: run-netns-ovnmeta\x2d0636028a\x2d96d5\x2d4ad7\x2daa6e\x2d9129edd44385.mount: Deactivated successfully.
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.298 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[20ad242c-6d12-44a9-8883-50f6d3c4b351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.300 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 unbound from our chassis#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.301 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0636028a-96d5-4ad7-aa6e-9129edd44385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.302 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0610787f-57da-4911-9455-cfb71b77d464]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.302 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 unbound from our chassis#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.303 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0636028a-96d5-4ad7-aa6e-9129edd44385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:48:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:46.304 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35b52a38-a06b-4e4b-b0bd-52cf0789b226]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.671 252257 INFO nova.virt.libvirt.driver [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deleting instance files /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_del#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.672 252257 INFO nova.virt.libvirt.driver [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deletion of /var/lib/nova/instances/6eda2a5e-bf92-4d34-b21e-ca4eaf01728b_del complete#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.758 252257 INFO nova.compute.manager [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.758 252257 DEBUG oslo.service.loopingcall [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.759 252257 DEBUG nova.compute.manager [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:48:46 np0005539563 nova_compute[252253]: 2025-11-29 08:48:46.759 252257 DEBUG nova.network.neutron [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:48:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 812 KiB/s rd, 1.2 MiB/s wr, 72 op/s
Nov 29 03:48:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:47.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.922 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-unplugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.922 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.922 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.923 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.923 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-unplugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.923 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-unplugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.923 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.923 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.924 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.924 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.924 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.924 252257 WARNING nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received unexpected event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.924 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.925 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.925 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.925 252257 DEBUG oslo_concurrency.lockutils [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.925 252257 DEBUG nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] No waiting events found dispatching network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.925 252257 WARNING nova.compute.manager [req-05396eb9-607a-45a4-8a68-cc8153121e7f req-bfca9968-6dfd-42a9-8d70-71f6663e2c63 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received unexpected event network-vif-plugged-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:48:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:47.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:47 np0005539563 nova_compute[252253]: 2025-11-29 08:48:47.975 252257 DEBUG nova.network.neutron [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.001 252257 INFO nova.compute.manager [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Took 1.24 seconds to deallocate network for instance.#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.044 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.045 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.115 252257 DEBUG oslo_concurrency.processutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.444 252257 DEBUG nova.network.neutron [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updated VIF entry in instance network info cache for port 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.445 252257 DEBUG nova.network.neutron [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Updating instance_info_cache with network_info: [{"id": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "address": "fa:16:3e:03:3b:02", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bfd1c43-1b", "ovs_interfaceid": "3bfd1c43-1b2b-4fa1-8eb4-e366844ea174", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.460 252257 DEBUG oslo_concurrency.lockutils [req-40792676-a95c-425a-9135-1a4a28822d6d req-abe4aefe-30ac-430e-9dbd-a90a01880a99 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255916700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.540 252257 DEBUG oslo_concurrency.processutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.546 252257 DEBUG nova.compute.provider_tree [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.564 252257 DEBUG nova.scheduler.client.report [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.592 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.623 252257 INFO nova.scheduler.client.report [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Deleted allocations for instance 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b#033[00m
Nov 29 03:48:48 np0005539563 nova_compute[252253]: 2025-11-29 08:48:48.676 252257 DEBUG oslo_concurrency.lockutils [None req-edaa3c3b-5c77-4a61-b434-a941d650728f 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "6eda2a5e-bf92-4d34-b21e-ca4eaf01728b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 752 KiB/s rd, 1.6 MiB/s wr, 65 op/s
Nov 29 03:48:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:49.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:49.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:50 np0005539563 nova_compute[252253]: 2025-11-29 08:48:50.045 252257 DEBUG nova.compute.manager [req-7434e0eb-461c-4166-ad01-1ab220df17d9 req-d95b46d1-11f4-4c9d-8c77-0a6d4f4746ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Received event network-vif-deleted-3bfd1c43-1b2b-4fa1-8eb4-e366844ea174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:50 np0005539563 nova_compute[252253]: 2025-11-29 08:48:50.045 252257 INFO nova.compute.manager [req-7434e0eb-461c-4166-ad01-1ab220df17d9 req-d95b46d1-11f4-4c9d-8c77-0a6d4f4746ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Neutron deleted interface 3bfd1c43-1b2b-4fa1-8eb4-e366844ea174; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:48:50 np0005539563 nova_compute[252253]: 2025-11-29 08:48:50.046 252257 DEBUG nova.network.neutron [req-7434e0eb-461c-4166-ad01-1ab220df17d9 req-d95b46d1-11f4-4c9d-8c77-0a6d4f4746ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 03:48:50 np0005539563 nova_compute[252253]: 2025-11-29 08:48:50.048 252257 DEBUG nova.compute.manager [req-7434e0eb-461c-4166-ad01-1ab220df17d9 req-d95b46d1-11f4-4c9d-8c77-0a6d4f4746ec 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Detach interface failed, port_id=3bfd1c43-1b2b-4fa1-8eb4-e366844ea174, reason: Instance 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:48:50 np0005539563 nova_compute[252253]: 2025-11-29 08:48:50.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 722 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 29 03:48:51 np0005539563 nova_compute[252253]: 2025-11-29 08:48:51.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:51.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:52 np0005539563 nova_compute[252253]: 2025-11-29 08:48:52.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:52.184 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:52.185 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:48:52.186 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:52 np0005539563 nova_compute[252253]: 2025-11-29 08:48:52.548 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 29 03:48:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:53.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 03:48:55 np0005539563 nova_compute[252253]: 2025-11-29 08:48:55.294 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:55.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:56 np0005539563 nova_compute[252253]: 2025-11-29 08:48:56.188 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 823 KiB/s wr, 89 op/s
Nov 29 03:48:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 824 KiB/s wr, 118 op/s
Nov 29 03:48:59 np0005539563 nova_compute[252253]: 2025-11-29 08:48:59.511 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:59.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:48:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:48:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:48:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:59.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:00 np0005539563 nova_compute[252253]: 2025-11-29 08:49:00.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:01 np0005539563 nova_compute[252253]: 2025-11-29 08:49:01.148 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406126.1452608, 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:01 np0005539563 nova_compute[252253]: 2025-11-29 08:49:01.148 252257 INFO nova.compute.manager [-] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:49:01 np0005539563 nova_compute[252253]: 2025-11-29 08:49:01.166 252257 DEBUG nova.compute.manager [None req-550a4008-ae64-4c72-82ce-bbfe1ffd850a - - - - - -] [instance: 6eda2a5e-bf92-4d34-b21e-ca4eaf01728b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 231 KiB/s wr, 121 op/s
Nov 29 03:49:01 np0005539563 nova_compute[252253]: 2025-11-29 08:49:01.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:01.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 262 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 398 KiB/s wr, 92 op/s
Nov 29 03:49:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:03.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:04.956 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:04.957 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:04.957 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 29 03:49:05 np0005539563 nova_compute[252253]: 2025-11-29 08:49:05.332 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:05 np0005539563 nova_compute[252253]: 2025-11-29 08:49:05.714 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:05.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:06 np0005539563 nova_compute[252253]: 2025-11-29 08:49:06.192 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.544 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.544 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.565 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:49:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.662 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.663 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.687 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.687 252257 INFO nova.compute.claims [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:49:07 np0005539563 nova_compute[252253]: 2025-11-29 08:49:07.818 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:07.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026947189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.255 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.262 252257 DEBUG nova.compute.provider_tree [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.281 252257 DEBUG nova.scheduler.client.report [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.305 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.306 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.408 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.408 252257 DEBUG nova.network.neutron [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.439 252257 INFO nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.458 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.507 252257 INFO nova.virt.block_device [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Booting with volume 60d0f33b-7946-4e21-ac67-19a83123d623 at /dev/vda#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.694 252257 DEBUG os_brick.utils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.697 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.709 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.709 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[855e4a64-d523-4b4d-8359-ea1602e46d9d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.710 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.720 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.720 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc9a768-2397-4193-bc97-5c427e424164]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.722 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.732 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.732 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[6625244c-f43c-44d7-8be1-3ed73f115809]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.734 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[1d45b3b3-b259-41b2-9dba-1b23812f1e36]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.735 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.768 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.771 252257 DEBUG os_brick.initiator.connectors.lightos [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.771 252257 DEBUG os_brick.initiator.connectors.lightos [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.771 252257 DEBUG os_brick.initiator.connectors.lightos [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.772 252257 DEBUG os_brick.utils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:49:08 np0005539563 nova_compute[252253]: 2025-11-29 08:49:08.772 252257 DEBUG nova.virt.block_device [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating existing volume attachment record: b8cebea3-9148-49b3-8497-14fe79dc1eb6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.131 252257 DEBUG nova.policy [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5dbbf4fd34004538ad08aa4aa6ab8096', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5e836f8387a492c8119be72f1fb9980', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:49:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 309 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 96 op/s
Nov 29 03:49:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.800 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.802 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.803 252257 INFO nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Creating image(s)#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.804 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.804 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Ensure instance console log exists: /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.804 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.805 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.805 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:09 np0005539563 nova_compute[252253]: 2025-11-29 08:49:09.861 252257 DEBUG nova.network.neutron [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Successfully created port: 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:49:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:09.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:10 np0005539563 nova_compute[252253]: 2025-11-29 08:49:10.333 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 98 op/s
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.195 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.272 252257 DEBUG nova.network.neutron [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Successfully updated port: 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.286 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.286 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.286 252257 DEBUG nova.network.neutron [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.364 252257 DEBUG nova.compute.manager [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.364 252257 DEBUG nova.compute.manager [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing instance network info cache due to event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.365 252257 DEBUG oslo_concurrency.lockutils [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:11 np0005539563 nova_compute[252253]: 2025-11-29 08:49:11.445 252257 DEBUG nova.network.neutron [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:49:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:11.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:11.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:49:12
Nov 29 03:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'vms', 'images']
Nov 29 03:49:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.364 252257 DEBUG nova.network.neutron [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.405 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.405 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance network_info: |[{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.406 252257 DEBUG oslo_concurrency.lockutils [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.406 252257 DEBUG nova.network.neutron [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.408 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Start _get_guest_xml network_info=[{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-60d0f33b-7946-4e21-ac67-19a83123d623', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '60d0f33b-7946-4e21-ac67-19a83123d623', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8e875192-3bcb-45b5-b98e-ed3fcce55779', 'attached_at': '', 'detached_at': '', 'volume_id': '60d0f33b-7946-4e21-ac67-19a83123d623', 'serial': '60d0f33b-7946-4e21-ac67-19a83123d623'}, 'attachment_id': 'b8cebea3-9148-49b3-8497-14fe79dc1eb6', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.414 252257 WARNING nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.426 252257 DEBUG nova.virt.libvirt.host [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.426 252257 DEBUG nova.virt.libvirt.host [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.431 252257 DEBUG nova.virt.libvirt.host [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.432 252257 DEBUG nova.virt.libvirt.host [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.433 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.433 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.434 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.434 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.434 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.434 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.435 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.435 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.435 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.435 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.436 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.436 252257 DEBUG nova.virt.hardware [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.461 252257 DEBUG nova.storage.rbd_utils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.465 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4202656210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.879 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.904 252257 DEBUG nova.virt.libvirt.vif [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:49:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1554816171',display_name='tempest-TestShelveInstance-server-1554816171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1554816171',id=198,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGtvikMAmWyKtz3G3oHOmTiaNE9UQ1Ju0e0lx2pz3ihtev7i/wsJX3O3ljU9qYZfHQILbh0YI0gMgFhLFsRZmDRrEreGW4wntvuPAkftPwbOEG8U0ceDBmuI6Y+BB4Dm+g==',key_name='tempest-TestShelveInstance-1032744962',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-zqh0kbjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:49:08Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=8e875192-3bcb-45b5-b98e-ed3fcce55779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.904 252257 DEBUG nova.network.os_vif_util [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.905 252257 DEBUG nova.network.os_vif_util [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.906 252257 DEBUG nova.objects.instance [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.919 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <uuid>8e875192-3bcb-45b5-b98e-ed3fcce55779</uuid>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <name>instance-000000c6</name>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestShelveInstance-server-1554816171</nova:name>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:49:13</nova:creationTime>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:user uuid="5dbbf4fd34004538ad08aa4aa6ab8096">tempest-TestShelveInstance-1715482181-project-member</nova:user>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:project uuid="c5e836f8387a492c8119be72f1fb9980">tempest-TestShelveInstance-1715482181</nova:project>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <nova:port uuid="02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <entry name="serial">8e875192-3bcb-45b5-b98e-ed3fcce55779</entry>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <entry name="uuid">8e875192-3bcb-45b5-b98e-ed3fcce55779</entry>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-60d0f33b-7946-4e21-ac67-19a83123d623">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <serial>60d0f33b-7946-4e21-ac67-19a83123d623</serial>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:4d:e7:09"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <target dev="tap02cfe8ea-9c"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/console.log" append="off"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:49:13 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:49:13 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:49:13 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:49:13 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.920 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Preparing to wait for external event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.920 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.920 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.921 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.921 252257 DEBUG nova.virt.libvirt.vif [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:49:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1554816171',display_name='tempest-TestShelveInstance-server-1554816171',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1554816171',id=198,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGtvikMAmWyKtz3G3oHOmTiaNE9UQ1Ju0e0lx2pz3ihtev7i/wsJX3O3ljU9qYZfHQILbh0YI0gMgFhLFsRZmDRrEreGW4wntvuPAkftPwbOEG8U0ceDBmuI6Y+BB4Dm+g==',key_name='tempest-TestShelveInstance-1032744962',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-zqh0kbjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:49:08Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=8e875192-3bcb-45b5-b98e-ed3fcce55779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.921 252257 DEBUG nova.network.os_vif_util [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.922 252257 DEBUG nova.network.os_vif_util [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.922 252257 DEBUG os_vif [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.923 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.923 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.924 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.926 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.927 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02cfe8ea-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.927 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02cfe8ea-9c, col_values=(('external_ids', {'iface-id': '02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:e7:09', 'vm-uuid': '8e875192-3bcb-45b5-b98e-ed3fcce55779'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.929 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539563 NetworkManager[48981]: <info>  [1764406153.9302] manager: (tap02cfe8ea-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.932 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.934 252257 INFO os_vif [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c')#033[00m
Nov 29 03:49:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:13.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.996 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.996 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.996 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] No VIF found with MAC fa:16:3e:4d:e7:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:49:13 np0005539563 nova_compute[252253]: 2025-11-29 08:49:13.997 252257 INFO nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Using config drive#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.022 252257 DEBUG nova.storage.rbd_utils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.418 252257 INFO nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Creating config drive at /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.423 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3s_xqqui execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.562 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3s_xqqui" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.600 252257 DEBUG nova.storage.rbd_utils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] rbd image 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.605 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.786 252257 DEBUG oslo_concurrency.processutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config 8e875192-3bcb-45b5-b98e-ed3fcce55779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.786 252257 INFO nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deleting local config drive /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779/disk.config because it was imported into RBD.#033[00m
Nov 29 03:49:14 np0005539563 kernel: tap02cfe8ea-9c: entered promiscuous mode
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:14Z|00856|binding|INFO|Claiming lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for this chassis.
Nov 29 03:49:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:14Z|00857|binding|INFO|02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137: Claiming fa:16:3e:4d:e7:09 10.100.0.8
Nov 29 03:49:14 np0005539563 NetworkManager[48981]: <info>  [1764406154.8396] manager: (tap02cfe8ea-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.844 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e7:09 10.100.0.8'], port_security=['fa:16:3e:4d:e7:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8e875192-3bcb-45b5-b98e-ed3fcce55779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a7a05da-d569-4f0e-9366-7d699d1285bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.845 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 bound to our chassis#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.846 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0636028a-96d5-4ad7-aa6e-9129edd44385#033[00m
Nov 29 03:49:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:14Z|00858|binding|INFO|Setting lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 ovn-installed in OVS
Nov 29 03:49:14 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:14Z|00859|binding|INFO|Setting lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 up in Southbound
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.855 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539563 nova_compute[252253]: 2025-11-29 08:49:14.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.861 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[66cf6205-9c7e-4cb6-897c-93eb9790e016]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.862 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0636028a-91 in ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.865 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0636028a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.865 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0758f5cb-efb7-490f-96d0-995a7a4ea55a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.866 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c40ebd6d-c065-4565-9fe4-94f56b0694a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.880 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[28f68c65-116d-4829-960b-0472f9723275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 systemd-machined[213024]: New machine qemu-97-instance-000000c6.
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.906 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3af108b6-81b6-48da-8630-41f045584a85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 systemd[1]: Started Virtual Machine qemu-97-instance-000000c6.
Nov 29 03:49:14 np0005539563 systemd-udevd[386822]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.937 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3a6555-f353-4d0a-a581-5fdeda43248a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.942 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fc114923-3053-4684-b5fa-234095aa103f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 NetworkManager[48981]: <info>  [1764406154.9444] manager: (tap0636028a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Nov 29 03:49:14 np0005539563 systemd-udevd[386830]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:49:14 np0005539563 NetworkManager[48981]: <info>  [1764406154.9564] device (tap02cfe8ea-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:49:14 np0005539563 NetworkManager[48981]: <info>  [1764406154.9575] device (tap02cfe8ea-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:49:14 np0005539563 podman[386773]: 2025-11-29 08:49:14.960439138 +0000 UTC m=+0.082388477 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.979 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[cde77320-1eb4-4ff8-b9ca-efb6f5ac45a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:14.983 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d5252f-4de6-4299-b5c2-d34e7df78f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:14 np0005539563 podman[386775]: 2025-11-29 08:49:14.992046309 +0000 UTC m=+0.116047403 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:49:15 np0005539563 NetworkManager[48981]: <info>  [1764406155.0110] device (tap0636028a-90): carrier: link connected
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.012 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[67f6d78c-50e1-40ef-a131-b3559b293dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 podman[386776]: 2025-11-29 08:49:15.016879726 +0000 UTC m=+0.137145019 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.032 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2478c83a-00e8-44c2-b2b0-e4b64f659949]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912278, 'reachable_time': 34565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386869, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.054 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2bb52b-b272-4a22-9eea-9e283e3b7fe6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:1119'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912278, 'tstamp': 912278}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386871, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.073 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeabceb-6a44-40a2-b35d-d952e0b8c526]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0636028a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:11:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912278, 'reachable_time': 34565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386872, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.119 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[35de499f-3e6e-4d80-834b-96c853754d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.192 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[432beb3b-2b2b-40fb-9bfb-fb25320e34dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.194 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.194 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.194 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0636028a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.196 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539563 NetworkManager[48981]: <info>  [1764406155.1969] manager: (tap0636028a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 29 03:49:15 np0005539563 kernel: tap0636028a-90: entered promiscuous mode
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.198 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.199 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0636028a-90, col_values=(('external_ids', {'iface-id': '58043efe-c991-4914-9f0a-2bba8af4c408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.200 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:15Z|00860|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.215 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.216 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.217 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4a5e2e-8cb1-4935-90cd-7cadf4fb7af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.217 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/0636028a-96d5-4ad7-aa6e-9129edd44385.pid.haproxy
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 0636028a-96d5-4ad7-aa6e-9129edd44385
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:49:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:15.218 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'env', 'PROCESS_TAG=haproxy-0636028a-96d5-4ad7-aa6e-9129edd44385', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0636028a-96d5-4ad7-aa6e-9129edd44385.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.300 252257 DEBUG nova.compute.manager [req-aecb7629-2aa8-4228-a357-3c00c1fadd9f req-0f296c98-52b3-48fe-b6fd-130fa4e29151 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.300 252257 DEBUG oslo_concurrency.lockutils [req-aecb7629-2aa8-4228-a357-3c00c1fadd9f req-0f296c98-52b3-48fe-b6fd-130fa4e29151 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.301 252257 DEBUG oslo_concurrency.lockutils [req-aecb7629-2aa8-4228-a357-3c00c1fadd9f req-0f296c98-52b3-48fe-b6fd-130fa4e29151 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.301 252257 DEBUG oslo_concurrency.lockutils [req-aecb7629-2aa8-4228-a357-3c00c1fadd9f req-0f296c98-52b3-48fe-b6fd-130fa4e29151 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.301 252257 DEBUG nova.compute.manager [req-aecb7629-2aa8-4228-a357-3c00c1fadd9f req-0f296c98-52b3-48fe-b6fd-130fa4e29151 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Processing event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.331 252257 DEBUG nova.network.neutron [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated VIF entry in instance network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.331 252257 DEBUG nova.network.neutron [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.333 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406155.3298926, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.333 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Started (Lifecycle Event)#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.334 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.335 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.340 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.344 252257 INFO nova.virt.libvirt.driver [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance spawned successfully.#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.345 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.375 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.378 252257 DEBUG oslo_concurrency.lockutils [req-d1a6182c-7999-4191-bb14-df96f8bde910 req-8b3ba9b7-26b6-47bb-8d2e-abc58ec02a11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.387 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.392 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.392 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.392 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.393 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.393 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.393 252257 DEBUG nova.virt.libvirt.driver [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.464 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.465 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406155.330185, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.466 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.474 252257 INFO nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Took 5.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.475 252257 DEBUG nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.488 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.493 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406155.3389225, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.494 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.516 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.520 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.535 252257 INFO nova.compute.manager [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Took 7.91 seconds to build instance.#033[00m
Nov 29 03:49:15 np0005539563 nova_compute[252253]: 2025-11-29 08:49:15.549 252257 DEBUG oslo_concurrency.lockutils [None req-0f459e4c-338d-413d-8267-c3d8b2a9709c 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:15.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:15 np0005539563 podman[386944]: 2025-11-29 08:49:15.597775238 +0000 UTC m=+0.050214310 container create e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:49:15 np0005539563 systemd[1]: Started libpod-conmon-e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0.scope.
Nov 29 03:49:15 np0005539563 podman[386944]: 2025-11-29 08:49:15.571574954 +0000 UTC m=+0.024014046 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:49:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304e1a1ce2e3e215c33d5d1aa856d1056ec3e36605fde67c56ead89dd9596019/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:15 np0005539563 podman[386944]: 2025-11-29 08:49:15.690730931 +0000 UTC m=+0.143170023 container init e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:49:15 np0005539563 podman[386944]: 2025-11-29 08:49:15.698621637 +0000 UTC m=+0.151060709 container start e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:49:15 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[386959]: [NOTICE]   (386963) : New worker (386965) forked
Nov 29 03:49:15 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[386959]: [NOTICE]   (386963) : Loading success.
Nov 29 03:49:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:15.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:49:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:49:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 210 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Nov 29 03:49:17 np0005539563 nova_compute[252253]: 2025-11-29 08:49:17.464 252257 DEBUG nova.compute.manager [req-1662eab4-7969-4bbd-b72c-fc1a6dcd0d68 req-aebb9c41-44f2-4449-9c2b-9f9296a32aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:17 np0005539563 nova_compute[252253]: 2025-11-29 08:49:17.465 252257 DEBUG oslo_concurrency.lockutils [req-1662eab4-7969-4bbd-b72c-fc1a6dcd0d68 req-aebb9c41-44f2-4449-9c2b-9f9296a32aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:17 np0005539563 nova_compute[252253]: 2025-11-29 08:49:17.465 252257 DEBUG oslo_concurrency.lockutils [req-1662eab4-7969-4bbd-b72c-fc1a6dcd0d68 req-aebb9c41-44f2-4449-9c2b-9f9296a32aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:17 np0005539563 nova_compute[252253]: 2025-11-29 08:49:17.466 252257 DEBUG oslo_concurrency.lockutils [req-1662eab4-7969-4bbd-b72c-fc1a6dcd0d68 req-aebb9c41-44f2-4449-9c2b-9f9296a32aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:17 np0005539563 nova_compute[252253]: 2025-11-29 08:49:17.466 252257 DEBUG nova.compute.manager [req-1662eab4-7969-4bbd-b72c-fc1a6dcd0d68 req-aebb9c41-44f2-4449-9c2b-9f9296a32aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] No waiting events found dispatching network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:17 np0005539563 nova_compute[252253]: 2025-11-29 08:49:17.466 252257 WARNING nova.compute.manager [req-1662eab4-7969-4bbd-b72c-fc1a6dcd0d68 req-aebb9c41-44f2-4449-9c2b-9f9296a32aef 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received unexpected event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:49:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:17.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:18 np0005539563 nova_compute[252253]: 2025-11-29 08:49:18.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 261 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 713 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 29 03:49:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:19Z|00861|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:19.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:19Z|00862|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:49:19 np0005539563 NetworkManager[48981]: <info>  [1764406159.6137] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.612 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539563 NetworkManager[48981]: <info>  [1764406159.6149] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.735 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:19Z|00863|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.753 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.859 252257 DEBUG nova.compute.manager [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.860 252257 DEBUG nova.compute.manager [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing instance network info cache due to event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.860 252257 DEBUG oslo_concurrency.lockutils [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.861 252257 DEBUG oslo_concurrency.lockutils [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:19 np0005539563 nova_compute[252253]: 2025-11-29 08:49:19.861 252257 DEBUG nova.network.neutron [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:20.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:20 np0005539563 nova_compute[252253]: 2025-11-29 08:49:20.340 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 932 KiB/s wr, 133 op/s
Nov 29 03:49:21 np0005539563 nova_compute[252253]: 2025-11-29 08:49:21.303 252257 DEBUG nova.network.neutron [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated VIF entry in instance network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:21 np0005539563 nova_compute[252253]: 2025-11-29 08:49:21.303 252257 DEBUG nova.network.neutron [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:21 np0005539563 nova_compute[252253]: 2025-11-29 08:49:21.353 252257 DEBUG oslo_concurrency.lockutils [req-af91d11a-4855-4d1e-9a6a-8bee8c3e467d req-105cf1da-6d0d-49d0-8dcf-94de5aab03fc 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:22.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:22 np0005539563 nova_compute[252253]: 2025-11-29 08:49:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 234 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 105 op/s
Nov 29 03:49:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.861 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.862 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.863 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.864 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:23 np0005539563 nova_compute[252253]: 2025-11-29 08:49:23.932 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:24.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018849673715801228 of space, bias 1.0, pg target 0.5654902114740369 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150154708728829 of space, bias 1.0, pg target 0.9450464126186487 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.116 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.137 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.137 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.138 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.138 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.138 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:49:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 129 op/s
Nov 29 03:49:25 np0005539563 nova_compute[252253]: 2025-11-29 08:49:25.341 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.443691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165444061, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2144, "num_deletes": 253, "total_data_size": 3810367, "memory_usage": 3878576, "flush_reason": "Manual Compaction"}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165475274, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3742524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67375, "largest_seqno": 69518, "table_properties": {"data_size": 3732857, "index_size": 6096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20029, "raw_average_key_size": 20, "raw_value_size": 3713546, "raw_average_value_size": 3804, "num_data_blocks": 265, "num_entries": 976, "num_filter_entries": 976, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405955, "oldest_key_time": 1764405955, "file_creation_time": 1764406165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 31463 microseconds, and 13454 cpu microseconds.
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.475360) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3742524 bytes OK
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.475399) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.487990) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.488015) EVENT_LOG_v1 {"time_micros": 1764406165488008, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.488034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3801706, prev total WAL file size 3801706, number of live WAL files 2.
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.489187) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3654KB)], [152(10MB)]
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165489260, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 15048865, "oldest_snapshot_seqno": -1}
Nov 29 03:49:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:49:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 10192 keys, 13101826 bytes, temperature: kUnknown
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165631444, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 13101826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13035803, "index_size": 39414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 268905, "raw_average_key_size": 26, "raw_value_size": 12856878, "raw_average_value_size": 1261, "num_data_blocks": 1499, "num_entries": 10192, "num_filter_entries": 10192, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.631808) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 13101826 bytes
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.633035) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.8 rd, 92.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 10717, records dropped: 525 output_compression: NoCompression
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.633055) EVENT_LOG_v1 {"time_micros": 1764406165633044, "job": 94, "event": "compaction_finished", "compaction_time_micros": 142273, "compaction_time_cpu_micros": 29649, "output_level": 6, "num_output_files": 1, "total_output_size": 13101826, "num_input_records": 10717, "num_output_records": 10192, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165633695, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406165635224, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.488982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.635346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.635352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.635354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.635356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:25 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:49:25.635357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:26.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:26Z|00864|binding|INFO|Releasing lport 58043efe-c991-4914-9f0a-2bba8af4c408 from this chassis (sb_readonly=0)
Nov 29 03:49:26 np0005539563 nova_compute[252253]: 2025-11-29 08:49:26.330 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 117 op/s
Nov 29 03:49:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539563 nova_compute[252253]: 2025-11-29 08:49:27.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:28.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:28 np0005539563 nova_compute[252253]: 2025-11-29 08:49:28.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:28 np0005539563 nova_compute[252253]: 2025-11-29 08:49:28.934 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:29Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:e7:09 10.100.0.8
Nov 29 03:49:29 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:29Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:e7:09 10.100.0.8
Nov 29 03:49:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 KiB/s wr, 120 op/s
Nov 29 03:49:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:29 np0005539563 nova_compute[252253]: 2025-11-29 08:49:29.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:29 np0005539563 nova_compute[252253]: 2025-11-29 08:49:29.720 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:29 np0005539563 nova_compute[252253]: 2025-11-29 08:49:29.720 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:29 np0005539563 nova_compute[252253]: 2025-11-29 08:49:29.720 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:29 np0005539563 nova_compute[252253]: 2025-11-29 08:49:29.721 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:49:29 np0005539563 nova_compute[252253]: 2025-11-29 08:49:29.721 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:30.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190195639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.141 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.233 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.234 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.434 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.436 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3958MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.436 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.436 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.581 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 8e875192-3bcb-45b5-b98e-ed3fcce55779 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.582 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.582 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:49:30 np0005539563 nova_compute[252253]: 2025-11-29 08:49:30.735 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1625207753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:31 np0005539563 nova_compute[252253]: 2025-11-29 08:49:31.162 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:31 np0005539563 nova_compute[252253]: 2025-11-29 08:49:31.171 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 193 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Nov 29 03:49:31 np0005539563 nova_compute[252253]: 2025-11-29 08:49:31.193 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:49:31 np0005539563 nova_compute[252253]: 2025-11-29 08:49:31.214 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:49:31 np0005539563 nova_compute[252253]: 2025-11-29 08:49:31.214 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:31.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:32.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:32 np0005539563 nova_compute[252253]: 2025-11-29 08:49:32.968 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 196 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Nov 29 03:49:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:33.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:33 np0005539563 nova_compute[252253]: 2025-11-29 08:49:33.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:34.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 03:49:35 np0005539563 nova_compute[252253]: 2025-11-29 08:49:35.345 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:36.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:36 np0005539563 nova_compute[252253]: 2025-11-29 08:49:36.112 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:36 np0005539563 nova_compute[252253]: 2025-11-29 08:49:36.113 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" acquired by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:36 np0005539563 nova_compute[252253]: 2025-11-29 08:49:36.113 252257 INFO nova.compute.manager [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Shelve offloading#033[00m
Nov 29 03:49:36 np0005539563 nova_compute[252253]: 2025-11-29 08:49:36.133 252257 DEBUG nova.virt.libvirt.driver [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1e861ae0-dce6-4533-b07f-5532d76911c2 does not exist
Nov 29 03:49:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6a5545c1-41a2-4fa4-a4b1-3bb06dbfa86d does not exist
Nov 29 03:49:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5e0d420f-9f16-427d-a25f-a8d87078d20c does not exist
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1499617626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:49:37 np0005539563 podman[387352]: 2025-11-29 08:49:37.326008614 +0000 UTC m=+0.037292508 container create 40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:49:37 np0005539563 systemd[1]: Started libpod-conmon-40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed.scope.
Nov 29 03:49:37 np0005539563 podman[387352]: 2025-11-29 08:49:37.309501904 +0000 UTC m=+0.020785828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:37 np0005539563 podman[387352]: 2025-11-29 08:49:37.426000239 +0000 UTC m=+0.137284173 container init 40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:49:37 np0005539563 podman[387352]: 2025-11-29 08:49:37.438429918 +0000 UTC m=+0.149713822 container start 40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:49:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:49:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:37 np0005539563 podman[387352]: 2025-11-29 08:49:37.441492442 +0000 UTC m=+0.152776366 container attach 40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:49:37 np0005539563 quirky_shaw[387368]: 167 167
Nov 29 03:49:37 np0005539563 systemd[1]: libpod-40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed.scope: Deactivated successfully.
Nov 29 03:49:37 np0005539563 conmon[387368]: conmon 40a94c425e321cab5b4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed.scope/container/memory.events
Nov 29 03:49:37 np0005539563 podman[387373]: 2025-11-29 08:49:37.494230469 +0000 UTC m=+0.030838851 container died 40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:49:37 np0005539563 systemd[1]: var-lib-containers-storage-overlay-19530105ef7bee5bb28aea0e31abd07c42b58d0af3cc064dc0ceb36a631cb6ff-merged.mount: Deactivated successfully.
Nov 29 03:49:37 np0005539563 podman[387373]: 2025-11-29 08:49:37.531050872 +0000 UTC m=+0.067659234 container remove 40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:49:37 np0005539563 systemd[1]: libpod-conmon-40a94c425e321cab5b4faceaff1a7d880cf6d987724ff41886059a385aa667ed.scope: Deactivated successfully.
Nov 29 03:49:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:37.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:37 np0005539563 podman[387395]: 2025-11-29 08:49:37.733092859 +0000 UTC m=+0.039036425 container create 953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wescoff, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:49:37 np0005539563 systemd[1]: Started libpod-conmon-953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164.scope.
Nov 29 03:49:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc604eea9b9b8ad28f9f19c8ae1dbdd9ee604d863c9bfb987169eeca6288c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc604eea9b9b8ad28f9f19c8ae1dbdd9ee604d863c9bfb987169eeca6288c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc604eea9b9b8ad28f9f19c8ae1dbdd9ee604d863c9bfb987169eeca6288c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc604eea9b9b8ad28f9f19c8ae1dbdd9ee604d863c9bfb987169eeca6288c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:37 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc604eea9b9b8ad28f9f19c8ae1dbdd9ee604d863c9bfb987169eeca6288c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:37 np0005539563 podman[387395]: 2025-11-29 08:49:37.805856302 +0000 UTC m=+0.111799948 container init 953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:49:37 np0005539563 podman[387395]: 2025-11-29 08:49:37.716907308 +0000 UTC m=+0.022850894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:37 np0005539563 podman[387395]: 2025-11-29 08:49:37.815609158 +0000 UTC m=+0.121552734 container start 953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:49:37 np0005539563 podman[387395]: 2025-11-29 08:49:37.818914237 +0000 UTC m=+0.124857823 container attach 953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:49:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.213 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:38 np0005539563 kernel: tap02cfe8ea-9c (unregistering): left promiscuous mode
Nov 29 03:49:38 np0005539563 NetworkManager[48981]: <info>  [1764406178.3653] device (tap02cfe8ea-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:49:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:38Z|00865|binding|INFO|Releasing lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 from this chassis (sb_readonly=0)
Nov 29 03:49:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:38Z|00866|binding|INFO|Setting lport 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 down in Southbound
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.379 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 ovn_controller[148841]: 2025-11-29T08:49:38Z|00867|binding|INFO|Removing iface tap02cfe8ea-9c ovn-installed in OVS
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.394 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:e7:09 10.100.0.8'], port_security=['fa:16:3e:4d:e7:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8e875192-3bcb-45b5-b98e-ed3fcce55779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0636028a-96d5-4ad7-aa6e-9129edd44385', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e836f8387a492c8119be72f1fb9980', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a7a05da-d569-4f0e-9366-7d699d1285bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf231669-438b-4750-8f96-dc7fed049a6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.396 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 in datapath 0636028a-96d5-4ad7-aa6e-9129edd44385 unbound from our chassis#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.397 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0636028a-96d5-4ad7-aa6e-9129edd44385, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.398 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[07efe8f2-a277-4d20-919f-8aeea4081a86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.400 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 namespace which is not needed anymore#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.407 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Nov 29 03:49:38 np0005539563 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c6.scope: Consumed 14.594s CPU time.
Nov 29 03:49:38 np0005539563 systemd-machined[213024]: Machine qemu-97-instance-000000c6 terminated.
Nov 29 03:49:38 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[386959]: [NOTICE]   (386963) : haproxy version is 2.8.14-c23fe91
Nov 29 03:49:38 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[386959]: [NOTICE]   (386963) : path to executable is /usr/sbin/haproxy
Nov 29 03:49:38 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[386959]: [ALERT]    (386963) : Current worker (386965) exited with code 143 (Terminated)
Nov 29 03:49:38 np0005539563 neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385[386959]: [WARNING]  (386963) : All workers exited. Exiting... (0)
Nov 29 03:49:38 np0005539563 systemd[1]: libpod-e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0.scope: Deactivated successfully.
Nov 29 03:49:38 np0005539563 podman[387443]: 2025-11-29 08:49:38.536752253 +0000 UTC m=+0.042451898 container died e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:49:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0-userdata-shm.mount: Deactivated successfully.
Nov 29 03:49:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-304e1a1ce2e3e215c33d5d1aa856d1056ec3e36605fde67c56ead89dd9596019-merged.mount: Deactivated successfully.
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.590 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 podman[387443]: 2025-11-29 08:49:38.594207358 +0000 UTC m=+0.099906983 container cleanup e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.598 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 systemd[1]: libpod-conmon-e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0.scope: Deactivated successfully.
Nov 29 03:49:38 np0005539563 vigilant_wescoff[387412]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:49:38 np0005539563 vigilant_wescoff[387412]: --> relative data size: 1.0
Nov 29 03:49:38 np0005539563 vigilant_wescoff[387412]: --> All data devices are unavailable
Nov 29 03:49:38 np0005539563 systemd[1]: libpod-953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164.scope: Deactivated successfully.
Nov 29 03:49:38 np0005539563 conmon[387412]: conmon 953499afd4b77f2a5ede <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164.scope/container/memory.events
Nov 29 03:49:38 np0005539563 podman[387395]: 2025-11-29 08:49:38.636791859 +0000 UTC m=+0.942735425 container died 953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.646 252257 DEBUG nova.compute.manager [req-d863612f-0a97-4659-a9f6-34581b8f9dda req-7fe082fe-132d-4719-bb27-28eaa6f9d24a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-unplugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.647 252257 DEBUG oslo_concurrency.lockutils [req-d863612f-0a97-4659-a9f6-34581b8f9dda req-7fe082fe-132d-4719-bb27-28eaa6f9d24a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.648 252257 DEBUG oslo_concurrency.lockutils [req-d863612f-0a97-4659-a9f6-34581b8f9dda req-7fe082fe-132d-4719-bb27-28eaa6f9d24a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.648 252257 DEBUG oslo_concurrency.lockutils [req-d863612f-0a97-4659-a9f6-34581b8f9dda req-7fe082fe-132d-4719-bb27-28eaa6f9d24a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.648 252257 DEBUG nova.compute.manager [req-d863612f-0a97-4659-a9f6-34581b8f9dda req-7fe082fe-132d-4719-bb27-28eaa6f9d24a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] No waiting events found dispatching network-vif-unplugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.648 252257 WARNING nova.compute.manager [req-d863612f-0a97-4659-a9f6-34581b8f9dda req-7fe082fe-132d-4719-bb27-28eaa6f9d24a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received unexpected event network-vif-unplugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:49:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1ccc604eea9b9b8ad28f9f19c8ae1dbdd9ee604d863c9bfb987169eeca6288c5-merged.mount: Deactivated successfully.
Nov 29 03:49:38 np0005539563 podman[387490]: 2025-11-29 08:49:38.69736328 +0000 UTC m=+0.073823593 container remove e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.702 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c1b202-dfd8-446b-9ab5-59477d089298]: (4, ('Sat Nov 29 08:49:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0)\ne0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0\nSat Nov 29 08:49:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 (e0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0)\ne0005aa405e922fcf4d4908f6d31fa6be22ad912af767f8097e8de2b9b6a2db0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.704 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[602caf1c-f380-4ae3-837c-a645174114c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.705 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0636028a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.706 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 kernel: tap0636028a-90: left promiscuous mode
Nov 29 03:49:38 np0005539563 podman[387395]: 2025-11-29 08:49:38.720018477 +0000 UTC m=+1.025962053 container remove 953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wescoff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:49:38 np0005539563 systemd[1]: libpod-conmon-953499afd4b77f2a5edea25f1bdd67986ff37de44bea4212a7f934be908cb164.scope: Deactivated successfully.
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.731 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.734 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[94c67116-8767-4aac-907a-bd7e5ca63d45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.748 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0f738f1d-cccb-48bd-a991-b1a918e73d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.750 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1acc02-d275-4155-85cb-59cde3dfe6d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.773 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ea02644c-52e6-4130-a753-4bf95e76fc2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912270, 'reachable_time': 36462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387520, 'error': None, 'target': 'ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 systemd[1]: run-netns-ovnmeta\x2d0636028a\x2d96d5\x2d4ad7\x2daa6e\x2d9129edd44385.mount: Deactivated successfully.
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.776 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0636028a-96d5-4ad7-aa6e-9129edd44385 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:49:38 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:38.776 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[2295f7c8-26e5-4577-b75b-c10d92bec5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.938 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:38 np0005539563 nova_compute[252253]: 2025-11-29 08:49:38.955 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.148 252257 INFO nova.virt.libvirt.driver [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.153 252257 INFO nova.virt.libvirt.driver [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance destroyed successfully.#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.153 252257 DEBUG nova.objects.instance [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.165 252257 DEBUG nova.compute.manager [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.167 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.168 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:39 np0005539563 nova_compute[252253]: 2025-11-29 08:49:39.168 252257 DEBUG nova.network.neutron [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:49:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.285987342 +0000 UTC m=+0.033594776 container create d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:49:39 np0005539563 systemd[1]: Started libpod-conmon-d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe.scope.
Nov 29 03:49:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.270598283 +0000 UTC m=+0.018205727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.369694084 +0000 UTC m=+0.117301528 container init d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.376981683 +0000 UTC m=+0.124589107 container start d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:49:39 np0005539563 elated_ritchie[387680]: 167 167
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.380503219 +0000 UTC m=+0.128110633 container attach d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:49:39 np0005539563 systemd[1]: libpod-d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe.scope: Deactivated successfully.
Nov 29 03:49:39 np0005539563 conmon[387680]: conmon d486ccd8ec9cb10e6111 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe.scope/container/memory.events
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.382520414 +0000 UTC m=+0.130127838 container died d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:49:39 np0005539563 podman[387663]: 2025-11-29 08:49:39.415390309 +0000 UTC m=+0.162997743 container remove d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:49:39 np0005539563 systemd[1]: libpod-conmon-d486ccd8ec9cb10e61116f08fdb653a3c0417230c20a11bfe3d39328bff277fe.scope: Deactivated successfully.
Nov 29 03:49:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-66ce4c83a74218ae8d7b94bc0929e34d42428b257d144d54122429aa33c3936e-merged.mount: Deactivated successfully.
Nov 29 03:49:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:39.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:39 np0005539563 podman[387704]: 2025-11-29 08:49:39.621893318 +0000 UTC m=+0.043587329 container create f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:49:39 np0005539563 systemd[1]: Started libpod-conmon-f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd.scope.
Nov 29 03:49:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:39 np0005539563 podman[387704]: 2025-11-29 08:49:39.607468945 +0000 UTC m=+0.029162986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2681fb49fb3446bf2b2cf957ba47f8e8d846d94a50d8bdd5fd951ef4662f38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2681fb49fb3446bf2b2cf957ba47f8e8d846d94a50d8bdd5fd951ef4662f38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2681fb49fb3446bf2b2cf957ba47f8e8d846d94a50d8bdd5fd951ef4662f38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2681fb49fb3446bf2b2cf957ba47f8e8d846d94a50d8bdd5fd951ef4662f38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:39 np0005539563 podman[387704]: 2025-11-29 08:49:39.721998126 +0000 UTC m=+0.143692167 container init f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:49:39 np0005539563 podman[387704]: 2025-11-29 08:49:39.735310409 +0000 UTC m=+0.157004430 container start f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:49:39 np0005539563 podman[387704]: 2025-11-29 08:49:39.738516757 +0000 UTC m=+0.160210798 container attach f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:49:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:40.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.372 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:40 np0005539563 keen_pike[387720]: {
Nov 29 03:49:40 np0005539563 keen_pike[387720]:    "0": [
Nov 29 03:49:40 np0005539563 keen_pike[387720]:        {
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "devices": [
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "/dev/loop3"
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            ],
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "lv_name": "ceph_lv0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "lv_size": "7511998464",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "name": "ceph_lv0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "tags": {
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.cluster_name": "ceph",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.crush_device_class": "",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.encrypted": "0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.osd_id": "0",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.type": "block",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:                "ceph.vdo": "0"
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            },
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "type": "block",
Nov 29 03:49:40 np0005539563 keen_pike[387720]:            "vg_name": "ceph_vg0"
Nov 29 03:49:40 np0005539563 keen_pike[387720]:        }
Nov 29 03:49:40 np0005539563 keen_pike[387720]:    ]
Nov 29 03:49:40 np0005539563 keen_pike[387720]: }
Nov 29 03:49:40 np0005539563 systemd[1]: libpod-f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd.scope: Deactivated successfully.
Nov 29 03:49:40 np0005539563 podman[387704]: 2025-11-29 08:49:40.513539719 +0000 UTC m=+0.935233780 container died f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:49:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6c2681fb49fb3446bf2b2cf957ba47f8e8d846d94a50d8bdd5fd951ef4662f38-merged.mount: Deactivated successfully.
Nov 29 03:49:40 np0005539563 podman[387704]: 2025-11-29 08:49:40.588966655 +0000 UTC m=+1.010660666 container remove f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:49:40 np0005539563 systemd[1]: libpod-conmon-f7b753409be6475b5d9bd9ed0cb5e353d157c730cb0285d551197bd6c22b15bd.scope: Deactivated successfully.
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.708 252257 DEBUG nova.compute.manager [req-e6b7a2a8-6352-4f2d-ba7e-c19b390f6f0c req-0a708c5d-1191-4e38-904c-017373290357 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.710 252257 DEBUG oslo_concurrency.lockutils [req-e6b7a2a8-6352-4f2d-ba7e-c19b390f6f0c req-0a708c5d-1191-4e38-904c-017373290357 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.710 252257 DEBUG oslo_concurrency.lockutils [req-e6b7a2a8-6352-4f2d-ba7e-c19b390f6f0c req-0a708c5d-1191-4e38-904c-017373290357 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.710 252257 DEBUG oslo_concurrency.lockutils [req-e6b7a2a8-6352-4f2d-ba7e-c19b390f6f0c req-0a708c5d-1191-4e38-904c-017373290357 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.710 252257 DEBUG nova.compute.manager [req-e6b7a2a8-6352-4f2d-ba7e-c19b390f6f0c req-0a708c5d-1191-4e38-904c-017373290357 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] No waiting events found dispatching network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:40 np0005539563 nova_compute[252253]: 2025-11-29 08:49:40.711 252257 WARNING nova.compute.manager [req-e6b7a2a8-6352-4f2d-ba7e-c19b390f6f0c req-0a708c5d-1191-4e38-904c-017373290357 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received unexpected event network-vif-plugged-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.18939348 +0000 UTC m=+0.038729897 container create 1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:49:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Nov 29 03:49:41 np0005539563 systemd[1]: Started libpod-conmon-1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c.scope.
Nov 29 03:49:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.173316962 +0000 UTC m=+0.022653389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.272561446 +0000 UTC m=+0.121897903 container init 1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.281969593 +0000 UTC m=+0.131306000 container start 1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.284997876 +0000 UTC m=+0.134334343 container attach 1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:49:41 np0005539563 pensive_noether[387899]: 167 167
Nov 29 03:49:41 np0005539563 systemd[1]: libpod-1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c.scope: Deactivated successfully.
Nov 29 03:49:41 np0005539563 conmon[387899]: conmon 1dee250a04b0731b4ffa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c.scope/container/memory.events
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.288802379 +0000 UTC m=+0.138138846 container died 1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:49:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c578bc8d0aa81efb27d69efc11ade4b572abbed6b1a316805036c1bdd6525200-merged.mount: Deactivated successfully.
Nov 29 03:49:41 np0005539563 podman[387884]: 2025-11-29 08:49:41.357344498 +0000 UTC m=+0.206680915 container remove 1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:49:41 np0005539563 systemd[1]: libpod-conmon-1dee250a04b0731b4ffa9d5ac83b1a723a5dc0912b5375e222db351ce35f864c.scope: Deactivated successfully.
Nov 29 03:49:41 np0005539563 podman[387925]: 2025-11-29 08:49:41.546241525 +0000 UTC m=+0.043126866 container create ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:49:41 np0005539563 systemd[1]: Started libpod-conmon-ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf.scope.
Nov 29 03:49:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:41.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:49:41 np0005539563 podman[387925]: 2025-11-29 08:49:41.526480827 +0000 UTC m=+0.023366218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260c33b98c6354e1a9b88417462e4783c797b6fca2e2da594d07109d86bb9754/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260c33b98c6354e1a9b88417462e4783c797b6fca2e2da594d07109d86bb9754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260c33b98c6354e1a9b88417462e4783c797b6fca2e2da594d07109d86bb9754/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:41 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/260c33b98c6354e1a9b88417462e4783c797b6fca2e2da594d07109d86bb9754/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:41 np0005539563 podman[387925]: 2025-11-29 08:49:41.640757362 +0000 UTC m=+0.137642733 container init ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:49:41 np0005539563 podman[387925]: 2025-11-29 08:49:41.64986324 +0000 UTC m=+0.146748591 container start ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:49:41 np0005539563 podman[387925]: 2025-11-29 08:49:41.653838098 +0000 UTC m=+0.150723489 container attach ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:49:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]: {
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:        "osd_id": 0,
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:        "type": "bluestore"
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]:    }
Nov 29 03:49:42 np0005539563 funny_heisenberg[387941]: }
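The short-lived funny_heisenberg container above prints a ceph-volume-style JSON inventory (one record per OSD UUID) on stdout before exiting. A minimal Python sketch of consuming that payload, assuming the same structure as logged; the `osd_devices` helper name is hypothetical:

```python
import json

# Sample payload copied from the funny_heisenberg[387941] lines above,
# keyed by OSD UUID.
raw = """
{
   "975ae8ee-a376-4084-87fd-232acabcaa54": {
       "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
       "device": "/dev/mapper/ceph_vg0-ceph_lv0",
       "osd_id": 0,
       "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
       "type": "bluestore"
   }
}
"""

def osd_devices(payload: str) -> dict:
    """Map each OSD id to its backing device path."""
    data = json.loads(payload)
    return {entry["osd_id"]: entry["device"] for entry in data.values()}

print(osd_devices(raw))  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
```

This is the kind of output cephadm stores under `mgr/cephadm/host.compute-0.devices.0` in the config-key commands logged shortly after.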
Nov 29 03:49:42 np0005539563 systemd[1]: libpod-ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf.scope: Deactivated successfully.
Nov 29 03:49:42 np0005539563 podman[387925]: 2025-11-29 08:49:42.554363002 +0000 UTC m=+1.051248343 container died ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:49:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:42.574 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:42 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:42.576 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:49:42 np0005539563 nova_compute[252253]: 2025-11-29 08:49:42.577 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:42 np0005539563 nova_compute[252253]: 2025-11-29 08:49:42.618 252257 DEBUG nova.network.neutron [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-260c33b98c6354e1a9b88417462e4783c797b6fca2e2da594d07109d86bb9754-merged.mount: Deactivated successfully.
Nov 29 03:49:42 np0005539563 nova_compute[252253]: 2025-11-29 08:49:42.648 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:42 np0005539563 podman[387925]: 2025-11-29 08:49:42.669052168 +0000 UTC m=+1.165937549 container remove ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_heisenberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:49:42 np0005539563 systemd[1]: libpod-conmon-ef5c20e5bd93535f964d3ec601a69f2987a57fc83ad8e334985720ec392756bf.scope: Deactivated successfully.
Nov 29 03:49:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:49:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:49:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev def26ab2-9195-4ac1-b25e-fb35c4bdedef does not exist
Nov 29 03:49:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 60f18d08-3c1d-4f0b-8391-7fcc68b43860 does not exist
Nov 29 03:49:42 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b6e1ce92-a89c-4b10-aada-46dff3b05b4d does not exist
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 183 KiB/s rd, 636 KiB/s wr, 28 op/s
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.567 252257 INFO nova.virt.libvirt.driver [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Instance destroyed successfully.#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.568 252257 DEBUG nova.objects.instance [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lazy-loading 'resources' on Instance uuid 8e875192-3bcb-45b5-b98e-ed3fcce55779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.598 252257 DEBUG nova.virt.libvirt.vif [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:49:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1554816171',display_name='tempest-TestShelveInstance-server-1554816171',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1554816171',id=198,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGtvikMAmWyKtz3G3oHOmTiaNE9UQ1Ju0e0lx2pz3ihtev7i/wsJX3O3ljU9qYZfHQILbh0YI0gMgFhLFsRZmDRrEreGW4wntvuPAkftPwbOEG8U0ceDBmuI6Y+BB4Dm+g==',key_name='tempest-TestShelveInstance-1032744962',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e836f8387a492c8119be72f1fb9980',ramdisk_id='',reservation_id='r-zqh0kbjv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1715482181',owner_user_name='tempest-TestShelveInstance-1715482181-project-member'},tags=<?>,task_state='shelving',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:15Z,user_data=None,user_id='5dbbf4fd34004538ad08aa4aa6ab8096',uuid=8e875192-3bcb-45b5-b98e-ed3fcce55779,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.599 252257 DEBUG nova.network.os_vif_util [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converting VIF {"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": "br-int", "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.600 252257 DEBUG nova.network.os_vif_util [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.600 252257 DEBUG os_vif [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.602 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.603 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02cfe8ea-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.606 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:43.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.610 252257 INFO os_vif [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:e7:09,bridge_name='br-int',has_traffic_filtering=True,id=02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137,network=Network(0636028a-96d5-4ad7-aa6e-9129edd44385),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02cfe8ea-9c')#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.659 252257 DEBUG nova.compute.manager [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Received event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.661 252257 DEBUG nova.compute.manager [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing instance network info cache due to event network-changed-02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.662 252257 DEBUG oslo_concurrency.lockutils [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.662 252257 DEBUG oslo_concurrency.lockutils [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.662 252257 DEBUG nova.network.neutron [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Refreshing network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:43 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.874 252257 INFO nova.virt.libvirt.driver [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deleting instance files /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779_del
Nov 29 03:49:43 np0005539563 nova_compute[252253]: 2025-11-29 08:49:43.875 252257 INFO nova.virt.libvirt.driver [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Deletion of /var/lib/nova/instances/8e875192-3bcb-45b5-b98e-ed3fcce55779_del complete
Nov 29 03:49:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:44 np0005539563 nova_compute[252253]: 2025-11-29 08:49:44.501 252257 INFO nova.scheduler.client.report [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Deleted allocations for instance 8e875192-3bcb-45b5-b98e-ed3fcce55779
Nov 29 03:49:44 np0005539563 nova_compute[252253]: 2025-11-29 08:49:44.542 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:49:44 np0005539563 nova_compute[252253]: 2025-11-29 08:49:44.543 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:49:44 np0005539563 nova_compute[252253]: 2025-11-29 08:49:44.567 252257 DEBUG oslo_concurrency.processutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:49:44 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:49:44.579 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.017 252257 DEBUG oslo_concurrency.processutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.028 252257 DEBUG nova.compute.provider_tree [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.060 252257 DEBUG nova.scheduler.client.report [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.095 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.168 252257 DEBUG oslo_concurrency.lockutils [None req-aab6127a-fbc4-4dc4-98fb-07b9d3b523a6 5dbbf4fd34004538ad08aa4aa6ab8096 c5e836f8387a492c8119be72f1fb9980 - - default default] Lock "8e875192-3bcb-45b5-b98e-ed3fcce55779" "released" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: held 9.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:49:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 87 KiB/s wr, 24 op/s
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.375 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:45 np0005539563 podman[388069]: 2025-11-29 08:49:45.510073539 +0000 UTC m=+0.061756464 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:49:45 np0005539563 podman[388070]: 2025-11-29 08:49:45.528725398 +0000 UTC m=+0.080112055 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.538 252257 DEBUG nova.network.neutron [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updated VIF entry in instance network info cache for port 02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.539 252257 DEBUG nova.network.neutron [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Updating instance_info_cache with network_info: [{"id": "02cfe8ea-9cc4-4cbb-88b5-c9ae807d6137", "address": "fa:16:3e:4d:e7:09", "network": {"id": "0636028a-96d5-4ad7-aa6e-9129edd44385", "bridge": null, "label": "tempest-TestShelveInstance-87152114-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e836f8387a492c8119be72f1fb9980", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap02cfe8ea-9c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:49:45 np0005539563 podman[388071]: 2025-11-29 08:49:45.556143384 +0000 UTC m=+0.099143002 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:49:45 np0005539563 nova_compute[252253]: 2025-11-29 08:49:45.569 252257 DEBUG oslo_concurrency.lockutils [req-01a0a50b-17fe-495f-ac1e-34ba789c0625 req-abfcbdb7-11c5-4d5c-843c-3f12f5d545a5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-8e875192-3bcb-45b5-b98e-ed3fcce55779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:49:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 40 KiB/s wr, 17 op/s
Nov 29 03:49:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:47.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:48 np0005539563 nova_compute[252253]: 2025-11-29 08:49:48.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 40 KiB/s wr, 17 op/s
Nov 29 03:49:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1161134043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:50.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:50 np0005539563 nova_compute[252253]: 2025-11-29 08:49:50.377 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 233 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 993 KiB/s wr, 28 op/s
Nov 29 03:49:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 29 03:49:53 np0005539563 nova_compute[252253]: 2025-11-29 08:49:53.605 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406178.60413, 8e875192-3bcb-45b5-b98e-ed3fcce55779 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:49:53 np0005539563 nova_compute[252253]: 2025-11-29 08:49:53.606 252257 INFO nova.compute.manager [-] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] VM Stopped (Lifecycle Event)
Nov 29 03:49:53 np0005539563 nova_compute[252253]: 2025-11-29 08:49:53.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:53.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:53 np0005539563 nova_compute[252253]: 2025-11-29 08:49:53.629 252257 DEBUG nova.compute.manager [None req-3cd66983-6633-4b47-9d50-d41af4cca76b - - - - - -] [instance: 8e875192-3bcb-45b5-b98e-ed3fcce55779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:49:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:54.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 29 03:49:55 np0005539563 nova_compute[252253]: 2025-11-29 08:49:55.419 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:49:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:49:55 np0005539563 nova_compute[252253]: 2025-11-29 08:49:55.674 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:49:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:56.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:49:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:49:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:57.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:49:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:58.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:58 np0005539563 nova_compute[252253]: 2025-11-29 08:49:58.610 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 03:49:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:49:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:50:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 03:50:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:00.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:00 np0005539563 nova_compute[252253]: 2025-11-29 08:50:00.462 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 29 03:50:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:02.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 872 KiB/s wr, 72 op/s
Nov 29 03:50:03 np0005539563 nova_compute[252253]: 2025-11-29 08:50:03.612 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:03.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:04.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:50:04.958 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:50:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:50:04.959 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:50:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:50:04.959 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:50:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 29 03:50:05 np0005539563 nova_compute[252253]: 2025-11-29 08:50:05.465 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:05.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:06.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 29 03:50:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:07 np0005539563 nova_compute[252253]: 2025-11-29 08:50:07.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:50:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:08.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:08 np0005539563 nova_compute[252253]: 2025-11-29 08:50:08.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 29 03:50:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:09.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:10.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:10 np0005539563 nova_compute[252253]: 2025-11-29 08:50:10.467 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Nov 29 03:50:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:11.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:12.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:50:12
Nov 29 03:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:50:12 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 256 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 593 KiB/s wr, 149 op/s
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:13 np0005539563 nova_compute[252253]: 2025-11-29 08:50:13.616 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:13.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Nov 29 03:50:15 np0005539563 nova_compute[252253]: 2025-11-29 08:50:15.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:15.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:16.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:16 np0005539563 podman[388248]: 2025-11-29 08:50:16.508910714 +0000 UTC m=+0.059783500 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:50:16 np0005539563 podman[388249]: 2025-11-29 08:50:16.556851422 +0000 UTC m=+0.100833850 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:50:16 np0005539563 podman[388250]: 2025-11-29 08:50:16.573292789 +0000 UTC m=+0.116817354 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:50:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:50:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 586 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 03:50:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:17.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:18 np0005539563 nova_compute[252253]: 2025-11-29 08:50:18.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 790 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 03:50:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:19.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:20 np0005539563 nova_compute[252253]: 2025-11-29 08:50:20.508 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Nov 29 03:50:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:21.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:22.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:50:22.548 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:22 np0005539563 nova_compute[252253]: 2025-11-29 08:50:22.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:22 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:50:22.550 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:50:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Nov 29 03:50:23 np0005539563 nova_compute[252253]: 2025-11-29 08:50:23.621 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:23 np0005539563 nova_compute[252253]: 2025-11-29 08:50:23.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002175591499162479 of space, bias 1.0, pg target 0.6526774497487436 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004330644832961102 of space, bias 1.0, pg target 1.2991934498883306 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:50:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:50:24.551 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 865 KiB/s rd, 1.6 MiB/s wr, 95 op/s
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.510 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:25.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.708 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.708 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:25 np0005539563 nova_compute[252253]: 2025-11-29 08:50:25.709 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:50:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:26.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:26 np0005539563 nova_compute[252253]: 2025-11-29 08:50:26.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 42 KiB/s wr, 25 op/s
Nov 29 03:50:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:27 np0005539563 nova_compute[252253]: 2025-11-29 08:50:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:28 np0005539563 nova_compute[252253]: 2025-11-29 08:50:28.622 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 45 KiB/s wr, 25 op/s
Nov 29 03:50:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:29.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:29 np0005539563 nova_compute[252253]: 2025-11-29 08:50:29.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:29 np0005539563 nova_compute[252253]: 2025-11-29 08:50:29.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:29 np0005539563 nova_compute[252253]: 2025-11-29 08:50:29.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:29 np0005539563 nova_compute[252253]: 2025-11-29 08:50:29.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:29 np0005539563 nova_compute[252253]: 2025-11-29 08:50:29.712 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:50:29 np0005539563 nova_compute[252253]: 2025-11-29 08:50:29.712 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:30.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991538308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.154 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.302 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.303 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4174MB free_disk=20.94256591796875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.303 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.304 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.450 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.451 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.497 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.566 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372590960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.933 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.940 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.966 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.997 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:50:30 np0005539563 nova_compute[252253]: 2025-11-29 08:50:30.998 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 142 KiB/s rd, 33 KiB/s wr, 12 op/s
Nov 29 03:50:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:31.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:32 np0005539563 nova_compute[252253]: 2025-11-29 08:50:31.999 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:32.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 27 KiB/s wr, 3 op/s
Nov 29 03:50:33 np0005539563 nova_compute[252253]: 2025-11-29 08:50:33.623 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 20 KiB/s wr, 5 op/s
Nov 29 03:50:35 np0005539563 nova_compute[252253]: 2025-11-29 08:50:35.568 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 16 KiB/s wr, 5 op/s
Nov 29 03:50:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:38.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:38 np0005539563 nova_compute[252253]: 2025-11-29 08:50:38.625 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:38 np0005539563 nova_compute[252253]: 2025-11-29 08:50:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 255 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 17 KiB/s wr, 18 op/s
Nov 29 03:50:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:40.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:40 np0005539563 nova_compute[252253]: 2025-11-29 08:50:40.573 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 141 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 16 KiB/s wr, 65 op/s
Nov 29 03:50:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 13 KiB/s wr, 65 op/s
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:43 np0005539563 nova_compute[252253]: 2025-11-29 08:50:43.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:43.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:50:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 56K writes, 214K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 56K writes, 20K syncs, 2.72 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4444 writes, 16K keys, 4444 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s#012Interval WAL: 4444 writes, 1785 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 669ab93d-0d0d-4c32-a2f5-b0a2ceb76723 does not exist
Nov 29 03:50:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c2c23830-5ced-4ebe-bc6d-dbbd5fcfce66 does not exist
Nov 29 03:50:44 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5a2d6350-490a-40e4-912e-20202a47ccfb does not exist
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:50:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:50:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 8.0 KiB/s wr, 64 op/s
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.536557482 +0000 UTC m=+0.047932161 container create a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:50:45 np0005539563 nova_compute[252253]: 2025-11-29 08:50:45.574 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:45 np0005539563 systemd[1]: Started libpod-conmon-a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece.scope.
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.511332518 +0000 UTC m=+0.022707217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.645393423 +0000 UTC m=+0.156768132 container init a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.652325402 +0000 UTC m=+0.163700081 container start a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_khayyam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.655316833 +0000 UTC m=+0.166691512 container attach a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_khayyam, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:50:45 np0005539563 elated_khayyam[388713]: 167 167
Nov 29 03:50:45 np0005539563 systemd[1]: libpod-a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece.scope: Deactivated successfully.
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.662668741 +0000 UTC m=+0.174043410 container died a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:50:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a5c473f9a6815ff93f473eaedb9b501d36db9ca7cf3c688f88089405bf338311-merged.mount: Deactivated successfully.
Nov 29 03:50:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:45 np0005539563 podman[388696]: 2025-11-29 08:50:45.702290977 +0000 UTC m=+0.213665646 container remove a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:50:45 np0005539563 systemd[1]: libpod-conmon-a3e92121033ce9826243b39629dd161b2f763550853ff42c4fe72a3ba0945ece.scope: Deactivated successfully.
Nov 29 03:50:45 np0005539563 podman[388736]: 2025-11-29 08:50:45.857480644 +0000 UTC m=+0.037585309 container create 25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:50:45 np0005539563 systemd[1]: Started libpod-conmon-25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce.scope.
Nov 29 03:50:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:50:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd430f262fd62d095059c687dacf24507914ab42f88c41af38e0931ff3b0f79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd430f262fd62d095059c687dacf24507914ab42f88c41af38e0931ff3b0f79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd430f262fd62d095059c687dacf24507914ab42f88c41af38e0931ff3b0f79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd430f262fd62d095059c687dacf24507914ab42f88c41af38e0931ff3b0f79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd430f262fd62d095059c687dacf24507914ab42f88c41af38e0931ff3b0f79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:45 np0005539563 podman[388736]: 2025-11-29 08:50:45.842097577 +0000 UTC m=+0.022202262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:45 np0005539563 podman[388736]: 2025-11-29 08:50:45.952315116 +0000 UTC m=+0.132419801 container init 25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:50:45 np0005539563 podman[388736]: 2025-11-29 08:50:45.960364484 +0000 UTC m=+0.140469149 container start 25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:50:45 np0005539563 podman[388736]: 2025-11-29 08:50:45.964890797 +0000 UTC m=+0.144995462 container attach 25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:50:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:50:46 np0005539563 nova_compute[252253]: 2025-11-29 08:50:46.098 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:46 np0005539563 nova_compute[252253]: 2025-11-29 08:50:46.271 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:46 np0005539563 sad_lumiere[388752]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:50:46 np0005539563 sad_lumiere[388752]: --> relative data size: 1.0
Nov 29 03:50:46 np0005539563 sad_lumiere[388752]: --> All data devices are unavailable
Nov 29 03:50:46 np0005539563 systemd[1]: libpod-25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce.scope: Deactivated successfully.
Nov 29 03:50:46 np0005539563 podman[388736]: 2025-11-29 08:50:46.794179624 +0000 UTC m=+0.974284299 container died 25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:50:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8cd430f262fd62d095059c687dacf24507914ab42f88c41af38e0931ff3b0f79-merged.mount: Deactivated successfully.
Nov 29 03:50:46 np0005539563 podman[388736]: 2025-11-29 08:50:46.87069735 +0000 UTC m=+1.050802015 container remove 25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:50:46 np0005539563 systemd[1]: libpod-conmon-25d108fd2b170ffe0692e5fb5621094afb49a6564d1de291bdc5f87be8ae19ce.scope: Deactivated successfully.
Nov 29 03:50:46 np0005539563 podman[388779]: 2025-11-29 08:50:46.912530554 +0000 UTC m=+0.079667492 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:50:46 np0005539563 podman[388770]: 2025-11-29 08:50:46.922571166 +0000 UTC m=+0.093653871 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:50:46 np0005539563 podman[388780]: 2025-11-29 08:50:46.939515775 +0000 UTC m=+0.095634164 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.465268282 +0000 UTC m=+0.020305762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:47.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.708339043 +0000 UTC m=+0.263376483 container create 0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:50:47 np0005539563 systemd[1]: Started libpod-conmon-0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813.scope.
Nov 29 03:50:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.783187102 +0000 UTC m=+0.338224562 container init 0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_banzai, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.78973664 +0000 UTC m=+0.344774080 container start 0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.793505652 +0000 UTC m=+0.348543112 container attach 0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_banzai, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:50:47 np0005539563 modest_banzai[389001]: 167 167
Nov 29 03:50:47 np0005539563 systemd[1]: libpod-0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813.scope: Deactivated successfully.
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.796942295 +0000 UTC m=+0.351979735 container died 0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:50:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f2c93231a783d544409b1d5177359a55eed90364a7478898f1f6867cba1c9a2b-merged.mount: Deactivated successfully.
Nov 29 03:50:47 np0005539563 podman[388985]: 2025-11-29 08:50:47.831738879 +0000 UTC m=+0.386776319 container remove 0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_banzai, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:50:47 np0005539563 systemd[1]: libpod-conmon-0247c5cdad8146f9ff088812655606a62289b19a968d80545b6879cd03ab4813.scope: Deactivated successfully.
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.026280134 +0000 UTC m=+0.046826401 container create 037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:50:48 np0005539563 systemd[1]: Started libpod-conmon-037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac.scope.
Nov 29 03:50:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f808bf26fe2d7cef2d1783a200078900f5221f0121cd8368b1124754487c9382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f808bf26fe2d7cef2d1783a200078900f5221f0121cd8368b1124754487c9382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f808bf26fe2d7cef2d1783a200078900f5221f0121cd8368b1124754487c9382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f808bf26fe2d7cef2d1783a200078900f5221f0121cd8368b1124754487c9382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.004312288 +0000 UTC m=+0.024858585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.105622615 +0000 UTC m=+0.126168912 container init 037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.113685194 +0000 UTC m=+0.134231461 container start 037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.116562092 +0000 UTC m=+0.137108359 container attach 037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:50:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:48 np0005539563 nova_compute[252253]: 2025-11-29 08:50:48.627 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]: {
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:    "0": [
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:        {
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "devices": [
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "/dev/loop3"
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            ],
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "lv_name": "ceph_lv0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "lv_size": "7511998464",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "name": "ceph_lv0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "tags": {
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.cluster_name": "ceph",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.crush_device_class": "",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.encrypted": "0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.osd_id": "0",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.type": "block",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:                "ceph.vdo": "0"
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            },
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "type": "block",
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:            "vg_name": "ceph_vg0"
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:        }
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]:    ]
Nov 29 03:50:48 np0005539563 brave_lamarr[389042]: }
Nov 29 03:50:48 np0005539563 systemd[1]: libpod-037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac.scope: Deactivated successfully.
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.891238599 +0000 UTC m=+0.911784876 container died 037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:50:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f808bf26fe2d7cef2d1783a200078900f5221f0121cd8368b1124754487c9382-merged.mount: Deactivated successfully.
Nov 29 03:50:48 np0005539563 podman[389025]: 2025-11-29 08:50:48.953455706 +0000 UTC m=+0.974001963 container remove 037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lamarr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:50:48 np0005539563 systemd[1]: libpod-conmon-037ed190dcac1c6c0f6e66b177ff90e64e3c6742b7454df485be6bdc16b450ac.scope: Deactivated successfully.
Nov 29 03:50:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.679054641 +0000 UTC m=+0.060670136 container create 9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:50:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:49.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:49 np0005539563 systemd[1]: Started libpod-conmon-9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31.scope.
Nov 29 03:50:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.661689441 +0000 UTC m=+0.043304946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.761918859 +0000 UTC m=+0.143534374 container init 9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.76972691 +0000 UTC m=+0.151342445 container start 9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.773975795 +0000 UTC m=+0.155591320 container attach 9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:50:49 np0005539563 beautiful_matsumoto[389222]: 167 167
Nov 29 03:50:49 np0005539563 systemd[1]: libpod-9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31.scope: Deactivated successfully.
Nov 29 03:50:49 np0005539563 conmon[389222]: conmon 9c572840a71a9e7ade10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31.scope/container/memory.events
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.776212556 +0000 UTC m=+0.157828071 container died 9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:50:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8a258d613459da062a2d2af625bb3633da595700f764adeaf605438173da5945-merged.mount: Deactivated successfully.
Nov 29 03:50:49 np0005539563 podman[389205]: 2025-11-29 08:50:49.818724689 +0000 UTC m=+0.200340174 container remove 9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:50:49 np0005539563 systemd[1]: libpod-conmon-9c572840a71a9e7ade1000f7bc20f094690ef0fc72c492ba16742d8ab1d78c31.scope: Deactivated successfully.
Nov 29 03:50:50 np0005539563 podman[389245]: 2025-11-29 08:50:50.003946081 +0000 UTC m=+0.049166983 container create b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:50:50 np0005539563 systemd[1]: Started libpod-conmon-b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96.scope.
Nov 29 03:50:50 np0005539563 podman[389245]: 2025-11-29 08:50:49.984599857 +0000 UTC m=+0.029820779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:50:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a35ca9262eb4ef240fb7f9fed79f0940e68d00141a9e84087c493d258d28b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a35ca9262eb4ef240fb7f9fed79f0940e68d00141a9e84087c493d258d28b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a35ca9262eb4ef240fb7f9fed79f0940e68d00141a9e84087c493d258d28b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a35ca9262eb4ef240fb7f9fed79f0940e68d00141a9e84087c493d258d28b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:50 np0005539563 podman[389245]: 2025-11-29 08:50:50.101834496 +0000 UTC m=+0.147055418 container init b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dubinsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:50:50 np0005539563 podman[389245]: 2025-11-29 08:50:50.112274919 +0000 UTC m=+0.157495821 container start b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:50:50 np0005539563 podman[389245]: 2025-11-29 08:50:50.116773891 +0000 UTC m=+0.161994813 container attach b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:50:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:50 np0005539563 nova_compute[252253]: 2025-11-29 08:50:50.594 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]: {
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:        "osd_id": 0,
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:        "type": "bluestore"
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]:    }
Nov 29 03:50:50 np0005539563 sleepy_dubinsky[389261]: }
Nov 29 03:50:50 np0005539563 systemd[1]: libpod-b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96.scope: Deactivated successfully.
Nov 29 03:50:50 np0005539563 podman[389245]: 2025-11-29 08:50:50.942601214 +0000 UTC m=+0.987822116 container died b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:50:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-96a35ca9262eb4ef240fb7f9fed79f0940e68d00141a9e84087c493d258d28b8-merged.mount: Deactivated successfully.
Nov 29 03:50:51 np0005539563 podman[389245]: 2025-11-29 08:50:51.010407213 +0000 UTC m=+1.055628115 container remove b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dubinsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:50:51 np0005539563 systemd[1]: libpod-conmon-b2f4b65c1567b58a891a477b0d8d121d4b8c39b66ae140345d1f76aeb4c0ca96.scope: Deactivated successfully.
Nov 29 03:50:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:50:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:50:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 483defa0-b4f1-4134-aac5-3e7ab80ea017 does not exist
Nov 29 03:50:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7c5d9614-b257-4acb-9c0e-4344b07cf481 does not exist
Nov 29 03:50:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 28e4e578-ff50-4ecb-9d79-99bd3a3ba17f does not exist
Nov 29 03:50:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 2.0 KiB/s wr, 47 op/s
Nov 29 03:50:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:51.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:52.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:50:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Nov 29 03:50:53 np0005539563 nova_compute[252253]: 2025-11-29 08:50:53.630 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:53.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:50:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 03:50:55 np0005539563 nova_compute[252253]: 2025-11-29 08:50:55.595 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:55.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:50:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:50:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:50:58 np0005539563 nova_compute[252253]: 2025-11-29 08:50:58.631 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:50:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:50:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:00 np0005539563 nova_compute[252253]: 2025-11-29 08:51:00.597 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:51:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:01.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:02.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:02.359 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:51:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:02.360 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:51:02 np0005539563 nova_compute[252253]: 2025-11-29 08:51:02.408 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:51:03 np0005539563 nova_compute[252253]: 2025-11-29 08:51:03.633 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:03.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:04.960 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:04.960 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:04.961 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:51:05 np0005539563 nova_compute[252253]: 2025-11-29 08:51:05.599 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:05.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:51:07 np0005539563 nova_compute[252253]: 2025-11-29 08:51:07.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:08.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:08 np0005539563 nova_compute[252253]: 2025-11-29 08:51:08.637 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.6 KiB/s rd, 528 KiB/s wr, 12 op/s
Nov 29 03:51:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:10 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:10.361 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:10 np0005539563 nova_compute[252253]: 2025-11-29 08:51:10.601 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:51:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:11.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:12.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:51:13
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta']
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 29 03:51:13 np0005539563 nova_compute[252253]: 2025-11-29 08:51:13.691 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:13.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 03:51:15 np0005539563 nova_compute[252253]: 2025-11-29 08:51:15.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:16.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:51:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:51:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 03:51:17 np0005539563 podman[389458]: 2025-11-29 08:51:17.508490334 +0000 UTC m=+0.060598783 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:17 np0005539563 podman[389459]: 2025-11-29 08:51:17.527651594 +0000 UTC m=+0.073617997 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:51:17 np0005539563 podman[389460]: 2025-11-29 08:51:17.54446499 +0000 UTC m=+0.084943604 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:17.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:18.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539563 nova_compute[252253]: 2025-11-29 08:51:18.693 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 03:51:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:19.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539563 nova_compute[252253]: 2025-11-29 08:51:20.607 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Nov 29 03:51:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:21.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:22.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:22 np0005539563 ovn_controller[148841]: 2025-11-29T08:51:22Z|00868|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 29 03:51:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 681 KiB/s wr, 85 op/s
Nov 29 03:51:23 np0005539563 nova_compute[252253]: 2025-11-29 08:51:23.721 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:23.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001360244509584962 of space, bias 1.0, pg target 0.40807335287548857 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:51:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:24 np0005539563 nova_compute[252253]: 2025-11-29 08:51:24.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 228 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Nov 29 03:51:25 np0005539563 nova_compute[252253]: 2025-11-29 08:51:25.608 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:26.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:26 np0005539563 nova_compute[252253]: 2025-11-29 08:51:26.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:26 np0005539563 nova_compute[252253]: 2025-11-29 08:51:26.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:51:26 np0005539563 nova_compute[252253]: 2025-11-29 08:51:26.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:51:26 np0005539563 nova_compute[252253]: 2025-11-29 08:51:26.695 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:51:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 228 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Nov 29 03:51:27 np0005539563 nova_compute[252253]: 2025-11-29 08:51:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:27 np0005539563 nova_compute[252253]: 2025-11-29 08:51:27.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:51:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:28.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:28 np0005539563 nova_compute[252253]: 2025-11-29 08:51:28.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:28 np0005539563 nova_compute[252253]: 2025-11-29 08:51:28.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 239 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 3.8 MiB/s wr, 80 op/s
Nov 29 03:51:29 np0005539563 nova_compute[252253]: 2025-11-29 08:51:29.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.653 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.705 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.706 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:51:30 np0005539563 nova_compute[252253]: 2025-11-29 08:51:30.706 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498394224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.141 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.346 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.347 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4167MB free_disk=20.922344207763672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.347 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.348 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.429 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.429 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.453 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463144033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.967 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:31 np0005539563 nova_compute[252253]: 2025-11-29 08:51:31.975 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:32 np0005539563 nova_compute[252253]: 2025-11-29 08:51:32.004 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:32 np0005539563 nova_compute[252253]: 2025-11-29 08:51:32.006 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:51:32 np0005539563 nova_compute[252253]: 2025-11-29 08:51:32.006 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Nov 29 03:51:33 np0005539563 nova_compute[252253]: 2025-11-29 08:51:33.729 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:33.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:34 np0005539563 nova_compute[252253]: 2025-11-29 08:51:34.007 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:34.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 156 op/s
Nov 29 03:51:35 np0005539563 nova_compute[252253]: 2025-11-29 08:51:35.657 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 715 KiB/s wr, 111 op/s
Nov 29 03:51:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:38.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:38 np0005539563 nova_compute[252253]: 2025-11-29 08:51:38.753 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 715 KiB/s wr, 120 op/s
Nov 29 03:51:39 np0005539563 nova_compute[252253]: 2025-11-29 08:51:39.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:40.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:40 np0005539563 nova_compute[252253]: 2025-11-29 08:51:40.659 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 178 KiB/s wr, 118 op/s
Nov 29 03:51:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.178845) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302178991, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1438, "num_deletes": 257, "total_data_size": 2372565, "memory_usage": 2424936, "flush_reason": "Manual Compaction"}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302201412, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 2333108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69520, "largest_seqno": 70956, "table_properties": {"data_size": 2326562, "index_size": 3680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14156, "raw_average_key_size": 19, "raw_value_size": 2313252, "raw_average_value_size": 3235, "num_data_blocks": 162, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406166, "oldest_key_time": 1764406166, "file_creation_time": 1764406302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 22586 microseconds, and 6524 cpu microseconds.
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.201498) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 2333108 bytes OK
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.201531) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.212799) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.212870) EVENT_LOG_v1 {"time_micros": 1764406302212857, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.212899) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2366370, prev total WAL file size 2366370, number of live WAL files 2.
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.214066) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373638' seq:72057594037927935, type:22 .. '6C6F676D0033303231' seq:0, type:0; will stop at (end)
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(2278KB)], [155(12MB)]
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302214149, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 15434934, "oldest_snapshot_seqno": -1}
Nov 29 03:51:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 10380 keys, 15300537 bytes, temperature: kUnknown
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302448254, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 15300537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15230968, "index_size": 42512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273848, "raw_average_key_size": 26, "raw_value_size": 15046626, "raw_average_value_size": 1449, "num_data_blocks": 1629, "num_entries": 10380, "num_filter_entries": 10380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.449294) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 15300537 bytes
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.453056) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.9 rd, 65.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 12.5 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(13.2) write-amplify(6.6) OK, records in: 10907, records dropped: 527 output_compression: NoCompression
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.453089) EVENT_LOG_v1 {"time_micros": 1764406302453074, "job": 96, "event": "compaction_finished", "compaction_time_micros": 234214, "compaction_time_cpu_micros": 35314, "output_level": 6, "num_output_files": 1, "total_output_size": 15300537, "num_input_records": 10907, "num_output_records": 10380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302454069, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406302458798, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.213861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.458853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.458857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.458859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.458861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:42.458863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 164 KiB/s wr, 100 op/s
Nov 29 03:51:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:43.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:43 np0005539563 nova_compute[252253]: 2025-11-29 08:51:43.757 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 99 op/s
Nov 29 03:51:45 np0005539563 nova_compute[252253]: 2025-11-29 08:51:45.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:51:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:51:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:46.188 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:51:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:46.189 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:51:46 np0005539563 nova_compute[252253]: 2025-11-29 08:51:46.189 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 188 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Nov 29 03:51:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:47.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:48 np0005539563 podman[389630]: 2025-11-29 08:51:48.519863282 +0000 UTC m=+0.061808097 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:51:48 np0005539563 podman[389631]: 2025-11-29 08:51:48.539175335 +0000 UTC m=+0.072047074 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd)
Nov 29 03:51:48 np0005539563 podman[389632]: 2025-11-29 08:51:48.573495566 +0000 UTC m=+0.107097495 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:51:48 np0005539563 nova_compute[252253]: 2025-11-29 08:51:48.759 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 190 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 160 KiB/s rd, 2.0 MiB/s wr, 55 op/s
Nov 29 03:51:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:49.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:50 np0005539563 nova_compute[252253]: 2025-11-29 08:51:50.711 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 29 03:51:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:51.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:52 np0005539563 podman[389869]: 2025-11-29 08:51:52.427658208 +0000 UTC m=+0.068136908 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:51:52 np0005539563 podman[389869]: 2025-11-29 08:51:52.794082594 +0000 UTC m=+0.434561314 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:51:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 302 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 29 03:51:53 np0005539563 podman[390024]: 2025-11-29 08:51:53.510283976 +0000 UTC m=+0.052806463 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:51:53 np0005539563 podman[390024]: 2025-11-29 08:51:53.521320775 +0000 UTC m=+0.063843242 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 03:51:53 np0005539563 podman[390090]: 2025-11-29 08:51:53.716336312 +0000 UTC m=+0.047448597 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, distribution-scope=public, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git)
Nov 29 03:51:53 np0005539563 nova_compute[252253]: 2025-11-29 08:51:53.761 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:53.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:53 np0005539563 podman[390111]: 2025-11-29 08:51:53.78593272 +0000 UTC m=+0.051303762 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 03:51:53 np0005539563 podman[390090]: 2025-11-29 08:51:53.791726367 +0000 UTC m=+0.122838622 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1793, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Nov 29 03:51:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:51:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:51:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:54.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 64f34583-af89-4c04-88aa-1d65ff8032ee does not exist
Nov 29 03:51:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 908164f4-f067-432e-a4af-a0197dc01233 does not exist
Nov 29 03:51:54 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f44be6d4-39dd-49ca-8539-ea1cb70ba9c8 does not exist
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:51:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.169173558 +0000 UTC m=+0.051359123 container create 4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:51:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:51:55.192 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:55 np0005539563 systemd[1]: Started libpod-conmon-4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61.scope.
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.141524559 +0000 UTC m=+0.023710224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.253003122 +0000 UTC m=+0.135188737 container init 4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:51:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 302 KiB/s rd, 2.0 MiB/s wr, 59 op/s
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.259431426 +0000 UTC m=+0.141616991 container start 4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.264566355 +0000 UTC m=+0.146751930 container attach 4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:51:55 np0005539563 hungry_mendel[390464]: 167 167
Nov 29 03:51:55 np0005539563 systemd[1]: libpod-4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61.scope: Deactivated successfully.
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.265410718 +0000 UTC m=+0.147596273 container died 4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:51:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8e51007fa2b7f266d154b94eb0d8b990d83188e7ce2f3284fa860782f9097e3d-merged.mount: Deactivated successfully.
Nov 29 03:51:55 np0005539563 podman[390448]: 2025-11-29 08:51:55.311011324 +0000 UTC m=+0.193196889 container remove 4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:51:55 np0005539563 systemd[1]: libpod-conmon-4ae3632cea6d6e0cc30e09324ebd53e942822afc4f165c152c88bda90b70db61.scope: Deactivated successfully.
Nov 29 03:51:55 np0005539563 podman[390490]: 2025-11-29 08:51:55.490890972 +0000 UTC m=+0.052724100 container create 83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:51:55 np0005539563 systemd[1]: Started libpod-conmon-83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07.scope.
Nov 29 03:51:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:51:55 np0005539563 podman[390490]: 2025-11-29 08:51:55.466256754 +0000 UTC m=+0.028089872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e67537aa7a876ef59b9a5006e76b32c3e036b87cb00f476fa26d5f85515bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e67537aa7a876ef59b9a5006e76b32c3e036b87cb00f476fa26d5f85515bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e67537aa7a876ef59b9a5006e76b32c3e036b87cb00f476fa26d5f85515bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e67537aa7a876ef59b9a5006e76b32c3e036b87cb00f476fa26d5f85515bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49e67537aa7a876ef59b9a5006e76b32c3e036b87cb00f476fa26d5f85515bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:55 np0005539563 podman[390490]: 2025-11-29 08:51:55.581390756 +0000 UTC m=+0.143223854 container init 83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:51:55 np0005539563 podman[390490]: 2025-11-29 08:51:55.588783287 +0000 UTC m=+0.150616385 container start 83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:51:55 np0005539563 podman[390490]: 2025-11-29 08:51:55.592245441 +0000 UTC m=+0.154078579 container attach 83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:51:55 np0005539563 nova_compute[252253]: 2025-11-29 08:51:55.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:51:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:56.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:56 np0005539563 gifted_haslett[390507]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:51:56 np0005539563 gifted_haslett[390507]: --> relative data size: 1.0
Nov 29 03:51:56 np0005539563 gifted_haslett[390507]: --> All data devices are unavailable
Nov 29 03:51:56 np0005539563 systemd[1]: libpod-83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07.scope: Deactivated successfully.
Nov 29 03:51:56 np0005539563 podman[390490]: 2025-11-29 08:51:56.401053483 +0000 UTC m=+0.962886621 container died 83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:51:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b49e67537aa7a876ef59b9a5006e76b32c3e036b87cb00f476fa26d5f85515bb-merged.mount: Deactivated successfully.
Nov 29 03:51:56 np0005539563 podman[390490]: 2025-11-29 08:51:56.456220188 +0000 UTC m=+1.018053286 container remove 83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:51:56 np0005539563 systemd[1]: libpod-conmon-83cc84d59da1b74e3c353897a668a8d74cc747dcf381d82f952145128a598c07.scope: Deactivated successfully.
Nov 29 03:51:56 np0005539563 nova_compute[252253]: 2025-11-29 08:51:56.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.105182) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317105252, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 399, "num_deletes": 251, "total_data_size": 295861, "memory_usage": 303184, "flush_reason": "Manual Compaction"}
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317109439, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 277327, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70957, "largest_seqno": 71355, "table_properties": {"data_size": 274910, "index_size": 516, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6547, "raw_average_key_size": 20, "raw_value_size": 270037, "raw_average_value_size": 851, "num_data_blocks": 22, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406302, "oldest_key_time": 1764406302, "file_creation_time": 1764406317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 4286 microseconds, and 1453 cpu microseconds.
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.109481) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 277327 bytes OK
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.109496) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.111092) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.111118) EVENT_LOG_v1 {"time_micros": 1764406317111100, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.111135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 293333, prev total WAL file size 293333, number of live WAL files 2.
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.111792) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353137' seq:72057594037927935, type:22 .. '6D6772737461740032373639' seq:0, type:0; will stop at (end)
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(270KB)], [158(14MB)]
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317111903, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 15577864, "oldest_snapshot_seqno": -1}
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.115507066 +0000 UTC m=+0.043545632 container create e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:51:57 np0005539563 systemd[1]: Started libpod-conmon-e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b.scope.
Nov 29 03:51:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.09758937 +0000 UTC m=+0.025627956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 213 KiB/s wr, 35 op/s
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 10184 keys, 11737172 bytes, temperature: kUnknown
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317265000, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11737172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11673656, "index_size": 36971, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 269971, "raw_average_key_size": 26, "raw_value_size": 11497321, "raw_average_value_size": 1128, "num_data_blocks": 1397, "num_entries": 10184, "num_filter_entries": 10184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.265260) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11737172 bytes
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.269261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.7 rd, 76.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.6 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(98.5) write-amplify(42.3) OK, records in: 10697, records dropped: 513 output_compression: NoCompression
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.269281) EVENT_LOG_v1 {"time_micros": 1764406317269271, "job": 98, "event": "compaction_finished", "compaction_time_micros": 153158, "compaction_time_cpu_micros": 51649, "output_level": 6, "num_output_files": 1, "total_output_size": 11737172, "num_input_records": 10697, "num_output_records": 10184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317269472, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.269997115 +0000 UTC m=+0.198035681 container init e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tesla, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406317271650, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.111587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.271714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.271722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.271724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.271726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:51:57.271728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.279096322 +0000 UTC m=+0.207134888 container start e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tesla, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.282908185 +0000 UTC m=+0.210946741 container attach e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:51:57 np0005539563 happy_tesla[390689]: 167 167
Nov 29 03:51:57 np0005539563 systemd[1]: libpod-e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b.scope: Deactivated successfully.
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.286964915 +0000 UTC m=+0.215003471 container died e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:51:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b4057cfa26a3a1bf35a52464f57fc18a06c6a2069fa0d5615c30cc7f4d0e8cb8-merged.mount: Deactivated successfully.
Nov 29 03:51:57 np0005539563 podman[390673]: 2025-11-29 08:51:57.329664643 +0000 UTC m=+0.257703209 container remove e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:51:57 np0005539563 systemd[1]: libpod-conmon-e5d582c8058aee86d7deaa1b589c47d76ab12e8b891f66b1a85465a24384ac1b.scope: Deactivated successfully.
Nov 29 03:51:57 np0005539563 podman[390712]: 2025-11-29 08:51:57.529272555 +0000 UTC m=+0.046939893 container create 26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:57 np0005539563 systemd[1]: Started libpod-conmon-26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938.scope.
Nov 29 03:51:57 np0005539563 podman[390712]: 2025-11-29 08:51:57.507778613 +0000 UTC m=+0.025445971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:57 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:51:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a49320e56ba6fefc91ed4393c2e473b6238baf2ae2bc0d88d977d03ed0d3331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a49320e56ba6fefc91ed4393c2e473b6238baf2ae2bc0d88d977d03ed0d3331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a49320e56ba6fefc91ed4393c2e473b6238baf2ae2bc0d88d977d03ed0d3331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:57 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a49320e56ba6fefc91ed4393c2e473b6238baf2ae2bc0d88d977d03ed0d3331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:57 np0005539563 podman[390712]: 2025-11-29 08:51:57.646651648 +0000 UTC m=+0.164319076 container init 26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:51:57 np0005539563 podman[390712]: 2025-11-29 08:51:57.657841812 +0000 UTC m=+0.175509150 container start 26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:51:57 np0005539563 podman[390712]: 2025-11-29 08:51:57.661612005 +0000 UTC m=+0.179279343 container attach 26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:51:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:57.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:51:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:58.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:58 np0005539563 silly_herschel[390728]: {
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:    "0": [
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:        {
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "devices": [
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "/dev/loop3"
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            ],
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "lv_name": "ceph_lv0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "lv_size": "7511998464",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "name": "ceph_lv0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "tags": {
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.cluster_name": "ceph",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.crush_device_class": "",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.encrypted": "0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.osd_id": "0",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.type": "block",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:                "ceph.vdo": "0"
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            },
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "type": "block",
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:            "vg_name": "ceph_vg0"
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:        }
Nov 29 03:51:58 np0005539563 silly_herschel[390728]:    ]
Nov 29 03:51:58 np0005539563 silly_herschel[390728]: }
Nov 29 03:51:58 np0005539563 systemd[1]: libpod-26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938.scope: Deactivated successfully.
Nov 29 03:51:58 np0005539563 podman[390712]: 2025-11-29 08:51:58.458327028 +0000 UTC m=+0.975994376 container died 26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:51:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2a49320e56ba6fefc91ed4393c2e473b6238baf2ae2bc0d88d977d03ed0d3331-merged.mount: Deactivated successfully.
Nov 29 03:51:58 np0005539563 podman[390712]: 2025-11-29 08:51:58.523893116 +0000 UTC m=+1.041560444 container remove 26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:51:58 np0005539563 systemd[1]: libpod-conmon-26e23f2143b216c661a86b2870df08608cbe41fb1b10dae548b224367cb21938.scope: Deactivated successfully.
Nov 29 03:51:58 np0005539563 nova_compute[252253]: 2025-11-29 08:51:58.789 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.229327845 +0000 UTC m=+0.057857920 container create 0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:51:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 213 KiB/s wr, 35 op/s
Nov 29 03:51:59 np0005539563 systemd[1]: Started libpod-conmon-0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6.scope.
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.197830161 +0000 UTC m=+0.026360316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.323078587 +0000 UTC m=+0.151608692 container init 0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.330638572 +0000 UTC m=+0.159168657 container start 0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.335157925 +0000 UTC m=+0.163688020 container attach 0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:51:59 np0005539563 modest_northcutt[390908]: 167 167
Nov 29 03:51:59 np0005539563 systemd[1]: libpod-0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6.scope: Deactivated successfully.
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.339236845 +0000 UTC m=+0.167766920 container died 0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:51:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1526cef3f6199919be190aa2e63250200df43fe7531fdfc34f13b63b32045c0d-merged.mount: Deactivated successfully.
Nov 29 03:51:59 np0005539563 podman[390892]: 2025-11-29 08:51:59.377623986 +0000 UTC m=+0.206154061 container remove 0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_northcutt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 03:51:59 np0005539563 systemd[1]: libpod-conmon-0410cc4729f479ee500f7258c4cf11e781d723b3d6cebf8b3bb6f8e6f67466e6.scope: Deactivated successfully.
Nov 29 03:51:59 np0005539563 podman[390932]: 2025-11-29 08:51:59.542279991 +0000 UTC m=+0.046555334 container create 6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:51:59 np0005539563 systemd[1]: Started libpod-conmon-6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509.scope.
Nov 29 03:51:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:51:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f08435b28b4157c3043343aa334b8fdf316649ec31b0950290f7c9bee8c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f08435b28b4157c3043343aa334b8fdf316649ec31b0950290f7c9bee8c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f08435b28b4157c3043343aa334b8fdf316649ec31b0950290f7c9bee8c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db9f08435b28b4157c3043343aa334b8fdf316649ec31b0950290f7c9bee8c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:59 np0005539563 podman[390932]: 2025-11-29 08:51:59.61343759 +0000 UTC m=+0.117712943 container init 6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:59 np0005539563 podman[390932]: 2025-11-29 08:51:59.523686026 +0000 UTC m=+0.027961359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:59 np0005539563 podman[390932]: 2025-11-29 08:51:59.621349985 +0000 UTC m=+0.125625318 container start 6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:51:59 np0005539563 podman[390932]: 2025-11-29 08:51:59.625095406 +0000 UTC m=+0.129370769 container attach 6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:51:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:51:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:51:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:00.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:00 np0005539563 funny_benz[390948]: {
Nov 29 03:52:00 np0005539563 funny_benz[390948]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:52:00 np0005539563 funny_benz[390948]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:52:00 np0005539563 funny_benz[390948]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:52:00 np0005539563 funny_benz[390948]:        "osd_id": 0,
Nov 29 03:52:00 np0005539563 funny_benz[390948]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:52:00 np0005539563 funny_benz[390948]:        "type": "bluestore"
Nov 29 03:52:00 np0005539563 funny_benz[390948]:    }
Nov 29 03:52:00 np0005539563 funny_benz[390948]: }
Nov 29 03:52:00 np0005539563 systemd[1]: libpod-6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509.scope: Deactivated successfully.
Nov 29 03:52:00 np0005539563 podman[390932]: 2025-11-29 08:52:00.41330798 +0000 UTC m=+0.917583313 container died 6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:52:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0db9f08435b28b4157c3043343aa334b8fdf316649ec31b0950290f7c9bee8c9-merged.mount: Deactivated successfully.
Nov 29 03:52:00 np0005539563 podman[390932]: 2025-11-29 08:52:00.467350425 +0000 UTC m=+0.971625748 container remove 6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:52:00 np0005539563 systemd[1]: libpod-conmon-6afc1a085448a5f9db09215a6a6bdbc5b589436987bc17548a8f3b49649b1509.scope: Deactivated successfully.
Nov 29 03:52:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:52:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:52:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:52:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:52:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d3078da9-1958-4b72-895a-9c3ad2c62dc3 does not exist
Nov 29 03:52:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 263dc114-3ceb-4f06-9b6b-3af049eba2dc does not exist
Nov 29 03:52:00 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a9a11ef4-54d5-4c53-be17-fd01840e78ad does not exist
Nov 29 03:52:00 np0005539563 nova_compute[252253]: 2025-11-29 08:52:00.715 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 159 KiB/s rd, 101 KiB/s wr, 29 op/s
Nov 29 03:52:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:52:01 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:52:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:02.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Nov 29 03:52:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:03.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:03 np0005539563 nova_compute[252253]: 2025-11-29 08:52:03.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:04.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:04.961 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:04.962 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:04.962 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 220 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 455 KiB/s wr, 24 op/s
Nov 29 03:52:05 np0005539563 nova_compute[252253]: 2025-11-29 08:52:05.755 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:05.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:06.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 220 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 443 KiB/s wr, 24 op/s
Nov 29 03:52:07 np0005539563 nova_compute[252253]: 2025-11-29 08:52:07.699 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:07.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:08.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:08 np0005539563 nova_compute[252253]: 2025-11-29 08:52:08.797 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 228 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 743 KiB/s wr, 25 op/s
Nov 29 03:52:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:09.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Nov 29 03:52:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 03:52:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Nov 29 03:52:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:10.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:10 np0005539563 nova_compute[252253]: 2025-11-29 08:52:10.756 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Nov 29 03:52:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 03:52:10 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Nov 29 03:52:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 MiB/s wr, 53 op/s
Nov 29 03:52:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:11.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Nov 29 03:52:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 03:52:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Nov 29 03:52:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:12.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:52:13
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 274 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 104 op/s
Nov 29 03:52:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:13.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:13 np0005539563 nova_compute[252253]: 2025-11-29 08:52:13.829 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:14.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 9.9 MiB/s wr, 332 op/s
Nov 29 03:52:15 np0005539563 nova_compute[252253]: 2025-11-29 08:52:15.758 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:52:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:52:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Nov 29 03:52:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 03:52:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Nov 29 03:52:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 7.3 MiB/s wr, 292 op/s
Nov 29 03:52:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:17.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:18.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.590 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.590 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.609 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.709 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.710 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.719 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.719 252257 INFO nova.compute.claims [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.834 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:18 np0005539563 nova_compute[252253]: 2025-11-29 08:52:18.854 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 233 op/s
Nov 29 03:52:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478243950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.315 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.323 252257 DEBUG nova.compute.provider_tree [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.347 252257 DEBUG nova.scheduler.client.report [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.394 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.395 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.469 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.469 252257 DEBUG nova.network.neutron [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.490 252257 INFO nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.513 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:52:19 np0005539563 podman[391116]: 2025-11-29 08:52:19.529215744 +0000 UTC m=+0.066200887 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:52:19 np0005539563 podman[391117]: 2025-11-29 08:52:19.53793359 +0000 UTC m=+0.073445303 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.613 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.615 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.615 252257 INFO nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Creating image(s)#033[00m
Nov 29 03:52:19 np0005539563 podman[391118]: 2025-11-29 08:52:19.618558046 +0000 UTC m=+0.137518000 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.648 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.686 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.717 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.722 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "b44f43e664b1142122b3e5c98b00fb1ce4a4d766" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:19 np0005539563 nova_compute[252253]: 2025-11-29 08:52:19.723 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "b44f43e664b1142122b3e5c98b00fb1ce4a4d766" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.023 252257 DEBUG nova.virt.libvirt.imagebackend [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Image locations are: [{'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/a7583cd4-d395-48e2-9f81-567bf2845ae0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/a7583cd4-d395-48e2-9f81-567bf2845ae0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.102 252257 DEBUG nova.virt.libvirt.imagebackend [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Selected location: {'url': 'rbd://38a37ed2-442a-5e0d-a69a-881fdd186450/images/a7583cd4-d395-48e2-9f81-567bf2845ae0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.103 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] cloning images/a7583cd4-d395-48e2-9f81-567bf2845ae0@snap to None/66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.213 252257 DEBUG nova.policy [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7b36e3f2406043c2a741c24fb14de7df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0596f9d1e5a5444ca2640f6e8244d53f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.262 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "b44f43e664b1142122b3e5c98b00fb1ce4a4d766" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.446 252257 DEBUG nova.objects.instance [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lazy-loading 'migration_context' on Instance uuid 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.460 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.461 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Ensure instance console log exists: /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.461 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.462 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.462 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.695 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:52:20 np0005539563 nova_compute[252253]: 2025-11-29 08:52:20.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 203 op/s
Nov 29 03:52:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:22.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:22 np0005539563 nova_compute[252253]: 2025-11-29 08:52:22.309 252257 DEBUG nova.network.neutron [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Successfully created port: 685efdfe-7b92-41ab-b933-36d49a2b7522 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:52:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.300 252257 DEBUG nova.network.neutron [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Successfully updated port: 685efdfe-7b92-41ab-b933-36d49a2b7522 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.319 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.320 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquired lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.320 252257 DEBUG nova.network.neutron [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.407 252257 DEBUG nova.compute.manager [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-changed-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.408 252257 DEBUG nova.compute.manager [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Refreshing instance network info cache due to event network-changed-685efdfe-7b92-41ab-b933-36d49a2b7522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.408 252257 DEBUG oslo_concurrency.lockutils [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.491 252257 DEBUG nova.network.neutron [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:52:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:23.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:23 np0005539563 nova_compute[252253]: 2025-11-29 08:52:23.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031696023520379674 of space, bias 1.0, pg target 0.9508807056113903 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004069464800856132 of space, bias 1.0, pg target 1.2208394402568397 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:52:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:24.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.576 252257 DEBUG nova.network.neutron [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.609 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Releasing lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.609 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Instance network_info: |[{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.610 252257 DEBUG oslo_concurrency.lockutils [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.610 252257 DEBUG nova.network.neutron [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Refreshing network info cache for port 685efdfe-7b92-41ab-b933-36d49a2b7522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.615 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Start _get_guest_xml network_info=[{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:52:07Z,direct_url=<?>,disk_format='raw',id=a7583cd4-d395-48e2-9f81-567bf2845ae0,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1017606698',owner='0596f9d1e5a5444ca2640f6e8244d53f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:52:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': 'a7583cd4-d395-48e2-9f81-567bf2845ae0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.620 252257 WARNING nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.628 252257 DEBUG nova.virt.libvirt.host [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.629 252257 DEBUG nova.virt.libvirt.host [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.637 252257 DEBUG nova.virt.libvirt.host [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.638 252257 DEBUG nova.virt.libvirt.host [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.640 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.640 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:52:07Z,direct_url=<?>,disk_format='raw',id=a7583cd4-d395-48e2-9f81-567bf2845ae0,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1017606698',owner='0596f9d1e5a5444ca2640f6e8244d53f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:52:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.641 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.641 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.641 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.642 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.642 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.642 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.643 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.643 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.643 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.643 252257 DEBUG nova.virt.hardware [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.647 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:24 np0005539563 nova_compute[252253]: 2025-11-29 08:52:24.695 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247192750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.118 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.148 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.152 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 29 03:52:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369877008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.602 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.604 252257 DEBUG nova.virt.libvirt.vif [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-351772626',display_name='tempest-TestSnapshotPattern-server-351772626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-351772626',id=203,image_ref='a7583cd4-d395-48e2-9f81-567bf2845ae0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBImEx8+jhsxRFNI/zXiqCIp6lKyzrmzXueICkOx8YGb02aphTL5Mlw1+YiMaTW8XLhYmBtqvqII/hnTIhC95ctb8YpefMaS6Qv1/vv9QrNRmuoy5csFiSCQsYM34gKdoxw==',key_name='tempest-TestSnapshotPattern-299175359',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0596f9d1e5a5444ca2640f6e8244d53f',ramdisk_id='',reservation_id='r-u72f31zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='defc87c3-85a5-47bb-8d50-3121d5d780c1',image_min_disk='1',image_min_ram='0',image_owner_id='0596f9d1e5a5444ca2640f6e8244d53f',image_owner_project_name='tempest-TestSnapshotPattern-32695225',image_owner_user_name='tempest-TestSnapshotPattern-32695225-project-member',image_user_id='7b36e3f2406043c2a741c24fb14de7df',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-32695225',owner_user_name='tempest-TestSnapshotPattern-32695225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:19Z,user_data=None,user_id='7b36e3f2406043c2a741c24fb14de7df',uuid=66ab0bd5-ae63-4a
e2-af64-ae8b85745f24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.604 252257 DEBUG nova.network.os_vif_util [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converting VIF {"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.605 252257 DEBUG nova.network.os_vif_util [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.606 252257 DEBUG nova.objects.instance [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lazy-loading 'pci_devices' on Instance uuid 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.633 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <uuid>66ab0bd5-ae63-4ae2-af64-ae8b85745f24</uuid>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <name>instance-000000cb</name>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestSnapshotPattern-server-351772626</nova:name>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:52:24</nova:creationTime>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:user uuid="7b36e3f2406043c2a741c24fb14de7df">tempest-TestSnapshotPattern-32695225-project-member</nova:user>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:project uuid="0596f9d1e5a5444ca2640f6e8244d53f">tempest-TestSnapshotPattern-32695225</nova:project>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="a7583cd4-d395-48e2-9f81-567bf2845ae0"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <nova:port uuid="685efdfe-7b92-41ab-b933-36d49a2b7522">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <entry name="serial">66ab0bd5-ae63-4ae2-af64-ae8b85745f24</entry>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <entry name="uuid">66ab0bd5-ae63-4ae2-af64-ae8b85745f24</entry>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk.config">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:f3:b7:0c"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <target dev="tap685efdfe-7b"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/console.log" append="off"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <input type="keyboard" bus="usb"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:52:25 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:52:25 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:52:25 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:52:25 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.634 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Preparing to wait for external event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.635 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.635 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.635 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.636 252257 DEBUG nova.virt.libvirt.vif [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-351772626',display_name='tempest-TestSnapshotPattern-server-351772626',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-351772626',id=203,image_ref='a7583cd4-d395-48e2-9f81-567bf2845ae0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBImEx8+jhsxRFNI/zXiqCIp6lKyzrmzXueICkOx8YGb02aphTL5Mlw1+YiMaTW8XLhYmBtqvqII/hnTIhC95ctb8YpefMaS6Qv1/vv9QrNRmuoy5csFiSCQsYM34gKdoxw==',key_name='tempest-TestSnapshotPattern-299175359',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0596f9d1e5a5444ca2640f6e8244d53f',ramdisk_id='',reservation_id='r-u72f31zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='defc87c3-85a5-47bb-8d50-3121d5d780c1',image_min_disk='1',image_min_ram='0',image_owner_id='0596f9d1e5a5444ca2640f6e8244d53f',image_owner_project_name='tempest-TestSnapshotPattern-32695225',image_owner_user_name='tempest-TestSnapshotPattern-32695225-project-member',image_user_id='7b36e3f2406043c2a741c24fb14de7df',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-32695225',owner_user_name='tempest-TestSnapshotPattern-32695225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:52:19Z,user_data=None,user_id='7b36e3f2406043c2a741c24fb14de7df',uuid=66ab0b
d5-ae63-4ae2-af64-ae8b85745f24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.636 252257 DEBUG nova.network.os_vif_util [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converting VIF {"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.637 252257 DEBUG nova.network.os_vif_util [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.638 252257 DEBUG os_vif [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.638 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.639 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.640 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.644 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.644 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685efdfe-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.645 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap685efdfe-7b, col_values=(('external_ids', {'iface-id': '685efdfe-7b92-41ab-b933-36d49a2b7522', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:b7:0c', 'vm-uuid': '66ab0bd5-ae63-4ae2-af64-ae8b85745f24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:52:25 np0005539563 NetworkManager[48981]: <info>  [1764406345.6476] manager: (tap685efdfe-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.651 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.654 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.654 252257 INFO os_vif [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b')
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.712 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.713 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.713 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] No VIF found with MAC fa:16:3e:f3:b7:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.714 252257 INFO nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Using config drive
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.744 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:52:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:25.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:25 np0005539563 nova_compute[252253]: 2025-11-29 08:52:25.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.189 252257 INFO nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Creating config drive at /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/disk.config
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.193 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcfa8jz7o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.295 252257 DEBUG nova.network.neutron [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updated VIF entry in instance network info cache for port 685efdfe-7b92-41ab-b933-36d49a2b7522. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.296 252257 DEBUG nova.network.neutron [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:52:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.318 252257 DEBUG oslo_concurrency.lockutils [req-17d3a3f9-4f86-4e4c-ba22-c45ca6d2c7be req-011f7cd8-c432-4078-8673-543c564be6c6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.348 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcfa8jz7o" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.381 252257 DEBUG nova.storage.rbd_utils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] rbd image 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.386 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/disk.config 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.580 252257 DEBUG oslo_concurrency.processutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/disk.config 66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.582 252257 INFO nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Deleting local config drive /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24/disk.config because it was imported into RBD.#033[00m
Nov 29 03:52:26 np0005539563 kernel: tap685efdfe-7b: entered promiscuous mode
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.6422] manager: (tap685efdfe-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Nov 29 03:52:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:26Z|00869|binding|INFO|Claiming lport 685efdfe-7b92-41ab-b933-36d49a2b7522 for this chassis.
Nov 29 03:52:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:26Z|00870|binding|INFO|685efdfe-7b92-41ab-b933-36d49a2b7522: Claiming fa:16:3e:f3:b7:0c 10.100.0.3
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.642 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.650 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.659 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 systemd-udevd[391488]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.6766] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.6780] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 29 03:52:26 np0005539563 systemd-machined[213024]: New machine qemu-98-instance-000000cb.
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.6844] device (tap685efdfe-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.6871] device (tap685efdfe-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:52:26 np0005539563 systemd[1]: Started Virtual Machine qemu-98-instance-000000cb.
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.810 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:b7:0c 10.100.0.3'], port_security=['fa:16:3e:f3:b7:0c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66ab0bd5-ae63-4ae2-af64-ae8b85745f24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0596f9d1e5a5444ca2640f6e8244d53f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49368169-f673-45da-b454-bf6c8bb93b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46809892-ffee-4015-b7f0-51515653f0e9, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=685efdfe-7b92-41ab-b933-36d49a2b7522) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.812 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 685efdfe-7b92-41ab-b933-36d49a2b7522 in datapath 9094c67b-5d6f-4130-9ec6-7da5c871a564 bound to our chassis#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.814 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9094c67b-5d6f-4130-9ec6-7da5c871a564#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.827 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a12f5831-883f-40cf-bb97-584bfbe267cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.828 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9094c67b-51 in ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.831 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9094c67b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.831 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d86922a3-4621-4ca8-a7a5-fd779983136d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.832 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[05c600c2-836f-4892-806e-e1c7d01af910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.833 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.850 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[cf571d6c-f92d-4feb-a0f2-14c0e1657e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.858 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:26Z|00871|binding|INFO|Setting lport 685efdfe-7b92-41ab-b933-36d49a2b7522 ovn-installed in OVS
Nov 29 03:52:26 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:26Z|00872|binding|INFO|Setting lport 685efdfe-7b92-41ab-b933-36d49a2b7522 up in Southbound
Nov 29 03:52:26 np0005539563 nova_compute[252253]: 2025-11-29 08:52:26.870 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.876 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[21b1519a-2431-47ef-8f53-d1c776198fb2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.908 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[9b82db6d-ae82-4d14-9252-b8d1b96bb662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.913 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eea29a03-1889-4fe3-a3e7-9ad816b5ee00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.9146] manager: (tap9094c67b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.947 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[46d42acd-4100-44b4-843e-89e52873c13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.950 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[f34ed47b-71f1-4be3-a6fd-4fc987371c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:26 np0005539563 NetworkManager[48981]: <info>  [1764406346.9750] device (tap9094c67b-50): carrier: link connected
Nov 29 03:52:26 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.982 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe203ca-d47c-4d43-b779-99bfaa4f07a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:26.998 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[179c6cc5-8e68-409d-9594-51b406a2d5ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9094c67b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 931474, 'reachable_time': 44162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391522, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.015 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf83ece-09b1-4349-92a1-40cbc4e83b26]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:fd50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 931474, 'tstamp': 931474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391523, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.031 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f43a8621-c6d3-4d36-80d6-2e4cf0651d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9094c67b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:fd:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 931474, 'reachable_time': 44162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391524, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.064 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4dade12f-3c17-4859-8dd9-fa044461dbd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.133 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0f25d1fb-390b-44d2-8a52-69d273454758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.135 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9094c67b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.135 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.136 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9094c67b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:27 np0005539563 NetworkManager[48981]: <info>  [1764406347.1384] manager: (tap9094c67b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.139 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539563 kernel: tap9094c67b-50: entered promiscuous mode
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.140 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.141 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9094c67b-50, col_values=(('external_ids', {'iface-id': '52cea514-684d-4e12-87ec-eee5c187481b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.142 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:27Z|00873|binding|INFO|Releasing lport 52cea514-684d-4e12-87ec-eee5c187481b from this chassis (sb_readonly=0)
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.156 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.157 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9094c67b-5d6f-4130-9ec6-7da5c871a564.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9094c67b-5d6f-4130-9ec6-7da5c871a564.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.158 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a44ce8b4-2aad-4e92-89c2-08ba9454c458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.159 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-9094c67b-5d6f-4130-9ec6-7da5c871a564
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/9094c67b-5d6f-4130-9ec6-7da5c871a564.pid.haproxy
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 9094c67b-5d6f-4130-9ec6-7da5c871a564
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:52:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:27.159 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'env', 'PROCESS_TAG=haproxy-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9094c67b-5d6f-4130-9ec6-7da5c871a564.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:52:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.298 252257 DEBUG nova.compute.manager [req-d31fa29a-9988-4b95-9722-da8fb5dbf991 req-5e421375-0246-4304-94b0-49ed0e0b3758 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.299 252257 DEBUG oslo_concurrency.lockutils [req-d31fa29a-9988-4b95-9722-da8fb5dbf991 req-5e421375-0246-4304-94b0-49ed0e0b3758 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.299 252257 DEBUG oslo_concurrency.lockutils [req-d31fa29a-9988-4b95-9722-da8fb5dbf991 req-5e421375-0246-4304-94b0-49ed0e0b3758 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.300 252257 DEBUG oslo_concurrency.lockutils [req-d31fa29a-9988-4b95-9722-da8fb5dbf991 req-5e421375-0246-4304-94b0-49ed0e0b3758 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.300 252257 DEBUG nova.compute.manager [req-d31fa29a-9988-4b95-9722-da8fb5dbf991 req-5e421375-0246-4304-94b0-49ed0e0b3758 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Processing event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:52:27 np0005539563 podman[391557]: 2025-11-29 08:52:27.545696853 +0000 UTC m=+0.049514163 container create 7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:52:27 np0005539563 systemd[1]: Started libpod-conmon-7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb.scope.
Nov 29 03:52:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:52:27 np0005539563 podman[391557]: 2025-11-29 08:52:27.517216371 +0000 UTC m=+0.021033711 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:52:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43903fd3d6e8dcb5d14a40fca14ccf103adce67adbd8718a870d3b7b8b0d535a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:27 np0005539563 podman[391557]: 2025-11-29 08:52:27.634988034 +0000 UTC m=+0.138805364 container init 7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:52:27 np0005539563 podman[391557]: 2025-11-29 08:52:27.642134599 +0000 UTC m=+0.145951919 container start 7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:52:27 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [NOTICE]   (391591) : New worker (391596) forked
Nov 29 03:52:27 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [NOTICE]   (391591) : Loading success.
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:52:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:27.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.807 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406347.8067226, 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.808 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] VM Started (Lifecycle Event)#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.809 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.812 252257 DEBUG nova.virt.libvirt.driver [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.816 252257 INFO nova.virt.libvirt.driver [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Instance spawned successfully.#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.816 252257 INFO nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Took 8.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.817 252257 DEBUG nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.861 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.865 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.904 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.904 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406347.8069944, 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.904 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.906 252257 INFO nova.compute.manager [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Took 9.24 seconds to build instance.#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.931 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.934 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406347.811857, 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:52:27 np0005539563 nova_compute[252253]: 2025-11-29 08:52:27.934 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.073 252257 DEBUG oslo_concurrency.lockutils [None req-8e7ccff5-65ab-441c-a1b8-725493a3642e 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.075 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.078 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:52:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:52:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/389682077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:52:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:52:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/389682077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:52:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.988 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.990 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.990 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:52:28 np0005539563 nova_compute[252253]: 2025-11-29 08:52:28.991 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 354 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 29 03:52:29 np0005539563 nova_compute[252253]: 2025-11-29 08:52:29.489 252257 DEBUG nova.compute.manager [req-c7069e81-baf2-4a3b-9ca2-faa002e01637 req-16eccf5a-b354-4084-8083-a6c66b670460 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:29 np0005539563 nova_compute[252253]: 2025-11-29 08:52:29.490 252257 DEBUG oslo_concurrency.lockutils [req-c7069e81-baf2-4a3b-9ca2-faa002e01637 req-16eccf5a-b354-4084-8083-a6c66b670460 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:29 np0005539563 nova_compute[252253]: 2025-11-29 08:52:29.490 252257 DEBUG oslo_concurrency.lockutils [req-c7069e81-baf2-4a3b-9ca2-faa002e01637 req-16eccf5a-b354-4084-8083-a6c66b670460 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:29 np0005539563 nova_compute[252253]: 2025-11-29 08:52:29.490 252257 DEBUG oslo_concurrency.lockutils [req-c7069e81-baf2-4a3b-9ca2-faa002e01637 req-16eccf5a-b354-4084-8083-a6c66b670460 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:29 np0005539563 nova_compute[252253]: 2025-11-29 08:52:29.491 252257 DEBUG nova.compute.manager [req-c7069e81-baf2-4a3b-9ca2-faa002e01637 req-16eccf5a-b354-4084-8083-a6c66b670460 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] No waiting events found dispatching network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:29 np0005539563 nova_compute[252253]: 2025-11-29 08:52:29.491 252257 WARNING nova.compute.manager [req-c7069e81-baf2-4a3b-9ca2-faa002e01637 req-16eccf5a-b354-4084-8083-a6c66b670460 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received unexpected event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:52:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:30.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:30 np0005539563 nova_compute[252253]: 2025-11-29 08:52:30.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:30 np0005539563 nova_compute[252253]: 2025-11-29 08:52:30.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 03:52:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:32.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.760 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.801 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.802 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.802 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.803 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.803 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.803 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:33.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.834 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.835 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.835 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.835 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:52:33 np0005539563 nova_compute[252253]: 2025-11-29 08:52:33.836 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531876958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.296 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.396 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.397 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.596 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.597 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3983MB free_disk=20.897113800048828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.598 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.598 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:52:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:34.737 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:52:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:34.739 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.739 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:52:34.740 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.819 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.820 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.820 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:52:34 np0005539563 nova_compute[252253]: 2025-11-29 08:52:34.900 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:52:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Nov 29 03:52:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1941697903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.371 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.377 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.399 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.422 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.423 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.649 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.661 252257 DEBUG nova.compute.manager [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-changed-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.661 252257 DEBUG nova.compute.manager [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Refreshing instance network info cache due to event network-changed-685efdfe-7b92-41ab-b933-36d49a2b7522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.662 252257 DEBUG oslo_concurrency.lockutils [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.662 252257 DEBUG oslo_concurrency.lockutils [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.662 252257 DEBUG nova.network.neutron [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Refreshing network info cache for port 685efdfe-7b92-41ab-b933-36d49a2b7522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:52:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:35.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:35 np0005539563 nova_compute[252253]: 2025-11-29 08:52:35.855 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:36.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 419 KiB/s wr, 84 op/s
Nov 29 03:52:37 np0005539563 nova_compute[252253]: 2025-11-29 08:52:37.843 252257 DEBUG nova.network.neutron [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updated VIF entry in instance network info cache for port 685efdfe-7b92-41ab-b933-36d49a2b7522. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:52:37 np0005539563 nova_compute[252253]: 2025-11-29 08:52:37.845 252257 DEBUG nova.network.neutron [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:52:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:37 np0005539563 nova_compute[252253]: 2025-11-29 08:52:37.872 252257 DEBUG oslo_concurrency.lockutils [req-87c29c9c-0166-45e5-befb-8f7349d29a7f req-9ff8cbd0-8f01-4538-b2df-4707cce8077d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:52:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:38.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 419 KiB/s wr, 84 op/s
Nov 29 03:52:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:39.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:40.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:40 np0005539563 nova_compute[252253]: 2025-11-29 08:52:40.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:40 np0005539563 nova_compute[252253]: 2025-11-29 08:52:40.907 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 74 op/s
Nov 29 03:52:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:41Z|00106|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.3
Nov 29 03:52:41 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:41Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f3:b7:0c 10.100.0.3
Nov 29 03:52:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:41.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:42 np0005539563 nova_compute[252253]: 2025-11-29 08:52:42.296 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:52:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:42.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 362 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 167 KiB/s wr, 31 op/s
Nov 29 03:52:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 413 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 57 op/s
Nov 29 03:52:45 np0005539563 nova_compute[252253]: 2025-11-29 08:52:45.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:45.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:45 np0005539563 nova_compute[252253]: 2025-11-29 08:52:45.965 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:46Z|00108|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.3
Nov 29 03:52:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:46Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f3:b7:0c 10.100.0.3
Nov 29 03:52:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:46Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:b7:0c 10.100.0.3
Nov 29 03:52:46 np0005539563 ovn_controller[148841]: 2025-11-29T08:52:46Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:b7:0c 10.100.0.3
Nov 29 03:52:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 413 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 56 op/s
Nov 29 03:52:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:47.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 419 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 79 op/s
Nov 29 03:52:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:49.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:50.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:50 np0005539563 podman[391737]: 2025-11-29 08:52:50.531276597 +0000 UTC m=+0.069570118 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:50 np0005539563 podman[391736]: 2025-11-29 08:52:50.531318448 +0000 UTC m=+0.074108340 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:52:50 np0005539563 podman[391738]: 2025-11-29 08:52:50.596457965 +0000 UTC m=+0.135674250 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:52:50 np0005539563 nova_compute[252253]: 2025-11-29 08:52:50.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:50 np0005539563 nova_compute[252253]: 2025-11-29 08:52:50.967 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 83 op/s
Nov 29 03:52:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:51.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:52.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 03:52:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:52:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 95 op/s
Nov 29 03:52:53 np0005539563 nova_compute[252253]: 2025-11-29 08:52:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:53.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:54 np0005539563 nova_compute[252253]: 2025-11-29 08:52:54.712 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:54 np0005539563 nova_compute[252253]: 2025-11-29 08:52:54.712 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:52:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Nov 29 03:52:55 np0005539563 nova_compute[252253]: 2025-11-29 08:52:55.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:55.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:56 np0005539563 nova_compute[252253]: 2025-11-29 08:52:56.008 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.383828) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376384321, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 809, "num_deletes": 252, "total_data_size": 1112865, "memory_usage": 1133320, "flush_reason": "Manual Compaction"}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376400039, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1099793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71356, "largest_seqno": 72164, "table_properties": {"data_size": 1095705, "index_size": 1803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9360, "raw_average_key_size": 19, "raw_value_size": 1087379, "raw_average_value_size": 2298, "num_data_blocks": 79, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406318, "oldest_key_time": 1764406318, "file_creation_time": 1764406376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 16244 microseconds, and 9939 cpu microseconds.
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.400397) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1099793 bytes OK
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.400509) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.402329) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.402353) EVENT_LOG_v1 {"time_micros": 1764406376402347, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.402371) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1108894, prev total WAL file size 1108894, number of live WAL files 2.
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.403800) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1074KB)], [161(11MB)]
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376404066, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12836965, "oldest_snapshot_seqno": -1}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 10138 keys, 10787520 bytes, temperature: kUnknown
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376492571, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10787520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10725073, "index_size": 35966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 269742, "raw_average_key_size": 26, "raw_value_size": 10550436, "raw_average_value_size": 1040, "num_data_blocks": 1349, "num_entries": 10138, "num_filter_entries": 10138, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.493473) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10787520 bytes
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.494989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.7 rd, 121.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(21.5) write-amplify(9.8) OK, records in: 10657, records dropped: 519 output_compression: NoCompression
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.495021) EVENT_LOG_v1 {"time_micros": 1764406376495005, "job": 100, "event": "compaction_finished", "compaction_time_micros": 88734, "compaction_time_cpu_micros": 52244, "output_level": 6, "num_output_files": 1, "total_output_size": 10787520, "num_input_records": 10657, "num_output_records": 10138, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376495641, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406376499173, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.403502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.499281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.499290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.499292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.499295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:56 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:52:56.499297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:52:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 201 KiB/s wr, 172 op/s
Nov 29 03:52:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:52:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:52:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 201 KiB/s wr, 240 op/s
Nov 29 03:52:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:52:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:00.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:00 np0005539563 nova_compute[252253]: 2025-11-29 08:53:00.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:01 np0005539563 nova_compute[252253]: 2025-11-29 08:53:01.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 422 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 66 KiB/s wr, 249 op/s
Nov 29 03:53:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:02.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:53:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 423 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 191 KiB/s wr, 253 op/s
Nov 29 03:53:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev db14b2b9-8490-4f16-907f-1cf1c243e2f1 does not exist
Nov 29 03:53:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 81a18cdf-8b09-4b7d-9424-27580ca5d860 does not exist
Nov 29 03:53:04 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ca6696cc-fb3f-4659-8736-fedc37053b41 does not exist
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:53:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:04.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.641604315 +0000 UTC m=+0.051575989 container create c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:53:04 np0005539563 systemd[1]: Started libpod-conmon-c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e.scope.
Nov 29 03:53:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.61598197 +0000 UTC m=+0.025953624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.726999711 +0000 UTC m=+0.136971365 container init c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.735549963 +0000 UTC m=+0.145521597 container start c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.738878743 +0000 UTC m=+0.148850377 container attach c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:53:04 np0005539563 awesome_bohr[392147]: 167 167
Nov 29 03:53:04 np0005539563 systemd[1]: libpod-c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e.scope: Deactivated successfully.
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.743352573 +0000 UTC m=+0.153324207 container died c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-97057123b6600dd0683d47eb8685a86a73096eafe782668d88c7d90e68d507be-merged.mount: Deactivated successfully.
Nov 29 03:53:04 np0005539563 podman[392131]: 2025-11-29 08:53:04.783085042 +0000 UTC m=+0.193056676 container remove c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:04 np0005539563 systemd[1]: libpod-conmon-c0389d8178f0faa72076d1ced97551f94e606c357bde02fb76cbc2bb6496802e.scope: Deactivated successfully.
Nov 29 03:53:04 np0005539563 podman[392169]: 2025-11-29 08:53:04.955443085 +0000 UTC m=+0.047525960 container create 7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:04.962 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:04.964 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:04.965 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:04 np0005539563 systemd[1]: Started libpod-conmon-7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d.scope.
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:04 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:53:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab07a1291425f3472350748e0aba4d87551cff899ea33918c9017152dfd1561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab07a1291425f3472350748e0aba4d87551cff899ea33918c9017152dfd1561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab07a1291425f3472350748e0aba4d87551cff899ea33918c9017152dfd1561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab07a1291425f3472350748e0aba4d87551cff899ea33918c9017152dfd1561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:05 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab07a1291425f3472350748e0aba4d87551cff899ea33918c9017152dfd1561/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:05 np0005539563 podman[392169]: 2025-11-29 08:53:04.936114281 +0000 UTC m=+0.028197156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:05 np0005539563 podman[392169]: 2025-11-29 08:53:05.041412487 +0000 UTC m=+0.133495372 container init 7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:53:05 np0005539563 podman[392169]: 2025-11-29 08:53:05.049728482 +0000 UTC m=+0.141811337 container start 7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:05 np0005539563 podman[392169]: 2025-11-29 08:53:05.053055962 +0000 UTC m=+0.145138817 container attach 7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:53:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 457 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 283 op/s
Nov 29 03:53:05 np0005539563 nova_compute[252253]: 2025-11-29 08:53:05.677 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:05.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:05 np0005539563 keen_tharp[392185]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:53:05 np0005539563 keen_tharp[392185]: --> relative data size: 1.0
Nov 29 03:53:05 np0005539563 keen_tharp[392185]: --> All data devices are unavailable
Nov 29 03:53:05 np0005539563 systemd[1]: libpod-7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d.scope: Deactivated successfully.
Nov 29 03:53:05 np0005539563 podman[392169]: 2025-11-29 08:53:05.971411605 +0000 UTC m=+1.063494450 container died 7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:53:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fab07a1291425f3472350748e0aba4d87551cff899ea33918c9017152dfd1561-merged.mount: Deactivated successfully.
Nov 29 03:53:06 np0005539563 podman[392169]: 2025-11-29 08:53:06.038212856 +0000 UTC m=+1.130295721 container remove 7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:53:06 np0005539563 systemd[1]: libpod-conmon-7a3ff666789a0f1695c2e5c6dbc64476ab407906901e8c7ce85665a050e5dd1d.scope: Deactivated successfully.
Nov 29 03:53:06 np0005539563 nova_compute[252253]: 2025-11-29 08:53:06.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.70991918 +0000 UTC m=+0.038852664 container create 547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_clarke, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:53:06 np0005539563 systemd[1]: Started libpod-conmon-547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed.scope.
Nov 29 03:53:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.783145246 +0000 UTC m=+0.112078740 container init 547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.693811724 +0000 UTC m=+0.022745248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.79286884 +0000 UTC m=+0.121802334 container start 547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_clarke, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.796236181 +0000 UTC m=+0.125169705 container attach 547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:53:06 np0005539563 quizzical_clarke[392374]: 167 167
Nov 29 03:53:06 np0005539563 systemd[1]: libpod-547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed.scope: Deactivated successfully.
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.799005856 +0000 UTC m=+0.127939380 container died 547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_clarke, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-293d9e83e087918e67d34ff5964747d09271da8c5b652e2fe7490a4c99a9f8d4-merged.mount: Deactivated successfully.
Nov 29 03:53:06 np0005539563 podman[392357]: 2025-11-29 08:53:06.844968933 +0000 UTC m=+0.173902437 container remove 547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_clarke, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:53:06 np0005539563 systemd[1]: libpod-conmon-547d330290f9b9a34f4971131bc4b5880eebfb9f3d8a78bd3e387d7bd3cd80ed.scope: Deactivated successfully.
Nov 29 03:53:07 np0005539563 podman[392396]: 2025-11-29 08:53:07.047993998 +0000 UTC m=+0.060304726 container create 506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:53:07 np0005539563 systemd[1]: Started libpod-conmon-506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e.scope.
Nov 29 03:53:07 np0005539563 podman[392396]: 2025-11-29 08:53:07.031082619 +0000 UTC m=+0.043393357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:53:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f5af4a891ef4a8f3e09141dd754d2ee25f77181cb760c4ba661f3554ca2de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f5af4a891ef4a8f3e09141dd754d2ee25f77181cb760c4ba661f3554ca2de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f5af4a891ef4a8f3e09141dd754d2ee25f77181cb760c4ba661f3554ca2de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:07 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31f5af4a891ef4a8f3e09141dd754d2ee25f77181cb760c4ba661f3554ca2de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:07 np0005539563 podman[392396]: 2025-11-29 08:53:07.149217562 +0000 UTC m=+0.161528360 container init 506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:53:07 np0005539563 podman[392396]: 2025-11-29 08:53:07.15465268 +0000 UTC m=+0.166963398 container start 506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:53:07 np0005539563 podman[392396]: 2025-11-29 08:53:07.158385612 +0000 UTC m=+0.170696350 container attach 506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_neumann, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:53:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 457 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 519 KiB/s rd, 2.3 MiB/s wr, 150 op/s
Nov 29 03:53:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:07 np0005539563 elated_neumann[392414]: {
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:    "0": [
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:        {
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "devices": [
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "/dev/loop3"
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            ],
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "lv_name": "ceph_lv0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "lv_size": "7511998464",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "name": "ceph_lv0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "tags": {
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.cluster_name": "ceph",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.crush_device_class": "",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.encrypted": "0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.osd_id": "0",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.type": "block",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:                "ceph.vdo": "0"
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            },
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "type": "block",
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:            "vg_name": "ceph_vg0"
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:        }
Nov 29 03:53:07 np0005539563 elated_neumann[392414]:    ]
Nov 29 03:53:07 np0005539563 elated_neumann[392414]: }
Nov 29 03:53:08 np0005539563 systemd[1]: libpod-506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e.scope: Deactivated successfully.
Nov 29 03:53:08 np0005539563 podman[392396]: 2025-11-29 08:53:08.01094806 +0000 UTC m=+1.023258798 container died 506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:53:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d31f5af4a891ef4a8f3e09141dd754d2ee25f77181cb760c4ba661f3554ca2de-merged.mount: Deactivated successfully.
Nov 29 03:53:08 np0005539563 podman[392396]: 2025-11-29 08:53:08.071768189 +0000 UTC m=+1.084078907 container remove 506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:53:08 np0005539563 systemd[1]: libpod-conmon-506c9627645a50447ed4bcf156d3c0d8e9e390bba315737e47c96b287a2cf48e.scope: Deactivated successfully.
Nov 29 03:53:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:08 np0005539563 nova_compute[252253]: 2025-11-29 08:53:08.451 252257 DEBUG nova.compute.manager [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:53:08 np0005539563 nova_compute[252253]: 2025-11-29 08:53:08.500 252257 INFO nova.compute.manager [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] instance snapshotting
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.657079971 +0000 UTC m=+0.037891659 container create 539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:08 np0005539563 systemd[1]: Started libpod-conmon-539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09.scope.
Nov 29 03:53:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.726738049 +0000 UTC m=+0.107549747 container init 539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.733910334 +0000 UTC m=+0.114722022 container start 539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.640036778 +0000 UTC m=+0.020848496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.736574286 +0000 UTC m=+0.117385994 container attach 539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:53:08 np0005539563 affectionate_mccarthy[392594]: 167 167
Nov 29 03:53:08 np0005539563 systemd[1]: libpod-539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09.scope: Deactivated successfully.
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.739868135 +0000 UTC m=+0.120679823 container died 539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:53:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e236dec27e17105ab885fe37d8e60d17bd17cdf2bd6c4e9cef9a0078f9f538fb-merged.mount: Deactivated successfully.
Nov 29 03:53:08 np0005539563 podman[392578]: 2025-11-29 08:53:08.776831567 +0000 UTC m=+0.157643255 container remove 539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:53:08 np0005539563 systemd[1]: libpod-conmon-539de01bc208b6d2211086c9e25afdc03a2c0596708fa5b743e6f9d2154c7b09.scope: Deactivated successfully.
Nov 29 03:53:08 np0005539563 nova_compute[252253]: 2025-11-29 08:53:08.839 252257 INFO nova.virt.libvirt.driver [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Beginning live snapshot process#033[00m
Nov 29 03:53:08 np0005539563 podman[392618]: 2025-11-29 08:53:08.957488277 +0000 UTC m=+0.044180620 container create 785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:53:09 np0005539563 systemd[1]: Started libpod-conmon-785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea.scope.
Nov 29 03:53:09 np0005539563 nova_compute[252253]: 2025-11-29 08:53:09.023 252257 DEBUG nova.storage.rbd_utils [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] creating snapshot(540807150a9b4302bf7539d2e4b1d764) on rbd image(66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:53:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:53:09 np0005539563 podman[392618]: 2025-11-29 08:53:08.937960797 +0000 UTC m=+0.024653160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc2a469090afcc607735c58bf00ba36a5da540da8067a2773b306707740a8fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc2a469090afcc607735c58bf00ba36a5da540da8067a2773b306707740a8fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc2a469090afcc607735c58bf00ba36a5da540da8067a2773b306707740a8fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc2a469090afcc607735c58bf00ba36a5da540da8067a2773b306707740a8fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:09 np0005539563 podman[392618]: 2025-11-29 08:53:09.048935396 +0000 UTC m=+0.135627759 container init 785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:53:09 np0005539563 podman[392618]: 2025-11-29 08:53:09.058468965 +0000 UTC m=+0.145161308 container start 785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:53:09 np0005539563 podman[392618]: 2025-11-29 08:53:09.062829242 +0000 UTC m=+0.149521585 container attach 785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 459 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 636 KiB/s rd, 2.3 MiB/s wr, 168 op/s
Nov 29 03:53:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Nov 29 03:53:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 03:53:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Nov 29 03:53:09 np0005539563 nova_compute[252253]: 2025-11-29 08:53:09.443 252257 DEBUG nova.storage.rbd_utils [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] cloning vms/66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk@540807150a9b4302bf7539d2e4b1d764 to images/89a98419-7cd2-49c3-aac2-95418a57f2f6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:53:09 np0005539563 nova_compute[252253]: 2025-11-29 08:53:09.614 252257 DEBUG nova.storage.rbd_utils [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] flattening images/89a98419-7cd2-49c3-aac2-95418a57f2f6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:53:09 np0005539563 nova_compute[252253]: 2025-11-29 08:53:09.691 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:09 np0005539563 elated_volhard[392667]: {
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:        "osd_id": 0,
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:        "type": "bluestore"
Nov 29 03:53:09 np0005539563 elated_volhard[392667]:    }
Nov 29 03:53:09 np0005539563 elated_volhard[392667]: }
Nov 29 03:53:10 np0005539563 systemd[1]: libpod-785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea.scope: Deactivated successfully.
Nov 29 03:53:10 np0005539563 podman[392618]: 2025-11-29 08:53:10.032206518 +0000 UTC m=+1.118898861 container died 785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:53:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6dc2a469090afcc607735c58bf00ba36a5da540da8067a2773b306707740a8fb-merged.mount: Deactivated successfully.
Nov 29 03:53:10 np0005539563 podman[392618]: 2025-11-29 08:53:10.087448377 +0000 UTC m=+1.174140720 container remove 785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:10 np0005539563 systemd[1]: libpod-conmon-785f47edb0c09c2086ffaa78f52227820215b5c05b019fee6e6964a4ca845eea.scope: Deactivated successfully.
Nov 29 03:53:10 np0005539563 nova_compute[252253]: 2025-11-29 08:53:10.114 252257 DEBUG nova.storage.rbd_utils [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] removing snapshot(540807150a9b4302bf7539d2e4b1d764) on rbd image(66ab0bd5-ae63-4ae2-af64-ae8b85745f24_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:53:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:53:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:53:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9a9f1b8d-364e-4371-85c2-4957cfd315c2 does not exist
Nov 29 03:53:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d32c1e83-2691-46ea-a18d-4b35c53e15ce does not exist
Nov 29 03:53:10 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1189d35a-09b1-430f-9f81-704bf205d886 does not exist
Nov 29 03:53:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:53:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:10 np0005539563 nova_compute[252253]: 2025-11-29 08:53:10.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:11 np0005539563 nova_compute[252253]: 2025-11-29 08:53:11.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Nov 29 03:53:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 03:53:11 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Nov 29 03:53:11 np0005539563 nova_compute[252253]: 2025-11-29 08:53:11.193 252257 DEBUG nova.storage.rbd_utils [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] creating snapshot(snap) on rbd image(89a98419-7cd2-49c3-aac2-95418a57f2f6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:53:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 486 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 128 op/s
Nov 29 03:53:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Nov 29 03:53:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 03:53:12 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Nov 29 03:53:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:53:13
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups']
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 507 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 218 op/s
Nov 29 03:53:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:14.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:14 np0005539563 nova_compute[252253]: 2025-11-29 08:53:14.418 252257 INFO nova.virt.libvirt.driver [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Snapshot image upload complete#033[00m
Nov 29 03:53:14 np0005539563 nova_compute[252253]: 2025-11-29 08:53:14.420 252257 INFO nova.compute.manager [None req-79492f12-2a02-4021-a9bc-2f0a75a006dd 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Took 5.92 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:53:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Nov 29 03:53:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 03:53:15 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Nov 29 03:53:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 256 op/s
Nov 29 03:53:15 np0005539563 nova_compute[252253]: 2025-11-29 08:53:15.686 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.123 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:53:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.977 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.978 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.978 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.979 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.979 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.981 252257 INFO nova.compute.manager [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Terminating instance#033[00m
Nov 29 03:53:16 np0005539563 nova_compute[252253]: 2025-11-29 08:53:16.983 252257 DEBUG nova.compute.manager [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:53:17 np0005539563 kernel: tap685efdfe-7b (unregistering): left promiscuous mode
Nov 29 03:53:17 np0005539563 NetworkManager[48981]: <info>  [1764406397.0385] device (tap685efdfe-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:53:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:53:17Z|00874|binding|INFO|Releasing lport 685efdfe-7b92-41ab-b933-36d49a2b7522 from this chassis (sb_readonly=0)
Nov 29 03:53:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:53:17Z|00875|binding|INFO|Setting lport 685efdfe-7b92-41ab-b933-36d49a2b7522 down in Southbound
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 ovn_controller[148841]: 2025-11-29T08:53:17Z|00876|binding|INFO|Removing iface tap685efdfe-7b ovn-installed in OVS
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.056 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.063 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:b7:0c 10.100.0.3'], port_security=['fa:16:3e:f3:b7:0c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66ab0bd5-ae63-4ae2-af64-ae8b85745f24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0596f9d1e5a5444ca2640f6e8244d53f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49368169-f673-45da-b454-bf6c8bb93b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46809892-ffee-4015-b7f0-51515653f0e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=685efdfe-7b92-41ab-b933-36d49a2b7522) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.065 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 685efdfe-7b92-41ab-b933-36d49a2b7522 in datapath 9094c67b-5d6f-4130-9ec6-7da5c871a564 unbound from our chassis#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.067 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9094c67b-5d6f-4130-9ec6-7da5c871a564, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.069 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[da82d0f3-f0eb-43ff-8a5e-d0f7575df781]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.071 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 namespace which is not needed anymore#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.089 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Nov 29 03:53:17 np0005539563 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000cb.scope: Consumed 16.779s CPU time.
Nov 29 03:53:17 np0005539563 systemd-machined[213024]: Machine qemu-98-instance-000000cb terminated.
Nov 29 03:53:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Nov 29 03:53:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 03:53:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.224 252257 INFO nova.virt.libvirt.driver [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Instance destroyed successfully.#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.225 252257 DEBUG nova.objects.instance [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lazy-loading 'resources' on Instance uuid 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:53:17 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [NOTICE]   (391591) : haproxy version is 2.8.14-c23fe91
Nov 29 03:53:17 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [NOTICE]   (391591) : path to executable is /usr/sbin/haproxy
Nov 29 03:53:17 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [WARNING]  (391591) : Exiting Master process...
Nov 29 03:53:17 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [ALERT]    (391591) : Current worker (391596) exited with code 143 (Terminated)
Nov 29 03:53:17 np0005539563 neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564[391572]: [WARNING]  (391591) : All workers exited. Exiting... (0)
Nov 29 03:53:17 np0005539563 systemd[1]: libpod-7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb.scope: Deactivated successfully.
Nov 29 03:53:17 np0005539563 podman[392938]: 2025-11-29 08:53:17.238841433 +0000 UTC m=+0.058018012 container died 7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.252 252257 DEBUG nova.virt.libvirt.vif [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:52:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-351772626',display_name='tempest-TestSnapshotPattern-server-351772626',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-351772626',id=203,image_ref='a7583cd4-d395-48e2-9f81-567bf2845ae0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBImEx8+jhsxRFNI/zXiqCIp6lKyzrmzXueICkOx8YGb02aphTL5Mlw1+YiMaTW8XLhYmBtqvqII/hnTIhC95ctb8YpefMaS6Qv1/vv9QrNRmuoy5csFiSCQsYM34gKdoxw==',key_name='tempest-TestSnapshotPattern-299175359',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:52:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0596f9d1e5a5444ca2640f6e8244d53f',ramdisk_id='',reservation_id='r-u72f31zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='defc87c3-85a5-47bb-8d50-3121d5d780c1',image_min_disk='1',image_min_ram='0',image_owner_id='0596f9d1e5a5444ca2640f6e8244d53f',image_owner_project_name='tempest-TestSnapshotPattern-32695225',image_owner_user_name='tempest-TestSnapshotPattern-32695225-project-member',image_user_id='7b36e3f2406043c2a741c24fb14de7df',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-32695225',owner_user_name='tempest-TestSnapshotPattern-32695225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:53:14Z,user_data=None,user_id='7b36e3f2406043c2a741c24fb14de7df',uuid=66ab0bd5-ae63-4ae2-af64-ae8b85745f24,vcpu_model=<?>,vcp
us=1,vm_mode=None,vm_state='active') vif={"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.253 252257 DEBUG nova.network.os_vif_util [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converting VIF {"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.253 252257 DEBUG nova.network.os_vif_util [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.254 252257 DEBUG os_vif [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.256 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.256 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685efdfe-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.258 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.261 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.264 252257 INFO os_vif [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:b7:0c,bridge_name='br-int',has_traffic_filtering=True,id=685efdfe-7b92-41ab-b933-36d49a2b7522,network=Network(9094c67b-5d6f-4130-9ec6-7da5c871a564),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685efdfe-7b')#033[00m
Nov 29 03:53:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb-userdata-shm.mount: Deactivated successfully.
Nov 29 03:53:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-43903fd3d6e8dcb5d14a40fca14ccf103adce67adbd8718a870d3b7b8b0d535a-merged.mount: Deactivated successfully.
Nov 29 03:53:17 np0005539563 podman[392938]: 2025-11-29 08:53:17.279576607 +0000 UTC m=+0.098753196 container cleanup 7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:53:17 np0005539563 systemd[1]: libpod-conmon-7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb.scope: Deactivated successfully.
Nov 29 03:53:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 12 MiB/s wr, 201 op/s
Nov 29 03:53:17 np0005539563 podman[392992]: 2025-11-29 08:53:17.363038666 +0000 UTC m=+0.052214504 container remove 7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.371 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[715424e0-e9d2-411e-bc86-e5aee81f478b]: (4, ('Sat Nov 29 08:53:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 (7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb)\n7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb\nSat Nov 29 08:53:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 (7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb)\n7bf72b9b3572017dc02ea64dfd93f93061b01a53a0c475206f4876331f962ceb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.374 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7e03baad-4aaa-4130-9075-474df1c69e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.375 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9094c67b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.376 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 kernel: tap9094c67b-50: left promiscuous mode
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.384 252257 DEBUG nova.compute.manager [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-changed-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.385 252257 DEBUG nova.compute.manager [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Refreshing instance network info cache due to event network-changed-685efdfe-7b92-41ab-b933-36d49a2b7522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.386 252257 DEBUG oslo_concurrency.lockutils [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.386 252257 DEBUG oslo_concurrency.lockutils [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.386 252257 DEBUG nova.network.neutron [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Refreshing network info cache for port 685efdfe-7b92-41ab-b933-36d49a2b7522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.398 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.402 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dfdce882-55f8-4a46-901b-936e44b01d22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.418 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1ae507-0b19-4efd-9936-96c846bf292a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.419 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fce97d08-1fe6-4d92-973d-744911f62166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.433 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfac7cf-ec83-45ed-8dc8-b68c6f67a874]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 931467, 'reachable_time': 20700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393009, 'error': None, 'target': 'ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.436 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9094c67b-5d6f-4130-9ec6-7da5c871a564 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:53:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:17.436 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[2be260ce-a13e-4eab-bb98-698f7c59c449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:53:17 np0005539563 systemd[1]: run-netns-ovnmeta\x2d9094c67b\x2d5d6f\x2d4130\x2d9ec6\x2d7da5c871a564.mount: Deactivated successfully.
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.665 252257 INFO nova.virt.libvirt.driver [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Deleting instance files /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24_del#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.666 252257 INFO nova.virt.libvirt.driver [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Deletion of /var/lib/nova/instances/66ab0bd5-ae63-4ae2-af64-ae8b85745f24_del complete#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.719 252257 INFO nova.compute.manager [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.720 252257 DEBUG oslo.service.loopingcall [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.720 252257 DEBUG nova.compute.manager [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:53:17 np0005539563 nova_compute[252253]: 2025-11-29 08:53:17.721 252257 DEBUG nova.network.neutron [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:53:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:18.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:18 np0005539563 nova_compute[252253]: 2025-11-29 08:53:18.855 252257 DEBUG nova.network.neutron [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:53:18 np0005539563 nova_compute[252253]: 2025-11-29 08:53:18.879 252257 INFO nova.compute.manager [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Took 1.16 seconds to deallocate network for instance.#033[00m
Nov 29 03:53:18 np0005539563 nova_compute[252253]: 2025-11-29 08:53:18.958 252257 DEBUG nova.network.neutron [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updated VIF entry in instance network info cache for port 685efdfe-7b92-41ab-b933-36d49a2b7522. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:53:18 np0005539563 nova_compute[252253]: 2025-11-29 08:53:18.958 252257 DEBUG nova.network.neutron [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [{"id": "685efdfe-7b92-41ab-b933-36d49a2b7522", "address": "fa:16:3e:f3:b7:0c", "network": {"id": "9094c67b-5d6f-4130-9ec6-7da5c871a564", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1108775138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0596f9d1e5a5444ca2640f6e8244d53f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685efdfe-7b", "ovs_interfaceid": "685efdfe-7b92-41ab-b933-36d49a2b7522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.008 252257 DEBUG nova.compute.manager [req-4f4897b4-862c-4291-81d5-a077c1a4fdb3 req-5416ff20-fad3-4cff-9f09-9484e7f809a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-vif-deleted-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.008 252257 INFO nova.compute.manager [req-4f4897b4-862c-4291-81d5-a077c1a4fdb3 req-5416ff20-fad3-4cff-9f09-9484e7f809a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Neutron deleted interface 685efdfe-7b92-41ab-b933-36d49a2b7522; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.009 252257 DEBUG nova.network.neutron [req-4f4897b4-862c-4291-81d5-a077c1a4fdb3 req-5416ff20-fad3-4cff-9f09-9484e7f809a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:53:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 435 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 9.7 MiB/s wr, 211 op/s
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.512 252257 DEBUG oslo_concurrency.lockutils [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-66ab0bd5-ae63-4ae2-af64-ae8b85745f24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.513 252257 DEBUG nova.compute.manager [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-vif-unplugged-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.513 252257 DEBUG oslo_concurrency.lockutils [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.513 252257 DEBUG oslo_concurrency.lockutils [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.514 252257 DEBUG oslo_concurrency.lockutils [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.514 252257 DEBUG nova.compute.manager [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] No waiting events found dispatching network-vif-unplugged-685efdfe-7b92-41ab-b933-36d49a2b7522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.514 252257 DEBUG nova.compute.manager [req-d39e5dd0-b1ab-4956-be74-1c7fd0a31c71 req-4b9e2446-f5e9-4cc0-b79e-c0674300de18 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-vif-unplugged-685efdfe-7b92-41ab-b933-36d49a2b7522 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.635 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.635 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.672 252257 DEBUG oslo_concurrency.processutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.717 252257 DEBUG nova.compute.manager [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.718 252257 DEBUG oslo_concurrency.lockutils [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.718 252257 DEBUG oslo_concurrency.lockutils [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.718 252257 DEBUG oslo_concurrency.lockutils [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.719 252257 DEBUG nova.compute.manager [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] No waiting events found dispatching network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.719 252257 WARNING nova.compute.manager [req-607a1446-837d-48a7-980b-fcbecd10ca53 req-11b2bd29-7a5d-4980-af2c-d0fb04003ae5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Received unexpected event network-vif-plugged-685efdfe-7b92-41ab-b933-36d49a2b7522 for instance with vm_state deleted and task_state None.
Nov 29 03:53:19 np0005539563 nova_compute[252253]: 2025-11-29 08:53:19.721 252257 DEBUG nova.compute.manager [req-4f4897b4-862c-4291-81d5-a077c1a4fdb3 req-5416ff20-fad3-4cff-9f09-9484e7f809a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Detach interface failed, port_id=685efdfe-7b92-41ab-b933-36d49a2b7522, reason: Instance 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 03:53:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:19.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/396321426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:20 np0005539563 nova_compute[252253]: 2025-11-29 08:53:20.153 252257 DEBUG oslo_concurrency.processutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:53:20 np0005539563 nova_compute[252253]: 2025-11-29 08:53:20.160 252257 DEBUG nova.compute.provider_tree [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:53:20 np0005539563 nova_compute[252253]: 2025-11-29 08:53:20.202 252257 DEBUG nova.scheduler.client.report [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:53:20 np0005539563 nova_compute[252253]: 2025-11-29 08:53:20.232 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:20 np0005539563 nova_compute[252253]: 2025-11-29 08:53:20.263 252257 INFO nova.scheduler.client.report [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Deleted allocations for instance 66ab0bd5-ae63-4ae2-af64-ae8b85745f24
Nov 29 03:53:20 np0005539563 nova_compute[252253]: 2025-11-29 08:53:20.381 252257 DEBUG oslo_concurrency.lockutils [None req-6ef5ef94-4074-4b06-b5cf-976b73bbd000 7b36e3f2406043c2a741c24fb14de7df 0596f9d1e5a5444ca2640f6e8244d53f - - default default] Lock "66ab0bd5-ae63-4ae2-af64-ae8b85745f24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:20.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:21 np0005539563 nova_compute[252253]: 2025-11-29 08:53:21.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 174 op/s
Nov 29 03:53:21 np0005539563 podman[393035]: 2025-11-29 08:53:21.509767794 +0000 UTC m=+0.065555886 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:53:21 np0005539563 podman[393036]: 2025-11-29 08:53:21.520561386 +0000 UTC m=+0.073393749 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:53:21 np0005539563 podman[393037]: 2025-11-29 08:53:21.584523748 +0000 UTC m=+0.129631191 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:53:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:21.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Nov 29 03:53:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 03:53:22 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Nov 29 03:53:22 np0005539563 nova_compute[252253]: 2025-11-29 08:53:22.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:22.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Nov 29 03:53:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 03:53:23 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Nov 29 03:53:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 109 KiB/s rd, 7.1 KiB/s wr, 158 op/s
Nov 29 03:53:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002173955716080402 of space, bias 1.0, pg target 0.6521867148241206 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004069283047180346 of space, bias 1.0, pg target 1.2207849141541038 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:53:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:24.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 6.9 KiB/s wr, 153 op/s
Nov 29 03:53:25 np0005539563 nova_compute[252253]: 2025-11-29 08:53:25.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:25 np0005539563 nova_compute[252253]: 2025-11-29 08:53:25.576 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:25 np0005539563 nova_compute[252253]: 2025-11-29 08:53:25.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:26 np0005539563 nova_compute[252253]: 2025-11-29 08:53:26.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:26.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:27 np0005539563 nova_compute[252253]: 2025-11-29 08:53:27.262 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 227 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 5.9 KiB/s wr, 115 op/s
Nov 29 03:53:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:27.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:28.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:28 np0005539563 nova_compute[252253]: 2025-11-29 08:53:28.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 186 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 46 op/s
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.692 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.692 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:29 np0005539563 nova_compute[252253]: 2025-11-29 08:53:29.693 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:53:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:29.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:31 np0005539563 nova_compute[252253]: 2025-11-29 08:53:31.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Nov 29 03:53:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Nov 29 03:53:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 03:53:32 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Nov 29 03:53:32 np0005539563 nova_compute[252253]: 2025-11-29 08:53:32.223 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406397.221995, 66ab0bd5-ae63-4ae2-af64-ae8b85745f24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:53:32 np0005539563 nova_compute[252253]: 2025-11-29 08:53:32.223 252257 INFO nova.compute.manager [-] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:53:32 np0005539563 nova_compute[252253]: 2025-11-29 08:53:32.249 252257 DEBUG nova.compute.manager [None req-1d4cea29-bbe4-4fdc-bfb9-f0fd9e5600b3 - - - - - -] [instance: 66ab0bd5-ae63-4ae2-af64-ae8b85745f24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:53:32 np0005539563 nova_compute[252253]: 2025-11-29 08:53:32.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.005000134s ======
Nov 29 03:53:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:32.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000134s
Nov 29 03:53:32 np0005539563 nova_compute[252253]: 2025-11-29 08:53:32.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Nov 29 03:53:33 np0005539563 nova_compute[252253]: 2025-11-29 08:53:33.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:33 np0005539563 nova_compute[252253]: 2025-11-29 08:53:33.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:33 np0005539563 nova_compute[252253]: 2025-11-29 08:53:33.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:33 np0005539563 nova_compute[252253]: 2025-11-29 08:53:33.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:33 np0005539563 nova_compute[252253]: 2025-11-29 08:53:33.712 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:53:33 np0005539563 nova_compute[252253]: 2025-11-29 08:53:33.712 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:33.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304781969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.161 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.315 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.316 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.317 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.317 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:53:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:34.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.463 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.463 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.480 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.514 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.515 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.536 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.566 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.598 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:53:34 np0005539563 nova_compute[252253]: 2025-11-29 08:53:34.969 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:34.970 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:53:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:34.972 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:53:34 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:53:34.973 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:53:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876622984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:35 np0005539563 nova_compute[252253]: 2025-11-29 08:53:35.062 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:53:35 np0005539563 nova_compute[252253]: 2025-11-29 08:53:35.068 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:53:35 np0005539563 nova_compute[252253]: 2025-11-29 08:53:35.087 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:53:35 np0005539563 nova_compute[252253]: 2025-11-29 08:53:35.106 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:53:35 np0005539563 nova_compute[252253]: 2025-11-29 08:53:35.107 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:53:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Nov 29 03:53:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:35.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:36 np0005539563 nova_compute[252253]: 2025-11-29 08:53:36.131 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:37 np0005539563 nova_compute[252253]: 2025-11-29 08:53:37.267 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Nov 29 03:53:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:37.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:38.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Nov 29 03:53:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:40.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:41 np0005539563 nova_compute[252253]: 2025-11-29 08:53:41.134 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:42 np0005539563 nova_compute[252253]: 2025-11-29 08:53:42.107 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:42 np0005539563 nova_compute[252253]: 2025-11-29 08:53:42.271 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3476609685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:44.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:46 np0005539563 nova_compute[252253]: 2025-11-29 08:53:46.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:46.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:47 np0005539563 nova_compute[252253]: 2025-11-29 08:53:47.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:47.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:48.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:50.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:51 np0005539563 nova_compute[252253]: 2025-11-29 08:53:51.138 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:51.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:52 np0005539563 nova_compute[252253]: 2025-11-29 08:53:52.275 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:52.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:52 np0005539563 podman[393210]: 2025-11-29 08:53:52.601871306 +0000 UTC m=+0.078306611 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:53:52 np0005539563 podman[393212]: 2025-11-29 08:53:52.612862613 +0000 UTC m=+0.083455809 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:53:52 np0005539563 podman[393211]: 2025-11-29 08:53:52.620450629 +0000 UTC m=+0.093325867 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:53:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Nov 29 03:53:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:53.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 163 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Nov 29 03:53:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:55.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:53:56 np0005539563 nova_compute[252253]: 2025-11-29 08:53:56.141 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:57 np0005539563 nova_compute[252253]: 2025-11-29 08:53:57.277 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 163 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Nov 29 03:53:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:53:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:57.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:53:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:58.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Nov 29 03:53:59 np0005539563 nova_compute[252253]: 2025-11-29 08:53:59.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:53:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:53:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:00.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:01 np0005539563 nova_compute[252253]: 2025-11-29 08:54:01.143 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 03:54:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:01.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:02 np0005539563 nova_compute[252253]: 2025-11-29 08:54:02.280 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:02.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 29 03:54:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:04.964 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:04.964 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:04.964 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 03:54:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:05.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:06 np0005539563 nova_compute[252253]: 2025-11-29 08:54:06.145 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:06.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:07 np0005539563 nova_compute[252253]: 2025-11-29 08:54:07.283 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 336 KiB/s wr, 86 op/s
Nov 29 03:54:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:54:08Z|00877|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 29 03:54:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 336 KiB/s wr, 86 op/s
Nov 29 03:54:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:10 np0005539563 nova_compute[252253]: 2025-11-29 08:54:10.692 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:11 np0005539563 nova_compute[252253]: 2025-11-29 08:54:11.147 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cc5b4404-e098-40a8-8383-8d26f1f5a8ae does not exist
Nov 29 03:54:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e79ce4fd-dda7-4cc5-a477-36891c1cebd9 does not exist
Nov 29 03:54:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e792fe1d-7645-4098-b288-c919b7080b1a does not exist
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:12 np0005539563 nova_compute[252253]: 2025-11-29 08:54:12.285 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.420974428 +0000 UTC m=+0.058257119 container create a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:54:12 np0005539563 systemd[1]: Started libpod-conmon-a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50.scope.
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.38781892 +0000 UTC m=+0.025101661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.51043323 +0000 UTC m=+0.147715931 container init a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.518019605 +0000 UTC m=+0.155302296 container start a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_knuth, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.522333242 +0000 UTC m=+0.159615963 container attach a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_knuth, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:54:12 np0005539563 stupefied_knuth[393618]: 167 167
Nov 29 03:54:12 np0005539563 systemd[1]: libpod-a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50.scope: Deactivated successfully.
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.524356637 +0000 UTC m=+0.161639328 container died a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:12.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4d23c0f35443ffd1711267f61cf40785951afafd66389fb79f7fb7fc46a16fd7-merged.mount: Deactivated successfully.
Nov 29 03:54:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:54:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:12 np0005539563 podman[393602]: 2025-11-29 08:54:12.568089381 +0000 UTC m=+0.205372072 container remove a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:54:12 np0005539563 systemd[1]: libpod-conmon-a355d2b81d4b614f6e7770c84e1adcfc88021d2c587ba1913d6f8c732bbf5c50.scope: Deactivated successfully.
Nov 29 03:54:12 np0005539563 podman[393642]: 2025-11-29 08:54:12.719023298 +0000 UTC m=+0.043427857 container create 0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:54:12 np0005539563 systemd[1]: Started libpod-conmon-0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121.scope.
Nov 29 03:54:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7262bdadae33da9c379e0f737e409f77e87de41ecb157a3b42ad82b401ee03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7262bdadae33da9c379e0f737e409f77e87de41ecb157a3b42ad82b401ee03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7262bdadae33da9c379e0f737e409f77e87de41ecb157a3b42ad82b401ee03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7262bdadae33da9c379e0f737e409f77e87de41ecb157a3b42ad82b401ee03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7262bdadae33da9c379e0f737e409f77e87de41ecb157a3b42ad82b401ee03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:12 np0005539563 podman[393642]: 2025-11-29 08:54:12.70176581 +0000 UTC m=+0.026170389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:12 np0005539563 podman[393642]: 2025-11-29 08:54:12.804566284 +0000 UTC m=+0.128970933 container init 0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:54:12 np0005539563 podman[393642]: 2025-11-29 08:54:12.818333887 +0000 UTC m=+0.142738476 container start 0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:54:12 np0005539563 podman[393642]: 2025-11-29 08:54:12.830440584 +0000 UTC m=+0.154845323 container attach 0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:54:13
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 29 03:54:13 np0005539563 mystifying_maxwell[393658]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:54:13 np0005539563 mystifying_maxwell[393658]: --> relative data size: 1.0
Nov 29 03:54:13 np0005539563 mystifying_maxwell[393658]: --> All data devices are unavailable
Nov 29 03:54:13 np0005539563 systemd[1]: libpod-0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121.scope: Deactivated successfully.
Nov 29 03:54:13 np0005539563 podman[393674]: 2025-11-29 08:54:13.666542013 +0000 UTC m=+0.023217390 container died 0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:54:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ea7262bdadae33da9c379e0f737e409f77e87de41ecb157a3b42ad82b401ee03-merged.mount: Deactivated successfully.
Nov 29 03:54:13 np0005539563 podman[393674]: 2025-11-29 08:54:13.721448369 +0000 UTC m=+0.078123706 container remove 0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:54:13 np0005539563 systemd[1]: libpod-conmon-0338ff6e65d3ce2fecd3b2d377a06f0bdaff48abcda30c7746fd3c50c2f6d121.scope: Deactivated successfully.
Nov 29 03:54:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.407124605 +0000 UTC m=+0.048149525 container create c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:54:14 np0005539563 systemd[1]: Started libpod-conmon-c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75.scope.
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.385377415 +0000 UTC m=+0.026402365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.499762153 +0000 UTC m=+0.140787093 container init c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.506426624 +0000 UTC m=+0.147451544 container start c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_turing, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.509830786 +0000 UTC m=+0.150855706 container attach c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:54:14 np0005539563 optimistic_turing[393848]: 167 167
Nov 29 03:54:14 np0005539563 systemd[1]: libpod-c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75.scope: Deactivated successfully.
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.51592084 +0000 UTC m=+0.156945770 container died c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_turing, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:54:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7271475681a16f93c625ee26ff0411bcbb9db743dbb3a4b669f73006a7cbe31f-merged.mount: Deactivated successfully.
Nov 29 03:54:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:14.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:14 np0005539563 podman[393832]: 2025-11-29 08:54:14.563778896 +0000 UTC m=+0.204803816 container remove c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:54:14 np0005539563 systemd[1]: libpod-conmon-c3d2cce45bf0e6675afb0c8dd71888e9a9c10333f14b7e570fecf8bac1971f75.scope: Deactivated successfully.
Nov 29 03:54:14 np0005539563 podman[393872]: 2025-11-29 08:54:14.73862054 +0000 UTC m=+0.053602642 container create 251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:54:14 np0005539563 systemd[1]: Started libpod-conmon-251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad.scope.
Nov 29 03:54:14 np0005539563 podman[393872]: 2025-11-29 08:54:14.711581908 +0000 UTC m=+0.026564070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:14 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b02cfef0967684ae82d12f50bacbee0e8740af9dbd1a3849b713cf8454b815b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b02cfef0967684ae82d12f50bacbee0e8740af9dbd1a3849b713cf8454b815b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b02cfef0967684ae82d12f50bacbee0e8740af9dbd1a3849b713cf8454b815b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:14 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b02cfef0967684ae82d12f50bacbee0e8740af9dbd1a3849b713cf8454b815b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:14 np0005539563 podman[393872]: 2025-11-29 08:54:14.847135328 +0000 UTC m=+0.162117390 container init 251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:54:14 np0005539563 podman[393872]: 2025-11-29 08:54:14.862062643 +0000 UTC m=+0.177044705 container start 251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 03:54:14 np0005539563 podman[393872]: 2025-11-29 08:54:14.864724754 +0000 UTC m=+0.179706816 container attach 251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:54:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]: {
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:    "0": [
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:        {
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "devices": [
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "/dev/loop3"
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            ],
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "lv_name": "ceph_lv0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "lv_size": "7511998464",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "name": "ceph_lv0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "tags": {
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.cluster_name": "ceph",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.crush_device_class": "",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.encrypted": "0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.osd_id": "0",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.type": "block",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:                "ceph.vdo": "0"
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            },
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "type": "block",
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:            "vg_name": "ceph_vg0"
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:        }
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]:    ]
Nov 29 03:54:15 np0005539563 peaceful_tharp[393888]: }
Nov 29 03:54:15 np0005539563 systemd[1]: libpod-251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad.scope: Deactivated successfully.
Nov 29 03:54:15 np0005539563 podman[393872]: 2025-11-29 08:54:15.700031602 +0000 UTC m=+1.015013654 container died 251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:54:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7b02cfef0967684ae82d12f50bacbee0e8740af9dbd1a3849b713cf8454b815b-merged.mount: Deactivated successfully.
Nov 29 03:54:15 np0005539563 podman[393872]: 2025-11-29 08:54:15.751494055 +0000 UTC m=+1.066476117 container remove 251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tharp, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:54:15 np0005539563 systemd[1]: libpod-conmon-251c824c5d2df6ac7d28af4579751f96fac5050b69d216930606bbebba315cad.scope: Deactivated successfully.
Nov 29 03:54:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:16 np0005539563 nova_compute[252253]: 2025-11-29 08:54:16.149 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.436817451 +0000 UTC m=+0.039718707 container create 13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:54:16 np0005539563 systemd[1]: Started libpod-conmon-13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe.scope.
Nov 29 03:54:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.420612752 +0000 UTC m=+0.023514028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.532373158 +0000 UTC m=+0.135274434 container init 13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.541976318 +0000 UTC m=+0.144877614 container start 13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.545593695 +0000 UTC m=+0.148494981 container attach 13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:54:16 np0005539563 great_dewdney[394115]: 167 167
Nov 29 03:54:16 np0005539563 systemd[1]: libpod-13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe.scope: Deactivated successfully.
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.550596561 +0000 UTC m=+0.153497847 container died 13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:54:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:16.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-39d0475fe6e5669129baae48ee346287ccc73d37c10c8e79bbc238c35de4d179-merged.mount: Deactivated successfully.
Nov 29 03:54:16 np0005539563 podman[394099]: 2025-11-29 08:54:16.594313265 +0000 UTC m=+0.197214521 container remove 13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:54:16 np0005539563 systemd[1]: libpod-conmon-13bbdb63d460c3c484d66fcb2f6a1988d534961f264efb1aeb2311a73a9ab1fe.scope: Deactivated successfully.
Nov 29 03:54:16 np0005539563 podman[394138]: 2025-11-29 08:54:16.742177889 +0000 UTC m=+0.036180621 container create 2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:54:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:54:16 np0005539563 systemd[1]: Started libpod-conmon-2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767.scope.
Nov 29 03:54:16 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5de3b9c3a4d3f00554f022c3dc6c961b9871ca665eac61272212adfada0c87d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5de3b9c3a4d3f00554f022c3dc6c961b9871ca665eac61272212adfada0c87d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5de3b9c3a4d3f00554f022c3dc6c961b9871ca665eac61272212adfada0c87d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:16 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5de3b9c3a4d3f00554f022c3dc6c961b9871ca665eac61272212adfada0c87d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:16 np0005539563 podman[394138]: 2025-11-29 08:54:16.726290258 +0000 UTC m=+0.020293000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:16 np0005539563 podman[394138]: 2025-11-29 08:54:16.825369891 +0000 UTC m=+0.119372643 container init 2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:16 np0005539563 podman[394138]: 2025-11-29 08:54:16.832572386 +0000 UTC m=+0.126575118 container start 2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:54:16 np0005539563 podman[394138]: 2025-11-29 08:54:16.835970388 +0000 UTC m=+0.129973120 container attach 2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:54:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:17 np0005539563 nova_compute[252253]: 2025-11-29 08:54:17.287 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Nov 29 03:54:17 np0005539563 boring_beaver[394154]: {
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:        "osd_id": 0,
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:        "type": "bluestore"
Nov 29 03:54:17 np0005539563 boring_beaver[394154]:    }
Nov 29 03:54:17 np0005539563 boring_beaver[394154]: }
Nov 29 03:54:17 np0005539563 systemd[1]: libpod-2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767.scope: Deactivated successfully.
Nov 29 03:54:17 np0005539563 podman[394138]: 2025-11-29 08:54:17.731507326 +0000 UTC m=+1.025510058 container died 2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:54:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c5de3b9c3a4d3f00554f022c3dc6c961b9871ca665eac61272212adfada0c87d-merged.mount: Deactivated successfully.
Nov 29 03:54:17 np0005539563 podman[394138]: 2025-11-29 08:54:17.794745457 +0000 UTC m=+1.088748189 container remove 2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:54:17 np0005539563 systemd[1]: libpod-conmon-2ff904e4dae5e1aeed5047cfa3f4881ae85be45633e24b1883267b1139efe767.scope: Deactivated successfully.
Nov 29 03:54:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:54:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:54:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0832b320-8920-4e06-8064-d544bc24a8ed does not exist
Nov 29 03:54:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4be88377-ed5b-4d21-bfa5-7db130995935 does not exist
Nov 29 03:54:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b737d652-4b20-43fc-849c-3c7bd79bc6a5 does not exist
Nov 29 03:54:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:18.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:54:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 199 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Nov 29 03:54:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:21 np0005539563 nova_compute[252253]: 2025-11-29 08:54:21.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:54:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:21.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:22 np0005539563 nova_compute[252253]: 2025-11-29 08:54:22.290 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:54:23 np0005539563 podman[394243]: 2025-11-29 08:54:23.509876691 +0000 UTC m=+0.062371099 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:54:23 np0005539563 podman[394244]: 2025-11-29 08:54:23.540633194 +0000 UTC m=+0.089744061 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:54:23 np0005539563 podman[394245]: 2025-11-29 08:54:23.541086467 +0000 UTC m=+0.088311993 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:54:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.6488060964544947 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:54:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:24.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 03:54:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:26 np0005539563 nova_compute[252253]: 2025-11-29 08:54:26.154 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:26 np0005539563 nova_compute[252253]: 2025-11-29 08:54:26.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:27 np0005539563 nova_compute[252253]: 2025-11-29 08:54:27.292 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 54 KiB/s wr, 10 op/s
Nov 29 03:54:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 54 KiB/s wr, 10 op/s
Nov 29 03:54:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:30 np0005539563 nova_compute[252253]: 2025-11-29 08:54:30.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:30 np0005539563 nova_compute[252253]: 2025-11-29 08:54:30.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:54:30 np0005539563 nova_compute[252253]: 2025-11-29 08:54:30.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:54:30 np0005539563 nova_compute[252253]: 2025-11-29 08:54:30.720 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:54:30 np0005539563 nova_compute[252253]: 2025-11-29 08:54:30.720 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:30 np0005539563 nova_compute[252253]: 2025-11-29 08:54:30.721 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:31 np0005539563 nova_compute[252253]: 2025-11-29 08:54:31.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 42 KiB/s wr, 9 op/s
Nov 29 03:54:31 np0005539563 nova_compute[252253]: 2025-11-29 08:54:31.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:31 np0005539563 nova_compute[252253]: 2025-11-29 08:54:31.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:54:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:31.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:32 np0005539563 nova_compute[252253]: 2025-11-29 08:54:32.295 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:32.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 03:54:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:34.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:34 np0005539563 nova_compute[252253]: 2025-11-29 08:54:34.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Nov 29 03:54:35 np0005539563 nova_compute[252253]: 2025-11-29 08:54:35.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:35 np0005539563 nova_compute[252253]: 2025-11-29 08:54:35.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:35 np0005539563 nova_compute[252253]: 2025-11-29 08:54:35.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:35 np0005539563 nova_compute[252253]: 2025-11-29 08:54:35.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:35 np0005539563 nova_compute[252253]: 2025-11-29 08:54:35.708 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:54:35 np0005539563 nova_compute[252253]: 2025-11-29 08:54:35.709 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233880893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.152 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.410 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.411 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4111MB free_disk=20.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.411 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.412 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.560 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.561 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:54:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:36.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:36 np0005539563 nova_compute[252253]: 2025-11-29 08:54:36.699 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365315365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:37 np0005539563 nova_compute[252253]: 2025-11-29 08:54:37.169 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:37 np0005539563 nova_compute[252253]: 2025-11-29 08:54:37.176 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:54:37 np0005539563 nova_compute[252253]: 2025-11-29 08:54:37.195 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:54:37 np0005539563 nova_compute[252253]: 2025-11-29 08:54:37.197 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:54:37 np0005539563 nova_compute[252253]: 2025-11-29 08:54:37.198 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:37 np0005539563 nova_compute[252253]: 2025-11-29 08:54:37.332 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 03:54:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:38.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 29 03:54:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:40.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:41.169 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:54:41 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:41.170 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:54:41 np0005539563 nova_compute[252253]: 2025-11-29 08:54:41.170 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:41 np0005539563 nova_compute[252253]: 2025-11-29 08:54:41.199 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.0 KiB/s wr, 0 op/s
Nov 29 03:54:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:42 np0005539563 nova_compute[252253]: 2025-11-29 08:54:42.335 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:43 np0005539563 nova_compute[252253]: 2025-11-29 08:54:43.200 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 0 op/s
Nov 29 03:54:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Nov 29 03:54:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:46.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:46 np0005539563 nova_compute[252253]: 2025-11-29 08:54:46.201 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:47 np0005539563 nova_compute[252253]: 2025-11-29 08:54:47.337 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.7 KiB/s wr, 0 op/s
Nov 29 03:54:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:48.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.062 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.063 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.077 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.207 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.207 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.213 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.214 252257 INFO nova.compute.claims [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.410 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:48.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926984436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.878 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.885 252257 DEBUG nova.compute.provider_tree [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.905 252257 DEBUG nova.scheduler.client.report [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.935 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.936 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.989 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:54:48 np0005539563 nova_compute[252253]: 2025-11-29 08:54:48.989 252257 DEBUG nova.network.neutron [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.012 252257 INFO nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.038 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.128 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.129 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.130 252257 INFO nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Creating image(s)#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.155 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:49 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:49.172 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.180 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.209 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.212 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.310 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.311 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.312 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.312 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.335 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.339 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 4.3 KiB/s wr, 0 op/s
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.394 252257 DEBUG nova.policy [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.614 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.678 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.782 252257 DEBUG nova.objects.instance [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 16eabee0-f603-47c9-9ccd-82b1da31c3e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.807 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.808 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Ensure instance console log exists: /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.808 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.809 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:49 np0005539563 nova_compute[252253]: 2025-11-29 08:54:49.809 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:51 np0005539563 nova_compute[252253]: 2025-11-29 08:54:51.011 252257 DEBUG nova.network.neutron [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Successfully created port: b3b392a7-00fe-4dde-85ae-b7ad839245a5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:54:51 np0005539563 nova_compute[252253]: 2025-11-29 08:54:51.203 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 661 KiB/s wr, 6 op/s
Nov 29 03:54:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.182 252257 DEBUG nova.network.neutron [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Successfully updated port: b3b392a7-00fe-4dde-85ae-b7ad839245a5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.200 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.201 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.201 252257 DEBUG nova.network.neutron [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.322 252257 DEBUG nova.compute.manager [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-changed-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.323 252257 DEBUG nova.compute.manager [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Refreshing instance network info cache due to event network-changed-b3b392a7-00fe-4dde-85ae-b7ad839245a5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.323 252257 DEBUG oslo_concurrency.lockutils [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.369 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:52.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:52 np0005539563 nova_compute[252253]: 2025-11-29 08:54:52.995 252257 DEBUG nova.network.neutron [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:54:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 KiB/s rd, 661 KiB/s wr, 5 op/s
Nov 29 03:54:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:54:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:54.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:54:54 np0005539563 podman[394608]: 2025-11-29 08:54:54.520918677 +0000 UTC m=+0.077651783 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:54:54 np0005539563 podman[394609]: 2025-11-29 08:54:54.531698919 +0000 UTC m=+0.085729331 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:54:54 np0005539563 podman[394610]: 2025-11-29 08:54:54.55167639 +0000 UTC m=+0.103760991 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:54:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:54:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:54.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:54:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 03:54:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:56.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.204 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.345 252257 DEBUG nova.network.neutron [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Updating instance_info_cache with network_info: [{"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.377 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.378 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Instance network_info: |[{"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.378 252257 DEBUG oslo_concurrency.lockutils [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.378 252257 DEBUG nova.network.neutron [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Refreshing network info cache for port b3b392a7-00fe-4dde-85ae-b7ad839245a5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.381 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Start _get_guest_xml network_info=[{"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.387 252257 WARNING nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.393 252257 DEBUG nova.virt.libvirt.host [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.393 252257 DEBUG nova.virt.libvirt.host [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.398 252257 DEBUG nova.virt.libvirt.host [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.399 252257 DEBUG nova.virt.libvirt.host [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.400 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.400 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.401 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.401 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.401 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.402 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.402 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.402 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.402 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.402 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.403 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.403 252257 DEBUG nova.virt.hardware [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.406 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:56.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:54:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44141468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.892 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.920 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:56 np0005539563 nova_compute[252253]: 2025-11-29 08:54:56.925 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:54:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:54:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740278720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.371 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.373 252257 DEBUG nova.virt.libvirt.vif [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1992557920',display_name='tempest-TestNetworkBasicOps-server-1992557920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1992557920',id=206,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd5X7ttPDZrLcSEWOWxiDiqXNcrcA1WA0dnVqQAgtuUKAr3Od0dUKsl+vtj8oQvZWYkyQjtE8n5s8UKGxr3N8P2h1WoaggKg3lwtt8NDDbnm6HABmPHF8MMMwChxxT+Jw==',key_name='tempest-TestNetworkBasicOps-1992531590',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-x7pxpe1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:54:49Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=16eabee0-f603-47c9-9ccd-82b1da31c3e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.374 252257 DEBUG nova.network.os_vif_util [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.375 252257 DEBUG nova.network.os_vif_util [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.376 252257 DEBUG nova.objects.instance [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 16eabee0-f603-47c9-9ccd-82b1da31c3e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.400 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <uuid>16eabee0-f603-47c9-9ccd-82b1da31c3e5</uuid>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <name>instance-000000ce</name>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkBasicOps-server-1992557920</nova:name>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:54:56</nova:creationTime>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <nova:port uuid="b3b392a7-00fe-4dde-85ae-b7ad839245a5">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <entry name="serial">16eabee0-f603-47c9-9ccd-82b1da31c3e5</entry>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <entry name="uuid">16eabee0-f603-47c9-9ccd-82b1da31c3e5</entry>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk.config">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:44:2e:46"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <target dev="tapb3b392a7-00"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/console.log" append="off"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:54:57 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:54:57 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:54:57 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:54:57 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.401 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Preparing to wait for external event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.401 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.402 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.402 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.403 252257 DEBUG nova.virt.libvirt.vif [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1992557920',display_name='tempest-TestNetworkBasicOps-server-1992557920',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1992557920',id=206,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd5X7ttPDZrLcSEWOWxiDiqXNcrcA1WA0dnVqQAgtuUKAr3Od0dUKsl+vtj8oQvZWYkyQjtE8n5s8UKGxr3N8P2h1WoaggKg3lwtt8NDDbnm6HABmPHF8MMMwChxxT+Jw==',key_name='tempest-TestNetworkBasicOps-1992531590',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-x7pxpe1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:54:49Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=16eabee0-f603-47c9-9ccd-82b1da31c3e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.403 252257 DEBUG nova.network.os_vif_util [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.404 252257 DEBUG nova.network.os_vif_util [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.404 252257 DEBUG os_vif [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.405 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.405 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.406 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.410 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.411 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3b392a7-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.411 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3b392a7-00, col_values=(('external_ids', {'iface-id': 'b3b392a7-00fe-4dde-85ae-b7ad839245a5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:2e:46', 'vm-uuid': '16eabee0-f603-47c9-9ccd-82b1da31c3e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.412 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539563 NetworkManager[48981]: <info>  [1764406497.4136] manager: (tapb3b392a7-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.415 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.420 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.421 252257 INFO os_vif [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00')#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.475 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.476 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.476 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:44:2e:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.476 252257 INFO nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Using config drive#033[00m
Nov 29 03:54:57 np0005539563 nova_compute[252253]: 2025-11-29 08:54:57.501 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:58.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.393 252257 INFO nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Creating config drive at /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/disk.config#033[00m
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.398 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgnwpwnv0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.540 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgnwpwnv0" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.569 252257 DEBUG nova.storage.rbd_utils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.573 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/disk.config 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:54:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.722 252257 DEBUG oslo_concurrency.processutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/disk.config 16eabee0-f603-47c9-9ccd-82b1da31c3e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.723 252257 INFO nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Deleting local config drive /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5/disk.config because it was imported into RBD.#033[00m
Nov 29 03:54:58 np0005539563 kernel: tapb3b392a7-00: entered promiscuous mode
Nov 29 03:54:58 np0005539563 NetworkManager[48981]: <info>  [1764406498.7814] manager: (tapb3b392a7-00): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Nov 29 03:54:58 np0005539563 systemd-udevd[394853]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:54:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:54:58Z|00878|binding|INFO|Claiming lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 for this chassis.
Nov 29 03:54:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:54:58Z|00879|binding|INFO|b3b392a7-00fe-4dde-85ae-b7ad839245a5: Claiming fa:16:3e:44:2e:46 10.100.0.26
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.833 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.837 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:58 np0005539563 NetworkManager[48981]: <info>  [1764406498.8467] device (tapb3b392a7-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:54:58 np0005539563 NetworkManager[48981]: <info>  [1764406498.8476] device (tapb3b392a7-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.846 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:2e:46 10.100.0.26'], port_security=['fa:16:3e:44:2e:46 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '16eabee0-f603-47c9-9ccd-82b1da31c3e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf505d1-7919-461d-b3a8-5568e119b40c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05627052-63ca-4f76-9ac7-41cb7dbaaa91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b15200f8-3405-4179-8895-fe8fa61a54ba, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=b3b392a7-00fe-4dde-85ae-b7ad839245a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.847 158990 INFO neutron.agent.ovn.metadata.agent [-] Port b3b392a7-00fe-4dde-85ae-b7ad839245a5 in datapath cbf505d1-7919-461d-b3a8-5568e119b40c bound to our chassis#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.848 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbf505d1-7919-461d-b3a8-5568e119b40c#033[00m
Nov 29 03:54:58 np0005539563 systemd-machined[213024]: New machine qemu-99-instance-000000ce.
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.861 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0672c4e0-0661-455a-8b02-9cb174f9b83e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.862 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbf505d1-71 in ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.864 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbf505d1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.864 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[37c06fa2-e7de-4a7c-948b-1ec77868949a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.864 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0fd94e-ac6b-4373-8d51-a7740c0bb883]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:54:58Z|00880|binding|INFO|Setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 ovn-installed in OVS
Nov 29 03:54:58 np0005539563 ovn_controller[148841]: 2025-11-29T08:54:58Z|00881|binding|INFO|Setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 up in Southbound
Nov 29 03:54:58 np0005539563 nova_compute[252253]: 2025-11-29 08:54:58.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.877 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[ed27001a-a5fa-4939-9e03-861f7a65b8d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 systemd[1]: Started Virtual Machine qemu-99-instance-000000ce.
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.900 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1701e5be-1fa0-4b5b-95fc-d6696d0655ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.930 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[af257d19-87f8-4555-98cc-d0aaac49020c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.936 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[adb4084b-8efa-4fc4-8969-365cc0bd65f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 systemd-udevd[394857]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:54:58 np0005539563 NetworkManager[48981]: <info>  [1764406498.9382] manager: (tapcbf505d1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.972 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4c297f14-f775-450e-bd8e-8b891da7b4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:58.975 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[7740b036-18f0-48e9-8e4e-cb4b15600b12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:58 np0005539563 NetworkManager[48981]: <info>  [1764406498.9978] device (tapcbf505d1-70): carrier: link connected
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.007 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f3f04-9e92-4392-8fec-40573b1430b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.026 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[807d2f62-4288-4eff-a0da-34b110f0907a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbf505d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946677, 'reachable_time': 26043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394889, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.043 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[23610deb-8b53-4223-be9e-e98f7ab69fee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:f377'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 946677, 'tstamp': 946677}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394890, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.059 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[706302c6-993d-48b8-872e-0038eae3f313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbf505d1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:f3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946677, 'reachable_time': 26043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394891, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.090 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c11d8a57-da1d-43f3-821c-bd25d74cd416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.154 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f48507d7-2e56-4223-819a-d1a8be6dab0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.156 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf505d1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.156 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.157 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbf505d1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:54:59 np0005539563 kernel: tapcbf505d1-70: entered promiscuous mode
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.160 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:59 np0005539563 NetworkManager[48981]: <info>  [1764406499.1611] manager: (tapcbf505d1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.164 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.166 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbf505d1-70, col_values=(('external_ids', {'iface-id': '6ea3e895-3f37-4fb8-8562-8f74fb8c5800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.167 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:54:59Z|00882|binding|INFO|Releasing lport 6ea3e895-3f37-4fb8-8562-8f74fb8c5800 from this chassis (sb_readonly=0)
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.171 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbf505d1-7919-461d-b3a8-5568e119b40c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbf505d1-7919-461d-b3a8-5568e119b40c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.172 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[809b0ca6-ca87-491c-b036-4a3d0ef0e862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.173 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-cbf505d1-7919-461d-b3a8-5568e119b40c
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/cbf505d1-7919-461d-b3a8-5568e119b40c.pid.haproxy
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID cbf505d1-7919-461d-b3a8-5568e119b40c
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:54:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:54:59.173 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'env', 'PROCESS_TAG=haproxy-cbf505d1-7919-461d-b3a8-5568e119b40c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbf505d1-7919-461d-b3a8-5568e119b40c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.181 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.366 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406499.3661537, 16eabee0-f603-47c9-9ccd-82b1da31c3e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.367 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] VM Started (Lifecycle Event)
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.476 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.480 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406499.3664494, 16eabee0-f603-47c9-9ccd-82b1da31c3e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.481 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] VM Paused (Lifecycle Event)
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.512 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.517 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.547 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:54:59 np0005539563 podman[394966]: 2025-11-29 08:54:59.601913891 +0000 UTC m=+0.061630120 container create 0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:54:59 np0005539563 systemd[1]: Started libpod-conmon-0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c.scope.
Nov 29 03:54:59 np0005539563 podman[394966]: 2025-11-29 08:54:59.564993371 +0000 UTC m=+0.024709620 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:54:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:54:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31960a6c80109b68a688e4714170dcd57b744e07f3b5c1621c3f5ed704d8889/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:59 np0005539563 podman[394966]: 2025-11-29 08:54:59.693357677 +0000 UTC m=+0.153073896 container init 0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:54:59 np0005539563 podman[394966]: 2025-11-29 08:54:59.699321498 +0000 UTC m=+0.159037697 container start 0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.718 252257 DEBUG nova.compute.manager [req-1aae93a4-740b-4d25-b9b6-e506984f3067 req-387a57b2-3762-41c4-a251-b6407dc053f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.718 252257 DEBUG oslo_concurrency.lockutils [req-1aae93a4-740b-4d25-b9b6-e506984f3067 req-387a57b2-3762-41c4-a251-b6407dc053f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.719 252257 DEBUG oslo_concurrency.lockutils [req-1aae93a4-740b-4d25-b9b6-e506984f3067 req-387a57b2-3762-41c4-a251-b6407dc053f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.719 252257 DEBUG oslo_concurrency.lockutils [req-1aae93a4-740b-4d25-b9b6-e506984f3067 req-387a57b2-3762-41c4-a251-b6407dc053f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.720 252257 DEBUG nova.compute.manager [req-1aae93a4-740b-4d25-b9b6-e506984f3067 req-387a57b2-3762-41c4-a251-b6407dc053f6 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Processing event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.720 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:54:59 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [NOTICE]   (394985) : New worker (394987) forked
Nov 29 03:54:59 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [NOTICE]   (394985) : Loading success.
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.728 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.729 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406499.7284362, 16eabee0-f603-47c9-9ccd-82b1da31c3e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.729 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] VM Resumed (Lifecycle Event)
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.733 252257 INFO nova.virt.libvirt.driver [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Instance spawned successfully.
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.733 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.754 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.762 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.765 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.765 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.766 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.767 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.767 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.768 252257 DEBUG nova.virt.libvirt.driver [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.801 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.850 252257 INFO nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Took 10.72 seconds to spawn the instance on the hypervisor.
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.851 252257 DEBUG nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:54:59 np0005539563 nova_compute[252253]: 2025-11-29 08:54:59.996 252257 INFO nova.compute.manager [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Took 11.82 seconds to build instance.
Nov 29 03:55:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:00 np0005539563 nova_compute[252253]: 2025-11-29 08:55:00.026 252257 DEBUG oslo_concurrency.lockutils [None req-0de57a87-1ca4-45cc-9b59-81d5e5d06546 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:55:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:00.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:00 np0005539563 nova_compute[252253]: 2025-11-29 08:55:00.437 252257 DEBUG nova.network.neutron [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Updated VIF entry in instance network info cache for port b3b392a7-00fe-4dde-85ae-b7ad839245a5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:55:00 np0005539563 nova_compute[252253]: 2025-11-29 08:55:00.438 252257 DEBUG nova.network.neutron [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Updating instance_info_cache with network_info: [{"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:55:00 np0005539563 nova_compute[252253]: 2025-11-29 08:55:00.461 252257 DEBUG oslo_concurrency.lockutils [req-618b9060-3b57-4788-937d-a2534e821811 req-b5fced82-b70b-4eac-bec4-06190c7f4ec8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:55:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:00.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.217 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.828 252257 DEBUG nova.compute.manager [req-b52ae641-6c23-469a-8ce8-74bf3169f43b req-62d67a0d-c286-4ec1-976b-8357e353226a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.830 252257 DEBUG oslo_concurrency.lockutils [req-b52ae641-6c23-469a-8ce8-74bf3169f43b req-62d67a0d-c286-4ec1-976b-8357e353226a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.831 252257 DEBUG oslo_concurrency.lockutils [req-b52ae641-6c23-469a-8ce8-74bf3169f43b req-62d67a0d-c286-4ec1-976b-8357e353226a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.832 252257 DEBUG oslo_concurrency.lockutils [req-b52ae641-6c23-469a-8ce8-74bf3169f43b req-62d67a0d-c286-4ec1-976b-8357e353226a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.833 252257 DEBUG nova.compute.manager [req-b52ae641-6c23-469a-8ce8-74bf3169f43b req-62d67a0d-c286-4ec1-976b-8357e353226a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] No waiting events found dispatching network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:55:01 np0005539563 nova_compute[252253]: 2025-11-29 08:55:01.834 252257 WARNING nova.compute.manager [req-b52ae641-6c23-469a-8ce8-74bf3169f43b req-62d67a0d-c286-4ec1-976b-8357e353226a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received unexpected event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 for instance with vm_state active and task_state None.
Nov 29 03:55:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:02.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:02 np0005539563 nova_compute[252253]: 2025-11-29 08:55:02.414 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 28 op/s
Nov 29 03:55:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:04.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:55:04.966 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:55:04.968 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:55:04.969 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 98 op/s
Nov 29 03:55:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:06 np0005539563 nova_compute[252253]: 2025-11-29 08:55:06.218 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 75 op/s
Nov 29 03:55:07 np0005539563 nova_compute[252253]: 2025-11-29 08:55:07.419 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 75 op/s
Nov 29 03:55:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:10.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:11 np0005539563 nova_compute[252253]: 2025-11-29 08:55:11.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 71 op/s
Nov 29 03:55:11 np0005539563 nova_compute[252253]: 2025-11-29 08:55:11.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:12.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:12 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:55:12 np0005539563 nova_compute[252253]: 2025-11-29 08:55:12.467 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:12.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:55:13
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log', '.mgr', 'backups', 'images', 'cephfs.cephfs.meta']
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 29 03:55:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:14.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:14.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 272 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 29 03:55:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:55:15Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:2e:46 10.100.0.26
Nov 29 03:55:15 np0005539563 ovn_controller[148841]: 2025-11-29T08:55:15Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:2e:46 10.100.0.26
Nov 29 03:55:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:16 np0005539563 nova_compute[252253]: 2025-11-29 08:55:16.260 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:16.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:55:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:55:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 272 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 153 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Nov 29 03:55:17 np0005539563 nova_compute[252253]: 2025-11-29 08:55:17.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:18.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:55:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 275 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Nov 29 03:55:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:20.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:55:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:55:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:21 np0005539563 nova_compute[252253]: 2025-11-29 08:55:21.262 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 29 03:55:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:22.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 51106c54-4e3b-48b6-a1c0-6f0b8d5739f0 does not exist
Nov 29 03:55:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9755b922-55a5-4016-9e4e-a7d19de2efdd does not exist
Nov 29 03:55:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9bbe3b2d-6c7e-49de-a855-2a55246762d3 does not exist
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:55:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:55:22 np0005539563 nova_compute[252253]: 2025-11-29 08:55:22.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:22.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:22 np0005539563 podman[395449]: 2025-11-29 08:55:22.959566183 +0000 UTC m=+0.042663796 container create cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:22 np0005539563 systemd[1]: Started libpod-conmon-cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673.scope.
Nov 29 03:55:23 np0005539563 podman[395449]: 2025-11-29 08:55:22.9413086 +0000 UTC m=+0.024406213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:55:23 np0005539563 podman[395449]: 2025-11-29 08:55:23.061592455 +0000 UTC m=+0.144690068 container init cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_morse, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:55:23 np0005539563 podman[395449]: 2025-11-29 08:55:23.070109526 +0000 UTC m=+0.153207159 container start cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:55:23 np0005539563 podman[395449]: 2025-11-29 08:55:23.074243398 +0000 UTC m=+0.157341021 container attach cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:55:23 np0005539563 systemd[1]: libpod-cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673.scope: Deactivated successfully.
Nov 29 03:55:23 np0005539563 dreamy_morse[395466]: 167 167
Nov 29 03:55:23 np0005539563 conmon[395466]: conmon cbbe09d471aa376e7dae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673.scope/container/memory.events
Nov 29 03:55:23 np0005539563 podman[395449]: 2025-11-29 08:55:23.080872687 +0000 UTC m=+0.163970320 container died cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3c80788314d9b8bc088e90d3ec2b6dd6ee71e789d523689695170feabe10fbe6-merged.mount: Deactivated successfully.
Nov 29 03:55:23 np0005539563 podman[395449]: 2025-11-29 08:55:23.13414419 +0000 UTC m=+0.217241843 container remove cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_morse, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:55:23 np0005539563 systemd[1]: libpod-conmon-cbbe09d471aa376e7dae0fafc6243fe564e779530aa9eb2cd57421fe09a2f673.scope: Deactivated successfully.
Nov 29 03:55:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:55:23 np0005539563 podman[395491]: 2025-11-29 08:55:23.315018068 +0000 UTC m=+0.049969525 container create 63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jones, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:55:23 np0005539563 systemd[1]: Started libpod-conmon-63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be.scope.
Nov 29 03:55:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 29 03:55:23 np0005539563 podman[395491]: 2025-11-29 08:55:23.289166298 +0000 UTC m=+0.024117805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:55:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5988db2e6f7f333ddf55303c00be17e07cf383bfe40941a334290877fd03ca8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5988db2e6f7f333ddf55303c00be17e07cf383bfe40941a334290877fd03ca8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5988db2e6f7f333ddf55303c00be17e07cf383bfe40941a334290877fd03ca8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5988db2e6f7f333ddf55303c00be17e07cf383bfe40941a334290877fd03ca8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5988db2e6f7f333ddf55303c00be17e07cf383bfe40941a334290877fd03ca8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:23 np0005539563 podman[395491]: 2025-11-29 08:55:23.420453593 +0000 UTC m=+0.155405050 container init 63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:23 np0005539563 podman[395491]: 2025-11-29 08:55:23.427974036 +0000 UTC m=+0.162925483 container start 63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jones, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:55:23 np0005539563 podman[395491]: 2025-11-29 08:55:23.431896603 +0000 UTC m=+0.166848060 container attach 63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jones, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:55:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:24.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004339914270426205 of space, bias 1.0, pg target 1.3019742811278616 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6462629990228922 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:55:24 np0005539563 gallant_jones[395507]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:55:24 np0005539563 gallant_jones[395507]: --> relative data size: 1.0
Nov 29 03:55:24 np0005539563 gallant_jones[395507]: --> All data devices are unavailable
Nov 29 03:55:24 np0005539563 systemd[1]: libpod-63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be.scope: Deactivated successfully.
Nov 29 03:55:24 np0005539563 podman[395522]: 2025-11-29 08:55:24.25651868 +0000 UTC m=+0.024046253 container died 63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jones, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5988db2e6f7f333ddf55303c00be17e07cf383bfe40941a334290877fd03ca8d-merged.mount: Deactivated successfully.
Nov 29 03:55:24 np0005539563 podman[395522]: 2025-11-29 08:55:24.317224124 +0000 UTC m=+0.084751687 container remove 63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:24 np0005539563 systemd[1]: libpod-conmon-63f9f8d7fa3e1403023573eee47e9194e52e020c0513787e98115c0ce8d5c2be.scope: Deactivated successfully.
Nov 29 03:55:24 np0005539563 podman[395636]: 2025-11-29 08:55:24.677318843 +0000 UTC m=+0.067180980 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 03:55:24 np0005539563 podman[395637]: 2025-11-29 08:55:24.681969109 +0000 UTC m=+0.066960234 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:55:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:24.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:24 np0005539563 podman[395638]: 2025-11-29 08:55:24.731632304 +0000 UTC m=+0.116768313 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:55:24 np0005539563 podman[395746]: 2025-11-29 08:55:24.916557791 +0000 UTC m=+0.034803964 container create a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:55:24 np0005539563 systemd[1]: Started libpod-conmon-a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3.scope.
Nov 29 03:55:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:55:24 np0005539563 podman[395746]: 2025-11-29 08:55:24.982471335 +0000 UTC m=+0.100717518 container init a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:24 np0005539563 podman[395746]: 2025-11-29 08:55:24.988118738 +0000 UTC m=+0.106364911 container start a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendel, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:55:24 np0005539563 podman[395746]: 2025-11-29 08:55:24.990733059 +0000 UTC m=+0.108979262 container attach a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendel, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:55:24 np0005539563 epic_mendel[395762]: 167 167
Nov 29 03:55:24 np0005539563 systemd[1]: libpod-a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3.scope: Deactivated successfully.
Nov 29 03:55:24 np0005539563 podman[395746]: 2025-11-29 08:55:24.900148757 +0000 UTC m=+0.018394950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:24 np0005539563 podman[395746]: 2025-11-29 08:55:24.996369502 +0000 UTC m=+0.114615685 container died a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:55:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-695aeb38f209c71b8f224be2fa580132043942bb10617ac227df24024817f618-merged.mount: Deactivated successfully.
Nov 29 03:55:25 np0005539563 podman[395746]: 2025-11-29 08:55:25.036100617 +0000 UTC m=+0.154346800 container remove a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:55:25 np0005539563 systemd[1]: libpod-conmon-a4633170dd998377433712143ec93fdb881806718bc15c7fe8d6b56083788ce3.scope: Deactivated successfully.
Nov 29 03:55:25 np0005539563 podman[395788]: 2025-11-29 08:55:25.219168695 +0000 UTC m=+0.042785471 container create 5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:55:25 np0005539563 systemd[1]: Started libpod-conmon-5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62.scope.
Nov 29 03:55:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:55:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b83a435c80ce11fdcee3abdb347a74f5edfc9081f7a21d292f86ef2882892ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b83a435c80ce11fdcee3abdb347a74f5edfc9081f7a21d292f86ef2882892ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b83a435c80ce11fdcee3abdb347a74f5edfc9081f7a21d292f86ef2882892ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b83a435c80ce11fdcee3abdb347a74f5edfc9081f7a21d292f86ef2882892ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:25 np0005539563 podman[395788]: 2025-11-29 08:55:25.20052717 +0000 UTC m=+0.024143956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:25 np0005539563 podman[395788]: 2025-11-29 08:55:25.348474936 +0000 UTC m=+0.172091722 container init 5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:25 np0005539563 podman[395788]: 2025-11-29 08:55:25.355578947 +0000 UTC m=+0.179195703 container start 5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:25 np0005539563 podman[395788]: 2025-11-29 08:55:25.359890105 +0000 UTC m=+0.183506861 container attach 5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 03:55:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:26.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]: {
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:    "0": [
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:        {
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "devices": [
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "/dev/loop3"
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            ],
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "lv_name": "ceph_lv0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "lv_size": "7511998464",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "name": "ceph_lv0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "tags": {
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.cluster_name": "ceph",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.crush_device_class": "",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.encrypted": "0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.osd_id": "0",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.type": "block",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:                "ceph.vdo": "0"
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            },
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "type": "block",
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:            "vg_name": "ceph_vg0"
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:        }
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]:    ]
Nov 29 03:55:26 np0005539563 condescending_satoshi[395804]: }
Nov 29 03:55:26 np0005539563 systemd[1]: libpod-5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62.scope: Deactivated successfully.
Nov 29 03:55:26 np0005539563 podman[395788]: 2025-11-29 08:55:26.135780943 +0000 UTC m=+0.959397709 container died 5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:55:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1b83a435c80ce11fdcee3abdb347a74f5edfc9081f7a21d292f86ef2882892ae-merged.mount: Deactivated successfully.
Nov 29 03:55:26 np0005539563 podman[395788]: 2025-11-29 08:55:26.191878292 +0000 UTC m=+1.015495058 container remove 5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:26 np0005539563 systemd[1]: libpod-conmon-5e6ab04f083cb77ef57be5e8d1437259f394fb39a6144254275ae1420ea39a62.scope: Deactivated successfully.
Nov 29 03:55:26 np0005539563 nova_compute[252253]: 2025-11-29 08:55:26.297 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:26 np0005539563 nova_compute[252253]: 2025-11-29 08:55:26.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:26 np0005539563 podman[395966]: 2025-11-29 08:55:26.993340942 +0000 UTC m=+0.059902433 container create 8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:55:27 np0005539563 systemd[1]: Started libpod-conmon-8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555.scope.
Nov 29 03:55:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:55:27 np0005539563 podman[395966]: 2025-11-29 08:55:26.974521162 +0000 UTC m=+0.041082663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:27 np0005539563 podman[395966]: 2025-11-29 08:55:27.091546171 +0000 UTC m=+0.158107662 container init 8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:27 np0005539563 podman[395966]: 2025-11-29 08:55:27.100230126 +0000 UTC m=+0.166791597 container start 8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:55:27 np0005539563 podman[395966]: 2025-11-29 08:55:27.104157963 +0000 UTC m=+0.170719554 container attach 8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:27 np0005539563 laughing_wilbur[395982]: 167 167
Nov 29 03:55:27 np0005539563 systemd[1]: libpod-8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555.scope: Deactivated successfully.
Nov 29 03:55:27 np0005539563 podman[395966]: 2025-11-29 08:55:27.111580493 +0000 UTC m=+0.178141994 container died 8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:55:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dc07fbf5ebc1fa68066406dbd252ecc53a7b65301f5574d7f89921ee9ebac41c-merged.mount: Deactivated successfully.
Nov 29 03:55:27 np0005539563 podman[395966]: 2025-11-29 08:55:27.161780542 +0000 UTC m=+0.228342013 container remove 8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wilbur, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:55:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:27 np0005539563 systemd[1]: libpod-conmon-8109fb12ef28d96c0b6e3de9a60bfa96f030c8e62f92cfabebb036d889146555.scope: Deactivated successfully.
Nov 29 03:55:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 173 KiB/s rd, 110 KiB/s wr, 24 op/s
Nov 29 03:55:27 np0005539563 podman[396005]: 2025-11-29 08:55:27.38071476 +0000 UTC m=+0.056784288 container create 6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mendel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:27 np0005539563 systemd[1]: Started libpod-conmon-6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d.scope.
Nov 29 03:55:27 np0005539563 podman[396005]: 2025-11-29 08:55:27.357172853 +0000 UTC m=+0.033242421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712c43a0ae3958b308253a18b2be40d8bf8ea892d4a111bda33198463a4287bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712c43a0ae3958b308253a18b2be40d8bf8ea892d4a111bda33198463a4287bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712c43a0ae3958b308253a18b2be40d8bf8ea892d4a111bda33198463a4287bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712c43a0ae3958b308253a18b2be40d8bf8ea892d4a111bda33198463a4287bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:27 np0005539563 podman[396005]: 2025-11-29 08:55:27.476527064 +0000 UTC m=+0.152596612 container init 6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:55:27 np0005539563 nova_compute[252253]: 2025-11-29 08:55:27.475 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:27 np0005539563 podman[396005]: 2025-11-29 08:55:27.483834342 +0000 UTC m=+0.159903870 container start 6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mendel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:55:27 np0005539563 podman[396005]: 2025-11-29 08:55:27.487914432 +0000 UTC m=+0.163983980 container attach 6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:55:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:28.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]: {
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:        "osd_id": 0,
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:        "type": "bluestore"
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]:    }
Nov 29 03:55:28 np0005539563 quizzical_mendel[396021]: }
Nov 29 03:55:28 np0005539563 systemd[1]: libpod-6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d.scope: Deactivated successfully.
Nov 29 03:55:28 np0005539563 podman[396005]: 2025-11-29 08:55:28.431636135 +0000 UTC m=+1.107705673 container died 6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-712c43a0ae3958b308253a18b2be40d8bf8ea892d4a111bda33198463a4287bf-merged.mount: Deactivated successfully.
Nov 29 03:55:28 np0005539563 podman[396005]: 2025-11-29 08:55:28.492983326 +0000 UTC m=+1.169052854 container remove 6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mendel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:55:28 np0005539563 systemd[1]: libpod-conmon-6faa223f1518cad37dafaa87e5c82044bb1c94714a6fe007b213bdc07613d20d.scope: Deactivated successfully.
Nov 29 03:55:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:55:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:55:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev dfc7554a-02dd-42fa-ac9c-3cac9046e0fe does not exist
Nov 29 03:55:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev be80d867-ded8-4153-bc59-a82f2838591c does not exist
Nov 29 03:55:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 098e0699-5eb7-4ea2-a1b7-5ff896c1ccee does not exist
Nov 29 03:55:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:28.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 173 KiB/s rd, 111 KiB/s wr, 24 op/s
Nov 29 03:55:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:55:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:30 np0005539563 nova_compute[252253]: 2025-11-29 08:55:30.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539563 nova_compute[252253]: 2025-11-29 08:55:31.298 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 119 KiB/s rd, 104 KiB/s wr, 20 op/s
Nov 29 03:55:31 np0005539563 nova_compute[252253]: 2025-11-29 08:55:31.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:31 np0005539563 nova_compute[252253]: 2025-11-29 08:55:31.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:55:31 np0005539563 nova_compute[252253]: 2025-11-29 08:55:31.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:55:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:32 np0005539563 nova_compute[252253]: 2025-11-29 08:55:32.392 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:55:32 np0005539563 nova_compute[252253]: 2025-11-29 08:55:32.392 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:55:32 np0005539563 nova_compute[252253]: 2025-11-29 08:55:32.393 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:55:32 np0005539563 nova_compute[252253]: 2025-11-29 08:55:32.393 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 16eabee0-f603-47c9-9ccd-82b1da31c3e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:55:32 np0005539563 nova_compute[252253]: 2025-11-29 08:55:32.477 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:32.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s wr, 1 op/s
Nov 29 03:55:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:34.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s wr, 1 op/s
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.629 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Updating instance_info_cache with network_info: [{"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.656 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-16eabee0-f603-47c9-9ccd-82b1da31c3e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.657 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.657 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.658 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.658 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:55:35 np0005539563 nova_compute[252253]: 2025-11-29 08:55:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:36.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:36 np0005539563 nova_compute[252253]: 2025-11-29 08:55:36.301 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:36.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.481 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.702 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:55:37 np0005539563 nova_compute[252253]: 2025-11-29 08:55:37.703 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:38.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433257077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.132 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.222 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.223 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.374 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.375 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3926MB free_disk=20.896987915039062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.375 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.376 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.448 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 16eabee0-f603-47c9-9ccd-82b1da31c3e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.448 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.448 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.489 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:55:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:38.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4015313614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.902 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.907 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.927 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.958 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:55:38 np0005539563 nova_compute[252253]: 2025-11-29 08:55:38.959 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:55:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 03:55:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:40.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:41 np0005539563 nova_compute[252253]: 2025-11-29 08:55:41.304 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 29 03:55:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:42 np0005539563 nova_compute[252253]: 2025-11-29 08:55:42.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 03:55:43 np0005539563 nova_compute[252253]: 2025-11-29 08:55:43.960 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:55:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:44.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Nov 29 03:55:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:46.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:46 np0005539563 nova_compute[252253]: 2025-11-29 08:55:46.306 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Nov 29 03:55:47 np0005539563 nova_compute[252253]: 2025-11-29 08:55:47.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:48.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Nov 29 03:55:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:50.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:51 np0005539563 nova_compute[252253]: 2025-11-29 08:55:51.309 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 KiB/s wr, 0 op/s
Nov 29 03:55:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:52.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:52 np0005539563 nova_compute[252253]: 2025-11-29 08:55:52.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:52 np0005539563 ovn_controller[148841]: 2025-11-29T08:55:52Z|00883|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 29 03:55:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 03:55:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:54.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 1 op/s
Nov 29 03:55:55 np0005539563 podman[396215]: 2025-11-29 08:55:55.51956189 +0000 UTC m=+0.073662105 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 03:55:55 np0005539563 podman[396216]: 2025-11-29 08:55:55.52252958 +0000 UTC m=+0.075076554 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 03:55:55 np0005539563 podman[396217]: 2025-11-29 08:55:55.562621775 +0000 UTC m=+0.104316845 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:55:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:56.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:56 np0005539563 nova_compute[252253]: 2025-11-29 08:55:56.311 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:55:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:56.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:55:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s wr, 0 op/s
Nov 29 03:55:57 np0005539563 nova_compute[252253]: 2025-11-29 08:55:57.542 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:58.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:55:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Nov 29 03:56:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:00.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:00 np0005539563 nova_compute[252253]: 2025-11-29 08:56:00.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:00.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:01 np0005539563 nova_compute[252253]: 2025-11-29 08:56:01.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s wr, 0 op/s
Nov 29 03:56:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:02.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:02 np0005539563 nova_compute[252253]: 2025-11-29 08:56:02.593 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:56:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:02.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:56:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 2.5 KiB/s wr, 0 op/s
Nov 29 03:56:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:04.968 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:04.969 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:04.969 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 12 KiB/s wr, 2 op/s
Nov 29 03:56:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:06.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:06 np0005539563 nova_compute[252253]: 2025-11-29 08:56:06.348 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 11 KiB/s wr, 1 op/s
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.847 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.848 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.848 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.848 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.848 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.850 252257 INFO nova.compute.manager [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Terminating instance#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.850 252257 DEBUG nova.compute.manager [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:56:07 np0005539563 kernel: tapb3b392a7-00 (unregistering): left promiscuous mode
Nov 29 03:56:07 np0005539563 NetworkManager[48981]: <info>  [1764406567.9123] device (tapb3b392a7-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:56:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:07Z|00884|binding|INFO|Releasing lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 from this chassis (sb_readonly=0)
Nov 29 03:56:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:07Z|00885|binding|INFO|Setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 down in Southbound
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:07Z|00886|binding|INFO|Removing iface tapb3b392a7-00 ovn-installed in OVS
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:07.947 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:2e:46 10.100.0.26'], port_security=['fa:16:3e:44:2e:46 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '16eabee0-f603-47c9-9ccd-82b1da31c3e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf505d1-7919-461d-b3a8-5568e119b40c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05627052-63ca-4f76-9ac7-41cb7dbaaa91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b15200f8-3405-4179-8895-fe8fa61a54ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=b3b392a7-00fe-4dde-85ae-b7ad839245a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:56:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:07.949 158990 INFO neutron.agent.ovn.metadata.agent [-] Port b3b392a7-00fe-4dde-85ae-b7ad839245a5 in datapath cbf505d1-7919-461d-b3a8-5568e119b40c unbound from our chassis#033[00m
Nov 29 03:56:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:07.950 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbf505d1-7919-461d-b3a8-5568e119b40c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:56:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:07.952 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5d74a6cb-af35-4e93-a1c5-0609c4503ff3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:07.952 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c namespace which is not needed anymore#033[00m
Nov 29 03:56:07 np0005539563 nova_compute[252253]: 2025-11-29 08:56:07.970 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:07 np0005539563 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Nov 29 03:56:07 np0005539563 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ce.scope: Consumed 16.463s CPU time.
Nov 29 03:56:07 np0005539563 systemd-machined[213024]: Machine qemu-99-instance-000000ce terminated.
Nov 29 03:56:08 np0005539563 kernel: tapb3b392a7-00: entered promiscuous mode
Nov 29 03:56:08 np0005539563 NetworkManager[48981]: <info>  [1764406568.0785] manager: (tapb3b392a7-00): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Nov 29 03:56:08 np0005539563 kernel: tapb3b392a7-00 (unregistering): left promiscuous mode
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.079 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00887|binding|INFO|Claiming lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 for this chassis.
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00888|binding|INFO|b3b392a7-00fe-4dde-85ae-b7ad839245a5: Claiming fa:16:3e:44:2e:46 10.100.0.26
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.089 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:2e:46 10.100.0.26'], port_security=['fa:16:3e:44:2e:46 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '16eabee0-f603-47c9-9ccd-82b1da31c3e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf505d1-7919-461d-b3a8-5568e119b40c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05627052-63ca-4f76-9ac7-41cb7dbaaa91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b15200f8-3405-4179-8895-fe8fa61a54ba, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=b3b392a7-00fe-4dde-85ae-b7ad839245a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00889|binding|INFO|Setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 ovn-installed in OVS
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00890|binding|INFO|Setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 up in Southbound
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00891|binding|INFO|Releasing lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 from this chassis (sb_readonly=1)
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00892|if_status|INFO|Dropped 2 log messages in last 442 seconds (most recently, 442 seconds ago) due to excessive rate
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00893|if_status|INFO|Not setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 down as sb is readonly
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.100 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00894|binding|INFO|Removing iface tapb3b392a7-00 ovn-installed in OVS
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.102 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.105 252257 INFO nova.virt.libvirt.driver [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Instance destroyed successfully.#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.105 252257 DEBUG nova.objects.instance [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 16eabee0-f603-47c9-9ccd-82b1da31c3e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:56:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00895|binding|INFO|Releasing lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 from this chassis (sb_readonly=0)
Nov 29 03:56:08 np0005539563 ovn_controller[148841]: 2025-11-29T08:56:08Z|00896|binding|INFO|Setting lport b3b392a7-00fe-4dde-85ae-b7ad839245a5 down in Southbound
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.113 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [NOTICE]   (394985) : haproxy version is 2.8.14-c23fe91
Nov 29 03:56:08 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [NOTICE]   (394985) : path to executable is /usr/sbin/haproxy
Nov 29 03:56:08 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [WARNING]  (394985) : Exiting Master process...
Nov 29 03:56:08 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [ALERT]    (394985) : Current worker (394987) exited with code 143 (Terminated)
Nov 29 03:56:08 np0005539563 neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c[394981]: [WARNING]  (394985) : All workers exited. Exiting... (0)
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.118 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:2e:46 10.100.0.26'], port_security=['fa:16:3e:44:2e:46 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '16eabee0-f603-47c9-9ccd-82b1da31c3e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf505d1-7919-461d-b3a8-5568e119b40c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05627052-63ca-4f76-9ac7-41cb7dbaaa91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b15200f8-3405-4179-8895-fe8fa61a54ba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=b3b392a7-00fe-4dde-85ae-b7ad839245a5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:56:08 np0005539563 systemd[1]: libpod-0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c.scope: Deactivated successfully.
Nov 29 03:56:08 np0005539563 podman[396358]: 2025-11-29 08:56:08.127239704 +0000 UTC m=+0.063518640 container died 0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:56:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:56:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a31960a6c80109b68a688e4714170dcd57b744e07f3b5c1621c3f5ed704d8889-merged.mount: Deactivated successfully.
Nov 29 03:56:08 np0005539563 podman[396358]: 2025-11-29 08:56:08.165706506 +0000 UTC m=+0.101985452 container cleanup 0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:56:08 np0005539563 systemd[1]: libpod-conmon-0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c.scope: Deactivated successfully.
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.184 252257 DEBUG nova.virt.libvirt.vif [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:54:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1992557920',display_name='tempest-TestNetworkBasicOps-server-1992557920',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1992557920',id=206,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd5X7ttPDZrLcSEWOWxiDiqXNcrcA1WA0dnVqQAgtuUKAr3Od0dUKsl+vtj8oQvZWYkyQjtE8n5s8UKGxr3N8P2h1WoaggKg3lwtt8NDDbnm6HABmPHF8MMMwChxxT+Jw==',key_name='tempest-TestNetworkBasicOps-1992531590',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:54:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-x7pxpe1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:54:59Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=16eabee0-f603-47c9-9ccd-82b1da31c3e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.186 252257 DEBUG nova.network.os_vif_util [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "address": "fa:16:3e:44:2e:46", "network": {"id": "cbf505d1-7919-461d-b3a8-5568e119b40c", "bridge": "br-int", "label": "tempest-network-smoke--133968160", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3b392a7-00", "ovs_interfaceid": "b3b392a7-00fe-4dde-85ae-b7ad839245a5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.187 252257 DEBUG nova.network.os_vif_util [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.187 252257 DEBUG os_vif [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.189 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.189 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3b392a7-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.199 252257 INFO os_vif [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:2e:46,bridge_name='br-int',has_traffic_filtering=True,id=b3b392a7-00fe-4dde-85ae-b7ad839245a5,network=Network(cbf505d1-7919-461d-b3a8-5568e119b40c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3b392a7-00')#033[00m
Nov 29 03:56:08 np0005539563 podman[396398]: 2025-11-29 08:56:08.235863845 +0000 UTC m=+0.047845196 container remove 0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.242 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d1610442-c163-4d18-bcdd-e2bef0d2a77b]: (4, ('Sat Nov 29 08:56:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c (0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c)\n0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c\nSat Nov 29 08:56:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c (0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c)\n0feca21ae8046cd64aa80cec7b2b137707505bfd48a27017230160d116c3f52c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.244 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f99bab09-7007-4480-aa42-e69e249eb2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.245 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf505d1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 kernel: tapcbf505d1-70: left promiscuous mode
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.270 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[676a2ce2-c593-41e4-8102-6a059e225364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.289 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f7850d3a-dbc4-45c6-b495-5b6270e55596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.290 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd1e8f9-b165-4a17-a1cc-680ee1bf87a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.310 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5b72148e-6757-4104-b1f2-66a7b5e1e4b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946669, 'reachable_time': 37336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396431, 'error': None, 'target': 'ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.314 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbf505d1-7919-461d-b3a8-5568e119b40c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.314 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0423abcd-6aab-414e-99de-a5d10689d134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.316 158990 INFO neutron.agent.ovn.metadata.agent [-] Port b3b392a7-00fe-4dde-85ae-b7ad839245a5 in datapath cbf505d1-7919-461d-b3a8-5568e119b40c unbound from our chassis#033[00m
Nov 29 03:56:08 np0005539563 systemd[1]: run-netns-ovnmeta\x2dcbf505d1\x2d7919\x2d461d\x2db3a8\x2d5568e119b40c.mount: Deactivated successfully.
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.317 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbf505d1-7919-461d-b3a8-5568e119b40c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.318 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5f53f3c0-8450-411a-8535-844ee9c36f18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.319 158990 INFO neutron.agent.ovn.metadata.agent [-] Port b3b392a7-00fe-4dde-85ae-b7ad839245a5 in datapath cbf505d1-7919-461d-b3a8-5568e119b40c unbound from our chassis#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.320 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbf505d1-7919-461d-b3a8-5568e119b40c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:56:08 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:08.321 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3308e152-d6f8-4dc4-a8b9-2ee38f0729bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:56:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:08.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.946 252257 INFO nova.virt.libvirt.driver [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Deleting instance files /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5_del#033[00m
Nov 29 03:56:08 np0005539563 nova_compute[252253]: 2025-11-29 08:56:08.948 252257 INFO nova.virt.libvirt.driver [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Deletion of /var/lib/nova/instances/16eabee0-f603-47c9-9ccd-82b1da31c3e5_del complete#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.225 252257 INFO nova.compute.manager [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Took 1.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.226 252257 DEBUG oslo.service.loopingcall [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.226 252257 DEBUG nova.compute.manager [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.226 252257 DEBUG nova.network.neutron [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:56:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 11 KiB/s wr, 2 op/s
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.589 252257 DEBUG nova.compute.manager [req-4fc02eb3-67b0-4093-b6ea-a70e015e60e3 req-ce67d9f1-5456-4ddb-81fb-e8fd5640319d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-unplugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.589 252257 DEBUG oslo_concurrency.lockutils [req-4fc02eb3-67b0-4093-b6ea-a70e015e60e3 req-ce67d9f1-5456-4ddb-81fb-e8fd5640319d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.589 252257 DEBUG oslo_concurrency.lockutils [req-4fc02eb3-67b0-4093-b6ea-a70e015e60e3 req-ce67d9f1-5456-4ddb-81fb-e8fd5640319d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.590 252257 DEBUG oslo_concurrency.lockutils [req-4fc02eb3-67b0-4093-b6ea-a70e015e60e3 req-ce67d9f1-5456-4ddb-81fb-e8fd5640319d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.590 252257 DEBUG nova.compute.manager [req-4fc02eb3-67b0-4093-b6ea-a70e015e60e3 req-ce67d9f1-5456-4ddb-81fb-e8fd5640319d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] No waiting events found dispatching network-vif-unplugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.590 252257 DEBUG nova.compute.manager [req-4fc02eb3-67b0-4093-b6ea-a70e015e60e3 req-ce67d9f1-5456-4ddb-81fb-e8fd5640319d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-unplugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:56:09 np0005539563 nova_compute[252253]: 2025-11-29 08:56:09.787 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:09.789 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:56:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:09.791 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:56:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.384 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 219 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 13 KiB/s wr, 10 op/s
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.723 252257 DEBUG nova.compute.manager [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.724 252257 DEBUG oslo_concurrency.lockutils [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.724 252257 DEBUG oslo_concurrency.lockutils [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.725 252257 DEBUG oslo_concurrency.lockutils [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.725 252257 DEBUG nova.compute.manager [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] No waiting events found dispatching network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.725 252257 WARNING nova.compute.manager [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received unexpected event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.726 252257 DEBUG nova.compute.manager [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.726 252257 DEBUG oslo_concurrency.lockutils [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.726 252257 DEBUG oslo_concurrency.lockutils [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.727 252257 DEBUG oslo_concurrency.lockutils [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.727 252257 DEBUG nova.compute.manager [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] No waiting events found dispatching network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:56:11 np0005539563 nova_compute[252253]: 2025-11-29 08:56:11.727 252257 WARNING nova.compute.manager [req-48d255eb-bdf5-4cb1-93bf-d1c3817f6c6a req-2726feb4-bee7-4648-91e5-1ea619eb8c65 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received unexpected event network-vif-plugged-b3b392a7-00fe-4dde-85ae-b7ad839245a5 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:56:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:12 np0005539563 nova_compute[252253]: 2025-11-29 08:56:12.453 252257 DEBUG nova.network.neutron [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:56:12 np0005539563 nova_compute[252253]: 2025-11-29 08:56:12.470 252257 INFO nova.compute.manager [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Took 3.24 seconds to deallocate network for instance.#033[00m
Nov 29 03:56:12 np0005539563 nova_compute[252253]: 2025-11-29 08:56:12.517 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:12 np0005539563 nova_compute[252253]: 2025-11-29 08:56:12.517 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:12 np0005539563 nova_compute[252253]: 2025-11-29 08:56:12.576 252257 DEBUG oslo_concurrency.processutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:56:13
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:56:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:13 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467016413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.075 252257 DEBUG oslo_concurrency.processutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.083 252257 DEBUG nova.compute.provider_tree [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.108 252257 DEBUG nova.scheduler.client.report [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.134 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.192 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.210 252257 INFO nova.scheduler.client.report [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 16eabee0-f603-47c9-9ccd-82b1da31c3e5#033[00m
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.272 252257 DEBUG oslo_concurrency.lockutils [None req-7bfc0b36-2c39-4dc8-a36f-a4ec300d33c1 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "16eabee0-f603-47c9-9ccd-82b1da31c3e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 29 03:56:13 np0005539563 nova_compute[252253]: 2025-11-29 08:56:13.851 252257 DEBUG nova.compute.manager [req-6d82b9d6-2cb9-487e-9e9d-9021e5377986 req-ffe2df19-f913-44cd-a5a4-ad283968647c 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Received event network-vif-deleted-b3b392a7-00fe-4dde-85ae-b7ad839245a5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:56:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:14.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 29 03:56:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:16 np0005539563 nova_compute[252253]: 2025-11-29 08:56:16.442 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:56:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:16.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:56:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:56:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:56:16.793 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:56:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 29 03:56:17 np0005539563 nova_compute[252253]: 2025-11-29 08:56:17.908 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:18.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:18 np0005539563 nova_compute[252253]: 2025-11-29 08:56:18.194 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 29 03:56:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:20.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 134 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 4.4 KiB/s wr, 43 op/s
Nov 29 03:56:21 np0005539563 nova_compute[252253]: 2025-11-29 08:56:21.444 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:22.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:23 np0005539563 nova_compute[252253]: 2025-11-29 08:56:23.101 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406568.1002004, 16eabee0-f603-47c9-9ccd-82b1da31c3e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:56:23 np0005539563 nova_compute[252253]: 2025-11-29 08:56:23.101 252257 INFO nova.compute.manager [-] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:56:23 np0005539563 nova_compute[252253]: 2025-11-29 08:56:23.132 252257 DEBUG nova.compute.manager [None req-937519ac-4aa1-4e56-9c42-2b94f287a439 - - - - - -] [instance: 16eabee0-f603-47c9-9ccd-82b1da31c3e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:56:23 np0005539563 nova_compute[252253]: 2025-11-29 08:56:23.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 29 03:56:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:24.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.906639679880886e-06 of space, bias 1.0, pg target 0.002071991903964266 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:56:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:24.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 29 03:56:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:26.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:26 np0005539563 nova_compute[252253]: 2025-11-29 08:56:26.487 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:26 np0005539563 podman[396514]: 2025-11-29 08:56:26.54623977 +0000 UTC m=+0.049400898 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:56:26 np0005539563 podman[396515]: 2025-11-29 08:56:26.553591899 +0000 UTC m=+0.066341938 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:56:26 np0005539563 podman[396516]: 2025-11-29 08:56:26.574059323 +0000 UTC m=+0.087024367 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:56:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:56:27 np0005539563 nova_compute[252253]: 2025-11-29 08:56:27.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.923138) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587923292, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2154, "num_deletes": 254, "total_data_size": 3945775, "memory_usage": 4006056, "flush_reason": "Manual Compaction"}
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587950385, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3857718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72165, "largest_seqno": 74318, "table_properties": {"data_size": 3847777, "index_size": 6370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20040, "raw_average_key_size": 20, "raw_value_size": 3828203, "raw_average_value_size": 3930, "num_data_blocks": 277, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406377, "oldest_key_time": 1764406377, "file_creation_time": 1764406587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 27242 microseconds, and 7913 cpu microseconds.
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.950451) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3857718 bytes OK
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.950484) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.952112) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.952134) EVENT_LOG_v1 {"time_micros": 1764406587952129, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.952151) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3937067, prev total WAL file size 3937067, number of live WAL files 2.
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.953047) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3767KB)], [164(10MB)]
Nov 29 03:56:27 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406587953116, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 14645238, "oldest_snapshot_seqno": -1}
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 10584 keys, 12705456 bytes, temperature: kUnknown
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588028820, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 12705456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12638217, "index_size": 39670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 279761, "raw_average_key_size": 26, "raw_value_size": 12453915, "raw_average_value_size": 1176, "num_data_blocks": 1502, "num_entries": 10584, "num_filter_entries": 10584, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406587, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.029157) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12705456 bytes
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.058129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.1 rd, 167.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.3 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 11112, records dropped: 528 output_compression: NoCompression
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.058177) EVENT_LOG_v1 {"time_micros": 1764406588058159, "job": 102, "event": "compaction_finished", "compaction_time_micros": 75826, "compaction_time_cpu_micros": 29526, "output_level": 6, "num_output_files": 1, "total_output_size": 12705456, "num_input_records": 11112, "num_output_records": 10584, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588059588, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406588062532, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:27.952934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.062728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.062780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.062784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.062787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:56:28.062789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:28 np0005539563 nova_compute[252253]: 2025-11-29 08:56:28.200 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:56:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:30.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fb716322-153e-42cb-a18d-c1e84f48e7d5 does not exist
Nov 29 03:56:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 06b26f0a-5bac-48e5-95eb-5194eeb59aaf does not exist
Nov 29 03:56:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cc8d7af0-244e-4101-9918-c1d50b48ec20 does not exist
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:56:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:56:30 np0005539563 nova_compute[252253]: 2025-11-29 08:56:30.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:30 np0005539563 podman[396852]: 2025-11-29 08:56:30.924275129 +0000 UTC m=+0.043148668 container create 1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:56:30 np0005539563 systemd[1]: Started libpod-conmon-1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f.scope.
Nov 29 03:56:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:56:30 np0005539563 podman[396852]: 2025-11-29 08:56:30.904167815 +0000 UTC m=+0.023041404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:31 np0005539563 podman[396852]: 2025-11-29 08:56:31.004497202 +0000 UTC m=+0.123370771 container init 1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:56:31 np0005539563 podman[396852]: 2025-11-29 08:56:31.011685416 +0000 UTC m=+0.130558955 container start 1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:56:31 np0005539563 podman[396852]: 2025-11-29 08:56:31.014656786 +0000 UTC m=+0.133530325 container attach 1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:56:31 np0005539563 jovial_hamilton[396869]: 167 167
Nov 29 03:56:31 np0005539563 systemd[1]: libpod-1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f.scope: Deactivated successfully.
Nov 29 03:56:31 np0005539563 podman[396874]: 2025-11-29 08:56:31.054766423 +0000 UTC m=+0.024821273 container died 1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:56:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2621d627c23c17c3d00f986689f1bbdc2acbb74b3df57b8ed9c70e1e3b2321b2-merged.mount: Deactivated successfully.
Nov 29 03:56:31 np0005539563 podman[396874]: 2025-11-29 08:56:31.100934583 +0000 UTC m=+0.070989403 container remove 1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:56:31 np0005539563 systemd[1]: libpod-conmon-1ec79023af35c0242c216786fbfffe24e173ae0b0f2d9938bfa2b6f23df1ff3f.scope: Deactivated successfully.
Nov 29 03:56:31 np0005539563 podman[396897]: 2025-11-29 08:56:31.282143049 +0000 UTC m=+0.053677994 container create 84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:56:31 np0005539563 systemd[1]: Started libpod-conmon-84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788.scope.
Nov 29 03:56:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:56:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ff5d6e97a3f267f2e867deb5718ec8301eb4813047f03f79b37730f90f319b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:31 np0005539563 podman[396897]: 2025-11-29 08:56:31.26257496 +0000 UTC m=+0.034109885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ff5d6e97a3f267f2e867deb5718ec8301eb4813047f03f79b37730f90f319b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ff5d6e97a3f267f2e867deb5718ec8301eb4813047f03f79b37730f90f319b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ff5d6e97a3f267f2e867deb5718ec8301eb4813047f03f79b37730f90f319b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ff5d6e97a3f267f2e867deb5718ec8301eb4813047f03f79b37730f90f319b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:31 np0005539563 podman[396897]: 2025-11-29 08:56:31.374661094 +0000 UTC m=+0.146196019 container init 84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:56:31 np0005539563 podman[396897]: 2025-11-29 08:56:31.389232239 +0000 UTC m=+0.160767174 container start 84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:56:31 np0005539563 podman[396897]: 2025-11-29 08:56:31.392727293 +0000 UTC m=+0.164262238 container attach 84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:56:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:31 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:56:31 np0005539563 nova_compute[252253]: 2025-11-29 08:56:31.530 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:32.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:32 np0005539563 zealous_ritchie[396913]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:56:32 np0005539563 zealous_ritchie[396913]: --> relative data size: 1.0
Nov 29 03:56:32 np0005539563 zealous_ritchie[396913]: --> All data devices are unavailable
Nov 29 03:56:32 np0005539563 systemd[1]: libpod-84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788.scope: Deactivated successfully.
Nov 29 03:56:32 np0005539563 podman[396897]: 2025-11-29 08:56:32.233687644 +0000 UTC m=+1.005222609 container died 84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:56:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e0ff5d6e97a3f267f2e867deb5718ec8301eb4813047f03f79b37730f90f319b-merged.mount: Deactivated successfully.
Nov 29 03:56:32 np0005539563 podman[396897]: 2025-11-29 08:56:32.290325427 +0000 UTC m=+1.061860342 container remove 84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:56:32 np0005539563 systemd[1]: libpod-conmon-84fc4657cab41a1a712b0a87552f98e4590610f7bff5c4590ebac4b0c4ee4788.scope: Deactivated successfully.
Nov 29 03:56:32 np0005539563 nova_compute[252253]: 2025-11-29 08:56:32.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:32 np0005539563 nova_compute[252253]: 2025-11-29 08:56:32.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:56:32 np0005539563 nova_compute[252253]: 2025-11-29 08:56:32.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:56:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:32.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.043163121 +0000 UTC m=+0.049070270 container create dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:33 np0005539563 systemd[1]: Started libpod-conmon-dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4.scope.
Nov 29 03:56:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.022531572 +0000 UTC m=+0.028438781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.13215085 +0000 UTC m=+0.138058059 container init dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.140771373 +0000 UTC m=+0.146678532 container start dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.144180356 +0000 UTC m=+0.150087555 container attach dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:33 np0005539563 gracious_sanderson[397097]: 167 167
Nov 29 03:56:33 np0005539563 systemd[1]: libpod-dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4.scope: Deactivated successfully.
Nov 29 03:56:33 np0005539563 conmon[397097]: conmon dd243193cad4bac05f10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4.scope/container/memory.events
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.148754999 +0000 UTC m=+0.154662158 container died dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:56:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8b2e9d5720af59a63c83e4bd7f72c09b1f7fde9e26e82a61852cbac982b3638c-merged.mount: Deactivated successfully.
Nov 29 03:56:33 np0005539563 podman[397081]: 2025-11-29 08:56:33.191084395 +0000 UTC m=+0.196991564 container remove dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sanderson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:56:33 np0005539563 systemd[1]: libpod-conmon-dd243193cad4bac05f10b68656f987ed0136355eea96bb0a9908f6b8322746f4.scope: Deactivated successfully.
Nov 29 03:56:33 np0005539563 nova_compute[252253]: 2025-11-29 08:56:33.203 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:33 np0005539563 podman[397123]: 2025-11-29 08:56:33.34489813 +0000 UTC m=+0.037351882 container create 2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:56:33 np0005539563 systemd[1]: Started libpod-conmon-2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427.scope.
Nov 29 03:56:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 596 B/s wr, 12 op/s
Nov 29 03:56:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:56:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87c8b5aeb188035ddcd3efe435ebdc0162d08d78cb5b91e10fe5d98b5fef62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87c8b5aeb188035ddcd3efe435ebdc0162d08d78cb5b91e10fe5d98b5fef62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87c8b5aeb188035ddcd3efe435ebdc0162d08d78cb5b91e10fe5d98b5fef62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87c8b5aeb188035ddcd3efe435ebdc0162d08d78cb5b91e10fe5d98b5fef62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:33 np0005539563 podman[397123]: 2025-11-29 08:56:33.32898959 +0000 UTC m=+0.021443362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:33 np0005539563 podman[397123]: 2025-11-29 08:56:33.427634901 +0000 UTC m=+0.120088653 container init 2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:56:33 np0005539563 podman[397123]: 2025-11-29 08:56:33.435374181 +0000 UTC m=+0.127827933 container start 2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:56:33 np0005539563 podman[397123]: 2025-11-29 08:56:33.438971628 +0000 UTC m=+0.131425380 container attach 2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:33 np0005539563 nova_compute[252253]: 2025-11-29 08:56:33.801 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:56:33 np0005539563 nova_compute[252253]: 2025-11-29 08:56:33.801 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]: {
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:    "0": [
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:        {
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "devices": [
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "/dev/loop3"
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            ],
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "lv_name": "ceph_lv0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "lv_size": "7511998464",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "name": "ceph_lv0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "tags": {
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.cluster_name": "ceph",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.crush_device_class": "",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.encrypted": "0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.osd_id": "0",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.type": "block",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:                "ceph.vdo": "0"
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            },
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "type": "block",
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:            "vg_name": "ceph_vg0"
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:        }
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]:    ]
Nov 29 03:56:34 np0005539563 frosty_driscoll[397140]: }
Nov 29 03:56:34 np0005539563 systemd[1]: libpod-2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427.scope: Deactivated successfully.
Nov 29 03:56:34 np0005539563 podman[397123]: 2025-11-29 08:56:34.217648171 +0000 UTC m=+0.910101933 container died 2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:56:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5a87c8b5aeb188035ddcd3efe435ebdc0162d08d78cb5b91e10fe5d98b5fef62-merged.mount: Deactivated successfully.
Nov 29 03:56:34 np0005539563 podman[397123]: 2025-11-29 08:56:34.282884857 +0000 UTC m=+0.975338609 container remove 2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:56:34 np0005539563 systemd[1]: libpod-conmon-2217991fbb7900878d61ae296e44a4d15a3a897cc16f4eaa01e290c310f80427.scope: Deactivated successfully.
Nov 29 03:56:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:34.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.816068043 +0000 UTC m=+0.044059943 container create b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:56:34 np0005539563 systemd[1]: Started libpod-conmon-b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375.scope.
Nov 29 03:56:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.890296534 +0000 UTC m=+0.118288454 container init b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.797909512 +0000 UTC m=+0.025901442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.895649189 +0000 UTC m=+0.123641099 container start b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.899481902 +0000 UTC m=+0.127473822 container attach b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:56:34 np0005539563 goofy_heisenberg[397321]: 167 167
Nov 29 03:56:34 np0005539563 systemd[1]: libpod-b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375.scope: Deactivated successfully.
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.901006774 +0000 UTC m=+0.128998684 container died b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:56:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e2246926b53cc3213484db8f6edec3ddbb48f74c03208eada2fd32aab411c3ff-merged.mount: Deactivated successfully.
Nov 29 03:56:34 np0005539563 podman[397305]: 2025-11-29 08:56:34.940924474 +0000 UTC m=+0.168916374 container remove b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:56:34 np0005539563 systemd[1]: libpod-conmon-b86a3c9abcddbc69eb5ee26eca2aae6ada2260ff4342cdbd3462721f032ae375.scope: Deactivated successfully.
Nov 29 03:56:35 np0005539563 podman[397344]: 2025-11-29 08:56:35.094378619 +0000 UTC m=+0.042431319 container create a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:56:35 np0005539563 systemd[1]: Started libpod-conmon-a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca.scope.
Nov 29 03:56:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:56:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d8a9f51788cf461971d85492f1270a7c3afbf119375324249f9901c36bc400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d8a9f51788cf461971d85492f1270a7c3afbf119375324249f9901c36bc400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d8a9f51788cf461971d85492f1270a7c3afbf119375324249f9901c36bc400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d8a9f51788cf461971d85492f1270a7c3afbf119375324249f9901c36bc400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:35 np0005539563 podman[397344]: 2025-11-29 08:56:35.075452067 +0000 UTC m=+0.023504787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:35 np0005539563 podman[397344]: 2025-11-29 08:56:35.177415057 +0000 UTC m=+0.125467787 container init a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:35 np0005539563 podman[397344]: 2025-11-29 08:56:35.18563084 +0000 UTC m=+0.133683550 container start a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:56:35 np0005539563 podman[397344]: 2025-11-29 08:56:35.189092604 +0000 UTC m=+0.137145314 container attach a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:56:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 255 B/s wr, 1 op/s
Nov 29 03:56:35 np0005539563 nova_compute[252253]: 2025-11-29 08:56:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:35 np0005539563 nova_compute[252253]: 2025-11-29 08:56:35.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]: {
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:        "osd_id": 0,
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:        "type": "bluestore"
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]:    }
Nov 29 03:56:35 np0005539563 recursing_dubinsky[397361]: }
Nov 29 03:56:35 np0005539563 systemd[1]: libpod-a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca.scope: Deactivated successfully.
Nov 29 03:56:35 np0005539563 conmon[397361]: conmon a4e236cba8ef1dc2b12a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca.scope/container/memory.events
Nov 29 03:56:35 np0005539563 podman[397344]: 2025-11-29 08:56:35.997240755 +0000 UTC m=+0.945293445 container died a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:56:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-25d8a9f51788cf461971d85492f1270a7c3afbf119375324249f9901c36bc400-merged.mount: Deactivated successfully.
Nov 29 03:56:36 np0005539563 podman[397344]: 2025-11-29 08:56:36.048469952 +0000 UTC m=+0.996522652 container remove a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:56:36 np0005539563 systemd[1]: libpod-conmon-a4e236cba8ef1dc2b12a385848ab38f9923870659fae6a327d7f893018d3ddca.scope: Deactivated successfully.
Nov 29 03:56:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:56:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:56:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7da13a34-c0bf-4d87-8aa9-fcdbc17ce189 does not exist
Nov 29 03:56:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cde0a971-f338-40c5-a9c9-225456aa3069 does not exist
Nov 29 03:56:36 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fb8a66f2-d934-40dc-97ff-3ef6b65466ab does not exist
Nov 29 03:56:36 np0005539563 nova_compute[252253]: 2025-11-29 08:56:36.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:56:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:37 np0005539563 nova_compute[252253]: 2025-11-29 08:56:37.683 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:38 np0005539563 nova_compute[252253]: 2025-11-29 08:56:38.206 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:38 np0005539563 nova_compute[252253]: 2025-11-29 08:56:38.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.285 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.285 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.285 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.286 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.286 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.564 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150829522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.770 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.960 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.961 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4100MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.962 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:41 np0005539563 nova_compute[252253]: 2025-11-29 08:56:41.962 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:42 np0005539563 nova_compute[252253]: 2025-11-29 08:56:42.442 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:56:42 np0005539563 nova_compute[252253]: 2025-11-29 08:56:42.443 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:56:42 np0005539563 nova_compute[252253]: 2025-11-29 08:56:42.462 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000054s ======
Nov 29 03:56:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:42.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Nov 29 03:56:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402153616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:42 np0005539563 nova_compute[252253]: 2025-11-29 08:56:42.961 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:42 np0005539563 nova_compute[252253]: 2025-11-29 08:56:42.968 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:56:42 np0005539563 nova_compute[252253]: 2025-11-29 08:56:42.993 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:56:43 np0005539563 nova_compute[252253]: 2025-11-29 08:56:43.028 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:56:43 np0005539563 nova_compute[252253]: 2025-11-29 08:56:43.029 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:43 np0005539563 nova_compute[252253]: 2025-11-29 08:56:43.211 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:46.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:46 np0005539563 nova_compute[252253]: 2025-11-29 08:56:46.597 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:46.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:47 np0005539563 nova_compute[252253]: 2025-11-29 08:56:47.029 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:48 np0005539563 nova_compute[252253]: 2025-11-29 08:56:48.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:51 np0005539563 nova_compute[252253]: 2025-11-29 08:56:51.601 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:52.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:53 np0005539563 nova_compute[252253]: 2025-11-29 08:56:53.216 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:54.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:54.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:56 np0005539563 nova_compute[252253]: 2025-11-29 08:56:56.641 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:57 np0005539563 podman[397552]: 2025-11-29 08:56:57.519621313 +0000 UTC m=+0.072116534 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:56:57 np0005539563 podman[397553]: 2025-11-29 08:56:57.531633158 +0000 UTC m=+0.084129739 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:56:57 np0005539563 podman[397554]: 2025-11-29 08:56:57.556489311 +0000 UTC m=+0.102054784 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:56:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:56:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:56:58 np0005539563 nova_compute[252253]: 2025-11-29 08:56:58.219 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:56:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:58.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:00.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:01 np0005539563 nova_compute[252253]: 2025-11-29 08:57:01.643 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:03 np0005539563 nova_compute[252253]: 2025-11-29 08:57:03.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:04.969 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:04.970 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:04.970 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:06.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:06 np0005539563 nova_compute[252253]: 2025-11-29 08:57:06.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:06.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:08.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:08 np0005539563 nova_compute[252253]: 2025-11-29 08:57:08.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:08.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:10.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:10.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:11 np0005539563 nova_compute[252253]: 2025-11-29 08:57:11.668 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:11 np0005539563 nova_compute[252253]: 2025-11-29 08:57:11.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:12.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:57:13
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta']
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:57:13 np0005539563 nova_compute[252253]: 2025-11-29 08:57:13.228 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:16.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:16 np0005539563 nova_compute[252253]: 2025-11-29 08:57:16.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:57:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:57:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:16.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:18 np0005539563 nova_compute[252253]: 2025-11-29 08:57:18.231 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:18.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 134 KiB/s wr, 11 op/s
Nov 29 03:57:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:21 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:21Z|00897|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 03:57:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:21 np0005539563 nova_compute[252253]: 2025-11-29 08:57:21.672 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.678 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.679 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.680 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.680 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.680 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.681 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.713 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.713 252257 WARNING nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.713 252257 WARNING nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.714 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.714 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.714 252257 INFO nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c26ed0fe4534cf021820db0c9b5c605a52a242#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.714 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.714 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:57:22 np0005539563 nova_compute[252253]: 2025-11-29 08:57:22.715 252257 DEBUG nova.virt.libvirt.imagecache [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Nov 29 03:57:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:22.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:23 np0005539563 nova_compute[252253]: 2025-11-29 08:57:23.234 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:57:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:24.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:25 np0005539563 nova_compute[252253]: 2025-11-29 08:57:25.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:25 np0005539563 nova_compute[252253]: 2025-11-29 08:57:25.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:57:25 np0005539563 nova_compute[252253]: 2025-11-29 08:57:25.695 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:57:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:26 np0005539563 nova_compute[252253]: 2025-11-29 08:57:26.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:27 np0005539563 nova_compute[252253]: 2025-11-29 08:57:27.696 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:27.752 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:57:27 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:27.753 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:57:27 np0005539563 nova_compute[252253]: 2025-11-29 08:57:27.784 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:28.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:28 np0005539563 nova_compute[252253]: 2025-11-29 08:57:28.236 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:28 np0005539563 podman[397726]: 2025-11-29 08:57:28.542654032 +0000 UTC m=+0.082151525 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:57:28 np0005539563 podman[397727]: 2025-11-29 08:57:28.549309702 +0000 UTC m=+0.084898340 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:57:28 np0005539563 podman[397728]: 2025-11-29 08:57:28.5909641 +0000 UTC m=+0.117450731 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:57:28 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:28.755 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:30.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 81 op/s
Nov 29 03:57:31 np0005539563 nova_compute[252253]: 2025-11-29 08:57:31.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:31 np0005539563 nova_compute[252253]: 2025-11-29 08:57:31.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:33 np0005539563 nova_compute[252253]: 2025-11-29 08:57:33.239 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:57:33 np0005539563 nova_compute[252253]: 2025-11-29 08:57:33.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:33 np0005539563 nova_compute[252253]: 2025-11-29 08:57:33.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:57:33 np0005539563 nova_compute[252253]: 2025-11-29 08:57:33.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:57:33 np0005539563 nova_compute[252253]: 2025-11-29 08:57:33.710 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:57:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:34 np0005539563 nova_compute[252253]: 2025-11-29 08:57:34.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 29 03:57:35 np0005539563 nova_compute[252253]: 2025-11-29 08:57:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:35 np0005539563 nova_compute[252253]: 2025-11-29 08:57:35.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:57:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:36.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:36 np0005539563 nova_compute[252253]: 2025-11-29 08:57:36.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:36.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:57:37 np0005539563 nova_compute[252253]: 2025-11-29 08:57:37.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:57:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:57:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:38.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.243 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.782 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.784 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.784 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.785 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:57:38 np0005539563 nova_compute[252253]: 2025-11-29 08:57:38.786 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:38.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:38 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9b3d4ff2-54fe-40be-8302-3340f04b44b6 does not exist
Nov 29 03:57:38 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 77ce0428-f5a4-4aa4-ba57-7be27191d7a1 does not exist
Nov 29 03:57:38 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4025e64e-da1e-446a-86f0-aeb8a56a9801 does not exist
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:57:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:57:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3955622861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.271 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.498 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.501 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4139MB free_disk=20.986629486083984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.502 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.502 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.582 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.582 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:57:39 np0005539563 nova_compute[252253]: 2025-11-29 08:57:39.605 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:57:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.707040718 +0000 UTC m=+0.052952805 container create e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:57:39 np0005539563 systemd[1]: Started libpod-conmon-e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1.scope.
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.679488512 +0000 UTC m=+0.025400699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.802850542 +0000 UTC m=+0.148762679 container init e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendeleev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.813767328 +0000 UTC m=+0.159679425 container start e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendeleev, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.819786751 +0000 UTC m=+0.165698858 container attach e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:57:39 np0005539563 systemd[1]: libpod-e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1.scope: Deactivated successfully.
Nov 29 03:57:39 np0005539563 intelligent_mendeleev[398167]: 167 167
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.824390855 +0000 UTC m=+0.170302952 container died e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:57:39 np0005539563 conmon[398167]: conmon e2edca5f0a34c759ae14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1.scope/container/memory.events
Nov 29 03:57:39 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7ed919aba1db20b85b8a940da8f2288794bb08b5f24dbb22eda9167f5be6b36e-merged.mount: Deactivated successfully.
Nov 29 03:57:39 np0005539563 podman[398140]: 2025-11-29 08:57:39.867381019 +0000 UTC m=+0.213293116 container remove e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:57:39 np0005539563 systemd[1]: libpod-conmon-e2edca5f0a34c759ae149fd3e440e2be4fd8a89e55b12ae1dfb57a5514a392d1.scope: Deactivated successfully.
Nov 29 03:57:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599731928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:40 np0005539563 podman[398201]: 2025-11-29 08:57:40.079190504 +0000 UTC m=+0.061288020 container create 3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:57:40 np0005539563 nova_compute[252253]: 2025-11-29 08:57:40.108 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:57:40 np0005539563 nova_compute[252253]: 2025-11-29 08:57:40.118 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:57:40 np0005539563 nova_compute[252253]: 2025-11-29 08:57:40.135 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:57:40 np0005539563 nova_compute[252253]: 2025-11-29 08:57:40.136 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:57:40 np0005539563 nova_compute[252253]: 2025-11-29 08:57:40.137 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:57:40 np0005539563 systemd[1]: Started libpod-conmon-3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393.scope.
Nov 29 03:57:40 np0005539563 podman[398201]: 2025-11-29 08:57:40.058535016 +0000 UTC m=+0.040632592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaacfce4a1b367b30911afff791eda79c883a38a117499c88b39addeb199694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaacfce4a1b367b30911afff791eda79c883a38a117499c88b39addeb199694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaacfce4a1b367b30911afff791eda79c883a38a117499c88b39addeb199694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaacfce4a1b367b30911afff791eda79c883a38a117499c88b39addeb199694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeaacfce4a1b367b30911afff791eda79c883a38a117499c88b39addeb199694/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:40 np0005539563 podman[398201]: 2025-11-29 08:57:40.20350975 +0000 UTC m=+0.185607266 container init 3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:57:40 np0005539563 podman[398201]: 2025-11-29 08:57:40.212682039 +0000 UTC m=+0.194779555 container start 3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:57:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:40 np0005539563 podman[398201]: 2025-11-29 08:57:40.216016399 +0000 UTC m=+0.198113935 container attach 3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_colden, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:57:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:40.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:41 np0005539563 gifted_colden[398219]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:57:41 np0005539563 gifted_colden[398219]: --> relative data size: 1.0
Nov 29 03:57:41 np0005539563 gifted_colden[398219]: --> All data devices are unavailable
Nov 29 03:57:41 np0005539563 systemd[1]: libpod-3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393.scope: Deactivated successfully.
Nov 29 03:57:41 np0005539563 podman[398236]: 2025-11-29 08:57:41.172524337 +0000 UTC m=+0.023931018 container died 3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_colden, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:57:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-eeaacfce4a1b367b30911afff791eda79c883a38a117499c88b39addeb199694-merged.mount: Deactivated successfully.
Nov 29 03:57:41 np0005539563 podman[398236]: 2025-11-29 08:57:41.239298445 +0000 UTC m=+0.090705096 container remove 3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_colden, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:57:41 np0005539563 systemd[1]: libpod-conmon-3f1c992d2318f19c5ac2d03a4226cf034c97da58559d8137f41e294fb621e393.scope: Deactivated successfully.
Nov 29 03:57:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 99 op/s
Nov 29 03:57:41 np0005539563 nova_compute[252253]: 2025-11-29 08:57:41.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:41 np0005539563 podman[398392]: 2025-11-29 08:57:41.944589692 +0000 UTC m=+0.039882121 container create d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:57:41 np0005539563 systemd[1]: Started libpod-conmon-d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50.scope.
Nov 29 03:57:42 np0005539563 podman[398392]: 2025-11-29 08:57:41.928822665 +0000 UTC m=+0.024115114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:42 np0005539563 podman[398392]: 2025-11-29 08:57:42.053515091 +0000 UTC m=+0.148807600 container init d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_borg, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:57:42 np0005539563 podman[398392]: 2025-11-29 08:57:42.064833508 +0000 UTC m=+0.160125937 container start d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_borg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:57:42 np0005539563 podman[398392]: 2025-11-29 08:57:42.06788507 +0000 UTC m=+0.163177619 container attach d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:57:42 np0005539563 nifty_borg[398409]: 167 167
Nov 29 03:57:42 np0005539563 systemd[1]: libpod-d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50.scope: Deactivated successfully.
Nov 29 03:57:42 np0005539563 podman[398392]: 2025-11-29 08:57:42.071067546 +0000 UTC m=+0.166359975 container died d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:57:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6a5ac03b86d4efbfecb9a5088021054fed067c6495925ad1c09afb4128e6edec-merged.mount: Deactivated successfully.
Nov 29 03:57:42 np0005539563 podman[398392]: 2025-11-29 08:57:42.102641181 +0000 UTC m=+0.197933620 container remove d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:57:42 np0005539563 systemd[1]: libpod-conmon-d998aab59702efbaa42aaa94d34842fe820cc660272c5568b4db74f3ce300a50.scope: Deactivated successfully.
Nov 29 03:57:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:42.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:42 np0005539563 podman[398432]: 2025-11-29 08:57:42.311470706 +0000 UTC m=+0.061050384 container create fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_babbage, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:57:42 np0005539563 systemd[1]: Started libpod-conmon-fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061.scope.
Nov 29 03:57:42 np0005539563 podman[398432]: 2025-11-29 08:57:42.29501229 +0000 UTC m=+0.044591998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144a21cf94d27f42383aa2ee78fc4e426120ade0867b0dcbf6275df968dbfbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144a21cf94d27f42383aa2ee78fc4e426120ade0867b0dcbf6275df968dbfbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144a21cf94d27f42383aa2ee78fc4e426120ade0867b0dcbf6275df968dbfbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b144a21cf94d27f42383aa2ee78fc4e426120ade0867b0dcbf6275df968dbfbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:42 np0005539563 podman[398432]: 2025-11-29 08:57:42.413416746 +0000 UTC m=+0.162996444 container init fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_babbage, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:57:42 np0005539563 podman[398432]: 2025-11-29 08:57:42.430865099 +0000 UTC m=+0.180444787 container start fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:57:42 np0005539563 podman[398432]: 2025-11-29 08:57:42.4349849 +0000 UTC m=+0.184564598 container attach fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:57:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:43 np0005539563 cool_babbage[398449]: {
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:    "0": [
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:        {
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "devices": [
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "/dev/loop3"
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            ],
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "lv_name": "ceph_lv0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "lv_size": "7511998464",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "name": "ceph_lv0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "tags": {
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.cluster_name": "ceph",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.crush_device_class": "",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.encrypted": "0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.osd_id": "0",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.type": "block",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:                "ceph.vdo": "0"
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            },
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "type": "block",
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:            "vg_name": "ceph_vg0"
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:        }
Nov 29 03:57:43 np0005539563 cool_babbage[398449]:    ]
Nov 29 03:57:43 np0005539563 cool_babbage[398449]: }
Nov 29 03:57:43 np0005539563 systemd[1]: libpod-fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061.scope: Deactivated successfully.
Nov 29 03:57:43 np0005539563 podman[398432]: 2025-11-29 08:57:43.219949843 +0000 UTC m=+0.969529581 container died fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:43 np0005539563 nova_compute[252253]: 2025-11-29 08:57:43.245 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b144a21cf94d27f42383aa2ee78fc4e426120ade0867b0dcbf6275df968dbfbd-merged.mount: Deactivated successfully.
Nov 29 03:57:43 np0005539563 podman[398432]: 2025-11-29 08:57:43.277880142 +0000 UTC m=+1.027459820 container remove fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_babbage, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:57:43 np0005539563 systemd[1]: libpod-conmon-fd3211508637db675bcb18484ea33abe17f49444c3a7441121b8ade9e776f061.scope: Deactivated successfully.
Nov 29 03:57:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 244 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.856779107 +0000 UTC m=+0.034952558 container create 7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:57:43 np0005539563 systemd[1]: Started libpod-conmon-7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e.scope.
Nov 29 03:57:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.839971252 +0000 UTC m=+0.018144723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.938589032 +0000 UTC m=+0.116762523 container init 7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.946826175 +0000 UTC m=+0.124999626 container start 7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.950192486 +0000 UTC m=+0.128365957 container attach 7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:57:43 np0005539563 elastic_carver[398629]: 167 167
Nov 29 03:57:43 np0005539563 systemd[1]: libpod-7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e.scope: Deactivated successfully.
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.951561272 +0000 UTC m=+0.129734733 container died 7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:57:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ff82de449bac020de7fd10c8d0834bd993dcca2620c6a8e9b906050728d0814c-merged.mount: Deactivated successfully.
Nov 29 03:57:43 np0005539563 podman[398612]: 2025-11-29 08:57:43.983826436 +0000 UTC m=+0.161999887 container remove 7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:57:43 np0005539563 systemd[1]: libpod-conmon-7c7119678b57ba39401317cce03ac4f28699eed2cdaaed2248b336172bad269e.scope: Deactivated successfully.
Nov 29 03:57:44 np0005539563 podman[398654]: 2025-11-29 08:57:44.120413395 +0000 UTC m=+0.033300493 container create acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:57:44 np0005539563 systemd[1]: Started libpod-conmon-acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c.scope.
Nov 29 03:57:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59234841bfe4874fd77e10c56d611e33675452a495041af30458eb3a3eb08430/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59234841bfe4874fd77e10c56d611e33675452a495041af30458eb3a3eb08430/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59234841bfe4874fd77e10c56d611e33675452a495041af30458eb3a3eb08430/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59234841bfe4874fd77e10c56d611e33675452a495041af30458eb3a3eb08430/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:44 np0005539563 podman[398654]: 2025-11-29 08:57:44.106502088 +0000 UTC m=+0.019389206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:44 np0005539563 podman[398654]: 2025-11-29 08:57:44.206901987 +0000 UTC m=+0.119789085 container init acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:57:44 np0005539563 podman[398654]: 2025-11-29 08:57:44.212913609 +0000 UTC m=+0.125800707 container start acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:57:44 np0005539563 podman[398654]: 2025-11-29 08:57:44.215573182 +0000 UTC m=+0.128460280 container attach acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:57:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:44.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.333 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.334 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.352 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.433 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.433 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.443 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.444 252257 INFO nova.compute.claims [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.552 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905176243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.975 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:44 np0005539563 nova_compute[252253]: 2025-11-29 08:57:44.981 252257 DEBUG nova.compute.provider_tree [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.001 252257 DEBUG nova.scheduler.client.report [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.023 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.024 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]: {
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:        "osd_id": 0,
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:        "type": "bluestore"
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]:    }
Nov 29 03:57:45 np0005539563 sad_meninsky[398670]: }
Nov 29 03:57:45 np0005539563 systemd[1]: libpod-acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c.scope: Deactivated successfully.
Nov 29 03:57:45 np0005539563 podman[398654]: 2025-11-29 08:57:45.054602559 +0000 UTC m=+0.967489657 container died acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.064 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.064 252257 DEBUG nova.network.neutron [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:57:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-59234841bfe4874fd77e10c56d611e33675452a495041af30458eb3a3eb08430-merged.mount: Deactivated successfully.
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.095 252257 INFO nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:57:45 np0005539563 podman[398654]: 2025-11-29 08:57:45.100479111 +0000 UTC m=+1.013366209 container remove acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:57:45 np0005539563 systemd[1]: libpod-conmon-acb629f8a1a4706e81e7ab8cdb612dc7c15539e40c16198c404a70ae8a21245c.scope: Deactivated successfully.
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.117 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:57:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.136 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:57:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5f7c5acc-3e0c-462e-9de9-551ac69ec734 does not exist
Nov 29 03:57:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0cacd7c5-834a-4e56-9439-7bb52d27a55b does not exist
Nov 29 03:57:45 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d2c2bba3-f00e-4ea8-b127-cf7ad19cd775 does not exist
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.241 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.242 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.242 252257 INFO nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Creating image(s)#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.274 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.303 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.330 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.333 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.407 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.408 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.409 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.409 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.438 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.442 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:45 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.772 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.860 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:57:45 np0005539563 nova_compute[252253]: 2025-11-29 08:57:45.978 252257 DEBUG nova.objects.instance [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a5e65b2-e585-469d-9639-c30e30f77ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.121 252257 DEBUG nova.policy [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.204 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.204 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Ensure instance console log exists: /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.206 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.206 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.207 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:57:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:57:46 np0005539563 nova_compute[252253]: 2025-11-29 08:57:46.741 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:46.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 03:57:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/736689315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:48.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.248 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.561 252257 DEBUG nova.network.neutron [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Successfully updated port: df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.754 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-4a5e65b2-e585-469d-9639-c30e30f77ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.755 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-4a5e65b2-e585-469d-9639-c30e30f77ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.755 252257 DEBUG nova.network.neutron [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.826 252257 DEBUG nova.compute.manager [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received event network-changed-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.827 252257 DEBUG nova.compute.manager [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Refreshing instance network info cache due to event network-changed-df6fdca5-d93b-4b2f-9836-ec2a95857ae7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:57:48 np0005539563 nova_compute[252253]: 2025-11-29 08:57:48.827 252257 DEBUG oslo_concurrency.lockutils [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-4a5e65b2-e585-469d-9639-c30e30f77ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:57:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:49 np0005539563 nova_compute[252253]: 2025-11-29 08:57:49.442 252257 DEBUG nova.network.neutron [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:57:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 132 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 293 KiB/s wr, 14 op/s
Nov 29 03:57:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:50.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.665 252257 DEBUG nova.network.neutron [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Updating instance_info_cache with network_info: [{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.694 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-4a5e65b2-e585-469d-9639-c30e30f77ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.695 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Instance network_info: |[{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.696 252257 DEBUG oslo_concurrency.lockutils [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-4a5e65b2-e585-469d-9639-c30e30f77ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.697 252257 DEBUG nova.network.neutron [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Refreshing network info cache for port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.702 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Start _get_guest_xml network_info=[{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.712 252257 WARNING nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.718 252257 DEBUG nova.virt.libvirt.host [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.720 252257 DEBUG nova.virt.libvirt.host [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.732 252257 DEBUG nova.virt.libvirt.host [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.733 252257 DEBUG nova.virt.libvirt.host [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.735 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.736 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.737 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.737 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.738 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.738 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.739 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.739 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.740 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.741 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.741 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.741 252257 DEBUG nova.virt.hardware [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:57:50 np0005539563 nova_compute[252253]: 2025-11-29 08:57:50.747 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:50.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:57:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3221452569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.218 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.258 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.263 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:57:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4014826015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.745 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.759 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.761 252257 DEBUG nova.virt.libvirt.vif [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-567674992',display_name='tempest-TestNetworkBasicOps-server-567674992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-567674992',id=208,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFuIb9m+qfup4I2gUah8h3Yp0EK1yDvhzdfhBEwzW/K33/kJ3QKeCYRsrifRmAOkqpPvCg8Mj44MqQpFS1WnvZmElNZe+SJkb+eYAdn9hHCWYLX0O+JBeXhY9SZcAyNTw==',key_name='tempest-TestNetworkBasicOps-1434938868',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-ca68e755',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:57:45Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=4a5e65b2-e585-469d-9639-c30e30f77ed0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.761 252257 DEBUG nova.network.os_vif_util [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.762 252257 DEBUG nova.network.os_vif_util [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.763 252257 DEBUG nova.objects.instance [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a5e65b2-e585-469d-9639-c30e30f77ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.788 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <uuid>4a5e65b2-e585-469d-9639-c30e30f77ed0</uuid>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <name>instance-000000d0</name>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkBasicOps-server-567674992</nova:name>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 08:57:50</nova:creationTime>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <nova:port uuid="df6fdca5-d93b-4b2f-9836-ec2a95857ae7">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <system>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <entry name="serial">4a5e65b2-e585-469d-9639-c30e30f77ed0</entry>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <entry name="uuid">4a5e65b2-e585-469d-9639-c30e30f77ed0</entry>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </system>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <os>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </os>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <features>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </features>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </clock>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  <devices>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/4a5e65b2-e585-469d-9639-c30e30f77ed0_disk">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/4a5e65b2-e585-469d-9639-c30e30f77ed0_disk.config">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </source>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      </auth>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </disk>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:61:3c:80"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <target dev="tapdf6fdca5-d9"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </interface>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/console.log" append="off"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </serial>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <video>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </video>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </rng>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 03:57:51 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 03:57:51 np0005539563 nova_compute[252253]:  </devices>
Nov 29 03:57:51 np0005539563 nova_compute[252253]: </domain>
Nov 29 03:57:51 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.791 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Preparing to wait for external event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.792 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.792 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.793 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.794 252257 DEBUG nova.virt.libvirt.vif [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-567674992',display_name='tempest-TestNetworkBasicOps-server-567674992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-567674992',id=208,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFuIb9m+qfup4I2gUah8h3Yp0EK1yDvhzdfhBEwzW/K33/kJ3QKeCYRsrifRmAOkqpPvCg8Mj44MqQpFS1WnvZmElNZe+SJkb+eYAdn9hHCWYLX0O+JBeXhY9SZcAyNTw==',key_name='tempest-TestNetworkBasicOps-1434938868',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-ca68e755',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:57:45Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=4a5e65b2-e585-469d-9639-c30e30f77ed0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.795 252257 DEBUG nova.network.os_vif_util [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.796 252257 DEBUG nova.network.os_vif_util [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.797 252257 DEBUG os_vif [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.798 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.799 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.799 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.805 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf6fdca5-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.806 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf6fdca5-d9, col_values=(('external_ids', {'iface-id': 'df6fdca5-d93b-4b2f-9836-ec2a95857ae7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:3c:80', 'vm-uuid': '4a5e65b2-e585-469d-9639-c30e30f77ed0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:51 np0005539563 NetworkManager[48981]: <info>  [1764406671.8103] manager: (tapdf6fdca5-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.812 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.823 252257 INFO os_vif [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9')#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.880 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.881 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.882 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:61:3c:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.883 252257 INFO nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Using config drive#033[00m
Nov 29 03:57:51 np0005539563 nova_compute[252253]: 2025-11-29 08:57:51.926 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:52.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.605 252257 INFO nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Creating config drive at /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/disk.config#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.610 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01l0f0_l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.758 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp01l0f0_l" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.794 252257 DEBUG nova.storage.rbd_utils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.798 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/disk.config 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.866 252257 DEBUG nova.network.neutron [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Updated VIF entry in instance network info cache for port df6fdca5-d93b-4b2f-9836-ec2a95857ae7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.867 252257 DEBUG nova.network.neutron [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Updating instance_info_cache with network_info: [{"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:57:52 np0005539563 nova_compute[252253]: 2025-11-29 08:57:52.891 252257 DEBUG oslo_concurrency.lockutils [req-5c36f803-5643-4d3f-8430-6794ad553383 req-ddd929b4-31c1-4bac-97fd-be6bb41cb60f 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-4a5e65b2-e585-469d-9639-c30e30f77ed0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:57:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:57:53 np0005539563 nova_compute[252253]: 2025-11-29 08:57:53.863 252257 DEBUG oslo_concurrency.processutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/disk.config 4a5e65b2-e585-469d-9639-c30e30f77ed0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:53 np0005539563 nova_compute[252253]: 2025-11-29 08:57:53.864 252257 INFO nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Deleting local config drive /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0/disk.config because it was imported into RBD.#033[00m
Nov 29 03:57:53 np0005539563 kernel: tapdf6fdca5-d9: entered promiscuous mode
Nov 29 03:57:53 np0005539563 NetworkManager[48981]: <info>  [1764406673.9704] manager: (tapdf6fdca5-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Nov 29 03:57:53 np0005539563 nova_compute[252253]: 2025-11-29 08:57:53.972 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:53Z|00898|binding|INFO|Claiming lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for this chassis.
Nov 29 03:57:53 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:53Z|00899|binding|INFO|df6fdca5-d93b-4b2f-9836-ec2a95857ae7: Claiming fa:16:3e:61:3c:80 10.100.0.9
Nov 29 03:57:53 np0005539563 nova_compute[252253]: 2025-11-29 08:57:53.977 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:53 np0005539563 NetworkManager[48981]: <info>  [1764406673.9820] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 29 03:57:53 np0005539563 NetworkManager[48981]: <info>  [1764406673.9827] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Nov 29 03:57:53 np0005539563 nova_compute[252253]: 2025-11-29 08:57:53.981 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:53.988 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:3c:80 10.100.0.9'], port_security=['fa:16:3e:61:3c:80 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4a5e65b2-e585-469d-9639-c30e30f77ed0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '7', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17861533-9bce-4d6b-b9db-7033aae334db, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=df6fdca5-d93b-4b2f-9836-ec2a95857ae7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:57:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:53.989 158990 INFO neutron.agent.ovn.metadata.agent [-] Port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 in datapath 8601263e-32ba-44f8-aef7-66d3b518c9d4 bound to our chassis#033[00m
Nov 29 03:57:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:53.990 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8601263e-32ba-44f8-aef7-66d3b518c9d4#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.005 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef9abbf-b517-4a3e-9f1c-c0e71ded3874]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.006 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8601263e-31 in ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:57:54 np0005539563 systemd-udevd[399082]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:57:54 np0005539563 systemd-machined[213024]: New machine qemu-100-instance-000000d0.
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.008 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8601263e-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.008 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[07876f46-fae2-455c-8942-e2518b34f676]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.009 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0560a610-922f-4b7b-93d8-dc7d7400b1d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 NetworkManager[48981]: <info>  [1764406674.0208] device (tapdf6fdca5-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:57:54 np0005539563 NetworkManager[48981]: <info>  [1764406674.0218] device (tapdf6fdca5-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.022 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[a321f06c-9d6f-4c93-b175-534d8b5d555f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 systemd[1]: Started Virtual Machine qemu-100-instance-000000d0.
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.049 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b25fb359-f7c8-4e12-a45a-15b79c671345]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.058 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.064 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:54Z|00900|binding|INFO|Setting lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 ovn-installed in OVS
Nov 29 03:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:54Z|00901|binding|INFO|Setting lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 up in Southbound
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.081 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[01e47ae6-8793-480e-9e18-fda4f2857877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.085 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2f398680-17d3-460a-b8f5-9b7c4461f1b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 NetworkManager[48981]: <info>  [1764406674.0872] manager: (tap8601263e-30): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.118 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a43dfeb8-448c-4de2-ada6-c18959cadfde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.122 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d664b7-17a8-483d-b980-d24583c24932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 NetworkManager[48981]: <info>  [1764406674.1444] device (tap8601263e-30): carrier: link connected
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.149 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[037254da-31fb-4657-8b12-3a6b4d1389cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.167 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8714e4e1-7595-492c-9589-c1ce2cfc8747]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8601263e-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:1e:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 964191, 'reachable_time': 23318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399114, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.182 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d56ac9-4058-439e-b256-0eacea6b3ab0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:1e9b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 964191, 'tstamp': 964191}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399115, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.198 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[42719ee5-1ea9-4a64-a636-dd34b90db02b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8601263e-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:1e:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 964191, 'reachable_time': 23318, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399116, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:54.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.236 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfdeb45-984a-43ca-9dd0-58eefb78fd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.299 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[30aefcaf-0d16-4157-b3df-a66248588efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.301 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8601263e-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.302 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.302 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8601263e-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.304 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 NetworkManager[48981]: <info>  [1764406674.3054] manager: (tap8601263e-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 29 03:57:54 np0005539563 kernel: tap8601263e-30: entered promiscuous mode
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.308 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.309 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8601263e-30, col_values=(('external_ids', {'iface-id': '7d2fd210-17a1-4ed9-a018-d28f3f167dc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.310 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:54Z|00902|binding|INFO|Releasing lport 7d2fd210-17a1-4ed9-a018-d28f3f167dc8 from this chassis (sb_readonly=0)
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.337 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.339 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.339 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8601263e-32ba-44f8-aef7-66d3b518c9d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8601263e-32ba-44f8-aef7-66d3b518c9d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.340 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1f830ac2-7edc-4152-a789-15ad4763905b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.341 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8601263e-32ba-44f8-aef7-66d3b518c9d4
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8601263e-32ba-44f8-aef7-66d3b518c9d4.pid.haproxy
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8601263e-32ba-44f8-aef7-66d3b518c9d4
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:57:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:54.342 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'env', 'PROCESS_TAG=haproxy-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8601263e-32ba-44f8-aef7-66d3b518c9d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:57:54 np0005539563 podman[399149]: 2025-11-29 08:57:54.700906192 +0000 UTC m=+0.048523955 container create 1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.734 252257 DEBUG nova.compute.manager [req-f6d1cdfc-6863-4fda-af61-f1fee80f706d req-3749650d-c964-4a91-a7e2-f44b36af898a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.735 252257 DEBUG oslo_concurrency.lockutils [req-f6d1cdfc-6863-4fda-af61-f1fee80f706d req-3749650d-c964-4a91-a7e2-f44b36af898a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.735 252257 DEBUG oslo_concurrency.lockutils [req-f6d1cdfc-6863-4fda-af61-f1fee80f706d req-3749650d-c964-4a91-a7e2-f44b36af898a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.735 252257 DEBUG oslo_concurrency.lockutils [req-f6d1cdfc-6863-4fda-af61-f1fee80f706d req-3749650d-c964-4a91-a7e2-f44b36af898a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.736 252257 DEBUG nova.compute.manager [req-f6d1cdfc-6863-4fda-af61-f1fee80f706d req-3749650d-c964-4a91-a7e2-f44b36af898a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Processing event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:57:54 np0005539563 systemd[1]: Started libpod-conmon-1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f.scope.
Nov 29 03:57:54 np0005539563 podman[399149]: 2025-11-29 08:57:54.674555718 +0000 UTC m=+0.022173481 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:57:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:57:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4846006013a21d88a341e534d0d600de03d8b2a826bbfe8ae25cda2f0ca82b45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:54 np0005539563 podman[399149]: 2025-11-29 08:57:54.794422314 +0000 UTC m=+0.142040107 container init 1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:57:54 np0005539563 podman[399149]: 2025-11-29 08:57:54.800170059 +0000 UTC m=+0.147787812 container start 1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:57:54 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [NOTICE]   (399194) : New worker (399205) forked
Nov 29 03:57:54 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [NOTICE]   (399194) : Loading success.
Nov 29 03:57:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.945 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406674.9453166, 4a5e65b2-e585-469d-9639-c30e30f77ed0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.946 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.948 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.952 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.955 252257 INFO nova.virt.libvirt.driver [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Instance spawned successfully.#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.956 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.982 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.987 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.989 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.990 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.990 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.991 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.991 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:54 np0005539563 nova_compute[252253]: 2025-11-29 08:57:54.992 252257 DEBUG nova.virt.libvirt.driver [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.027 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.027 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406674.9454577, 4a5e65b2-e585-469d-9639-c30e30f77ed0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.027 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.056 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.059 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406674.9509165, 4a5e65b2-e585-469d-9639-c30e30f77ed0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.059 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.072 252257 INFO nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Took 9.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.072 252257 DEBUG nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.081 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.083 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.117 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.135 252257 INFO nova.compute.manager [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Took 10.72 seconds to build instance.#033[00m
Nov 29 03:57:55 np0005539563 nova_compute[252253]: 2025-11-29 08:57:55.156 252257 DEBUG oslo_concurrency.lockutils [None req-5b136f29-95df-4b53-b3c3-e304cf76c17e 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 03:57:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:56.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.824 252257 DEBUG nova.compute.manager [req-238c6dd0-7124-4c15-b1a4-474eae0c0768 req-a74d292b-0dbd-4606-be92-0e578e94b208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.825 252257 DEBUG oslo_concurrency.lockutils [req-238c6dd0-7124-4c15-b1a4-474eae0c0768 req-a74d292b-0dbd-4606-be92-0e578e94b208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.825 252257 DEBUG oslo_concurrency.lockutils [req-238c6dd0-7124-4c15-b1a4-474eae0c0768 req-a74d292b-0dbd-4606-be92-0e578e94b208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.826 252257 DEBUG oslo_concurrency.lockutils [req-238c6dd0-7124-4c15-b1a4-474eae0c0768 req-a74d292b-0dbd-4606-be92-0e578e94b208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.826 252257 DEBUG nova.compute.manager [req-238c6dd0-7124-4c15-b1a4-474eae0c0768 req-a74d292b-0dbd-4606-be92-0e578e94b208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] No waiting events found dispatching network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:57:56 np0005539563 nova_compute[252253]: 2025-11-29 08:57:56.827 252257 WARNING nova.compute.manager [req-238c6dd0-7124-4c15-b1a4-474eae0c0768 req-a74d292b-0dbd-4606-be92-0e578e94b208 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received unexpected event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:57:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 03:57:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:58.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:57:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:58.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.115 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.117 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.117 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.118 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.118 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.120 252257 INFO nova.compute.manager [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Terminating instance#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.122 252257 DEBUG nova.compute.manager [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:57:59 np0005539563 kernel: tapdf6fdca5-d9 (unregistering): left promiscuous mode
Nov 29 03:57:59 np0005539563 NetworkManager[48981]: <info>  [1764406679.2874] device (tapdf6fdca5-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:57:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:59Z|00903|binding|INFO|Releasing lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 from this chassis (sb_readonly=0)
Nov 29 03:57:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:59Z|00904|binding|INFO|Setting lport df6fdca5-d93b-4b2f-9836-ec2a95857ae7 down in Southbound
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.302 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 ovn_controller[148841]: 2025-11-29T08:57:59Z|00905|binding|INFO|Removing iface tapdf6fdca5-d9 ovn-installed in OVS
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.306 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.318 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:3c:80 10.100.0.9'], port_security=['fa:16:3e:61:3c:80 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4a5e65b2-e585-469d-9639-c30e30f77ed0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1382759112', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '9', 'neutron:security_group_ids': '609a93e2-6e8e-4542-856e-8879513dfb81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17861533-9bce-4d6b-b9db-7033aae334db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=df6fdca5-d93b-4b2f-9836-ec2a95857ae7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.320 158990 INFO neutron.agent.ovn.metadata.agent [-] Port df6fdca5-d93b-4b2f-9836-ec2a95857ae7 in datapath 8601263e-32ba-44f8-aef7-66d3b518c9d4 unbound from our chassis#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.321 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8601263e-32ba-44f8-aef7-66d3b518c9d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.322 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2c59f8d6-b9a5-4e20-afd1-4529b95a9313]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.323 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 namespace which is not needed anymore#033[00m
Nov 29 03:57:59 np0005539563 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.343 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000d0.scope: Consumed 5.223s CPU time.
Nov 29 03:57:59 np0005539563 systemd-machined[213024]: Machine qemu-100-instance-000000d0 terminated.
Nov 29 03:57:59 np0005539563 podman[399275]: 2025-11-29 08:57:59.391561645 +0000 UTC m=+0.080743097 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:57:59 np0005539563 podman[399278]: 2025-11-29 08:57:59.404954378 +0000 UTC m=+0.088732813 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:57:59 np0005539563 podman[399279]: 2025-11-29 08:57:59.423891791 +0000 UTC m=+0.100946615 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 29 03:57:59 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [NOTICE]   (399194) : haproxy version is 2.8.14-c23fe91
Nov 29 03:57:59 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [NOTICE]   (399194) : path to executable is /usr/sbin/haproxy
Nov 29 03:57:59 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [WARNING]  (399194) : Exiting Master process...
Nov 29 03:57:59 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [ALERT]    (399194) : Current worker (399205) exited with code 143 (Terminated)
Nov 29 03:57:59 np0005539563 neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4[399168]: [WARNING]  (399194) : All workers exited. Exiting... (0)
Nov 29 03:57:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 374 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 29 03:57:59 np0005539563 systemd[1]: libpod-1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f.scope: Deactivated successfully.
Nov 29 03:57:59 np0005539563 podman[399358]: 2025-11-29 08:57:59.45781835 +0000 UTC m=+0.041332241 container died 1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:57:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:57:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4846006013a21d88a341e534d0d600de03d8b2a826bbfe8ae25cda2f0ca82b45-merged.mount: Deactivated successfully.
Nov 29 03:57:59 np0005539563 podman[399358]: 2025-11-29 08:57:59.493235968 +0000 UTC m=+0.076749859 container cleanup 1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:57:59 np0005539563 systemd[1]: libpod-conmon-1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f.scope: Deactivated successfully.
Nov 29 03:57:59 np0005539563 podman[399392]: 2025-11-29 08:57:59.555180465 +0000 UTC m=+0.042834930 container remove 1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.562 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b30e50-c3aa-4f8c-8b39-4133583d47b6]: (4, ('Sat Nov 29 08:57:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 (1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f)\n1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f\nSat Nov 29 08:57:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 (1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f)\n1c91e38ea0510861eb40c702d6f690b3a85b4e372459e0e4e7cd9e8449299b4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.563 252257 INFO nova.virt.libvirt.driver [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Instance destroyed successfully.#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.564 252257 DEBUG nova.objects.instance [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid 4a5e65b2-e585-469d-9639-c30e30f77ed0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.565 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9b9c56-f8d9-4708-b815-c0a117172ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.566 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8601263e-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.568 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 kernel: tap8601263e-30: left promiscuous mode
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.585 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.588 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[618eabf1-4175-44c8-913d-94361d8de240]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.602 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fed76b7c-dfc3-4ebe-91cd-1e3b487b3b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.602 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[34b1adc0-fb8e-444b-8440-6da246d4d08c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.611 252257 DEBUG nova.virt.libvirt.vif [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-567674992',display_name='tempest-TestNetworkBasicOps-server-567674992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-567674992',id=208,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBFuIb9m+qfup4I2gUah8h3Yp0EK1yDvhzdfhBEwzW/K33/kJ3QKeCYRsrifRmAOkqpPvCg8Mj44MqQpFS1WnvZmElNZe+SJkb+eYAdn9hHCWYLX0O+JBeXhY9SZcAyNTw==',key_name='tempest-TestNetworkBasicOps-1434938868',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:57:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-ca68e755',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:57:55Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=4a5e65b2-e585-469d-9639-c30e30f77ed0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.612 252257 DEBUG nova.network.os_vif_util [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "address": "fa:16:3e:61:3c:80", "network": {"id": "8601263e-32ba-44f8-aef7-66d3b518c9d4", "bridge": "br-int", "label": "tempest-network-smoke--1596277998", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf6fdca5-d9", "ovs_interfaceid": "df6fdca5-d93b-4b2f-9836-ec2a95857ae7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.612 252257 DEBUG nova.network.os_vif_util [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.613 252257 DEBUG os_vif [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.614 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.614 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf6fdca5-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.616 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.617 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.617 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[93146fe3-5917-4045-893e-83583c0f4a45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 964184, 'reachable_time': 17669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399419, 'error': None, 'target': 'ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8601263e\x2d32ba\x2d44f8\x2daef7\x2d66d3b518c9d4.mount: Deactivated successfully.
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.620 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8601263e-32ba-44f8-aef7-66d3b518c9d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:57:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:57:59.620 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5440fe-8476-4482-b441-392e4dbd9d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.620 252257 INFO os_vif [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=df6fdca5-d93b-4b2f-9836-ec2a95857ae7,network=Network(8601263e-32ba-44f8-aef7-66d3b518c9d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapdf6fdca5-d9')#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.726 252257 DEBUG nova.compute.manager [req-fa68d781-0b4b-4b91-8b5a-07c9b7a303f9 req-c54fe4f3-1879-4cf7-95dd-101d35e797b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received event network-vif-unplugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.727 252257 DEBUG oslo_concurrency.lockutils [req-fa68d781-0b4b-4b91-8b5a-07c9b7a303f9 req-c54fe4f3-1879-4cf7-95dd-101d35e797b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.727 252257 DEBUG oslo_concurrency.lockutils [req-fa68d781-0b4b-4b91-8b5a-07c9b7a303f9 req-c54fe4f3-1879-4cf7-95dd-101d35e797b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.728 252257 DEBUG oslo_concurrency.lockutils [req-fa68d781-0b4b-4b91-8b5a-07c9b7a303f9 req-c54fe4f3-1879-4cf7-95dd-101d35e797b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.728 252257 DEBUG nova.compute.manager [req-fa68d781-0b4b-4b91-8b5a-07c9b7a303f9 req-c54fe4f3-1879-4cf7-95dd-101d35e797b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] No waiting events found dispatching network-vif-unplugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:57:59 np0005539563 nova_compute[252253]: 2025-11-29 08:57:59.728 252257 DEBUG nova.compute.manager [req-fa68d781-0b4b-4b91-8b5a-07c9b7a303f9 req-c54fe4f3-1879-4cf7-95dd-101d35e797b2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received event network-vif-unplugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:58:00 np0005539563 nova_compute[252253]: 2025-11-29 08:58:00.130 252257 INFO nova.virt.libvirt.driver [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Deleting instance files /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0_del#033[00m
Nov 29 03:58:00 np0005539563 nova_compute[252253]: 2025-11-29 08:58:00.131 252257 INFO nova.virt.libvirt.driver [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Deletion of /var/lib/nova/instances/4a5e65b2-e585-469d-9639-c30e30f77ed0_del complete#033[00m
Nov 29 03:58:00 np0005539563 nova_compute[252253]: 2025-11-29 08:58:00.200 252257 INFO nova.compute.manager [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:58:00 np0005539563 nova_compute[252253]: 2025-11-29 08:58:00.200 252257 DEBUG oslo.service.loopingcall [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:58:00 np0005539563 nova_compute[252253]: 2025-11-29 08:58:00.201 252257 DEBUG nova.compute.manager [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:58:00 np0005539563 nova_compute[252253]: 2025-11-29 08:58:00.201 252257 DEBUG nova.network.neutron [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:58:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:00.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.838 252257 DEBUG nova.compute.manager [req-3030d12c-15bc-479e-9d90-f01e70b1f02c req-927639a3-2902-40bf-a3e7-b114130145f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.839 252257 DEBUG oslo_concurrency.lockutils [req-3030d12c-15bc-479e-9d90-f01e70b1f02c req-927639a3-2902-40bf-a3e7-b114130145f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.839 252257 DEBUG oslo_concurrency.lockutils [req-3030d12c-15bc-479e-9d90-f01e70b1f02c req-927639a3-2902-40bf-a3e7-b114130145f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.839 252257 DEBUG oslo_concurrency.lockutils [req-3030d12c-15bc-479e-9d90-f01e70b1f02c req-927639a3-2902-40bf-a3e7-b114130145f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.839 252257 DEBUG nova.compute.manager [req-3030d12c-15bc-479e-9d90-f01e70b1f02c req-927639a3-2902-40bf-a3e7-b114130145f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] No waiting events found dispatching network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:58:01 np0005539563 nova_compute[252253]: 2025-11-29 08:58:01.839 252257 WARNING nova.compute.manager [req-3030d12c-15bc-479e-9d90-f01e70b1f02c req-927639a3-2902-40bf-a3e7-b114130145f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Received unexpected event network-vif-plugged-df6fdca5-d93b-4b2f-9836-ec2a95857ae7 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:58:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 03:58:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:02.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 03:58:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 126 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Nov 29 03:58:03 np0005539563 nova_compute[252253]: 2025-11-29 08:58:03.543 252257 DEBUG nova.network.neutron [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:58:03 np0005539563 nova_compute[252253]: 2025-11-29 08:58:03.562 252257 INFO nova.compute.manager [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Took 3.36 seconds to deallocate network for instance.
Nov 29 03:58:03 np0005539563 nova_compute[252253]: 2025-11-29 08:58:03.620 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:58:03 np0005539563 nova_compute[252253]: 2025-11-29 08:58:03.621 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:58:03 np0005539563 nova_compute[252253]: 2025-11-29 08:58:03.695 252257 DEBUG oslo_concurrency.processutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:58:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2371015855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.152 252257 DEBUG oslo_concurrency.processutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.157 252257 DEBUG nova.compute.provider_tree [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.183 252257 DEBUG nova.scheduler.client.report [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.207 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:58:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:04.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.248 252257 INFO nova.scheduler.client.report [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance 4a5e65b2-e585-469d-9639-c30e30f77ed0
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.339 252257 DEBUG oslo_concurrency.lockutils [None req-9940102d-666b-4026-a09d-4618704b065c 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "4a5e65b2-e585-469d-9639-c30e30f77ed0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:58:04 np0005539563 nova_compute[252253]: 2025-11-29 08:58:04.615 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:58:04.970 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:58:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:58:04.971 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:58:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:58:04.971 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:58:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 29 03:58:05 np0005539563 nova_compute[252253]: 2025-11-29 08:58:05.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:06.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:58:06.245 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:58:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:58:06.246 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:58:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:58:06.247 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:58:06 np0005539563 nova_compute[252253]: 2025-11-29 08:58:06.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:06 np0005539563 nova_compute[252253]: 2025-11-29 08:58:06.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:06 np0005539563 nova_compute[252253]: 2025-11-29 08:58:06.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Nov 29 03:58:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:08.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:08 np0005539563 nova_compute[252253]: 2025-11-29 08:58:08.691 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:08 np0005539563 nova_compute[252253]: 2025-11-29 08:58:08.692 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:58:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Nov 29 03:58:09 np0005539563 nova_compute[252253]: 2025-11-29 08:58:09.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:10.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 79 op/s
Nov 29 03:58:11 np0005539563 nova_compute[252253]: 2025-11-29 08:58:11.689 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:11 np0005539563 nova_compute[252253]: 2025-11-29 08:58:11.753 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:12.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:12.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:58:13
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.log', 'vms', '.mgr']
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Nov 29 03:58:13 np0005539563 nova_compute[252253]: 2025-11-29 08:58:13.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:13 np0005539563 nova_compute[252253]: 2025-11-29 08:58:13.965 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:14.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:14 np0005539563 nova_compute[252253]: 2025-11-29 08:58:14.563 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406679.5620425, 4a5e65b2-e585-469d-9639-c30e30f77ed0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:58:14 np0005539563 nova_compute[252253]: 2025-11-29 08:58:14.564 252257 INFO nova.compute.manager [-] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] VM Stopped (Lifecycle Event)
Nov 29 03:58:14 np0005539563 nova_compute[252253]: 2025-11-29 08:58:14.598 252257 DEBUG nova.compute.manager [None req-348f226a-ed64-49ed-8032-5353af7b20b5 - - - - - -] [instance: 4a5e65b2-e585-469d-9639-c30e30f77ed0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:58:14 np0005539563 nova_compute[252253]: 2025-11-29 08:58:14.619 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:14.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 10 op/s
Nov 29 03:58:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:16.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:16 np0005539563 nova_compute[252253]: 2025-11-29 08:58:16.755 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:58:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:58:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:58:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.0 total, 600.0 interval
Cumulative writes: 17K writes, 75K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
Cumulative WAL: 17K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1562 writes, 6638 keys, 1562 commit groups, 1.0 writes per commit group, ingest: 10.40 MB, 0.02 MB/s
Interval WAL: 1562 writes, 1562 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     17.1      5.84              0.35        51    0.115       0      0       0.0       0.0
  L6      1/0   12.12 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     37.5     32.0     16.22              1.62        50    0.324    394K    27K       0.0       0.0
 Sum      1/0   12.12 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     27.5     28.1     22.06              1.97       101    0.218    394K    27K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6     88.1     89.8      0.80              0.24        10    0.080     54K   2612       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     37.5     32.0     16.22              1.62        50    0.324    394K    27K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     17.1      5.84              0.35        50    0.117       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6600.0 total, 600.0 interval
Flush(GB): cumulative 0.098, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.61 GB write, 0.09 MB/s write, 0.59 GB read, 0.09 MB/s read, 22.1 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 69.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00074 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3859,66.90 MB,22.0081%) FilterBlock(102,1.05 MB,0.345787%) IndexBlock(102,1.74 MB,0.57236%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:58:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:18.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:19 np0005539563 nova_compute[252253]: 2025-11-29 08:58:19.622 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:20.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:21 np0005539563 nova_compute[252253]: 2025-11-29 08:58:21.758 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:22.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:58:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:24.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:24 np0005539563 nova_compute[252253]: 2025-11-29 08:58:24.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:26 np0005539563 nova_compute[252253]: 2025-11-29 08:58:26.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:28.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:29 np0005539563 podman[399528]: 2025-11-29 08:58:29.495568391 +0000 UTC m=+0.054136907 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:58:29 np0005539563 podman[399529]: 2025-11-29 08:58:29.503310191 +0000 UTC m=+0.059320517 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:58:29 np0005539563 podman[399530]: 2025-11-29 08:58:29.537542527 +0000 UTC m=+0.087270533 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:58:29 np0005539563 nova_compute[252253]: 2025-11-29 08:58:29.625 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:29 np0005539563 nova_compute[252253]: 2025-11-29 08:58:29.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:31 np0005539563 nova_compute[252253]: 2025-11-29 08:58:31.762 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:32 np0005539563 nova_compute[252253]: 2025-11-29 08:58:32.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:34.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:34 np0005539563 nova_compute[252253]: 2025-11-29 08:58:34.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:34 np0005539563 nova_compute[252253]: 2025-11-29 08:58:34.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:34 np0005539563 nova_compute[252253]: 2025-11-29 08:58:34.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:58:34 np0005539563 nova_compute[252253]: 2025-11-29 08:58:34.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:58:34 np0005539563 nova_compute[252253]: 2025-11-29 08:58:34.694 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:58:34 np0005539563 nova_compute[252253]: 2025-11-29 08:58:34.695 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 618 KiB/s wr, 1 op/s
Nov 29 03:58:35 np0005539563 nova_compute[252253]: 2025-11-29 08:58:35.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:35 np0005539563 nova_compute[252253]: 2025-11-29 08:58:35.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:58:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:36 np0005539563 nova_compute[252253]: 2025-11-29 08:58:36.766 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 618 KiB/s wr, 1 op/s
Nov 29 03:58:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:38.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:38 np0005539563 nova_compute[252253]: 2025-11-29 08:58:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 156 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 1.3 MiB/s wr, 3 op/s
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.666 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.708 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:58:39 np0005539563 nova_compute[252253]: 2025-11-29 08:58:39.709 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615626142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.121 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:40.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.293 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.294 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4140MB free_disk=20.981201171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.294 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.295 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.371 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.372 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.392 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.433 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.434 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.453 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.486 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.512 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:58:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274336590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.961 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:58:40 np0005539563 nova_compute[252253]: 2025-11-29 08:58:40.968 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:58:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:40.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:41 np0005539563 nova_compute[252253]: 2025-11-29 08:58:41.001 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:58:41 np0005539563 nova_compute[252253]: 2025-11-29 08:58:41.028 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:58:41 np0005539563 nova_compute[252253]: 2025-11-29 08:58:41.029 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:58:41 np0005539563 nova_compute[252253]: 2025-11-29 08:58:41.775 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:42.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:42.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Nov 29 03:58:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:44.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:44 np0005539563 nova_compute[252253]: 2025-11-29 08:58:44.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:44.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 29 03:58:46 np0005539563 nova_compute[252253]: 2025-11-29 08:58:46.030 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:46.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:58:46 np0005539563 nova_compute[252253]: 2025-11-29 08:58:46.777 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 54e566ad-2a2b-4749-96e7-7901acabec8f does not exist
Nov 29 03:58:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 33854c96-2498-4f41-96cc-f49882d17f6f does not exist
Nov 29 03:58:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9f8693e5-f8f8-47c4-99d9-4d1af85c8943 does not exist
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:58:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:58:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.4438941 +0000 UTC m=+0.043846129 container create 8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:58:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 81 op/s
Nov 29 03:58:47 np0005539563 systemd[1]: Started libpod-conmon-8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb.scope.
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.423349614 +0000 UTC m=+0.023301623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.548470861 +0000 UTC m=+0.148422900 container init 8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.555685397 +0000 UTC m=+0.155637396 container start 8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.558851492 +0000 UTC m=+0.158803491 container attach 8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:58:47 np0005539563 systemd[1]: libpod-8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb.scope: Deactivated successfully.
Nov 29 03:58:47 np0005539563 ecstatic_chaplygin[399985]: 167 167
Nov 29 03:58:47 np0005539563 conmon[399985]: conmon 8bb6c448a3851987a310 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb.scope/container/memory.events
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.56504443 +0000 UTC m=+0.164996429 container died 8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:58:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0d4e61204ce335ca3852fc2d47de4e54b39cf0b3feaaacf9a5301756a6ed58ab-merged.mount: Deactivated successfully.
Nov 29 03:58:47 np0005539563 podman[399968]: 2025-11-29 08:58:47.609272958 +0000 UTC m=+0.209224937 container remove 8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:58:47 np0005539563 systemd[1]: libpod-conmon-8bb6c448a3851987a310a5ee188c0d097187ffcae346192098fd15c3bf50d7bb.scope: Deactivated successfully.
Nov 29 03:58:47 np0005539563 podman[400008]: 2025-11-29 08:58:47.790833394 +0000 UTC m=+0.045279187 container create 054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:58:47 np0005539563 systemd[1]: Started libpod-conmon-054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02.scope.
Nov 29 03:58:47 np0005539563 podman[400008]: 2025-11-29 08:58:47.770650537 +0000 UTC m=+0.025096350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:47 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:58:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93acd802e65d5c944c94adea2f6fb97c784916ce3975fbc661e80366daf389/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93acd802e65d5c944c94adea2f6fb97c784916ce3975fbc661e80366daf389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93acd802e65d5c944c94adea2f6fb97c784916ce3975fbc661e80366daf389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93acd802e65d5c944c94adea2f6fb97c784916ce3975fbc661e80366daf389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:47 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93acd802e65d5c944c94adea2f6fb97c784916ce3975fbc661e80366daf389/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:47 np0005539563 podman[400008]: 2025-11-29 08:58:47.886537345 +0000 UTC m=+0.140983168 container init 054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:58:47 np0005539563 podman[400008]: 2025-11-29 08:58:47.894051419 +0000 UTC m=+0.148497252 container start 054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:58:47 np0005539563 podman[400008]: 2025-11-29 08:58:47.900306368 +0000 UTC m=+0.154752181 container attach 054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:58:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:48.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:48 np0005539563 friendly_booth[400025]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:58:48 np0005539563 friendly_booth[400025]: --> relative data size: 1.0
Nov 29 03:58:48 np0005539563 friendly_booth[400025]: --> All data devices are unavailable
Nov 29 03:58:48 np0005539563 systemd[1]: libpod-054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02.scope: Deactivated successfully.
Nov 29 03:58:48 np0005539563 podman[400008]: 2025-11-29 08:58:48.747095315 +0000 UTC m=+1.001541118 container died 054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:58:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8e93acd802e65d5c944c94adea2f6fb97c784916ce3975fbc661e80366daf389-merged.mount: Deactivated successfully.
Nov 29 03:58:48 np0005539563 podman[400008]: 2025-11-29 08:58:48.800539782 +0000 UTC m=+1.054985575 container remove 054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:58:48 np0005539563 systemd[1]: libpod-conmon-054255b374cf60d903e9ad384b472e75782135b1efa4c95d9a8b513113575e02.scope: Deactivated successfully.
Nov 29 03:58:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.405613255 +0000 UTC m=+0.035379068 container create c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_liskov, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:58:49 np0005539563 systemd[1]: Started libpod-conmon-c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18.scope.
Nov 29 03:58:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:58:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 99 op/s
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.389638613 +0000 UTC m=+0.019404466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.486936987 +0000 UTC m=+0.116702820 container init c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_liskov, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.493531796 +0000 UTC m=+0.123297609 container start c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.49702226 +0000 UTC m=+0.126788073 container attach c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_liskov, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:58:49 np0005539563 serene_liskov[400210]: 167 167
Nov 29 03:58:49 np0005539563 systemd[1]: libpod-c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18.scope: Deactivated successfully.
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.49995199 +0000 UTC m=+0.129717803 container died c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:58:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6a891e94bc0b325a78bdda3cca2e654c7d860f038fa0f3fd1b025a64db279757-merged.mount: Deactivated successfully.
Nov 29 03:58:49 np0005539563 podman[400194]: 2025-11-29 08:58:49.54168132 +0000 UTC m=+0.171447153 container remove c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:58:49 np0005539563 systemd[1]: libpod-conmon-c3468e0f536c4c2be0d3b2eb5351f0833def1a967e11c9934af8b46d6a27ed18.scope: Deactivated successfully.
Nov 29 03:58:49 np0005539563 nova_compute[252253]: 2025-11-29 08:58:49.669 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:49 np0005539563 podman[400234]: 2025-11-29 08:58:49.721416206 +0000 UTC m=+0.049661606 container create 6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:58:49 np0005539563 systemd[1]: Started libpod-conmon-6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6.scope.
Nov 29 03:58:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:58:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0bdff50b30de120cf14f8b7acd4acdc07e2c479abe5d006c2e296f23e754c29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0bdff50b30de120cf14f8b7acd4acdc07e2c479abe5d006c2e296f23e754c29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:49 np0005539563 podman[400234]: 2025-11-29 08:58:49.701651721 +0000 UTC m=+0.029897151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0bdff50b30de120cf14f8b7acd4acdc07e2c479abe5d006c2e296f23e754c29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:49 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0bdff50b30de120cf14f8b7acd4acdc07e2c479abe5d006c2e296f23e754c29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:49 np0005539563 podman[400234]: 2025-11-29 08:58:49.809127781 +0000 UTC m=+0.137373181 container init 6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:49 np0005539563 podman[400234]: 2025-11-29 08:58:49.816139651 +0000 UTC m=+0.144385051 container start 6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:58:49 np0005539563 podman[400234]: 2025-11-29 08:58:49.821677211 +0000 UTC m=+0.149922641 container attach 6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:58:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:50.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]: {
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:    "0": [
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:        {
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "devices": [
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "/dev/loop3"
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            ],
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "lv_name": "ceph_lv0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "lv_size": "7511998464",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "name": "ceph_lv0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "tags": {
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.cluster_name": "ceph",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.crush_device_class": "",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.encrypted": "0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.osd_id": "0",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.type": "block",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:                "ceph.vdo": "0"
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            },
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "type": "block",
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:            "vg_name": "ceph_vg0"
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:        }
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]:    ]
Nov 29 03:58:50 np0005539563 amazing_jepsen[400251]: }
Nov 29 03:58:50 np0005539563 systemd[1]: libpod-6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6.scope: Deactivated successfully.
Nov 29 03:58:50 np0005539563 conmon[400251]: conmon 6c4ebfb84c35baac2f18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6.scope/container/memory.events
Nov 29 03:58:50 np0005539563 podman[400234]: 2025-11-29 08:58:50.567083843 +0000 UTC m=+0.895329243 container died 6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:58:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b0bdff50b30de120cf14f8b7acd4acdc07e2c479abe5d006c2e296f23e754c29-merged.mount: Deactivated successfully.
Nov 29 03:58:50 np0005539563 podman[400234]: 2025-11-29 08:58:50.620715755 +0000 UTC m=+0.948961155 container remove 6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:58:50 np0005539563 systemd[1]: libpod-conmon-6c4ebfb84c35baac2f18fceceab08521c4cff92424fbecdfc98ed25f251146f6.scope: Deactivated successfully.
Nov 29 03:58:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.194261495 +0000 UTC m=+0.045345229 container create eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:58:51 np0005539563 systemd[1]: Started libpod-conmon-eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b.scope.
Nov 29 03:58:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.175284011 +0000 UTC m=+0.026367755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.284114738 +0000 UTC m=+0.135198512 container init eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.291855138 +0000 UTC m=+0.142938882 container start eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.295421114 +0000 UTC m=+0.146504858 container attach eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:58:51 np0005539563 wonderful_swirles[400430]: 167 167
Nov 29 03:58:51 np0005539563 systemd[1]: libpod-eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b.scope: Deactivated successfully.
Nov 29 03:58:51 np0005539563 conmon[400430]: conmon eb5bac3353e3ee1be1d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b.scope/container/memory.events
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.302582487 +0000 UTC m=+0.153666251 container died eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:58:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c6743c0f2c42c6bb3fba71c6b8604f562acbd642e840603b8f7ebed5a2e864ec-merged.mount: Deactivated successfully.
Nov 29 03:58:51 np0005539563 podman[400414]: 2025-11-29 08:58:51.342109628 +0000 UTC m=+0.193193372 container remove eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:58:51 np0005539563 systemd[1]: libpod-conmon-eb5bac3353e3ee1be1d33d0fe3d23499dbed9729e0735a73c85981c13079d97b.scope: Deactivated successfully.
Nov 29 03:58:51 np0005539563 ovn_controller[148841]: 2025-11-29T08:58:51Z|00906|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 03:58:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 518 KiB/s wr, 96 op/s
Nov 29 03:58:51 np0005539563 podman[400454]: 2025-11-29 08:58:51.543766048 +0000 UTC m=+0.045405921 container create b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:58:51 np0005539563 systemd[1]: Started libpod-conmon-b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30.scope.
Nov 29 03:58:51 np0005539563 podman[400454]: 2025-11-29 08:58:51.522674447 +0000 UTC m=+0.024314310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:58:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972a5345e2f72e004daab6f597a990f1ebdab03a43cfbd8655077e48e3a4363/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972a5345e2f72e004daab6f597a990f1ebdab03a43cfbd8655077e48e3a4363/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972a5345e2f72e004daab6f597a990f1ebdab03a43cfbd8655077e48e3a4363/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972a5345e2f72e004daab6f597a990f1ebdab03a43cfbd8655077e48e3a4363/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:51 np0005539563 podman[400454]: 2025-11-29 08:58:51.651441543 +0000 UTC m=+0.153081416 container init b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:58:51 np0005539563 podman[400454]: 2025-11-29 08:58:51.663366586 +0000 UTC m=+0.165006459 container start b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:58:51 np0005539563 podman[400454]: 2025-11-29 08:58:51.666940244 +0000 UTC m=+0.168580107 container attach b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:58:51 np0005539563 nova_compute[252253]: 2025-11-29 08:58:51.779 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.249523) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732249708, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1506, "num_deletes": 258, "total_data_size": 2575819, "memory_usage": 2616136, "flush_reason": "Manual Compaction"}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732270317, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 2535139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74319, "largest_seqno": 75824, "table_properties": {"data_size": 2528172, "index_size": 4037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14735, "raw_average_key_size": 19, "raw_value_size": 2514096, "raw_average_value_size": 3392, "num_data_blocks": 177, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406588, "oldest_key_time": 1764406588, "file_creation_time": 1764406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 20962 microseconds, and 6346 cpu microseconds.
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.270599) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 2535139 bytes OK
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.270703) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.273030) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.273054) EVENT_LOG_v1 {"time_micros": 1764406732273048, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.273073) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2569375, prev total WAL file size 2569375, number of live WAL files 2.
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.274482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303230' seq:72057594037927935, type:22 .. '6C6F676D0033323734' seq:0, type:0; will stop at (end)
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(2475KB)], [167(12MB)]
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732274595, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 15240595, "oldest_snapshot_seqno": -1}
Nov 29 03:58:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:52.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 10794 keys, 15100715 bytes, temperature: kUnknown
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732410891, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 15100715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15029487, "index_size": 43147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27013, "raw_key_size": 285141, "raw_average_key_size": 26, "raw_value_size": 14838939, "raw_average_value_size": 1374, "num_data_blocks": 1649, "num_entries": 10794, "num_filter_entries": 10794, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.411241) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 15100715 bytes
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.7 rd, 110.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.1 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 11325, records dropped: 531 output_compression: NoCompression
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.431179) EVENT_LOG_v1 {"time_micros": 1764406732431165, "job": 104, "event": "compaction_finished", "compaction_time_micros": 136408, "compaction_time_cpu_micros": 40966, "output_level": 6, "num_output_files": 1, "total_output_size": 15100715, "num_input_records": 11325, "num_output_records": 10794, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732431835, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406732433604, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.274300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.433653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.433660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.433661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.433662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-08:58:52.433664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]: {
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:        "osd_id": 0,
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:        "type": "bluestore"
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]:    }
Nov 29 03:58:52 np0005539563 relaxed_darwin[400470]: }
Nov 29 03:58:52 np0005539563 systemd[1]: libpod-b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30.scope: Deactivated successfully.
Nov 29 03:58:52 np0005539563 podman[400454]: 2025-11-29 08:58:52.635150098 +0000 UTC m=+1.136789951 container died b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:58:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a972a5345e2f72e004daab6f597a990f1ebdab03a43cfbd8655077e48e3a4363-merged.mount: Deactivated successfully.
Nov 29 03:58:52 np0005539563 podman[400454]: 2025-11-29 08:58:52.692525912 +0000 UTC m=+1.194165805 container remove b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:58:52 np0005539563 systemd[1]: libpod-conmon-b882fdcc821948367ecca3bad6ce931a6cdaf44d882306d472bffe74ac82fa30.scope: Deactivated successfully.
Nov 29 03:58:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:58:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:58:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b4f1bad9-9d99-44da-9447-056cea9331c1 does not exist
Nov 29 03:58:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3adf9288-bc05-45f8-b8d9-2969d1222ee1 does not exist
Nov 29 03:58:53 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 003d12e5-8673-4fd6-9621-0c6073b68ac5 does not exist
Nov 29 03:58:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:58:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:54.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:54 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:58:54 np0005539563 nova_compute[252253]: 2025-11-29 08:58:54.672 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:55.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 585 KiB/s wr, 67 op/s
Nov 29 03:58:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:56 np0005539563 nova_compute[252253]: 2025-11-29 08:58:56.781 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 613 KiB/s rd, 573 KiB/s wr, 28 op/s
Nov 29 03:58:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:58:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:58:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:58:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 182 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Nov 29 03:58:59 np0005539563 nova_compute[252253]: 2025-11-29 08:58:59.673 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:00.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:00 np0005539563 podman[400610]: 2025-11-29 08:59:00.537197444 +0000 UTC m=+0.083997815 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:59:00 np0005539563 podman[400609]: 2025-11-29 08:59:00.548790498 +0000 UTC m=+0.098319173 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:59:00 np0005539563 podman[400611]: 2025-11-29 08:59:00.587467846 +0000 UTC m=+0.120360120 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:59:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:59:01 np0005539563 nova_compute[252253]: 2025-11-29 08:59:01.784 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:02.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 03:59:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:04.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:04 np0005539563 nova_compute[252253]: 2025-11-29 08:59:04.675 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:59:04.971 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:59:04.971 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:59:04.971 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:59:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:59:06.130 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:59:06 np0005539563 nova_compute[252253]: 2025-11-29 08:59:06.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:59:06.132 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:59:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:06 np0005539563 nova_compute[252253]: 2025-11-29 08:59:06.785 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Nov 29 03:59:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:08.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Nov 29 03:59:09 np0005539563 nova_compute[252253]: 2025-11-29 08:59:09.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:10.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 873 KiB/s wr, 38 op/s
Nov 29 03:59:11 np0005539563 nova_compute[252253]: 2025-11-29 08:59:11.786 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:12 np0005539563 nova_compute[252253]: 2025-11-29 08:59:12.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_08:59:13
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Nov 29 03:59:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 173 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 17 KiB/s wr, 9 op/s
Nov 29 03:59:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:14.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:14 np0005539563 nova_compute[252253]: 2025-11-29 08:59:14.677 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:15.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:15 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 08:59:15.134 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:59:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Nov 29 03:59:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:16 np0005539563 nova_compute[252253]: 2025-11-29 08:59:16.788 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:59:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:59:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:17.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Nov 29 03:59:17 np0005539563 nova_compute[252253]: 2025-11-29 08:59:17.548 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Nov 29 03:59:19 np0005539563 nova_compute[252253]: 2025-11-29 08:59:19.678 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:59:21 np0005539563 nova_compute[252253]: 2025-11-29 08:59:21.790 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:23.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:59:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:24 np0005539563 nova_compute[252253]: 2025-11-29 08:59:24.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Nov 29 03:59:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:26 np0005539563 nova_compute[252253]: 2025-11-29 08:59:26.791 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:29 np0005539563 nova_compute[252253]: 2025-11-29 08:59:29.680 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:30.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:30 np0005539563 nova_compute[252253]: 2025-11-29 08:59:30.703 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:31.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:31 np0005539563 podman[400740]: 2025-11-29 08:59:31.518555525 +0000 UTC m=+0.062377190 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:59:31 np0005539563 podman[400741]: 2025-11-29 08:59:31.535383171 +0000 UTC m=+0.076431211 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:59:31 np0005539563 podman[400742]: 2025-11-29 08:59:31.587262496 +0000 UTC m=+0.124245386 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:59:31 np0005539563 nova_compute[252253]: 2025-11-29 08:59:31.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:34 np0005539563 nova_compute[252253]: 2025-11-29 08:59:34.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:34 np0005539563 nova_compute[252253]: 2025-11-29 08:59:34.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:35 np0005539563 nova_compute[252253]: 2025-11-29 08:59:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:36 np0005539563 nova_compute[252253]: 2025-11-29 08:59:36.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:36 np0005539563 nova_compute[252253]: 2025-11-29 08:59:36.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:59:36 np0005539563 nova_compute[252253]: 2025-11-29 08:59:36.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:59:36 np0005539563 nova_compute[252253]: 2025-11-29 08:59:36.704 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:59:36 np0005539563 nova_compute[252253]: 2025-11-29 08:59:36.795 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:37 np0005539563 nova_compute[252253]: 2025-11-29 08:59:37.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:37 np0005539563 nova_compute[252253]: 2025-11-29 08:59:37.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:59:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:39.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.682 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.854 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.855 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.855 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.855 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:59:39 np0005539563 nova_compute[252253]: 2025-11-29 08:59:39.855 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688819580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.276 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.409 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.410 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4147MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.411 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.411 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.723 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.723 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:59:40 np0005539563 nova_compute[252253]: 2025-11-29 08:59:40.816 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:59:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2141574915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 03:59:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 03:59:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340783428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:41 np0005539563 nova_compute[252253]: 2025-11-29 08:59:41.227 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:59:41 np0005539563 nova_compute[252253]: 2025-11-29 08:59:41.234 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:59:41 np0005539563 nova_compute[252253]: 2025-11-29 08:59:41.284 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:59:41 np0005539563 nova_compute[252253]: 2025-11-29 08:59:41.288 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:59:41 np0005539563 nova_compute[252253]: 2025-11-29 08:59:41.289 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:59:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:41 np0005539563 nova_compute[252253]: 2025-11-29 08:59:41.797 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:44 np0005539563 nova_compute[252253]: 2025-11-29 08:59:44.684 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:45.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:46 np0005539563 nova_compute[252253]: 2025-11-29 08:59:46.800 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:47 np0005539563 nova_compute[252253]: 2025-11-29 08:59:47.289 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:59:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:49.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:49 np0005539563 nova_compute[252253]: 2025-11-29 08:59:49.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:51.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:51 np0005539563 nova_compute[252253]: 2025-11-29 08:59:51.801 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:53.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 130 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 425 KiB/s wr, 0 op/s
Nov 29 03:59:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:54 np0005539563 nova_compute[252253]: 2025-11-29 08:59:54.686 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:59:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e593e47e-dccb-417a-a629-685de3a4823b does not exist
Nov 29 03:59:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1ff7cc35-6e6f-41aa-a8d8-0bfca0bbeec4 does not exist
Nov 29 03:59:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 31be2bca-64c2-4f55-ac8c-88187002dcd4 does not exist
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:59:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:55.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 03:59:55 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.738677438 +0000 UTC m=+0.044095965 container create cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_perlman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:59:55 np0005539563 systemd[1]: Started libpod-conmon-cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b.scope.
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.7177212 +0000 UTC m=+0.023139757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.84845661 +0000 UTC m=+0.153875157 container init cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.861549735 +0000 UTC m=+0.166968262 container start cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.864481864 +0000 UTC m=+0.169900471 container attach cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:59:55 np0005539563 amazing_perlman[401194]: 167 167
Nov 29 03:59:55 np0005539563 systemd[1]: libpod-cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b.scope: Deactivated successfully.
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.874856665 +0000 UTC m=+0.180275232 container died cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:59:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d1cc2439c245d55026e577416bc03dd1b0df9f1b54ede36707b38fd643ee6303-merged.mount: Deactivated successfully.
Nov 29 03:59:55 np0005539563 podman[401178]: 2025-11-29 08:59:55.927256654 +0000 UTC m=+0.232675181 container remove cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_perlman, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:59:55 np0005539563 systemd[1]: libpod-conmon-cbe2ecd8a2cd8fe87b4c7a2cc621b7ea6041ca323bece41c0cec324823c0c31b.scope: Deactivated successfully.
Nov 29 03:59:56 np0005539563 podman[401217]: 2025-11-29 08:59:56.130671051 +0000 UTC m=+0.054143366 container create d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:59:56 np0005539563 systemd[1]: Started libpod-conmon-d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae.scope.
Nov 29 03:59:56 np0005539563 podman[401217]: 2025-11-29 08:59:56.110429784 +0000 UTC m=+0.033902099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:59:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c20d7d6649968e01cc5c8a84cf71fd72465d36f11fc7cc7da2954b1123605be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c20d7d6649968e01cc5c8a84cf71fd72465d36f11fc7cc7da2954b1123605be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c20d7d6649968e01cc5c8a84cf71fd72465d36f11fc7cc7da2954b1123605be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c20d7d6649968e01cc5c8a84cf71fd72465d36f11fc7cc7da2954b1123605be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c20d7d6649968e01cc5c8a84cf71fd72465d36f11fc7cc7da2954b1123605be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:56 np0005539563 podman[401217]: 2025-11-29 08:59:56.232436957 +0000 UTC m=+0.155909272 container init d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:56 np0005539563 podman[401217]: 2025-11-29 08:59:56.248881262 +0000 UTC m=+0.172353567 container start d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:56 np0005539563 podman[401217]: 2025-11-29 08:59:56.253203299 +0000 UTC m=+0.176675624 container attach d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:59:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:56 np0005539563 nova_compute[252253]: 2025-11-29 08:59:56.803 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:59:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:57.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:57 np0005539563 tender_fermi[401234]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:59:57 np0005539563 tender_fermi[401234]: --> relative data size: 1.0
Nov 29 03:59:57 np0005539563 tender_fermi[401234]: --> All data devices are unavailable
Nov 29 03:59:57 np0005539563 systemd[1]: libpod-d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae.scope: Deactivated successfully.
Nov 29 03:59:57 np0005539563 podman[401250]: 2025-11-29 08:59:57.227665723 +0000 UTC m=+0.028137013 container died d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:59:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6c20d7d6649968e01cc5c8a84cf71fd72465d36f11fc7cc7da2954b1123605be-merged.mount: Deactivated successfully.
Nov 29 03:59:57 np0005539563 podman[401250]: 2025-11-29 08:59:57.292225112 +0000 UTC m=+0.092696382 container remove d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:59:57 np0005539563 systemd[1]: libpod-conmon-d34c1b159fa7f73e20b2dfcc26f604df1e088f21436e8efc1937db8c50a09bae.scope: Deactivated successfully.
Nov 29 03:59:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.044565322 +0000 UTC m=+0.045677558 container create 6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:59:58 np0005539563 systemd[1]: Started libpod-conmon-6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1.scope.
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.024994722 +0000 UTC m=+0.026106948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.14237658 +0000 UTC m=+0.143488796 container init 6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.150369137 +0000 UTC m=+0.151481333 container start 6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.154821557 +0000 UTC m=+0.155933753 container attach 6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:59:58 np0005539563 dazzling_lalande[401420]: 167 167
Nov 29 03:59:58 np0005539563 systemd[1]: libpod-6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1.scope: Deactivated successfully.
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.157068558 +0000 UTC m=+0.158180754 container died 6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:58 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a8320c5f20e6484037f24be3cb2f5c6295aea021653f404fe0ab4727fdce8906-merged.mount: Deactivated successfully.
Nov 29 03:59:58 np0005539563 podman[401404]: 2025-11-29 08:59:58.205319594 +0000 UTC m=+0.206431810 container remove 6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:59:58 np0005539563 systemd[1]: libpod-conmon-6c9c460aa8bd546724da3b36d4fa15100dd8cb322ccd6acc7bca7a8569228ba1.scope: Deactivated successfully.
Nov 29 03:59:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:58 np0005539563 podman[401444]: 2025-11-29 08:59:58.393446539 +0000 UTC m=+0.055022842 container create 23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:58 np0005539563 systemd[1]: Started libpod-conmon-23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a.scope.
Nov 29 03:59:58 np0005539563 podman[401444]: 2025-11-29 08:59:58.376596792 +0000 UTC m=+0.038173125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:58 np0005539563 systemd[1]: Started libcrun container.
Nov 29 03:59:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205d8d6f74f1e3735673a4331b1263de61e3be7f9accc8e984cdaca661919c7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205d8d6f74f1e3735673a4331b1263de61e3be7f9accc8e984cdaca661919c7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205d8d6f74f1e3735673a4331b1263de61e3be7f9accc8e984cdaca661919c7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:58 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205d8d6f74f1e3735673a4331b1263de61e3be7f9accc8e984cdaca661919c7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:58 np0005539563 podman[401444]: 2025-11-29 08:59:58.533430299 +0000 UTC m=+0.195006622 container init 23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:59:58 np0005539563 podman[401444]: 2025-11-29 08:59:58.541036794 +0000 UTC m=+0.202613097 container start 23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:59:58 np0005539563 podman[401444]: 2025-11-29 08:59:58.545178667 +0000 UTC m=+0.206754970 container attach 23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:59:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 03:59:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:59:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:59.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]: {
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:    "0": [
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:        {
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "devices": [
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "/dev/loop3"
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            ],
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "lv_name": "ceph_lv0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "lv_size": "7511998464",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "name": "ceph_lv0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "tags": {
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.cluster_name": "ceph",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.crush_device_class": "",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.encrypted": "0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.osd_id": "0",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.type": "block",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:                "ceph.vdo": "0"
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            },
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "type": "block",
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:            "vg_name": "ceph_vg0"
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:        }
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]:    ]
Nov 29 03:59:59 np0005539563 flamboyant_noyce[401460]: }
Nov 29 03:59:59 np0005539563 systemd[1]: libpod-23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a.scope: Deactivated successfully.
Nov 29 03:59:59 np0005539563 podman[401444]: 2025-11-29 08:59:59.29615145 +0000 UTC m=+0.957727763 container died 23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:59:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-205d8d6f74f1e3735673a4331b1263de61e3be7f9accc8e984cdaca661919c7e-merged.mount: Deactivated successfully.
Nov 29 03:59:59 np0005539563 podman[401444]: 2025-11-29 08:59:59.360713498 +0000 UTC m=+1.022289801 container remove 23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:59:59 np0005539563 systemd[1]: libpod-conmon-23efe175b137f54c1882f0ba6156cfce9fe2e98a59d194e8a731ed773e8ae64a.scope: Deactivated successfully.
Nov 29 03:59:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 03:59:59 np0005539563 nova_compute[252253]: 2025-11-29 08:59:59.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.036494105 +0000 UTC m=+0.048788881 container create 13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 04:00:00 np0005539563 systemd[1]: Started libpod-conmon-13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54.scope.
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.01635648 +0000 UTC m=+0.028651286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.139486724 +0000 UTC m=+0.151781520 container init 13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.150147363 +0000 UTC m=+0.162442139 container start 13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_newton, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 04:00:00 np0005539563 priceless_newton[401688]: 167 167
Nov 29 04:00:00 np0005539563 systemd[1]: libpod-13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54.scope: Deactivated successfully.
Nov 29 04:00:00 np0005539563 conmon[401688]: conmon 13edf373729b5465f621 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54.scope/container/memory.events
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.157889302 +0000 UTC m=+0.170184188 container attach 13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.159040464 +0000 UTC m=+0.171335300 container died 13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_newton, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:00:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1748d7f8a30952ee800dc9a2605e01c52d15a5c59fa131354d1091510cd346aa-merged.mount: Deactivated successfully.
Nov 29 04:00:00 np0005539563 podman[401671]: 2025-11-29 09:00:00.203331353 +0000 UTC m=+0.215626119 container remove 13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:00:00 np0005539563 systemd[1]: libpod-conmon-13edf373729b5465f6219ccf5fc9ece7b36cdc5562fd821aec3aeb0a19d95f54.scope: Deactivated successfully.
Nov 29 04:00:00 np0005539563 podman[401711]: 2025-11-29 09:00:00.355143493 +0000 UTC m=+0.041894335 container create 60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shamir, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 04:00:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539563 systemd[1]: Started libpod-conmon-60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e.scope.
Nov 29 04:00:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:00:00 np0005539563 podman[401711]: 2025-11-29 09:00:00.335214114 +0000 UTC m=+0.021964976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac7bc22070bf236282aecad5e77e303aadc9811c0990f2170abde2c737c109e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac7bc22070bf236282aecad5e77e303aadc9811c0990f2170abde2c737c109e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac7bc22070bf236282aecad5e77e303aadc9811c0990f2170abde2c737c109e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac7bc22070bf236282aecad5e77e303aadc9811c0990f2170abde2c737c109e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:00 np0005539563 podman[401711]: 2025-11-29 09:00:00.444714578 +0000 UTC m=+0.131465430 container init 60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:00:00 np0005539563 podman[401711]: 2025-11-29 09:00:00.450799743 +0000 UTC m=+0.137550585 container start 60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 04:00:00 np0005539563 podman[401711]: 2025-11-29 09:00:00.453942558 +0000 UTC m=+0.140693420 container attach 60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 04:00:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 04:00:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:01.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]: {
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:        "osd_id": 0,
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:        "type": "bluestore"
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]:    }
Nov 29 04:00:01 np0005539563 eloquent_shamir[401728]: }
Nov 29 04:00:01 np0005539563 systemd[1]: libpod-60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e.scope: Deactivated successfully.
Nov 29 04:00:01 np0005539563 podman[401711]: 2025-11-29 09:00:01.294837326 +0000 UTC m=+0.981588178 container died 60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:00:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fac7bc22070bf236282aecad5e77e303aadc9811c0990f2170abde2c737c109e-merged.mount: Deactivated successfully.
Nov 29 04:00:01 np0005539563 podman[401711]: 2025-11-29 09:00:01.357332539 +0000 UTC m=+1.044083381 container remove 60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:00:01 np0005539563 systemd[1]: libpod-conmon-60c102e4591cc85154bdf5f920fe56cc7b4ad2b8d42577e162d38aaf11b6b10e.scope: Deactivated successfully.
Nov 29 04:00:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:00:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:00:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:00:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:00:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 03392e03-91ba-4b07-8df3-ddd56680a2db does not exist
Nov 29 04:00:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5b95504e-ac70-48a2-bb9f-d9fbcf22649a does not exist
Nov 29 04:00:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 059eb987-0761-4bba-bf2f-4b2937f01831 does not exist
Nov 29 04:00:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 04:00:01 np0005539563 podman[401811]: 2025-11-29 09:00:01.669270294 +0000 UTC m=+0.062540514 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:00:01 np0005539563 podman[401812]: 2025-11-29 09:00:01.673220531 +0000 UTC m=+0.064944519 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 29 04:00:01 np0005539563 podman[401813]: 2025-11-29 09:00:01.719413902 +0000 UTC m=+0.106426562 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 04:00:01 np0005539563 nova_compute[252253]: 2025-11-29 09:00:01.805 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:00:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:00:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:02.820 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:00:02 np0005539563 nova_compute[252253]: 2025-11-29 09:00:02.821 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:02.822 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:00:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:00:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:03.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:00:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 29 04:00:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:04 np0005539563 nova_compute[252253]: 2025-11-29 09:00:04.691 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:04.971 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:04.972 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:04.972 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:05.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 100 op/s
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.599575) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805599694, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 873, "num_deletes": 251, "total_data_size": 1324680, "memory_usage": 1354928, "flush_reason": "Manual Compaction"}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805608469, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1299245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75825, "largest_seqno": 76697, "table_properties": {"data_size": 1294851, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9708, "raw_average_key_size": 19, "raw_value_size": 1286080, "raw_average_value_size": 2608, "num_data_blocks": 91, "num_entries": 493, "num_filter_entries": 493, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406732, "oldest_key_time": 1764406732, "file_creation_time": 1764406805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 8913 microseconds, and 4308 cpu microseconds.
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.608536) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1299245 bytes OK
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.608566) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610089) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610143) EVENT_LOG_v1 {"time_micros": 1764406805610133, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610168) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1320502, prev total WAL file size 1320502, number of live WAL files 2.
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610923) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1268KB)], [170(14MB)]
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805611039, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 16399960, "oldest_snapshot_seqno": -1}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10772 keys, 14492565 bytes, temperature: kUnknown
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805715762, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 14492565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14421950, "index_size": 42598, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26949, "raw_key_size": 285387, "raw_average_key_size": 26, "raw_value_size": 14232204, "raw_average_value_size": 1321, "num_data_blocks": 1620, "num_entries": 10772, "num_filter_entries": 10772, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.716094) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 14492565 bytes
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.796903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.4 rd, 138.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(23.8) write-amplify(11.2) OK, records in: 11287, records dropped: 515 output_compression: NoCompression
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.796949) EVENT_LOG_v1 {"time_micros": 1764406805796936, "job": 106, "event": "compaction_finished", "compaction_time_micros": 104837, "compaction_time_cpu_micros": 34940, "output_level": 6, "num_output_files": 1, "total_output_size": 14492565, "num_input_records": 11287, "num_output_records": 10772, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805797685, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406805800871, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.610766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.801022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.801031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.801034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.801037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:05 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:05.801040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:06.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:06 np0005539563 nova_compute[252253]: 2025-11-29 09:00:06.806 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:06 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:06.823 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:07.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:00:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:08.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.792 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.792 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.820 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.903 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.904 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.913 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:00:09 np0005539563 nova_compute[252253]: 2025-11-29 09:00:09.913 252257 INFO nova.compute.claims [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.078 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/981739051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.534 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.541 252257 DEBUG nova.compute.provider_tree [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.560 252257 DEBUG nova.scheduler.client.report [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.592 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.594 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.641 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.642 252257 DEBUG nova.network.neutron [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.660 252257 INFO nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.679 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.769 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.770 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.770 252257 INFO nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Creating image(s)#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.797 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.826 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.854 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.858 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.943 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.944 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.945 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.945 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.972 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:10 np0005539563 nova_compute[252253]: 2025-11-29 09:00:10.975 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c79d4974-069b-4ff8-80b4-9c40bff14325_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.007 252257 DEBUG nova.policy [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 04:00:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.250 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf c79d4974-069b-4ff8-80b4-9c40bff14325_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.333 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.441 252257 DEBUG nova.objects.instance [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid c79d4974-069b-4ff8-80b4-9c40bff14325 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.454 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.455 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Ensure instance console log exists: /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.455 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.455 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.456 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:00:11 np0005539563 nova_compute[252253]: 2025-11-29 09:00:11.834 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:12 np0005539563 nova_compute[252253]: 2025-11-29 09:00:12.278 252257 DEBUG nova.network.neutron [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Successfully created port: 43132856-68f7-409c-979c-a360b649693a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:00:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:12 np0005539563 nova_compute[252253]: 2025-11-29 09:00:12.690 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:00:13
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.log', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.rgw.root']
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:00:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:13.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.426 252257 DEBUG nova.network.neutron [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Successfully updated port: 43132856-68f7-409c-979c-a360b649693a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.441 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.441 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.442 252257 DEBUG nova.network.neutron [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.503 252257 DEBUG nova.compute.manager [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-changed-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.504 252257 DEBUG nova.compute.manager [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Refreshing instance network info cache due to event network-changed-43132856-68f7-409c-979c-a360b649693a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.504 252257 DEBUG oslo_concurrency.lockutils [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:00:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 183 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 556 KiB/s wr, 81 op/s
Nov 29 04:00:13 np0005539563 nova_compute[252253]: 2025-11-29 09:00:13.580 252257 DEBUG nova.network.neutron [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:00:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:14.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.508 252257 DEBUG nova.network.neutron [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updating instance_info_cache with network_info: [{"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.533 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.534 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Instance network_info: |[{"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.534 252257 DEBUG oslo_concurrency.lockutils [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.534 252257 DEBUG nova.network.neutron [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Refreshing network info cache for port 43132856-68f7-409c-979c-a360b649693a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.537 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Start _get_guest_xml network_info=[{"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.541 252257 WARNING nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.548 252257 DEBUG nova.virt.libvirt.host [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.548 252257 DEBUG nova.virt.libvirt.host [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.553 252257 DEBUG nova.virt.libvirt.host [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.554 252257 DEBUG nova.virt.libvirt.host [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.555 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.555 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.555 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.556 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.556 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.556 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.556 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.556 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.557 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.557 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.557 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.557 252257 DEBUG nova.virt.hardware [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.560 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.694 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:00:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364951944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.961 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.987 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:14 np0005539563 nova_compute[252253]: 2025-11-29 09:00:14.990 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:15.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:15 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:00:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65487676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.396 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.398 252257 DEBUG nova.virt.libvirt.vif [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:00:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1401271841',display_name='tempest-TestNetworkBasicOps-server-1401271841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1401271841',id=211,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHIJpsjkVsL8+FcfxghR6DoYmuBooqwUp0n5P26wiEuvf5glBWlFPPQi6brQpY8+uVbxVeEQGTT+TFrwaHaqZz+Fqhpn4PRkVYpD/MuBi+OIjyFWaZnfB1qKVdU3qHsAqg==',key_name='tempest-TestNetworkBasicOps-1482221197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-qn6m0am8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:00:10Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=c79d4974-069b-4ff8-80b4-9c40bff14325,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.398 252257 DEBUG nova.network.os_vif_util [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.399 252257 DEBUG nova.network.os_vif_util [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.401 252257 DEBUG nova.objects.instance [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid c79d4974-069b-4ff8-80b4-9c40bff14325 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.432 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <uuid>c79d4974-069b-4ff8-80b4-9c40bff14325</uuid>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <name>instance-000000d3</name>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkBasicOps-server-1401271841</nova:name>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 09:00:14</nova:creationTime>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <nova:port uuid="43132856-68f7-409c-979c-a360b649693a">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <system>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <entry name="serial">c79d4974-069b-4ff8-80b4-9c40bff14325</entry>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <entry name="uuid">c79d4974-069b-4ff8-80b4-9c40bff14325</entry>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </system>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <os>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </os>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <features>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </features>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </clock>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  <devices>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c79d4974-069b-4ff8-80b4-9c40bff14325_disk">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/c79d4974-069b-4ff8-80b4-9c40bff14325_disk.config">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:41:3c:55"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <target dev="tap43132856-68"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </interface>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/console.log" append="off"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </serial>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <video>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </video>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </rng>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 04:00:15 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 04:00:15 np0005539563 nova_compute[252253]:  </devices>
Nov 29 04:00:15 np0005539563 nova_compute[252253]: </domain>
Nov 29 04:00:15 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.434 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Preparing to wait for external event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.435 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.435 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.436 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.437 252257 DEBUG nova.virt.libvirt.vif [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:00:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1401271841',display_name='tempest-TestNetworkBasicOps-server-1401271841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1401271841',id=211,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHIJpsjkVsL8+FcfxghR6DoYmuBooqwUp0n5P26wiEuvf5glBWlFPPQi6brQpY8+uVbxVeEQGTT+TFrwaHaqZz+Fqhpn4PRkVYpD/MuBi+OIjyFWaZnfB1qKVdU3qHsAqg==',key_name='tempest-TestNetworkBasicOps-1482221197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-qn6m0am8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:00:10Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=c79d4974-069b-4ff8-80b4-9c40bff14325,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.437 252257 DEBUG nova.network.os_vif_util [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.438 252257 DEBUG nova.network.os_vif_util [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.439 252257 DEBUG os_vif [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.439 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.440 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.441 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.445 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.446 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43132856-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.446 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43132856-68, col_values=(('external_ids', {'iface-id': '43132856-68f7-409c-979c-a360b649693a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:3c:55', 'vm-uuid': 'c79d4974-069b-4ff8-80b4-9c40bff14325'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:15 np0005539563 NetworkManager[48981]: <info>  [1764406815.4495] manager: (tap43132856-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.450 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.457 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.459 252257 INFO os_vif [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68')#033[00m
Nov 29 04:00:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 143 op/s
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.641 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.642 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.643 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:41:3c:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.644 252257 INFO nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Using config drive#033[00m
Nov 29 04:00:15 np0005539563 nova_compute[252253]: 2025-11-29 09:00:15.675 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.318 252257 INFO nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Creating config drive at /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/disk.config#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.328 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb11ejn58 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.364 252257 DEBUG nova.network.neutron [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updated VIF entry in instance network info cache for port 43132856-68f7-409c-979c-a360b649693a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.364 252257 DEBUG nova.network.neutron [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updating instance_info_cache with network_info: [{"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:00:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.398 252257 DEBUG oslo_concurrency.lockutils [req-a826d958-1c24-44e0-ba2c-a4f595820783 req-a35c252e-b52d-4c92-b992-4ae3c7056286 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.470 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb11ejn58" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.504 252257 DEBUG nova.storage.rbd_utils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image c79d4974-069b-4ff8-80b4-9c40bff14325_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.509 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/disk.config c79d4974-069b-4ff8-80b4-9c40bff14325_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.794 252257 DEBUG oslo_concurrency.processutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/disk.config c79d4974-069b-4ff8-80b4-9c40bff14325_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.795 252257 INFO nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Deleting local config drive /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325/disk.config because it was imported into RBD.#033[00m
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:00:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.836 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 kernel: tap43132856-68: entered promiscuous mode
Nov 29 04:00:16 np0005539563 NetworkManager[48981]: <info>  [1764406816.8568] manager: (tap43132856-68): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:16Z|00907|binding|INFO|Claiming lport 43132856-68f7-409c-979c-a360b649693a for this chassis.
Nov 29 04:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:16Z|00908|binding|INFO|43132856-68f7-409c-979c-a360b649693a: Claiming fa:16:3e:41:3c:55 10.100.0.6
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.861 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 NetworkManager[48981]: <info>  [1764406816.8694] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 29 04:00:16 np0005539563 NetworkManager[48981]: <info>  [1764406816.8701] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.875 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:3c:55 10.100.0.6'], port_security=['fa:16:3e:41:3c:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c79d4974-069b-4ff8-80b4-9c40bff14325', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf214aa3-cb83-4459-afa4-8d60262c5413', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a30db943-b1cb-4a91-8ae2-9c91fb356172', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb2c6e8-c4d9-43b8-a7a3-4ba0a1e884fb, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=43132856-68f7-409c-979c-a360b649693a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.876 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 43132856-68f7-409c-979c-a360b649693a in datapath bf214aa3-cb83-4459-afa4-8d60262c5413 bound to our chassis#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.877 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf214aa3-cb83-4459-afa4-8d60262c5413#033[00m
Nov 29 04:00:16 np0005539563 systemd-udevd[402204]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:00:16 np0005539563 systemd-machined[213024]: New machine qemu-101-instance-000000d3.
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.890 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a34cab7f-d4fd-48ed-9b18-9216790a8f34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.891 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf214aa3-c1 in ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.894 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf214aa3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.894 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[31893aae-a57f-46fc-a58a-448a00b7069a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.894 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[48adcedc-f134-4972-b0c3-4f0da6cb5288]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:16 np0005539563 NetworkManager[48981]: <info>  [1764406816.8989] device (tap43132856-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:00:16 np0005539563 NetworkManager[48981]: <info>  [1764406816.8999] device (tap43132856-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.906 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd1c71d-0ed0-4208-902d-7875d0bd2b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:16 np0005539563 systemd[1]: Started Virtual Machine qemu-101-instance-000000d3.
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.930 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8772202b-890b-44ac-becd-a38db67a8f29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.948 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.950 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.958 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3955f6-1f2c-4c9a-a745-35ae68f99337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:16Z|00909|binding|INFO|Setting lport 43132856-68f7-409c-979c-a360b649693a ovn-installed in OVS
Nov 29 04:00:16 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:16Z|00910|binding|INFO|Setting lport 43132856-68f7-409c-979c-a360b649693a up in Southbound
Nov 29 04:00:16 np0005539563 NetworkManager[48981]: <info>  [1764406816.9702] manager: (tapbf214aa3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Nov 29 04:00:16 np0005539563 nova_compute[252253]: 2025-11-29 09:00:16.970 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:16 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:16.971 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[197ca92b-8161-43ac-a253-296d88f3c463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.000 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[d001674b-7f2b-4205-a2b9-3eaaf9cc1e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.003 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0879ac-46c5-4531-8c75-2ce7adfd2029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 NetworkManager[48981]: <info>  [1764406817.0254] device (tapbf214aa3-c0): carrier: link connected
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.034 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e2181b-ddba-4f93-ae17-4ff7680c59f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.052 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2498e4-bb53-43c8-9c28-3c31e6f7b783]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf214aa3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:5e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978479, 'reachable_time': 41664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402238, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.067 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a73e25cd-3fb0-4043-bf0d-74e567c558f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:5e94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 978479, 'tstamp': 978479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402239, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.082 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a536bf8b-8378-4594-a16d-e446a6f2dadc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf214aa3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:5e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978479, 'reachable_time': 41664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402240, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.113 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc9e18a-1f99-4dce-859f-4597dd894653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.169 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[6406df89-c80f-420e-8138-1bcce3e240d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.170 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf214aa3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.171 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.171 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf214aa3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:17 np0005539563 kernel: tapbf214aa3-c0: entered promiscuous mode
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:17 np0005539563 NetworkManager[48981]: <info>  [1764406817.1739] manager: (tapbf214aa3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.177 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf214aa3-c0, col_values=(('external_ids', {'iface-id': 'c17826d8-60cc-4702-9dae-6f2f3963d0d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.179 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:17 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:17Z|00911|binding|INFO|Releasing lport c17826d8-60cc-4702-9dae-6f2f3963d0d8 from this chassis (sb_readonly=0)
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.182 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf214aa3-cb83-4459-afa4-8d60262c5413.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf214aa3-cb83-4459-afa4-8d60262c5413.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.183 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[358624be-4325-4e99-8481-3f28458152b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.183 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-bf214aa3-cb83-4459-afa4-8d60262c5413
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/bf214aa3-cb83-4459-afa4-8d60262c5413.pid.haproxy
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID bf214aa3-cb83-4459-afa4-8d60262c5413
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:00:17 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:17.184 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'env', 'PROCESS_TAG=haproxy-bf214aa3-cb83-4459-afa4-8d60262c5413', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf214aa3-cb83-4459-afa4-8d60262c5413.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.193 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.283600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817283637, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 250, "total_data_size": 180789, "memory_usage": 187392, "flush_reason": "Manual Compaction"}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817287645, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 178336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76698, "largest_seqno": 77041, "table_properties": {"data_size": 176179, "index_size": 320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5928, "raw_average_key_size": 20, "raw_value_size": 171912, "raw_average_value_size": 588, "num_data_blocks": 14, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406806, "oldest_key_time": 1764406806, "file_creation_time": 1764406817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 4107 microseconds, and 1138 cpu microseconds.
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.287706) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 178336 bytes OK
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.287724) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.288878) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.288892) EVENT_LOG_v1 {"time_micros": 1764406817288886, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.288906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 178479, prev total WAL file size 178479, number of live WAL files 2.
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.289198) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373638' seq:72057594037927935, type:22 .. '6D6772737461740033303139' seq:0, type:0; will stop at (end)
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(174KB)], [173(13MB)]
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817289261, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14670901, "oldest_snapshot_seqno": -1}
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.356 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406817.3559585, c79d4974-069b-4ff8-80b4-9c40bff14325 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.357 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] VM Started (Lifecycle Event)#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.422 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.426 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406817.3587039, c79d4974-069b-4ff8-80b4-9c40bff14325 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.426 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10556 keys, 10829686 bytes, temperature: kUnknown
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817431223, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10829686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10765279, "index_size": 36902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 281095, "raw_average_key_size": 26, "raw_value_size": 10584256, "raw_average_value_size": 1002, "num_data_blocks": 1382, "num_entries": 10556, "num_filter_entries": 10556, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764406817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.431786) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10829686 bytes
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.440948) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.1 rd, 76.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(143.0) write-amplify(60.7) OK, records in: 11064, records dropped: 508 output_compression: NoCompression
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.440967) EVENT_LOG_v1 {"time_micros": 1764406817440958, "job": 108, "event": "compaction_finished", "compaction_time_micros": 142278, "compaction_time_cpu_micros": 26404, "output_level": 6, "num_output_files": 1, "total_output_size": 10829686, "num_input_records": 11064, "num_output_records": 10556, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817441089, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406817443122, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.289125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.443227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.443234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.443235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.443237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:00:17.443238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.449 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.451 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.470 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:00:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Nov 29 04:00:17 np0005539563 podman[402313]: 2025-11-29 09:00:17.530843693 +0000 UTC m=+0.022781377 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:00:17 np0005539563 podman[402313]: 2025-11-29 09:00:17.836454988 +0000 UTC m=+0.328392642 container create ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.854 252257 DEBUG nova.compute.manager [req-434dfa8d-054a-432d-bb86-fd4b8e6ac0a0 req-feea298c-139e-487e-8ec5-a7daad14921a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.855 252257 DEBUG oslo_concurrency.lockutils [req-434dfa8d-054a-432d-bb86-fd4b8e6ac0a0 req-feea298c-139e-487e-8ec5-a7daad14921a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.855 252257 DEBUG oslo_concurrency.lockutils [req-434dfa8d-054a-432d-bb86-fd4b8e6ac0a0 req-feea298c-139e-487e-8ec5-a7daad14921a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.856 252257 DEBUG oslo_concurrency.lockutils [req-434dfa8d-054a-432d-bb86-fd4b8e6ac0a0 req-feea298c-139e-487e-8ec5-a7daad14921a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.856 252257 DEBUG nova.compute.manager [req-434dfa8d-054a-432d-bb86-fd4b8e6ac0a0 req-feea298c-139e-487e-8ec5-a7daad14921a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Processing event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.857 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.861 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406817.8608527, c79d4974-069b-4ff8-80b4-9c40bff14325 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.861 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.862 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.866 252257 INFO nova.virt.libvirt.driver [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Instance spawned successfully.#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.866 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:00:17 np0005539563 systemd[1]: Started libpod-conmon-ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192.scope.
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.908 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:00:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.915 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:00:17 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318419f953fd8b91586fdaf67b7faa34ba5d8a2939a32011d73a8d6b2fda5100/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.920 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.921 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.921 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.922 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.922 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.923 252257 DEBUG nova.virt.libvirt.driver [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:00:17 np0005539563 podman[402313]: 2025-11-29 09:00:17.933862765 +0000 UTC m=+0.425800479 container init ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:00:17 np0005539563 podman[402313]: 2025-11-29 09:00:17.940719701 +0000 UTC m=+0.432657395 container start ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:00:17 np0005539563 nova_compute[252253]: 2025-11-29 09:00:17.951 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:00:17 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [NOTICE]   (402333) : New worker (402335) forked
Nov 29 04:00:17 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [NOTICE]   (402333) : Loading success.
Nov 29 04:00:18 np0005539563 nova_compute[252253]: 2025-11-29 09:00:18.002 252257 INFO nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Took 7.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:00:18 np0005539563 nova_compute[252253]: 2025-11-29 09:00:18.003 252257 DEBUG nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:00:18 np0005539563 nova_compute[252253]: 2025-11-29 09:00:18.061 252257 INFO nova.compute.manager [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Took 8.19 seconds to build instance.#033[00m
Nov 29 04:00:18 np0005539563 nova_compute[252253]: 2025-11-29 09:00:18.079 252257 DEBUG oslo_concurrency.lockutils [None req-12caff79-5d49-4fb9-a6a7-ce5c345e8ca2 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 408 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Nov 29 04:00:19 np0005539563 nova_compute[252253]: 2025-11-29 09:00:19.941 252257 DEBUG nova.compute.manager [req-1b62b343-c666-4f74-a1e4-821f92538c64 req-cb03ce62-96e5-4a9d-894a-38fbca843322 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:00:19 np0005539563 nova_compute[252253]: 2025-11-29 09:00:19.943 252257 DEBUG oslo_concurrency.lockutils [req-1b62b343-c666-4f74-a1e4-821f92538c64 req-cb03ce62-96e5-4a9d-894a-38fbca843322 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:19 np0005539563 nova_compute[252253]: 2025-11-29 09:00:19.944 252257 DEBUG oslo_concurrency.lockutils [req-1b62b343-c666-4f74-a1e4-821f92538c64 req-cb03ce62-96e5-4a9d-894a-38fbca843322 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:19 np0005539563 nova_compute[252253]: 2025-11-29 09:00:19.944 252257 DEBUG oslo_concurrency.lockutils [req-1b62b343-c666-4f74-a1e4-821f92538c64 req-cb03ce62-96e5-4a9d-894a-38fbca843322 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:19 np0005539563 nova_compute[252253]: 2025-11-29 09:00:19.944 252257 DEBUG nova.compute.manager [req-1b62b343-c666-4f74-a1e4-821f92538c64 req-cb03ce62-96e5-4a9d-894a-38fbca843322 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] No waiting events found dispatching network-vif-plugged-43132856-68f7-409c-979c-a360b649693a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:00:19 np0005539563 nova_compute[252253]: 2025-11-29 09:00:19.945 252257 WARNING nova.compute.manager [req-1b62b343-c666-4f74-a1e4-821f92538c64 req-cb03ce62-96e5-4a9d-894a-38fbca843322 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received unexpected event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a for instance with vm_state active and task_state None.#033[00m
Nov 29 04:00:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:20 np0005539563 nova_compute[252253]: 2025-11-29 09:00:20.448 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Nov 29 04:00:21 np0005539563 nova_compute[252253]: 2025-11-29 09:00:21.840 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:22.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Nov 29 04:00:23 np0005539563 nova_compute[252253]: 2025-11-29 09:00:23.529 252257 DEBUG nova.compute.manager [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-changed-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:00:23 np0005539563 nova_compute[252253]: 2025-11-29 09:00:23.530 252257 DEBUG nova.compute.manager [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Refreshing instance network info cache due to event network-changed-43132856-68f7-409c-979c-a360b649693a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:00:23 np0005539563 nova_compute[252253]: 2025-11-29 09:00:23.530 252257 DEBUG oslo_concurrency.lockutils [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:00:23 np0005539563 nova_compute[252253]: 2025-11-29 09:00:23.530 252257 DEBUG oslo_concurrency.lockutils [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:00:23 np0005539563 nova_compute[252253]: 2025-11-29 09:00:23.530 252257 DEBUG nova.network.neutron [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Refreshing network info cache for port 43132856-68f7-409c-979c-a360b649693a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003164876756467523 of space, bias 1.0, pg target 0.9494630269402569 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:00:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:24.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:25.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:25 np0005539563 nova_compute[252253]: 2025-11-29 09:00:25.450 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 151 op/s
Nov 29 04:00:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:26.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:26 np0005539563 nova_compute[252253]: 2025-11-29 09:00:26.843 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:26 np0005539563 nova_compute[252253]: 2025-11-29 09:00:26.965 252257 DEBUG nova.network.neutron [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updated VIF entry in instance network info cache for port 43132856-68f7-409c-979c-a360b649693a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:00:26 np0005539563 nova_compute[252253]: 2025-11-29 09:00:26.965 252257 DEBUG nova.network.neutron [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updating instance_info_cache with network_info: [{"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:00:26 np0005539563 nova_compute[252253]: 2025-11-29 09:00:26.982 252257 DEBUG oslo_concurrency.lockutils [req-1dc03363-9e50-406f-87da-86b154ee37bb req-00a97885-053d-4536-8bd1-15963b8c06b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:00:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 77 op/s
Nov 29 04:00:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:28.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 77 op/s
Nov 29 04:00:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:30.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:30 np0005539563 nova_compute[252253]: 2025-11-29 09:00:30.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:30 np0005539563 nova_compute[252253]: 2025-11-29 09:00:30.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:31 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:31Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:3c:55 10.100.0.6
Nov 29 04:00:31 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:31Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:3c:55 10.100.0.6
Nov 29 04:00:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 72 op/s
Nov 29 04:00:31 np0005539563 nova_compute[252253]: 2025-11-29 09:00:31.848 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:32.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:32 np0005539563 podman[402403]: 2025-11-29 09:00:32.521629863 +0000 UTC m=+0.065971046 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:00:32 np0005539563 podman[402402]: 2025-11-29 09:00:32.521636704 +0000 UTC m=+0.066226514 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:00:32 np0005539563 podman[402404]: 2025-11-29 09:00:32.556033656 +0000 UTC m=+0.094291425 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 04:00:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:33.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 500 KiB/s rd, 739 KiB/s wr, 37 op/s
Nov 29 04:00:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:34.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:35.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:35 np0005539563 nova_compute[252253]: 2025-11-29 09:00:35.457 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 04:00:35 np0005539563 nova_compute[252253]: 2025-11-29 09:00:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:36 np0005539563 nova_compute[252253]: 2025-11-29 09:00:36.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:36 np0005539563 nova_compute[252253]: 2025-11-29 09:00:36.851 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 04:00:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.981 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.982 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.982 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:00:38 np0005539563 nova_compute[252253]: 2025-11-29 09:00:38.982 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c79d4974-069b-4ff8-80b4-9c40bff14325 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:00:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 04:00:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.500 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updating instance_info_cache with network_info: [{"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.537 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.538 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.539 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.539 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:00:40 np0005539563 nova_compute[252253]: 2025-11-29 09:00:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.713 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.713 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.713 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.714 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.714 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:41 np0005539563 nova_compute[252253]: 2025-11-29 09:00:41.852 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896652145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.160 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.259 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000d3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.260 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000d3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:00:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.462 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.463 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3938MB free_disk=20.897228240966797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.464 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.465 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.573 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance c79d4974-069b-4ff8-80b4-9c40bff14325 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.573 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.574 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:00:42 np0005539563 nova_compute[252253]: 2025-11-29 09:00:42.632 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:00:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159564219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:43 np0005539563 nova_compute[252253]: 2025-11-29 09:00:43.095 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:00:43 np0005539563 nova_compute[252253]: 2025-11-29 09:00:43.101 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:00:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:43 np0005539563 nova_compute[252253]: 2025-11-29 09:00:43.230 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:43 np0005539563 nova_compute[252253]: 2025-11-29 09:00:43.304 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:00:43 np0005539563 nova_compute[252253]: 2025-11-29 09:00:43.305 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 04:00:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:00:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 59K writes, 223K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.70 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2965 writes, 9665 keys, 2965 commit groups, 1.0 writes per commit group, ingest: 9.56 MB, 0.02 MB/s#012Interval WAL: 2965 writes, 1282 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:00:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:44.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.463 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.466 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.467 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.467 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.468 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.468 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.470 252257 INFO nova.compute.manager [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Terminating instance#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.471 252257 DEBUG nova.compute.manager [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:00:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 163 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 29 04:00:45 np0005539563 kernel: tap43132856-68 (unregistering): left promiscuous mode
Nov 29 04:00:45 np0005539563 NetworkManager[48981]: <info>  [1764406845.5390] device (tap43132856-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:00:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:45Z|00912|binding|INFO|Releasing lport 43132856-68f7-409c-979c-a360b649693a from this chassis (sb_readonly=0)
Nov 29 04:00:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:45Z|00913|binding|INFO|Setting lport 43132856-68f7-409c-979c-a360b649693a down in Southbound
Nov 29 04:00:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:00:45Z|00914|binding|INFO|Removing iface tap43132856-68 ovn-installed in OVS
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.551 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.553 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.559 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:3c:55 10.100.0.6'], port_security=['fa:16:3e:41:3c:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c79d4974-069b-4ff8-80b4-9c40bff14325', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf214aa3-cb83-4459-afa4-8d60262c5413', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a30db943-b1cb-4a91-8ae2-9c91fb356172', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb2c6e8-c4d9-43b8-a7a3-4ba0a1e884fb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=43132856-68f7-409c-979c-a360b649693a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.561 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 43132856-68f7-409c-979c-a360b649693a in datapath bf214aa3-cb83-4459-afa4-8d60262c5413 unbound from our chassis#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.562 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf214aa3-cb83-4459-afa4-8d60262c5413, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.563 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[74885de4-b4fd-45e2-88b8-640f1d6d5ba3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.564 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 namespace which is not needed anymore#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.575 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Nov 29 04:00:45 np0005539563 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d3.scope: Consumed 15.631s CPU time.
Nov 29 04:00:45 np0005539563 systemd-machined[213024]: Machine qemu-101-instance-000000d3 terminated.
Nov 29 04:00:45 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [NOTICE]   (402333) : haproxy version is 2.8.14-c23fe91
Nov 29 04:00:45 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [NOTICE]   (402333) : path to executable is /usr/sbin/haproxy
Nov 29 04:00:45 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [WARNING]  (402333) : Exiting Master process...
Nov 29 04:00:45 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [WARNING]  (402333) : Exiting Master process...
Nov 29 04:00:45 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [ALERT]    (402333) : Current worker (402335) exited with code 143 (Terminated)
Nov 29 04:00:45 np0005539563 neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413[402329]: [WARNING]  (402333) : All workers exited. Exiting... (0)
Nov 29 04:00:45 np0005539563 systemd[1]: libpod-ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192.scope: Deactivated successfully.
Nov 29 04:00:45 np0005539563 podman[402593]: 2025-11-29 09:00:45.701047741 +0000 UTC m=+0.046825199 container died ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.714 252257 INFO nova.virt.libvirt.driver [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Instance destroyed successfully.#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.714 252257 DEBUG nova.objects.instance [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid c79d4974-069b-4ff8-80b4-9c40bff14325 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:00:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192-userdata-shm.mount: Deactivated successfully.
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.728 252257 DEBUG nova.virt.libvirt.vif [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:00:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1401271841',display_name='tempest-TestNetworkBasicOps-server-1401271841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1401271841',id=211,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHIJpsjkVsL8+FcfxghR6DoYmuBooqwUp0n5P26wiEuvf5glBWlFPPQi6brQpY8+uVbxVeEQGTT+TFrwaHaqZz+Fqhpn4PRkVYpD/MuBi+OIjyFWaZnfB1qKVdU3qHsAqg==',key_name='tempest-TestNetworkBasicOps-1482221197',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:00:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-qn6m0am8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:00:18Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=c79d4974-069b-4ff8-80b4-9c40bff14325,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.729 252257 DEBUG nova.network.os_vif_util [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "43132856-68f7-409c-979c-a360b649693a", "address": "fa:16:3e:41:3c:55", "network": {"id": "bf214aa3-cb83-4459-afa4-8d60262c5413", "bridge": "br-int", "label": "tempest-network-smoke--1579671431", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43132856-68", "ovs_interfaceid": "43132856-68f7-409c-979c-a360b649693a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.731 252257 DEBUG nova.network.os_vif_util [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:00:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-318419f953fd8b91586fdaf67b7faa34ba5d8a2939a32011d73a8d6b2fda5100-merged.mount: Deactivated successfully.
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.731 252257 DEBUG os_vif [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.799 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.799 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43132856-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.801 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.804 252257 INFO os_vif [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:3c:55,bridge_name='br-int',has_traffic_filtering=True,id=43132856-68f7-409c-979c-a360b649693a,network=Network(bf214aa3-cb83-4459-afa4-8d60262c5413),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43132856-68')#033[00m
Nov 29 04:00:45 np0005539563 podman[402593]: 2025-11-29 09:00:45.809273076 +0000 UTC m=+0.155050524 container cleanup ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:45 np0005539563 systemd[1]: libpod-conmon-ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192.scope: Deactivated successfully.
Nov 29 04:00:45 np0005539563 podman[402648]: 2025-11-29 09:00:45.874842311 +0000 UTC m=+0.045393429 container remove ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.880 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1b424697-8e6a-40d9-a671-8f00e89d18e3]: (4, ('Sat Nov 29 09:00:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 (ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192)\nca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192\nSat Nov 29 09:00:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 (ca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192)\nca6a961ad5c84fce42b31efca25609fd313d289401b3ab55a9b143d115f2b192\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.882 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[42b2b44b-8f24-45c8-8f4b-4b98c35e5fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.883 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf214aa3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.885 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 kernel: tapbf214aa3-c0: left promiscuous mode
Nov 29 04:00:45 np0005539563 nova_compute[252253]: 2025-11-29 09:00:45.899 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.901 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[96a8231e-0a08-467f-9ba7-b89c95eb5967]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.919 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3330b352-a36a-4355-a5c2-b39f5018402a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.920 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5f56da40-0b15-459a-a793-7a43250f35d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.935 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3c47724f-83b9-4aa3-9414-dd091f31adae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978472, 'reachable_time': 44741, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402668, 'error': None, 'target': 'ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.938 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf214aa3-cb83-4459-afa4-8d60262c5413 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 04:00:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:45.938 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[cddba0c9-51df-4c62-bd1a-a583ae05911c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:00:45 np0005539563 systemd[1]: run-netns-ovnmeta\x2dbf214aa3\x2dcb83\x2d4459\x2dafa4\x2d8d60262c5413.mount: Deactivated successfully.
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.227 252257 DEBUG nova.compute.manager [req-2bce63cd-ff9b-4339-af9e-cc9877c86fdd req-a5378645-1388-445e-b479-e512cf5c64a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-vif-unplugged-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.228 252257 DEBUG oslo_concurrency.lockutils [req-2bce63cd-ff9b-4339-af9e-cc9877c86fdd req-a5378645-1388-445e-b479-e512cf5c64a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.228 252257 DEBUG oslo_concurrency.lockutils [req-2bce63cd-ff9b-4339-af9e-cc9877c86fdd req-a5378645-1388-445e-b479-e512cf5c64a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.228 252257 DEBUG oslo_concurrency.lockutils [req-2bce63cd-ff9b-4339-af9e-cc9877c86fdd req-a5378645-1388-445e-b479-e512cf5c64a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.228 252257 DEBUG nova.compute.manager [req-2bce63cd-ff9b-4339-af9e-cc9877c86fdd req-a5378645-1388-445e-b479-e512cf5c64a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] No waiting events found dispatching network-vif-unplugged-43132856-68f7-409c-979c-a360b649693a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.228 252257 DEBUG nova.compute.manager [req-2bce63cd-ff9b-4339-af9e-cc9877c86fdd req-a5378645-1388-445e-b479-e512cf5c64a3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-vif-unplugged-43132856-68f7-409c-979c-a360b649693a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.252 252257 INFO nova.virt.libvirt.driver [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Deleting instance files /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325_del
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.253 252257 INFO nova.virt.libvirt.driver [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Deletion of /var/lib/nova/instances/c79d4974-069b-4ff8-80b4-9c40bff14325_del complete
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.310 252257 INFO nova.compute.manager [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.311 252257 DEBUG oslo.service.loopingcall [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.312 252257 DEBUG nova.compute.manager [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.312 252257 DEBUG nova.network.neutron [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 04:00:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:46.385 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.385 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:46.386 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 04:00:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:46.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:46 np0005539563 nova_compute[252253]: 2025-11-29 09:00:46.890 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.133 252257 DEBUG nova.compute.manager [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-changed-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.134 252257 DEBUG nova.compute.manager [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Refreshing instance network info cache due to event network-changed-43132856-68f7-409c-979c-a360b649693a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.135 252257 DEBUG oslo_concurrency.lockutils [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.135 252257 DEBUG oslo_concurrency.lockutils [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.135 252257 DEBUG nova.network.neutron [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Refreshing network info cache for port 43132856-68f7-409c-979c-a360b649693a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 04:00:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.305 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.347 252257 INFO nova.network.neutron [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Port 43132856-68f7-409c-979c-a360b649693a from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.348 252257 DEBUG nova.network.neutron [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.374 252257 DEBUG nova.network.neutron [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.376 252257 DEBUG oslo_concurrency.lockutils [req-cad504c4-db73-4f9b-b55b-c955b76390b6 req-afd890d8-e487-4b3e-95c5-540677e94c3a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-c79d4974-069b-4ff8-80b4-9c40bff14325" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.391 252257 INFO nova.compute.manager [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Took 1.08 seconds to deallocate network for instance.
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.456 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.457 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.509 252257 DEBUG oslo_concurrency.processutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:00:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Nov 29 04:00:47 np0005539563 nova_compute[252253]: 2025-11-29 09:00:47.558 252257 DEBUG nova.compute.manager [req-4a465cd1-025f-4561-a4f8-a8ed97e08893 req-bb6e42f3-9cee-4a0c-920c-db451d6ea1c4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-vif-deleted-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:00:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824405244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.011 252257 DEBUG oslo_concurrency.processutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.017 252257 DEBUG nova.compute.provider_tree [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.038 252257 DEBUG nova.scheduler.client.report [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.062 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.095 252257 INFO nova.scheduler.client.report [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance c79d4974-069b-4ff8-80b4-9c40bff14325
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.190 252257 DEBUG oslo_concurrency.lockutils [None req-c20cdba1-eef3-446b-be14-b822b18f4328 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.311 252257 DEBUG nova.compute.manager [req-3396e4f5-5059-49c7-9f22-7a72be2529a5 req-3ac6d02a-6c28-4e2a-8d69-2f87ccc76fd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.312 252257 DEBUG oslo_concurrency.lockutils [req-3396e4f5-5059-49c7-9f22-7a72be2529a5 req-3ac6d02a-6c28-4e2a-8d69-2f87ccc76fd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.312 252257 DEBUG oslo_concurrency.lockutils [req-3396e4f5-5059-49c7-9f22-7a72be2529a5 req-3ac6d02a-6c28-4e2a-8d69-2f87ccc76fd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.313 252257 DEBUG oslo_concurrency.lockutils [req-3396e4f5-5059-49c7-9f22-7a72be2529a5 req-3ac6d02a-6c28-4e2a-8d69-2f87ccc76fd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "c79d4974-069b-4ff8-80b4-9c40bff14325-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.313 252257 DEBUG nova.compute.manager [req-3396e4f5-5059-49c7-9f22-7a72be2529a5 req-3ac6d02a-6c28-4e2a-8d69-2f87ccc76fd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] No waiting events found dispatching network-vif-plugged-43132856-68f7-409c-979c-a360b649693a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:00:48 np0005539563 nova_compute[252253]: 2025-11-29 09:00:48.314 252257 WARNING nova.compute.manager [req-3396e4f5-5059-49c7-9f22-7a72be2529a5 req-3ac6d02a-6c28-4e2a-8d69-2f87ccc76fd5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Received unexpected event network-vif-plugged-43132856-68f7-409c-979c-a360b649693a for instance with vm_state deleted and task_state None.
Nov 29 04:00:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:48.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 258 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 KiB/s rd, 17 KiB/s wr, 5 op/s
Nov 29 04:00:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:50.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:50 np0005539563 nova_compute[252253]: 2025-11-29 09:00:50.802 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Nov 29 04:00:51 np0005539563 nova_compute[252253]: 2025-11-29 09:00:51.892 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:52 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:00:52.388 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 04:00:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2387376699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 186 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 29 op/s
Nov 29 04:00:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:54.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 04:00:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Nov 29 04:00:55 np0005539563 nova_compute[252253]: 2025-11-29 09:00:55.806 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:56.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:56 np0005539563 nova_compute[252253]: 2025-11-29 09:00:56.920 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Nov 29 04:00:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:58.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:00:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:00:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:00:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Nov 29 04:01:00 np0005539563 nova_compute[252253]: 2025-11-29 09:01:00.210 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:00 np0005539563 nova_compute[252253]: 2025-11-29 09:01:00.283 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:00.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:00 np0005539563 nova_compute[252253]: 2025-11-29 09:01:00.711 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406845.708721, c79d4974-069b-4ff8-80b4-9c40bff14325 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 04:01:00 np0005539563 nova_compute[252253]: 2025-11-29 09:01:00.711 252257 INFO nova.compute.manager [-] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] VM Stopped (Lifecycle Event)
Nov 29 04:01:00 np0005539563 nova_compute[252253]: 2025-11-29 09:01:00.732 252257 DEBUG nova.compute.manager [None req-87319c15-6b41-4576-bbe5-77412819c924 - - - - - -] [instance: c79d4974-069b-4ff8-80b4-9c40bff14325] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:01:00 np0005539563 nova_compute[252253]: 2025-11-29 09:01:00.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:01.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 4.0 KiB/s wr, 52 op/s
Nov 29 04:01:01 np0005539563 nova_compute[252253]: 2025-11-29 09:01:01.921 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:02.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b951e23f-d9d2-47bb-8b90-938f090eba60 does not exist
Nov 29 04:01:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ba7b31bf-ef99-4054-a457-89ab84a03cd6 does not exist
Nov 29 04:01:03 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c2fb7534-5870-47db-acd4-35457e2bf17b does not exist
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:01:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:03.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:03 np0005539563 podman[402919]: 2025-11-29 09:01:03.277780877 +0000 UTC m=+0.068739572 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:01:03 np0005539563 podman[402920]: 2025-11-29 09:01:03.289505584 +0000 UTC m=+0.078880656 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:01:03 np0005539563 podman[402921]: 2025-11-29 09:01:03.348934294 +0000 UTC m=+0.121926462 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:01:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:03 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.811858958 +0000 UTC m=+0.049917753 container create 20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:03 np0005539563 systemd[1]: Started libpod-conmon-20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35.scope.
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.788333461 +0000 UTC m=+0.026392286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.918153496 +0000 UTC m=+0.156212281 container init 20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.928591798 +0000 UTC m=+0.166650573 container start 20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.933130111 +0000 UTC m=+0.171188886 container attach 20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:01:03 np0005539563 beautiful_dewdney[403113]: 167 167
Nov 29 04:01:03 np0005539563 systemd[1]: libpod-20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35.scope: Deactivated successfully.
Nov 29 04:01:03 np0005539563 conmon[403113]: conmon 20113d5c0832546821eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35.scope/container/memory.events
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.938916078 +0000 UTC m=+0.176974863 container died 20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:01:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4de1a99aea839b42b0e7afa1e5ea970937bec3d8c250e1ac54600b50e954c029-merged.mount: Deactivated successfully.
Nov 29 04:01:03 np0005539563 podman[403097]: 2025-11-29 09:01:03.977802471 +0000 UTC m=+0.215861246 container remove 20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 04:01:03 np0005539563 systemd[1]: libpod-conmon-20113d5c0832546821eb254a17741c28fb2de0cff8d7436396f28d60ea00cb35.scope: Deactivated successfully.
Nov 29 04:01:04 np0005539563 podman[403136]: 2025-11-29 09:01:04.155883193 +0000 UTC m=+0.037073315 container create 46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 04:01:04 np0005539563 systemd[1]: Started libpod-conmon-46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac.scope.
Nov 29 04:01:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b4124ba9d71fb708831dc1989a6253e10831959d246e82ce8e1dd29fdc7b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b4124ba9d71fb708831dc1989a6253e10831959d246e82ce8e1dd29fdc7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b4124ba9d71fb708831dc1989a6253e10831959d246e82ce8e1dd29fdc7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b4124ba9d71fb708831dc1989a6253e10831959d246e82ce8e1dd29fdc7b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05b4124ba9d71fb708831dc1989a6253e10831959d246e82ce8e1dd29fdc7b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:04 np0005539563 podman[403136]: 2025-11-29 09:01:04.233458153 +0000 UTC m=+0.114648355 container init 46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:04 np0005539563 podman[403136]: 2025-11-29 09:01:04.140870506 +0000 UTC m=+0.022060648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:04 np0005539563 podman[403136]: 2025-11-29 09:01:04.242130718 +0000 UTC m=+0.123320880 container start 46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 04:01:04 np0005539563 podman[403136]: 2025-11-29 09:01:04.246074705 +0000 UTC m=+0.127264827 container attach 46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:01:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:04.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:04.972 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:04.975 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:04.975 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:05 np0005539563 nice_beaver[403150]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:01:05 np0005539563 nice_beaver[403150]: --> relative data size: 1.0
Nov 29 04:01:05 np0005539563 nice_beaver[403150]: --> All data devices are unavailable
Nov 29 04:01:05 np0005539563 systemd[1]: libpod-46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac.scope: Deactivated successfully.
Nov 29 04:01:05 np0005539563 podman[403136]: 2025-11-29 09:01:05.026180937 +0000 UTC m=+0.907371059 container died 46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d05b4124ba9d71fb708831dc1989a6253e10831959d246e82ce8e1dd29fdc7b7-merged.mount: Deactivated successfully.
Nov 29 04:01:05 np0005539563 podman[403136]: 2025-11-29 09:01:05.094882287 +0000 UTC m=+0.976072409 container remove 46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:05 np0005539563 systemd[1]: libpod-conmon-46775d106fc5b7a4cc59d5e45d6b89ba36fa224ac3595818dace30315ce3a2ac.scope: Deactivated successfully.
Nov 29 04:01:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:05.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.728692258 +0000 UTC m=+0.038860793 container create 7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 29 04:01:05 np0005539563 systemd[1]: Started libpod-conmon-7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde.scope.
Nov 29 04:01:05 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.795694962 +0000 UTC m=+0.105863497 container init 7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.801953411 +0000 UTC m=+0.112121946 container start 7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.805370564 +0000 UTC m=+0.115539259 container attach 7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:01:05 np0005539563 boring_raman[403338]: 167 167
Nov 29 04:01:05 np0005539563 systemd[1]: libpod-7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde.scope: Deactivated successfully.
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.712220612 +0000 UTC m=+0.022389177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.809064314 +0000 UTC m=+0.119232879 container died 7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:05 np0005539563 nova_compute[252253]: 2025-11-29 09:01:05.810 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-286ba6d7a77b9d6faee1147596701c74fd4ca3349e86e72a56571bbcd40fccb5-merged.mount: Deactivated successfully.
Nov 29 04:01:05 np0005539563 podman[403322]: 2025-11-29 09:01:05.852309755 +0000 UTC m=+0.162478290 container remove 7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 04:01:05 np0005539563 systemd[1]: libpod-conmon-7d1adbdded0932eece5b62b2e1083adbc73f6ade26320f08aeec9a2ad31b5dde.scope: Deactivated successfully.
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:06.021344172 +0000 UTC m=+0.052314887 container create 0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:01:06 np0005539563 systemd[1]: Started libpod-conmon-0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4.scope.
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:05.994353741 +0000 UTC m=+0.025324546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d070d2e11481a0e72a8e1be129624eb683e1e05c598d7ea7c38db18b6766f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d070d2e11481a0e72a8e1be129624eb683e1e05c598d7ea7c38db18b6766f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d070d2e11481a0e72a8e1be129624eb683e1e05c598d7ea7c38db18b6766f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566d070d2e11481a0e72a8e1be129624eb683e1e05c598d7ea7c38db18b6766f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:06.104337529 +0000 UTC m=+0.135308244 container init 0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:06.11805872 +0000 UTC m=+0.149029435 container start 0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:06.123107457 +0000 UTC m=+0.154078182 container attach 0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 04:01:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:06 np0005539563 loving_payne[403380]: {
Nov 29 04:01:06 np0005539563 loving_payne[403380]:    "0": [
Nov 29 04:01:06 np0005539563 loving_payne[403380]:        {
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "devices": [
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "/dev/loop3"
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            ],
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "lv_name": "ceph_lv0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "lv_size": "7511998464",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "name": "ceph_lv0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "tags": {
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.cluster_name": "ceph",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.crush_device_class": "",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.encrypted": "0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.osd_id": "0",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.type": "block",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:                "ceph.vdo": "0"
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            },
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "type": "block",
Nov 29 04:01:06 np0005539563 loving_payne[403380]:            "vg_name": "ceph_vg0"
Nov 29 04:01:06 np0005539563 loving_payne[403380]:        }
Nov 29 04:01:06 np0005539563 loving_payne[403380]:    ]
Nov 29 04:01:06 np0005539563 loving_payne[403380]: }
Nov 29 04:01:06 np0005539563 systemd[1]: libpod-0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4.scope: Deactivated successfully.
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:06.919998504 +0000 UTC m=+0.950969229 container died 0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:06 np0005539563 nova_compute[252253]: 2025-11-29 09:01:06.922 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-566d070d2e11481a0e72a8e1be129624eb683e1e05c598d7ea7c38db18b6766f-merged.mount: Deactivated successfully.
Nov 29 04:01:06 np0005539563 podman[403363]: 2025-11-29 09:01:06.968342263 +0000 UTC m=+0.999312978 container remove 0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:06 np0005539563 systemd[1]: libpod-conmon-0d822a2e52b91c280d4df4221f993ffc12cb172a67390c7a11d71dc4acfa7ed4.scope: Deactivated successfully.
Nov 29 04:01:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:07.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.611660051 +0000 UTC m=+0.050669833 container create c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:01:07 np0005539563 systemd[1]: Started libpod-conmon-c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4.scope.
Nov 29 04:01:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.590471678 +0000 UTC m=+0.029481510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.699452379 +0000 UTC m=+0.138462221 container init c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.707245379 +0000 UTC m=+0.146255141 container start c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.711049373 +0000 UTC m=+0.150059215 container attach c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:01:07 np0005539563 eager_beaver[403560]: 167 167
Nov 29 04:01:07 np0005539563 systemd[1]: libpod-c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4.scope: Deactivated successfully.
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.715281537 +0000 UTC m=+0.154291299 container died c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ebb2ab860d2bd8d17eb87680e0ae9e0360d5b9a88a8512fc7a03a6cda7eeefe6-merged.mount: Deactivated successfully.
Nov 29 04:01:07 np0005539563 podman[403544]: 2025-11-29 09:01:07.754091088 +0000 UTC m=+0.193100850 container remove c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 04:01:07 np0005539563 systemd[1]: libpod-conmon-c6e39f21ace1b5fe90928aaa58e3662a7055e0d9cd3d0dab5669c1d1c1a978a4.scope: Deactivated successfully.
Nov 29 04:01:07 np0005539563 podman[403583]: 2025-11-29 09:01:07.944133003 +0000 UTC m=+0.043914270 container create fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:01:08 np0005539563 systemd[1]: Started libpod-conmon-fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa.scope.
Nov 29 04:01:08 np0005539563 podman[403583]: 2025-11-29 09:01:07.923111704 +0000 UTC m=+0.022892961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7827f0f0924aaa30ff2e16465c2d182cca6150eb4a0996b38172ad2feb32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7827f0f0924aaa30ff2e16465c2d182cca6150eb4a0996b38172ad2feb32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7827f0f0924aaa30ff2e16465c2d182cca6150eb4a0996b38172ad2feb32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7827f0f0924aaa30ff2e16465c2d182cca6150eb4a0996b38172ad2feb32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:08 np0005539563 podman[403583]: 2025-11-29 09:01:08.051895351 +0000 UTC m=+0.151676638 container init fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:08 np0005539563 podman[403583]: 2025-11-29 09:01:08.070551396 +0000 UTC m=+0.170332653 container start fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_borg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 04:01:08 np0005539563 podman[403583]: 2025-11-29 09:01:08.074973476 +0000 UTC m=+0.174754763 container attach fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 04:01:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:08 np0005539563 modest_borg[403599]: {
Nov 29 04:01:08 np0005539563 modest_borg[403599]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:01:08 np0005539563 modest_borg[403599]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:01:08 np0005539563 modest_borg[403599]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:01:08 np0005539563 modest_borg[403599]:        "osd_id": 0,
Nov 29 04:01:08 np0005539563 modest_borg[403599]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:01:08 np0005539563 modest_borg[403599]:        "type": "bluestore"
Nov 29 04:01:08 np0005539563 modest_borg[403599]:    }
Nov 29 04:01:08 np0005539563 modest_borg[403599]: }
Nov 29 04:01:08 np0005539563 systemd[1]: libpod-fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa.scope: Deactivated successfully.
Nov 29 04:01:08 np0005539563 podman[403583]: 2025-11-29 09:01:08.954607923 +0000 UTC m=+1.054389180 container died fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_borg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 04:01:08 np0005539563 systemd[1]: var-lib-containers-storage-overlay-7e6c7827f0f0924aaa30ff2e16465c2d182cca6150eb4a0996b38172ad2feb32-merged.mount: Deactivated successfully.
Nov 29 04:01:09 np0005539563 podman[403583]: 2025-11-29 09:01:09.006138978 +0000 UTC m=+1.105920225 container remove fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 04:01:09 np0005539563 systemd[1]: libpod-conmon-fc9d568371824899ab1cdc2b23fd69016b3da9db39c1fc14a7272a6dae82c3fa.scope: Deactivated successfully.
Nov 29 04:01:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:01:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:01:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:09 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4d0499cc-5061-4e0f-b0fd-f4291a0c78fb does not exist
Nov 29 04:01:09 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fc9e2273-3ecd-4a33-9f86-1f50bc362c19 does not exist
Nov 29 04:01:09 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b0c2a37d-e9b8-4918-ab06-ae1a330d1888 does not exist
Nov 29 04:01:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:01:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:10 np0005539563 nova_compute[252253]: 2025-11-29 09:01:10.813 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:11 np0005539563 nova_compute[252253]: 2025-11-29 09:01:11.926 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:01:13
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'backups', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:01:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:13 np0005539563 nova_compute[252253]: 2025-11-29 09:01:13.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:14.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:15.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:15 np0005539563 nova_compute[252253]: 2025-11-29 09:01:15.816 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:16.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:01:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:01:16 np0005539563 nova_compute[252253]: 2025-11-29 09:01:16.928 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:17.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.436 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.436 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.456 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.535 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.536 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.546 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.546 252257 INFO nova.compute.claims [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 04:01:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:17 np0005539563 nova_compute[252253]: 2025-11-29 09:01:17.647 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:01:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401654113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.097 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.104 252257 DEBUG nova.compute.provider_tree [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.248 252257 DEBUG nova.scheduler.client.report [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.302 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.303 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.409 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.410 252257 DEBUG nova.network.neutron [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.429 252257 INFO nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 04:01:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.449 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 04:01:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:18.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.533 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.534 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.535 252257 INFO nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Creating image(s)
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.566 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.597 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.630 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.634 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.707 252257 DEBUG nova.policy [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a9ba73ff05b4529ad104362a5a57cc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca5878248147453baabf40a90f9feb19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.711 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.712 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.713 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.713 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.743 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 04:01:18 np0005539563 nova_compute[252253]: 2025-11-29 09:01:18.748 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.035 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.099 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] resizing rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.202 252257 DEBUG nova.objects.instance [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'migration_context' on Instance uuid e36cab22-949e-4692-99eb-c8cd8a4c6a0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.217 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.217 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Ensure instance console log exists: /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.218 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.218 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.218 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:01:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:19 np0005539563 nova_compute[252253]: 2025-11-29 09:01:19.403 252257 DEBUG nova.network.neutron [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Successfully created port: f7f62120-8c84-4d6d-b0d4-71adeb899a9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 04:01:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.205 252257 DEBUG nova.network.neutron [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Successfully updated port: f7f62120-8c84-4d6d-b0d4-71adeb899a9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.234 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.235 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquired lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.235 252257 DEBUG nova.network.neutron [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.320 252257 DEBUG nova.compute.manager [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-changed-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.321 252257 DEBUG nova.compute.manager [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Refreshing instance network info cache due to event network-changed-f7f62120-8c84-4d6d-b0d4-71adeb899a9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.321 252257 DEBUG oslo_concurrency.lockutils [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 04:01:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:20.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.819 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:20 np0005539563 nova_compute[252253]: 2025-11-29 09:01:20.995 252257 DEBUG nova.network.neutron [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 04:01:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 149 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Nov 29 04:01:21 np0005539563 nova_compute[252253]: 2025-11-29 09:01:21.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.412 252257 DEBUG nova.network.neutron [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updating instance_info_cache with network_info: [{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.435 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Releasing lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.435 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Instance network_info: |[{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.436 252257 DEBUG oslo_concurrency.lockutils [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.436 252257 DEBUG nova.network.neutron [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Refreshing network info cache for port f7f62120-8c84-4d6d-b0d4-71adeb899a9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.438 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Start _get_guest_xml network_info=[{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.443 252257 WARNING nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.450 252257 DEBUG nova.virt.libvirt.host [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.451 252257 DEBUG nova.virt.libvirt.host [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.454 252257 DEBUG nova.virt.libvirt.host [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:01:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.455 252257 DEBUG nova.virt.libvirt.host [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.456 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.456 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.456 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.456 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.457 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.457 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.457 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.457 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.458 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.458 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.458 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.458 252257 DEBUG nova.virt.hardware [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.461 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:01:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107379554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.908 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.937 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:01:22 np0005539563 nova_compute[252253]: 2025-11-29 09:01:22.941 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:23.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:01:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199218333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.397 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.399 252257 DEBUG nova.virt.libvirt.vif [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:01:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-26796661',display_name='tempest-TestNetworkBasicOps-server-26796661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-26796661',id=212,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECCpbyz2WKdi7sT9JWWCM6Kl73WjrHjB7ITCJ78GJA0zmfKiXWvjxjdLlwRp9HDdXCGnEWSpyXIRJ8iLcG1UIqUvrQaZAS83f2kf0ldS7KTq/S8SZDnFuWHcRgz7EDUsQ==',key_name='tempest-TestNetworkBasicOps-1364303418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-f0tf09bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:01:18Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=e36cab22-949e-4692-99eb-c8cd8a4c6a0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.399 252257 DEBUG nova.network.os_vif_util [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.400 252257 DEBUG nova.network.os_vif_util [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.401 252257 DEBUG nova.objects.instance [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'pci_devices' on Instance uuid e36cab22-949e-4692-99eb-c8cd8a4c6a0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.429 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <uuid>e36cab22-949e-4692-99eb-c8cd8a4c6a0c</uuid>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <name>instance-000000d4</name>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestNetworkBasicOps-server-26796661</nova:name>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 09:01:22</nova:creationTime>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:user uuid="3a9ba73ff05b4529ad104362a5a57cc7">tempest-TestNetworkBasicOps-488786542-project-member</nova:user>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:project uuid="ca5878248147453baabf40a90f9feb19">tempest-TestNetworkBasicOps-488786542</nova:project>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <nova:port uuid="f7f62120-8c84-4d6d-b0d4-71adeb899a9b">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <system>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <entry name="serial">e36cab22-949e-4692-99eb-c8cd8a4c6a0c</entry>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <entry name="uuid">e36cab22-949e-4692-99eb-c8cd8a4c6a0c</entry>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </system>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <os>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </os>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <features>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </features>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </clock>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  <devices>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk.config">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:a7:34:7f"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <target dev="tapf7f62120-8c"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </interface>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/console.log" append="off"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </serial>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <video>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </video>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </rng>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 04:01:23 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 04:01:23 np0005539563 nova_compute[252253]:  </devices>
Nov 29 04:01:23 np0005539563 nova_compute[252253]: </domain>
Nov 29 04:01:23 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.431 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Preparing to wait for external event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.432 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.432 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.433 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.434 252257 DEBUG nova.virt.libvirt.vif [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:01:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-26796661',display_name='tempest-TestNetworkBasicOps-server-26796661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-26796661',id=212,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECCpbyz2WKdi7sT9JWWCM6Kl73WjrHjB7ITCJ78GJA0zmfKiXWvjxjdLlwRp9HDdXCGnEWSpyXIRJ8iLcG1UIqUvrQaZAS83f2kf0ldS7KTq/S8SZDnFuWHcRgz7EDUsQ==',key_name='tempest-TestNetworkBasicOps-1364303418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-f0tf09bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:01:18Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=e36cab22-949e-4692-99eb-c8cd8a4c6a0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.434 252257 DEBUG nova.network.os_vif_util [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.435 252257 DEBUG nova.network.os_vif_util [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.436 252257 DEBUG os_vif [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.437 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.437 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.441 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.442 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7f62120-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.442 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7f62120-8c, col_values=(('external_ids', {'iface-id': 'f7f62120-8c84-4d6d-b0d4-71adeb899a9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:34:7f', 'vm-uuid': 'e36cab22-949e-4692-99eb-c8cd8a4c6a0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.444 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:23 np0005539563 NetworkManager[48981]: <info>  [1764406883.4462] manager: (tapf7f62120-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.453 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.454 252257 INFO os_vif [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c')#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.512 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.512 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.512 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] No VIF found with MAC fa:16:3e:a7:34:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.513 252257 INFO nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Using config drive#033[00m
Nov 29 04:01:23 np0005539563 nova_compute[252253]: 2025-11-29 09:01:23.542 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:01:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.175 252257 INFO nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Creating config drive at /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/disk.config#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.181 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hcss34g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.320 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hcss34g" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.349 252257 DEBUG nova.storage.rbd_utils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] rbd image e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.353 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/disk.config e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.415 252257 DEBUG nova.network.neutron [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updated VIF entry in instance network info cache for port f7f62120-8c84-4d6d-b0d4-71adeb899a9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.416 252257 DEBUG nova.network.neutron [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updating instance_info_cache with network_info: [{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.432 252257 DEBUG oslo_concurrency.lockutils [req-49a8d2f8-3584-4663-8255-c2da0eb0b822 req-152c0043-b35d-4986-b59a-6576fbdbe199 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:01:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.570 252257 DEBUG oslo_concurrency.processutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/disk.config e36cab22-949e-4692-99eb-c8cd8a4c6a0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.570 252257 INFO nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Deleting local config drive /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c/disk.config because it was imported into RBD.#033[00m
Nov 29 04:01:24 np0005539563 kernel: tapf7f62120-8c: entered promiscuous mode
Nov 29 04:01:24 np0005539563 NetworkManager[48981]: <info>  [1764406884.6263] manager: (tapf7f62120-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Nov 29 04:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:24Z|00915|binding|INFO|Claiming lport f7f62120-8c84-4d6d-b0d4-71adeb899a9b for this chassis.
Nov 29 04:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:24Z|00916|binding|INFO|f7f62120-8c84-4d6d-b0d4-71adeb899a9b: Claiming fa:16:3e:a7:34:7f 10.100.0.14
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.626 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.630 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.644 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:34:7f 10.100.0.14'], port_security=['fa:16:3e:a7:34:7f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e36cab22-949e-4692-99eb-c8cd8a4c6a0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8df76778-4fbe-4582-ade7-212bf2e03745', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20d8e394-b251-4088-9dbe-619132ef1b4f, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f7f62120-8c84-4d6d-b0d4-71adeb899a9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.645 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f7f62120-8c84-4d6d-b0d4-71adeb899a9b in datapath f746ed83-c3a2-46d9-9f9b-e56c24674528 bound to our chassis#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.646 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f746ed83-c3a2-46d9-9f9b-e56c24674528#033[00m
Nov 29 04:01:24 np0005539563 systemd-udevd[404066]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.658 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[eb47c79f-5fc6-4aa8-b534-052ae17ae751]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.659 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf746ed83-c1 in ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.661 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf746ed83-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.661 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9fcb27ef-fd51-4a18-ad54-945fd7261e8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.662 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[48bde579-7f97-474b-893d-e452eb4fb02b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 NetworkManager[48981]: <info>  [1764406884.6679] device (tapf7f62120-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:01:24 np0005539563 systemd-machined[213024]: New machine qemu-102-instance-000000d4.
Nov 29 04:01:24 np0005539563 NetworkManager[48981]: <info>  [1764406884.6704] device (tapf7f62120-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.675 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[4c154d0d-6876-4d99-85dd-3f92c43db2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 systemd[1]: Started Virtual Machine qemu-102-instance-000000d4.
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.697 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e20e17-c205-4fee-a722-ed1badb764af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:24Z|00917|binding|INFO|Setting lport f7f62120-8c84-4d6d-b0d4-71adeb899a9b ovn-installed in OVS
Nov 29 04:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:24Z|00918|binding|INFO|Setting lport f7f62120-8c84-4d6d-b0d4-71adeb899a9b up in Southbound
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.699 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.729 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9d132c-50e2-4cca-9359-963722ccd297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.734 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e55b546f-56c3-421d-890c-dd6fa899033d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 NetworkManager[48981]: <info>  [1764406884.7367] manager: (tapf746ed83-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.765 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7aacce-7356-4b3d-95e2-e173d9396fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.768 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c088d3ee-60da-44df-8720-791ccab3f2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 NetworkManager[48981]: <info>  [1764406884.7931] device (tapf746ed83-c0): carrier: link connected
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.799 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd225fc-1cf7-41ff-92db-d5dd5c089dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.817 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bf6f0b-4eb4-4a6f-90ff-dffc724eaef0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf746ed83-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:42:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985256, 'reachable_time': 38426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404099, 'error': None, 'target': 'ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.833 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2d66ad16-0be0-4750-a421-723d06b22d2c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:4255'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 985256, 'tstamp': 985256}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404100, 'error': None, 'target': 'ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.850 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13107152-581f-4ee4-b2d3-9c956ab2d9b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf746ed83-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:42:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985256, 'reachable_time': 38426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404101, 'error': None, 'target': 'ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.882 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[56c93391-7424-40f7-82ec-d58075e1a6c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.936 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e0da5eca-ef34-4efb-86db-1f577076a77e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.939 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf746ed83-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.939 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.940 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf746ed83-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:24 np0005539563 kernel: tapf746ed83-c0: entered promiscuous mode
Nov 29 04:01:24 np0005539563 NetworkManager[48981]: <info>  [1764406884.9433] manager: (tapf746ed83-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.942 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.948 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf746ed83-c0, col_values=(('external_ids', {'iface-id': '626af158-f7cf-4f8b-a9de-5f07559c43e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.949 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:24Z|00919|binding|INFO|Releasing lport 626af158-f7cf-4f8b-a9de-5f07559c43e5 from this chassis (sb_readonly=0)
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.951 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f746ed83-c3a2-46d9-9f9b-e56c24674528.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f746ed83-c3a2-46d9-9f9b-e56c24674528.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.952 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc31c21-b50d-4847-9441-5bf4986658b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.953 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-f746ed83-c3a2-46d9-9f9b-e56c24674528
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/f746ed83-c3a2-46d9-9f9b-e56c24674528.pid.haproxy
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID f746ed83-c3a2-46d9-9f9b-e56c24674528
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:01:24 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:24.953 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'env', 'PROCESS_TAG=haproxy-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f746ed83-c3a2-46d9-9f9b-e56c24674528.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:01:24 np0005539563 nova_compute[252253]: 2025-11-29 09:01:24.963 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.175 252257 DEBUG nova.compute.manager [req-f809b409-be82-4cc9-9a18-9b67c1bb125d req-172a5c3a-f816-4657-9e50-0ab977fdece5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.176 252257 DEBUG oslo_concurrency.lockutils [req-f809b409-be82-4cc9-9a18-9b67c1bb125d req-172a5c3a-f816-4657-9e50-0ab977fdece5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.177 252257 DEBUG oslo_concurrency.lockutils [req-f809b409-be82-4cc9-9a18-9b67c1bb125d req-172a5c3a-f816-4657-9e50-0ab977fdece5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.178 252257 DEBUG oslo_concurrency.lockutils [req-f809b409-be82-4cc9-9a18-9b67c1bb125d req-172a5c3a-f816-4657-9e50-0ab977fdece5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.179 252257 DEBUG nova.compute.manager [req-f809b409-be82-4cc9-9a18-9b67c1bb125d req-172a5c3a-f816-4657-9e50-0ab977fdece5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Processing event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:01:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.328 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.329 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406885.3277872, e36cab22-949e-4692-99eb-c8cd8a4c6a0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.329 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] VM Started (Lifecycle Event)#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.332 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.336 252257 INFO nova.virt.libvirt.driver [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Instance spawned successfully.#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.336 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:01:25 np0005539563 podman[404173]: 2025-11-29 09:01:25.34315913 +0000 UTC m=+0.053295474 container create c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.365 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.374 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.379 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.379 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.379 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.380 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.380 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.381 252257 DEBUG nova.virt.libvirt.driver [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:01:25 np0005539563 systemd[1]: Started libpod-conmon-c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc.scope.
Nov 29 04:01:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:01:25 np0005539563 podman[404173]: 2025-11-29 09:01:25.311621306 +0000 UTC m=+0.021757670 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:01:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4e6acb72e9294ec584170ea2379128c52c8df35a796864bc9108c7bbc42006/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:25 np0005539563 podman[404173]: 2025-11-29 09:01:25.418582743 +0000 UTC m=+0.128719077 container init c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 04:01:25 np0005539563 podman[404173]: 2025-11-29 09:01:25.423870166 +0000 UTC m=+0.134006530 container start c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.430 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.431 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406885.3281949, e36cab22-949e-4692-99eb-c8cd8a4c6a0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.431 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:01:25 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [NOTICE]   (404193) : New worker (404195) forked
Nov 29 04:01:25 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [NOTICE]   (404193) : Loading success.
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.463 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.468 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764406885.3316865, e36cab22-949e-4692-99eb-c8cd8a4c6a0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.468 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.472 252257 INFO nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Took 6.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.472 252257 DEBUG nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.494 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.498 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.519 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.529 252257 INFO nova.compute.manager [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Took 8.02 seconds to build instance.#033[00m
Nov 29 04:01:25 np0005539563 nova_compute[252253]: 2025-11-29 09:01:25.548 252257 DEBUG oslo_concurrency.lockutils [None req-d8978e1f-063d-4545-9c14-12d37a4533ff 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 04:01:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:26.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:26 np0005539563 nova_compute[252253]: 2025-11-29 09:01:26.933 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:27 np0005539563 nova_compute[252253]: 2025-11-29 09:01:27.275 252257 DEBUG nova.compute.manager [req-4a5d1b89-810e-4879-bec3-e123b12f03f9 req-6ccfd09b-d677-41a5-9828-d6ec822a42c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:27 np0005539563 nova_compute[252253]: 2025-11-29 09:01:27.276 252257 DEBUG oslo_concurrency.lockutils [req-4a5d1b89-810e-4879-bec3-e123b12f03f9 req-6ccfd09b-d677-41a5-9828-d6ec822a42c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:27 np0005539563 nova_compute[252253]: 2025-11-29 09:01:27.276 252257 DEBUG oslo_concurrency.lockutils [req-4a5d1b89-810e-4879-bec3-e123b12f03f9 req-6ccfd09b-d677-41a5-9828-d6ec822a42c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:27 np0005539563 nova_compute[252253]: 2025-11-29 09:01:27.276 252257 DEBUG oslo_concurrency.lockutils [req-4a5d1b89-810e-4879-bec3-e123b12f03f9 req-6ccfd09b-d677-41a5-9828-d6ec822a42c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:27 np0005539563 nova_compute[252253]: 2025-11-29 09:01:27.277 252257 DEBUG nova.compute.manager [req-4a5d1b89-810e-4879-bec3-e123b12f03f9 req-6ccfd09b-d677-41a5-9828-d6ec822a42c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] No waiting events found dispatching network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:01:27 np0005539563 nova_compute[252253]: 2025-11-29 09:01:27.277 252257 WARNING nova.compute.manager [req-4a5d1b89-810e-4879-bec3-e123b12f03f9 req-6ccfd09b-d677-41a5-9828-d6ec822a42c2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received unexpected event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b for instance with vm_state active and task_state None.#033[00m
Nov 29 04:01:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 04:01:28 np0005539563 nova_compute[252253]: 2025-11-29 09:01:28.446 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:28.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:28 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:28Z|00920|binding|INFO|Releasing lport 626af158-f7cf-4f8b-a9de-5f07559c43e5 from this chassis (sb_readonly=0)
Nov 29 04:01:28 np0005539563 NetworkManager[48981]: <info>  [1764406888.8742] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 29 04:01:28 np0005539563 NetworkManager[48981]: <info>  [1764406888.8757] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 29 04:01:28 np0005539563 nova_compute[252253]: 2025-11-29 09:01:28.873 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:28 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:28Z|00921|binding|INFO|Releasing lport 626af158-f7cf-4f8b-a9de-5f07559c43e5 from this chassis (sb_readonly=0)
Nov 29 04:01:28 np0005539563 nova_compute[252253]: 2025-11-29 09:01:28.925 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:28 np0005539563 nova_compute[252253]: 2025-11-29 09:01:28.930 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:29 np0005539563 nova_compute[252253]: 2025-11-29 09:01:29.207 252257 DEBUG nova.compute.manager [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-changed-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:29 np0005539563 nova_compute[252253]: 2025-11-29 09:01:29.208 252257 DEBUG nova.compute.manager [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Refreshing instance network info cache due to event network-changed-f7f62120-8c84-4d6d-b0d4-71adeb899a9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:01:29 np0005539563 nova_compute[252253]: 2025-11-29 09:01:29.209 252257 DEBUG oslo_concurrency.lockutils [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:01:29 np0005539563 nova_compute[252253]: 2025-11-29 09:01:29.209 252257 DEBUG oslo_concurrency.lockutils [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:01:29 np0005539563 nova_compute[252253]: 2025-11-29 09:01:29.209 252257 DEBUG nova.network.neutron [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Refreshing network info cache for port f7f62120-8c84-4d6d-b0d4-71adeb899a9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:01:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:29.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Nov 29 04:01:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:30 np0005539563 nova_compute[252253]: 2025-11-29 09:01:30.623 252257 DEBUG nova.network.neutron [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updated VIF entry in instance network info cache for port f7f62120-8c84-4d6d-b0d4-71adeb899a9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:01:30 np0005539563 nova_compute[252253]: 2025-11-29 09:01:30.624 252257 DEBUG nova.network.neutron [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updating instance_info_cache with network_info: [{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:01:30 np0005539563 nova_compute[252253]: 2025-11-29 09:01:30.648 252257 DEBUG oslo_concurrency.lockutils [req-3f3f92ad-bb52-4d29-b2ae-c51ef1e6556e req-7db744b0-9d08-4d29-9679-9b429614d47d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:01:30 np0005539563 nova_compute[252253]: 2025-11-29 09:01:30.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:31.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 04:01:31 np0005539563 nova_compute[252253]: 2025-11-29 09:01:31.934 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:33 np0005539563 nova_compute[252253]: 2025-11-29 09:01:33.448 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:33 np0005539563 podman[404210]: 2025-11-29 09:01:33.49206155 +0000 UTC m=+0.050972522 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:01:33 np0005539563 podman[404211]: 2025-11-29 09:01:33.503029947 +0000 UTC m=+0.057993201 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 04:01:33 np0005539563 podman[404212]: 2025-11-29 09:01:33.532478194 +0000 UTC m=+0.080870621 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 04:01:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 543 KiB/s wr, 97 op/s
Nov 29 04:01:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:01:35 np0005539563 nova_compute[252253]: 2025-11-29 09:01:35.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:36.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:36 np0005539563 nova_compute[252253]: 2025-11-29 09:01:36.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:37.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 67 op/s
Nov 29 04:01:38 np0005539563 nova_compute[252253]: 2025-11-29 09:01:38.452 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:38 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:38Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:34:7f 10.100.0.14
Nov 29 04:01:38 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:38Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:34:7f 10.100.0.14
Nov 29 04:01:38 np0005539563 nova_compute[252253]: 2025-11-29 09:01:38.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 170 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 497 KiB/s wr, 85 op/s
Nov 29 04:01:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:40.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:40 np0005539563 nova_compute[252253]: 2025-11-29 09:01:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:40 np0005539563 nova_compute[252253]: 2025-11-29 09:01:40.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:01:40 np0005539563 nova_compute[252253]: 2025-11-29 09:01:40.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:01:41 np0005539563 nova_compute[252253]: 2025-11-29 09:01:41.018 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:01:41 np0005539563 nova_compute[252253]: 2025-11-29 09:01:41.018 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:01:41 np0005539563 nova_compute[252253]: 2025-11-29 09:01:41.018 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:01:41 np0005539563 nova_compute[252253]: 2025-11-29 09:01:41.018 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e36cab22-949e-4692-99eb-c8cd8a4c6a0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:01:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Nov 29 04:01:41 np0005539563 nova_compute[252253]: 2025-11-29 09:01:41.938 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.240 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updating instance_info_cache with network_info: [{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:01:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.353 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.354 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.354 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.355 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.355 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.355 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.408 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.408 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.408 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.409 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.409 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246123097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.848 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.938 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:01:42 np0005539563 nova_compute[252253]: 2025-11-29 09:01:42.939 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.072 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.074 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3918MB free_disk=20.943435668945312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.074 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.075 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.287 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance e36cab22-949e-4692-99eb-c8cd8a4c6a0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.288 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.288 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.331 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 04:01:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035084151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.752 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.759 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.820 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.897 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.898 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.979 252257 INFO nova.compute.manager [None req-7980bf11-c4ef-431c-a89d-b991ade1cc1a 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Get console output#033[00m
Nov 29 04:01:43 np0005539563 nova_compute[252253]: 2025-11-29 09:01:43.987 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 04:01:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:45.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:45Z|00922|binding|INFO|Releasing lport 626af158-f7cf-4f8b-a9de-5f07559c43e5 from this chassis (sb_readonly=0)
Nov 29 04:01:45 np0005539563 nova_compute[252253]: 2025-11-29 09:01:45.488 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:45Z|00923|binding|INFO|Releasing lport 626af158-f7cf-4f8b-a9de-5f07559c43e5 from this chassis (sb_readonly=0)
Nov 29 04:01:45 np0005539563 nova_compute[252253]: 2025-11-29 09:01:45.554 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 04:01:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:46 np0005539563 nova_compute[252253]: 2025-11-29 09:01:46.830 252257 INFO nova.compute.manager [None req-cfc6bea6-31bd-4e43-9df6-c21432c07614 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Get console output#033[00m
Nov 29 04:01:46 np0005539563 nova_compute[252253]: 2025-11-29 09:01:46.835 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 04:01:46 np0005539563 nova_compute[252253]: 2025-11-29 09:01:46.940 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:47 np0005539563 nova_compute[252253]: 2025-11-29 09:01:47.221 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:01:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:47.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539563 NetworkManager[48981]: <info>  [1764406908.0338] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Nov 29 04:01:48 np0005539563 NetworkManager[48981]: <info>  [1764406908.0351] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.093 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:48Z|00924|binding|INFO|Releasing lport 626af158-f7cf-4f8b-a9de-5f07559c43e5 from this chassis (sb_readonly=0)
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.099 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.376 252257 INFO nova.compute.manager [None req-53325767-0b52-4780-8486-cee87d9899a6 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Get console output#033[00m
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.381 316984 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.456 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:48.907 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:01:48 np0005539563 nova_compute[252253]: 2025-11-29 09:01:48.908 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:48.908 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:01:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:49.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.221 252257 DEBUG nova.compute.manager [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-changed-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.221 252257 DEBUG nova.compute.manager [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Refreshing instance network info cache due to event network-changed-f7f62120-8c84-4d6d-b0d4-71adeb899a9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.222 252257 DEBUG oslo_concurrency.lockutils [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.222 252257 DEBUG oslo_concurrency.lockutils [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.223 252257 DEBUG nova.network.neutron [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Refreshing network info cache for port f7f62120-8c84-4d6d-b0d4-71adeb899a9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.263 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.263 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.264 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.264 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.265 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.267 252257 INFO nova.compute.manager [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Terminating instance#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.268 252257 DEBUG nova.compute.manager [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:01:50 np0005539563 kernel: tapf7f62120-8c (unregistering): left promiscuous mode
Nov 29 04:01:50 np0005539563 NetworkManager[48981]: <info>  [1764406910.3260] device (tapf7f62120-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.367 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:50Z|00925|binding|INFO|Releasing lport f7f62120-8c84-4d6d-b0d4-71adeb899a9b from this chassis (sb_readonly=0)
Nov 29 04:01:50 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:50Z|00926|binding|INFO|Setting lport f7f62120-8c84-4d6d-b0d4-71adeb899a9b down in Southbound
Nov 29 04:01:50 np0005539563 ovn_controller[148841]: 2025-11-29T09:01:50Z|00927|binding|INFO|Removing iface tapf7f62120-8c ovn-installed in OVS
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.369 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.376 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:34:7f 10.100.0.14'], port_security=['fa:16:3e:a7:34:7f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e36cab22-949e-4692-99eb-c8cd8a4c6a0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca5878248147453baabf40a90f9feb19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8df76778-4fbe-4582-ade7-212bf2e03745', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20d8e394-b251-4088-9dbe-619132ef1b4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=f7f62120-8c84-4d6d-b0d4-71adeb899a9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.377 158990 INFO neutron.agent.ovn.metadata.agent [-] Port f7f62120-8c84-4d6d-b0d4-71adeb899a9b in datapath f746ed83-c3a2-46d9-9f9b-e56c24674528 unbound from our chassis#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.378 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f746ed83-c3a2-46d9-9f9b-e56c24674528, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.379 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[261fc200-be52-4332-8771-90e46b4b8613]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.380 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528 namespace which is not needed anymore#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.381 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d4.scope: Deactivated successfully.
Nov 29 04:01:50 np0005539563 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d4.scope: Consumed 14.242s CPU time.
Nov 29 04:01:50 np0005539563 systemd-machined[213024]: Machine qemu-102-instance-000000d4 terminated.
Nov 29 04:01:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:50.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.499 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.505 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.511 252257 INFO nova.virt.libvirt.driver [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Instance destroyed successfully.#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.512 252257 DEBUG nova.objects.instance [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lazy-loading 'resources' on Instance uuid e36cab22-949e-4692-99eb-c8cd8a4c6a0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:01:50 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [NOTICE]   (404193) : haproxy version is 2.8.14-c23fe91
Nov 29 04:01:50 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [NOTICE]   (404193) : path to executable is /usr/sbin/haproxy
Nov 29 04:01:50 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [WARNING]  (404193) : Exiting Master process...
Nov 29 04:01:50 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [WARNING]  (404193) : Exiting Master process...
Nov 29 04:01:50 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [ALERT]    (404193) : Current worker (404195) exited with code 143 (Terminated)
Nov 29 04:01:50 np0005539563 neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528[404189]: [WARNING]  (404193) : All workers exited. Exiting... (0)
Nov 29 04:01:50 np0005539563 systemd[1]: libpod-c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc.scope: Deactivated successfully.
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.529 252257 DEBUG nova.virt.libvirt.vif [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:01:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-26796661',display_name='tempest-TestNetworkBasicOps-server-26796661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-26796661',id=212,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECCpbyz2WKdi7sT9JWWCM6Kl73WjrHjB7ITCJ78GJA0zmfKiXWvjxjdLlwRp9HDdXCGnEWSpyXIRJ8iLcG1UIqUvrQaZAS83f2kf0ldS7KTq/S8SZDnFuWHcRgz7EDUsQ==',key_name='tempest-TestNetworkBasicOps-1364303418',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:01:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca5878248147453baabf40a90f9feb19',ramdisk_id='',reservation_id='r-f0tf09bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-488786542',owner_user_name='tempest-TestNetworkBasicOps-488786542-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:01:25Z,user_data=None,user_id='3a9ba73ff05b4529ad104362a5a57cc7',uuid=e36cab22-949e-4692-99eb-c8cd8a4c6a0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.530 252257 DEBUG nova.network.os_vif_util [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converting VIF {"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:01:50 np0005539563 podman[404400]: 2025-11-29 09:01:50.530476452 +0000 UTC m=+0.043682775 container died c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.532 252257 DEBUG nova.network.os_vif_util [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.533 252257 DEBUG os_vif [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.534 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.535 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7f62120-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.536 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.540 252257 INFO os_vif [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:34:7f,bridge_name='br-int',has_traffic_filtering=True,id=f7f62120-8c84-4d6d-b0d4-71adeb899a9b,network=Network(f746ed83-c3a2-46d9-9f9b-e56c24674528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7f62120-8c')#033[00m
Nov 29 04:01:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc-userdata-shm.mount: Deactivated successfully.
Nov 29 04:01:50 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9e4e6acb72e9294ec584170ea2379128c52c8df35a796864bc9108c7bbc42006-merged.mount: Deactivated successfully.
Nov 29 04:01:50 np0005539563 podman[404400]: 2025-11-29 09:01:50.57697376 +0000 UTC m=+0.090180083 container cleanup c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:01:50 np0005539563 systemd[1]: libpod-conmon-c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc.scope: Deactivated successfully.
Nov 29 04:01:50 np0005539563 podman[404452]: 2025-11-29 09:01:50.644414376 +0000 UTC m=+0.045403480 container remove c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.652 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[16629029-cb47-4a8f-b78d-1ff6f159d41e]: (4, ('Sat Nov 29 09:01:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528 (c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc)\nc384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc\nSat Nov 29 09:01:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528 (c384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc)\nc384fa877d2abc5644b33cb8f215bf5fe2c293fdb4465ff613620c706c4258cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.654 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b378312d-99be-46a0-b9f1-111095a76cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.655 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf746ed83-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:50 np0005539563 kernel: tapf746ed83-c0: left promiscuous mode
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.660 252257 DEBUG nova.compute.manager [req-d5236a84-78ce-4299-bf34-ea895f206c82 req-3900abb3-aece-4d4e-acf1-decbe4822d97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-vif-unplugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.661 252257 DEBUG oslo_concurrency.lockutils [req-d5236a84-78ce-4299-bf34-ea895f206c82 req-3900abb3-aece-4d4e-acf1-decbe4822d97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.662 252257 DEBUG oslo_concurrency.lockutils [req-d5236a84-78ce-4299-bf34-ea895f206c82 req-3900abb3-aece-4d4e-acf1-decbe4822d97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.662 252257 DEBUG oslo_concurrency.lockutils [req-d5236a84-78ce-4299-bf34-ea895f206c82 req-3900abb3-aece-4d4e-acf1-decbe4822d97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.662 252257 DEBUG nova.compute.manager [req-d5236a84-78ce-4299-bf34-ea895f206c82 req-3900abb3-aece-4d4e-acf1-decbe4822d97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] No waiting events found dispatching network-vif-unplugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.662 252257 DEBUG nova.compute.manager [req-d5236a84-78ce-4299-bf34-ea895f206c82 req-3900abb3-aece-4d4e-acf1-decbe4822d97 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-vif-unplugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.663 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.670 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.672 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b99d333f-c4ed-44b9-98d1-728af9f137a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.683 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1cba45-4fc0-4d64-9f72-86f7a69d8d84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.684 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0240b60a-13f9-4ad6-b5d1-49d78b4ce36a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.702 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0beee835-f332-447a-a0ba-0bcaf6de4149]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985249, 'reachable_time': 29471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404468, 'error': None, 'target': 'ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.704 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f746ed83-c3a2-46d9-9f9b-e56c24674528 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:01:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:50.704 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb22e9f-d47d-434b-ab1e-4bffd1c9130e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:01:50 np0005539563 systemd[1]: run-netns-ovnmeta\x2df746ed83\x2dc3a2\x2d46d9\x2d9f9b\x2de56c24674528.mount: Deactivated successfully.
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.968 252257 INFO nova.virt.libvirt.driver [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Deleting instance files /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c_del#033[00m
Nov 29 04:01:50 np0005539563 nova_compute[252253]: 2025-11-29 09:01:50.969 252257 INFO nova.virt.libvirt.driver [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Deletion of /var/lib/nova/instances/e36cab22-949e-4692-99eb-c8cd8a4c6a0c_del complete#033[00m
Nov 29 04:01:51 np0005539563 nova_compute[252253]: 2025-11-29 09:01:51.052 252257 INFO nova.compute.manager [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:01:51 np0005539563 nova_compute[252253]: 2025-11-29 09:01:51.053 252257 DEBUG oslo.service.loopingcall [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:01:51 np0005539563 nova_compute[252253]: 2025-11-29 09:01:51.053 252257 DEBUG nova.compute.manager [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:01:51 np0005539563 nova_compute[252253]: 2025-11-29 09:01:51.053 252257 DEBUG nova.network.neutron [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:01:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:51.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 1.7 MiB/s wr, 42 op/s
Nov 29 04:01:51 np0005539563 nova_compute[252253]: 2025-11-29 09:01:51.942 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:52.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:52 np0005539563 nova_compute[252253]: 2025-11-29 09:01:52.774 252257 DEBUG nova.compute.manager [req-5a7a2196-2171-435c-950c-58deebb3950f req-820e9846-d4f1-42af-8745-6c9860e19a29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:52 np0005539563 nova_compute[252253]: 2025-11-29 09:01:52.775 252257 DEBUG oslo_concurrency.lockutils [req-5a7a2196-2171-435c-950c-58deebb3950f req-820e9846-d4f1-42af-8745-6c9860e19a29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:52 np0005539563 nova_compute[252253]: 2025-11-29 09:01:52.775 252257 DEBUG oslo_concurrency.lockutils [req-5a7a2196-2171-435c-950c-58deebb3950f req-820e9846-d4f1-42af-8745-6c9860e19a29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:52 np0005539563 nova_compute[252253]: 2025-11-29 09:01:52.775 252257 DEBUG oslo_concurrency.lockutils [req-5a7a2196-2171-435c-950c-58deebb3950f req-820e9846-d4f1-42af-8745-6c9860e19a29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:52 np0005539563 nova_compute[252253]: 2025-11-29 09:01:52.775 252257 DEBUG nova.compute.manager [req-5a7a2196-2171-435c-950c-58deebb3950f req-820e9846-d4f1-42af-8745-6c9860e19a29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] No waiting events found dispatching network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:01:52 np0005539563 nova_compute[252253]: 2025-11-29 09:01:52.776 252257 WARNING nova.compute.manager [req-5a7a2196-2171-435c-950c-58deebb3950f req-820e9846-d4f1-42af-8745-6c9860e19a29 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received unexpected event network-vif-plugged-f7f62120-8c84-4d6d-b0d4-71adeb899a9b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:01:53 np0005539563 nova_compute[252253]: 2025-11-29 09:01:53.087 252257 DEBUG nova.network.neutron [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updated VIF entry in instance network info cache for port f7f62120-8c84-4d6d-b0d4-71adeb899a9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:01:53 np0005539563 nova_compute[252253]: 2025-11-29 09:01:53.088 252257 DEBUG nova.network.neutron [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updating instance_info_cache with network_info: [{"id": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "address": "fa:16:3e:a7:34:7f", "network": {"id": "f746ed83-c3a2-46d9-9f9b-e56c24674528", "bridge": "br-int", "label": "tempest-network-smoke--1081066937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca5878248147453baabf40a90f9feb19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7f62120-8c", "ovs_interfaceid": "f7f62120-8c84-4d6d-b0d4-71adeb899a9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:01:53 np0005539563 nova_compute[252253]: 2025-11-29 09:01:53.114 252257 DEBUG oslo_concurrency.lockutils [req-0a85d716-7148-46f3-8f46-7956420df0b9 req-befd5520-e38f-4390-8069-66181fb23e45 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-e36cab22-949e-4692-99eb-c8cd8a4c6a0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:01:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:01:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:53.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:01:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/226939711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 96 KiB/s rd, 86 KiB/s wr, 29 op/s
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.052 252257 DEBUG nova.network.neutron [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.079 252257 INFO nova.compute.manager [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Took 3.03 seconds to deallocate network for instance.#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.168 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.169 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.197 252257 DEBUG nova.compute.manager [req-35fee9a4-4477-489c-88cf-abd5255a82d0 req-cf778df2-2de9-4c7d-89b7-831aedd157f8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Received event network-vif-deleted-f7f62120-8c84-4d6d-b0d4-71adeb899a9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.230 252257 DEBUG oslo_concurrency.processutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:01:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:54.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524937875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.671 252257 DEBUG oslo_concurrency.processutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.677 252257 DEBUG nova.compute.provider_tree [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.697 252257 DEBUG nova.scheduler.client.report [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.724 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.761 252257 INFO nova.scheduler.client.report [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Deleted allocations for instance e36cab22-949e-4692-99eb-c8cd8a4c6a0c#033[00m
Nov 29 04:01:54 np0005539563 nova_compute[252253]: 2025-11-29 09:01:54.842 252257 DEBUG oslo_concurrency.lockutils [None req-8c2818b3-6a1c-4511-9827-870874a90219 3a9ba73ff05b4529ad104362a5a57cc7 ca5878248147453baabf40a90f9feb19 - - default default] Lock "e36cab22-949e-4692-99eb-c8cd8a4c6a0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:01:54 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:01:54.910 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:01:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:55 np0005539563 nova_compute[252253]: 2025-11-29 09:01:55.538 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 29 04:01:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:56.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:56 np0005539563 nova_compute[252253]: 2025-11-29 09:01:56.945 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:57.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 29 04:01:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:59 np0005539563 nova_compute[252253]: 2025-11-29 09:01:59.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:59 np0005539563 nova_compute[252253]: 2025-11-29 09:01:59.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:01:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:01:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:59.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 29 04:02:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:00 np0005539563 nova_compute[252253]: 2025-11-29 09:02:00.539 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:01.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:02:01 np0005539563 nova_compute[252253]: 2025-11-29 09:02:01.947 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:02:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:04.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539563 podman[404550]: 2025-11-29 09:02:04.504499214 +0000 UTC m=+0.052223825 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 04:02:04 np0005539563 podman[404551]: 2025-11-29 09:02:04.51322127 +0000 UTC m=+0.057177848 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:02:04 np0005539563 podman[404552]: 2025-11-29 09:02:04.589797643 +0000 UTC m=+0.129917808 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 04:02:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:02:04.973 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:02:04.974 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:02:04.974 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:05.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:05 np0005539563 nova_compute[252253]: 2025-11-29 09:02:05.511 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406910.5100791, e36cab22-949e-4692-99eb-c8cd8a4c6a0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:02:05 np0005539563 nova_compute[252253]: 2025-11-29 09:02:05.512 252257 INFO nova.compute.manager [-] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:02:05 np0005539563 nova_compute[252253]: 2025-11-29 09:02:05.533 252257 DEBUG nova.compute.manager [None req-441e0205-535a-4c65-86ba-19f5b912d41f - - - - - -] [instance: e36cab22-949e-4692-99eb-c8cd8a4c6a0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:02:05 np0005539563 nova_compute[252253]: 2025-11-29 09:02:05.541 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 KiB/s rd, 511 B/s wr, 13 op/s
Nov 29 04:02:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:06.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:07 np0005539563 nova_compute[252253]: 2025-11-29 09:02:06.999 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:08.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:09.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:10 np0005539563 podman[404788]: 2025-11-29 09:02:10.35304533 +0000 UTC m=+0.066338567 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 04:02:10 np0005539563 podman[404788]: 2025-11-29 09:02:10.482063074 +0000 UTC m=+0.195356301 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 04:02:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:10 np0005539563 nova_compute[252253]: 2025-11-29 09:02:10.543 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:11 np0005539563 podman[404942]: 2025-11-29 09:02:11.20843384 +0000 UTC m=+0.081822457 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 04:02:11 np0005539563 podman[404942]: 2025-11-29 09:02:11.215831571 +0000 UTC m=+0.089220067 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 04:02:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:11.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:11 np0005539563 podman[405004]: 2025-11-29 09:02:11.478881664 +0000 UTC m=+0.075452284 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, description=keepalived for Ceph)
Nov 29 04:02:11 np0005539563 podman[405004]: 2025-11-29 09:02:11.501351682 +0000 UTC m=+0.097922232 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 04:02:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:02:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:02:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539563 nova_compute[252253]: 2025-11-29 09:02:12.041 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 29456393-6b87-40e9-aca0-2236df15a309 does not exist
Nov 29 04:02:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5935ac77-b1ec-4541-949d-1c5eb2638abc does not exist
Nov 29 04:02:12 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c227f84f-3720-4c9e-bed5-530888b11896 does not exist
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:12 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:02:13
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'volumes', '.rgw.root', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:02:13 np0005539563 podman[405313]: 2025-11-29 09:02:13.232490954 +0000 UTC m=+0.047173268 container create eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:13 np0005539563 systemd[1]: Started libpod-conmon-eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e.scope.
Nov 29 04:02:13 np0005539563 podman[405313]: 2025-11-29 09:02:13.20872378 +0000 UTC m=+0.023406084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:02:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:13 np0005539563 podman[405313]: 2025-11-29 09:02:13.335614896 +0000 UTC m=+0.150297280 container init eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 04:02:13 np0005539563 podman[405313]: 2025-11-29 09:02:13.342566815 +0000 UTC m=+0.157249119 container start eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:02:13 np0005539563 podman[405313]: 2025-11-29 09:02:13.345810912 +0000 UTC m=+0.160493436 container attach eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:02:13 np0005539563 exciting_curie[405329]: 167 167
Nov 29 04:02:13 np0005539563 systemd[1]: libpod-eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e.scope: Deactivated successfully.
Nov 29 04:02:13 np0005539563 conmon[405329]: conmon eb1ad8ec58556531a964 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e.scope/container/memory.events
Nov 29 04:02:13 np0005539563 podman[405334]: 2025-11-29 09:02:13.386116123 +0000 UTC m=+0.024436052 container died eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:02:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0f6b99a0e1b1682fb69f59a581623f2ee5db8dd78dcf38b696de5783af7d725d-merged.mount: Deactivated successfully.
Nov 29 04:02:13 np0005539563 podman[405334]: 2025-11-29 09:02:13.427404381 +0000 UTC m=+0.065724300 container remove eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 04:02:13 np0005539563 systemd[1]: libpod-conmon-eb1ad8ec58556531a9644ff75f1d507f828ee9f6d2444011c5604b434441590e.scope: Deactivated successfully.
Nov 29 04:02:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:13 np0005539563 podman[405356]: 2025-11-29 09:02:13.614317432 +0000 UTC m=+0.052456981 container create 8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:02:13 np0005539563 systemd[1]: Started libpod-conmon-8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6.scope.
Nov 29 04:02:13 np0005539563 nova_compute[252253]: 2025-11-29 09:02:13.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:13 np0005539563 podman[405356]: 2025-11-29 09:02:13.590465036 +0000 UTC m=+0.028604565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:02:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0375eea63b0ad1bce4fe40a868fcb3092bc2e6744a809eda46670571f3be2bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0375eea63b0ad1bce4fe40a868fcb3092bc2e6744a809eda46670571f3be2bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0375eea63b0ad1bce4fe40a868fcb3092bc2e6744a809eda46670571f3be2bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0375eea63b0ad1bce4fe40a868fcb3092bc2e6744a809eda46670571f3be2bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0375eea63b0ad1bce4fe40a868fcb3092bc2e6744a809eda46670571f3be2bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:13 np0005539563 podman[405356]: 2025-11-29 09:02:13.716625342 +0000 UTC m=+0.154764851 container init 8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:02:13 np0005539563 podman[405356]: 2025-11-29 09:02:13.726360876 +0000 UTC m=+0.164500385 container start 8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 04:02:13 np0005539563 podman[405356]: 2025-11-29 09:02:13.729683896 +0000 UTC m=+0.167823425 container attach 8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:14.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:14 np0005539563 exciting_shaw[405372]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:02:14 np0005539563 exciting_shaw[405372]: --> relative data size: 1.0
Nov 29 04:02:14 np0005539563 exciting_shaw[405372]: --> All data devices are unavailable
Nov 29 04:02:14 np0005539563 systemd[1]: libpod-8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6.scope: Deactivated successfully.
Nov 29 04:02:14 np0005539563 podman[405356]: 2025-11-29 09:02:14.625670826 +0000 UTC m=+1.063810405 container died 8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:02:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d0375eea63b0ad1bce4fe40a868fcb3092bc2e6744a809eda46670571f3be2bb-merged.mount: Deactivated successfully.
Nov 29 04:02:14 np0005539563 nova_compute[252253]: 2025-11-29 09:02:14.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:14 np0005539563 podman[405356]: 2025-11-29 09:02:14.708229321 +0000 UTC m=+1.146368830 container remove 8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shaw, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 04:02:14 np0005539563 systemd[1]: libpod-conmon-8458c81946849cce5de682589f90dacad0239e54eea9a9e92e4dc7bf94b20cf6.scope: Deactivated successfully.
Nov 29 04:02:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.421260477 +0000 UTC m=+0.061069395 container create e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:02:15 np0005539563 systemd[1]: Started libpod-conmon-e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c.scope.
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.40141303 +0000 UTC m=+0.041221968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.535619444 +0000 UTC m=+0.175428382 container init e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:02:15 np0005539563 nova_compute[252253]: 2025-11-29 09:02:15.547 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.549933691 +0000 UTC m=+0.189742609 container start e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.553256521 +0000 UTC m=+0.193065459 container attach e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:02:15 np0005539563 funny_mcclintock[405560]: 167 167
Nov 29 04:02:15 np0005539563 systemd[1]: libpod-e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c.scope: Deactivated successfully.
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.556586101 +0000 UTC m=+0.196395019 container died e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcclintock, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 04:02:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fcb94b7398bf2e5feeab57c864322eee25a1c74fd80105cc20e9d109aa17e83c-merged.mount: Deactivated successfully.
Nov 29 04:02:15 np0005539563 podman[405544]: 2025-11-29 09:02:15.593277024 +0000 UTC m=+0.233085942 container remove e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 04:02:15 np0005539563 systemd[1]: libpod-conmon-e7ec604b8343115a19a5f2a4e6f9d2f34d6f9718321bbb7fe803f02bb49b451c.scope: Deactivated successfully.
Nov 29 04:02:15 np0005539563 podman[405586]: 2025-11-29 09:02:15.824003292 +0000 UTC m=+0.071319973 container create 3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 04:02:15 np0005539563 systemd[1]: Started libpod-conmon-3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8.scope.
Nov 29 04:02:15 np0005539563 podman[405586]: 2025-11-29 09:02:15.803848836 +0000 UTC m=+0.051165547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:02:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e4e716079e4c5171b6f5ff2b0d8c7a0bc607fd6a404fe4a647e41ef77c9e88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e4e716079e4c5171b6f5ff2b0d8c7a0bc607fd6a404fe4a647e41ef77c9e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e4e716079e4c5171b6f5ff2b0d8c7a0bc607fd6a404fe4a647e41ef77c9e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20e4e716079e4c5171b6f5ff2b0d8c7a0bc607fd6a404fe4a647e41ef77c9e88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:15 np0005539563 podman[405586]: 2025-11-29 09:02:15.933486966 +0000 UTC m=+0.180803687 container init 3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:02:15 np0005539563 podman[405586]: 2025-11-29 09:02:15.941394071 +0000 UTC m=+0.188710752 container start 3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:15 np0005539563 podman[405586]: 2025-11-29 09:02:15.945250905 +0000 UTC m=+0.192567606 container attach 3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:02:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:16.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]: {
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:    "0": [
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:        {
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "devices": [
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "/dev/loop3"
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            ],
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "lv_name": "ceph_lv0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "lv_size": "7511998464",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "name": "ceph_lv0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "tags": {
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.cluster_name": "ceph",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.crush_device_class": "",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.encrypted": "0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.osd_id": "0",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.type": "block",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:                "ceph.vdo": "0"
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            },
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "type": "block",
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:            "vg_name": "ceph_vg0"
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:        }
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]:    ]
Nov 29 04:02:16 np0005539563 zealous_wozniak[405603]: }
Nov 29 04:02:16 np0005539563 systemd[1]: libpod-3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8.scope: Deactivated successfully.
Nov 29 04:02:16 np0005539563 podman[405586]: 2025-11-29 09:02:16.714141113 +0000 UTC m=+0.961457864 container died 3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 04:02:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-20e4e716079e4c5171b6f5ff2b0d8c7a0bc607fd6a404fe4a647e41ef77c9e88-merged.mount: Deactivated successfully.
Nov 29 04:02:16 np0005539563 podman[405586]: 2025-11-29 09:02:16.794055417 +0000 UTC m=+1.041372088 container remove 3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:02:16 np0005539563 systemd[1]: libpod-conmon-3818137bbc64cff1252c13c4f7f6d253813d9f1b6014f8e1d654b8b8ff550bc8.scope: Deactivated successfully.
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:02:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:02:17 np0005539563 nova_compute[252253]: 2025-11-29 09:02:17.043 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.629866827 +0000 UTC m=+0.056073749 container create cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:02:17 np0005539563 systemd[1]: Started libpod-conmon-cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b.scope.
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.605309613 +0000 UTC m=+0.031516565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:17 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.727321346 +0000 UTC m=+0.153528328 container init cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.73893858 +0000 UTC m=+0.165145522 container start cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.74225875 +0000 UTC m=+0.168465712 container attach cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 04:02:17 np0005539563 quirky_banach[405782]: 167 167
Nov 29 04:02:17 np0005539563 systemd[1]: libpod-cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b.scope: Deactivated successfully.
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.746374652 +0000 UTC m=+0.172581564 container died cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:02:17 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9daa56afa34515d0679b11f08f2c002cd6e1f54a069469e190bf4e19aae9c9d0-merged.mount: Deactivated successfully.
Nov 29 04:02:17 np0005539563 podman[405766]: 2025-11-29 09:02:17.788957005 +0000 UTC m=+0.215163907 container remove cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:02:17 np0005539563 systemd[1]: libpod-conmon-cae42f978da8820c8b30df0240d43a1f2deaefb85fb59a3eaf7c038840bb486b.scope: Deactivated successfully.
Nov 29 04:02:17 np0005539563 podman[405807]: 2025-11-29 09:02:17.976581565 +0000 UTC m=+0.049509032 container create ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:02:18 np0005539563 systemd[1]: Started libpod-conmon-ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba.scope.
Nov 29 04:02:18 np0005539563 podman[405807]: 2025-11-29 09:02:17.953845709 +0000 UTC m=+0.026773256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:02:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47203044e7230bbb22a7fb620f451c54cda1b35b33b7516b828e113aab243203/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47203044e7230bbb22a7fb620f451c54cda1b35b33b7516b828e113aab243203/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47203044e7230bbb22a7fb620f451c54cda1b35b33b7516b828e113aab243203/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47203044e7230bbb22a7fb620f451c54cda1b35b33b7516b828e113aab243203/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:18 np0005539563 podman[405807]: 2025-11-29 09:02:18.071982558 +0000 UTC m=+0.144910045 container init ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goldstine, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:02:18 np0005539563 podman[405807]: 2025-11-29 09:02:18.081077994 +0000 UTC m=+0.154005501 container start ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:02:18 np0005539563 podman[405807]: 2025-11-29 09:02:18.085559556 +0000 UTC m=+0.158487043 container attach ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:02:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:18.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]: {
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:        "osd_id": 0,
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:        "type": "bluestore"
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]:    }
Nov 29 04:02:19 np0005539563 vigorous_goldstine[405824]: }
Nov 29 04:02:19 np0005539563 systemd[1]: libpod-ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba.scope: Deactivated successfully.
Nov 29 04:02:19 np0005539563 podman[405807]: 2025-11-29 09:02:19.078882971 +0000 UTC m=+1.151810488 container died ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:02:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-47203044e7230bbb22a7fb620f451c54cda1b35b33b7516b828e113aab243203-merged.mount: Deactivated successfully.
Nov 29 04:02:19 np0005539563 podman[405807]: 2025-11-29 09:02:19.147827367 +0000 UTC m=+1.220754844 container remove ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 04:02:19 np0005539563 systemd[1]: libpod-conmon-ec4d8d5f4a0c1827832835ebaf89c8bbba4ba18b503926f91de6b6fb265e22ba.scope: Deactivated successfully.
Nov 29 04:02:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:02:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:02:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6189a12e-c405-41be-8ad9-5c7dc2fc6a3e does not exist
Nov 29 04:02:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev daccfbc2-db4b-470f-b89f-ede920eea7dc does not exist
Nov 29 04:02:19 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cbde6508-ac1d-4456-b9a9-5778e006bf7a does not exist
Nov 29 04:02:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:19.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:20 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:02:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:20.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:20 np0005539563 nova_compute[252253]: 2025-11-29 09:02:20.549 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:21.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:22 np0005539563 nova_compute[252253]: 2025-11-29 09:02:22.046 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:22.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 129 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 116 KiB/s wr, 11 op/s
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.361378652521868e-05 of space, bias 1.0, pg target 0.019084135957565605 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:02:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:24.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:25.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:25 np0005539563 nova_compute[252253]: 2025-11-29 09:02:25.552 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:02:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:26.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:27 np0005539563 nova_compute[252253]: 2025-11-29 09:02:27.048 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:27.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:02:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:28.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:29.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:02:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:30.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:30 np0005539563 nova_compute[252253]: 2025-11-29 09:02:30.554 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:02:32 np0005539563 nova_compute[252253]: 2025-11-29 09:02:32.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:32.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:32 np0005539563 nova_compute[252253]: 2025-11-29 09:02:32.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 29 04:02:33 np0005539563 nova_compute[252253]: 2025-11-29 09:02:33.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:33 np0005539563 nova_compute[252253]: 2025-11-29 09:02:33.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:02:33 np0005539563 nova_compute[252253]: 2025-11-29 09:02:33.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:02:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:34.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:35 np0005539563 podman[405969]: 2025-11-29 09:02:35.529282451 +0000 UTC m=+0.082745982 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 04:02:35 np0005539563 podman[405970]: 2025-11-29 09:02:35.54217512 +0000 UTC m=+0.093075831 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:02:35 np0005539563 nova_compute[252253]: 2025-11-29 09:02:35.556 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:35 np0005539563 podman[405971]: 2025-11-29 09:02:35.566639432 +0000 UTC m=+0.107086080 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 04:02:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 29 04:02:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:36.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:36 np0005539563 nova_compute[252253]: 2025-11-29 09:02:36.698 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:37 np0005539563 nova_compute[252253]: 2025-11-29 09:02:37.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:02:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:38.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:38 np0005539563 nova_compute[252253]: 2025-11-29 09:02:38.680 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:02:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:40.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.558 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.692 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:40 np0005539563 nova_compute[252253]: 2025-11-29 09:02:40.693 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:02:41 np0005539563 ovn_controller[148841]: 2025-11-29T09:02:41Z|00928|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 04:02:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:42.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.726 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.726 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.727 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.727 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:02:42 np0005539563 nova_compute[252253]: 2025-11-29 09:02:42.727 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544152170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.171 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.357 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.359 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4109MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.359 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.359 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:43.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.444 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.445 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.458 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 29 04:02:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419853262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.871 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.877 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.900 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.935 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:02:43 np0005539563 nova_compute[252253]: 2025-11-29 09:02:43.936 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:44.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:45.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:45 np0005539563 nova_compute[252253]: 2025-11-29 09:02:45.561 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 66 op/s
Nov 29 04:02:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:46.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:47 np0005539563 nova_compute[252253]: 2025-11-29 09:02:47.057 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:47.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 29 04:02:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:48.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:48 np0005539563 nova_compute[252253]: 2025-11-29 09:02:48.937 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:49.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Nov 29 04:02:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:50.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:50 np0005539563 nova_compute[252253]: 2025-11-29 09:02:50.597 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 5 op/s
Nov 29 04:02:52 np0005539563 nova_compute[252253]: 2025-11-29 09:02:52.059 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 04:02:52 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 04:02:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 158 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 04:02:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:54.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:55.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 852 B/s wr, 80 op/s
Nov 29 04:02:55 np0005539563 nova_compute[252253]: 2025-11-29 09:02:55.599 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:02:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:56.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:02:57 np0005539563 nova_compute[252253]: 2025-11-29 09:02:57.091 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:57.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 852 B/s wr, 77 op/s
Nov 29 04:02:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:58.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:02:59.160 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:02:59 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:02:59.161 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:02:59 np0005539563 nova_compute[252253]: 2025-11-29 09:02:59.213 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:02:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 94 KiB/s rd, 1.2 KiB/s wr, 149 op/s
Nov 29 04:03:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:00 np0005539563 nova_compute[252253]: 2025-11-29 09:03:00.603 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:01.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.2 KiB/s wr, 208 op/s
Nov 29 04:03:02 np0005539563 nova_compute[252253]: 2025-11-29 09:03:02.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:02 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:03:02.163 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:03:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:03.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 1.2 KiB/s wr, 207 op/s
Nov 29 04:03:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:04.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:03:04.975 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:03:04.975 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:03:04.975 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:05.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 1.2 KiB/s wr, 193 op/s
Nov 29 04:03:05 np0005539563 nova_compute[252253]: 2025-11-29 09:03:05.605 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:06 np0005539563 podman[406190]: 2025-11-29 09:03:06.52282531 +0000 UTC m=+0.073436019 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 04:03:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:06 np0005539563 podman[406192]: 2025-11-29 09:03:06.566805191 +0000 UTC m=+0.109493645 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 04:03:06 np0005539563 podman[406191]: 2025-11-29 09:03:06.570612935 +0000 UTC m=+0.110884764 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:03:07 np0005539563 nova_compute[252253]: 2025-11-29 09:03:07.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:07.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 341 B/s wr, 130 op/s
Nov 29 04:03:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:08.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:09.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 341 B/s wr, 131 op/s
Nov 29 04:03:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:10.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:10 np0005539563 nova_compute[252253]: 2025-11-29 09:03:10.653 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 04:03:12 np0005539563 nova_compute[252253]: 2025-11-29 09:03:12.162 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:12.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:03:13
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:13.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:03:13 np0005539563 nova_compute[252253]: 2025-11-29 09:03:13.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:15.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:03:15 np0005539563 nova_compute[252253]: 2025-11-29 09:03:15.655 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:16.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:03:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:03:17 np0005539563 nova_compute[252253]: 2025-11-29 09:03:17.164 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:17.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:03:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:19.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:03:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:20.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539563 nova_compute[252253]: 2025-11-29 09:03:20.658 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:20 np0005539563 nova_compute[252253]: 2025-11-29 09:03:20.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:20 np0005539563 nova_compute[252253]: 2025-11-29 09:03:20.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:03:20 np0005539563 nova_compute[252253]: 2025-11-29 09:03:20.793 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:03:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:03:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:22 np0005539563 nova_compute[252253]: 2025-11-29 09:03:22.164 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:22.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a1c03bde-c9f6-4e73-ac27-be57465bed07 does not exist
Nov 29 04:03:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b667475a-dabc-4c1d-8cc5-b90e73974ac4 does not exist
Nov 29 04:03:22 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 43a867ed-062b-4ba3-93dc-1ad4ea9e79fd does not exist
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:03:22 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:03:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:23.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.457998081 +0000 UTC m=+0.038161575 container create c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:03:23 np0005539563 systemd[1]: Started libpod-conmon-c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f.scope.
Nov 29 04:03:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.441461373 +0000 UTC m=+0.021624877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.543047153 +0000 UTC m=+0.123210667 container init c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.550923177 +0000 UTC m=+0.131086661 container start c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.555355957 +0000 UTC m=+0.135519441 container attach c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:03:23 np0005539563 keen_pike[406603]: 167 167
Nov 29 04:03:23 np0005539563 systemd[1]: libpod-c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f.scope: Deactivated successfully.
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.558172883 +0000 UTC m=+0.138336367 container died c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d1586d31aeadf98ade4a3d02ad79fc87253c9378740b3acf5ae98b9d2f91eb44-merged.mount: Deactivated successfully.
Nov 29 04:03:23 np0005539563 podman[406586]: 2025-11-29 09:03:23.600810807 +0000 UTC m=+0.180974291 container remove c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:03:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:23 np0005539563 systemd[1]: libpod-conmon-c3661d51579a336ba0479fb4730a1d4ef628a807e3c0806c18051aa28c174e7f.scope: Deactivated successfully.
Nov 29 04:03:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:23 np0005539563 podman[406627]: 2025-11-29 09:03:23.763086241 +0000 UTC m=+0.047737543 container create cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 04:03:23 np0005539563 systemd[1]: Started libpod-conmon-cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea.scope.
Nov 29 04:03:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4623a8ee63595e42f5a838837382dd4f5c1fe67d616965f7c7af33cad348450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4623a8ee63595e42f5a838837382dd4f5c1fe67d616965f7c7af33cad348450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4623a8ee63595e42f5a838837382dd4f5c1fe67d616965f7c7af33cad348450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4623a8ee63595e42f5a838837382dd4f5c1fe67d616965f7c7af33cad348450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:23 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4623a8ee63595e42f5a838837382dd4f5c1fe67d616965f7c7af33cad348450/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:23 np0005539563 podman[406627]: 2025-11-29 09:03:23.743638115 +0000 UTC m=+0.028289457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:23 np0005539563 podman[406627]: 2025-11-29 09:03:23.851237458 +0000 UTC m=+0.135888780 container init cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:23 np0005539563 podman[406627]: 2025-11-29 09:03:23.861778743 +0000 UTC m=+0.146430035 container start cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:03:23 np0005539563 podman[406627]: 2025-11-29 09:03:23.865269858 +0000 UTC m=+0.149921190 container attach cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:03:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:24.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:24 np0005539563 bold_mcnulty[406643]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:03:24 np0005539563 bold_mcnulty[406643]: --> relative data size: 1.0
Nov 29 04:03:24 np0005539563 bold_mcnulty[406643]: --> All data devices are unavailable
Nov 29 04:03:24 np0005539563 systemd[1]: libpod-cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea.scope: Deactivated successfully.
Nov 29 04:03:24 np0005539563 podman[406627]: 2025-11-29 09:03:24.730054513 +0000 UTC m=+1.014705815 container died cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 04:03:24 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b4623a8ee63595e42f5a838837382dd4f5c1fe67d616965f7c7af33cad348450-merged.mount: Deactivated successfully.
Nov 29 04:03:24 np0005539563 podman[406627]: 2025-11-29 09:03:24.793055678 +0000 UTC m=+1.077706980 container remove cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:03:24 np0005539563 systemd[1]: libpod-conmon-cece400f26aee00bd0b73bb4b931b1ae31aac2499d6c2e51162afb42bfb4a6ea.scope: Deactivated successfully.
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.411085052 +0000 UTC m=+0.041803943 container create 6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:03:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:25.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:25 np0005539563 systemd[1]: Started libpod-conmon-6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2.scope.
Nov 29 04:03:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.490263636 +0000 UTC m=+0.120982537 container init 6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.395321035 +0000 UTC m=+0.026039946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.497391519 +0000 UTC m=+0.128110410 container start 6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:03:25 np0005539563 objective_goldberg[406827]: 167 167
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.501191512 +0000 UTC m=+0.131910403 container attach 6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:03:25 np0005539563 systemd[1]: libpod-6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2.scope: Deactivated successfully.
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.502104516 +0000 UTC m=+0.132823407 container died 6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:03:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-af193a8ee2c1a3c6e79e55de75a3f68aa1aea97cd196652892dd5943603b4e61-merged.mount: Deactivated successfully.
Nov 29 04:03:25 np0005539563 podman[406811]: 2025-11-29 09:03:25.549420847 +0000 UTC m=+0.180139738 container remove 6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 04:03:25 np0005539563 systemd[1]: libpod-conmon-6e1b8beeca0bb6fc8f08fa99b27c645dc8d95c943c700a4fd74995b277e922a2.scope: Deactivated successfully.
Nov 29 04:03:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:25 np0005539563 nova_compute[252253]: 2025-11-29 09:03:25.661 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:25 np0005539563 podman[406852]: 2025-11-29 09:03:25.728908887 +0000 UTC m=+0.042803240 container create bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:03:25 np0005539563 systemd[1]: Started libpod-conmon-bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de.scope.
Nov 29 04:03:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:03:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8936e0cac846070ce62c7229b3b019296ce047a7c0698da55792870f6a14e8be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8936e0cac846070ce62c7229b3b019296ce047a7c0698da55792870f6a14e8be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8936e0cac846070ce62c7229b3b019296ce047a7c0698da55792870f6a14e8be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:25 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8936e0cac846070ce62c7229b3b019296ce047a7c0698da55792870f6a14e8be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:25 np0005539563 podman[406852]: 2025-11-29 09:03:25.710803987 +0000 UTC m=+0.024698320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:25 np0005539563 podman[406852]: 2025-11-29 09:03:25.811831013 +0000 UTC m=+0.125725346 container init bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:03:25 np0005539563 podman[406852]: 2025-11-29 09:03:25.817586058 +0000 UTC m=+0.131480361 container start bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:03:25 np0005539563 podman[406852]: 2025-11-29 09:03:25.820888778 +0000 UTC m=+0.134783091 container attach bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]: {
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:    "0": [
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:        {
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "devices": [
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "/dev/loop3"
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            ],
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "lv_name": "ceph_lv0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "lv_size": "7511998464",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "name": "ceph_lv0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "tags": {
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.cluster_name": "ceph",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.crush_device_class": "",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.encrypted": "0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.osd_id": "0",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.type": "block",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:                "ceph.vdo": "0"
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            },
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "type": "block",
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:            "vg_name": "ceph_vg0"
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:        }
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]:    ]
Nov 29 04:03:26 np0005539563 optimistic_chatelet[406868]: }
Nov 29 04:03:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:26 np0005539563 systemd[1]: libpod-bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de.scope: Deactivated successfully.
Nov 29 04:03:26 np0005539563 podman[406852]: 2025-11-29 09:03:26.62834157 +0000 UTC m=+0.942235923 container died bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:03:26 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8936e0cac846070ce62c7229b3b019296ce047a7c0698da55792870f6a14e8be-merged.mount: Deactivated successfully.
Nov 29 04:03:26 np0005539563 podman[406852]: 2025-11-29 09:03:26.699892748 +0000 UTC m=+1.013787061 container remove bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:03:26 np0005539563 systemd[1]: libpod-conmon-bc127dd1f7ddac26f6bd2215f03ff697c96d14e06cad78b974312f94bb7636de.scope: Deactivated successfully.
Nov 29 04:03:27 np0005539563 nova_compute[252253]: 2025-11-29 09:03:27.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.369829127 +0000 UTC m=+0.035827331 container create 7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:03:27 np0005539563 systemd[1]: Started libpod-conmon-7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e.scope.
Nov 29 04:03:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:27.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.355247152 +0000 UTC m=+0.021245386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.460235644 +0000 UTC m=+0.126233878 container init 7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.468484558 +0000 UTC m=+0.134482762 container start 7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.472832176 +0000 UTC m=+0.138830380 container attach 7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:03:27 np0005539563 systemd[1]: libpod-7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e.scope: Deactivated successfully.
Nov 29 04:03:27 np0005539563 determined_golick[407044]: 167 167
Nov 29 04:03:27 np0005539563 conmon[407044]: conmon 7e2e6fb1f8bfa094e555 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e.scope/container/memory.events
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.477940754 +0000 UTC m=+0.143938958 container died 7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:03:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-aeb3f8fe870507a291e15739aabc30ad5005c1af6379f703fc815133a0fe2879-merged.mount: Deactivated successfully.
Nov 29 04:03:27 np0005539563 podman[407028]: 2025-11-29 09:03:27.517111895 +0000 UTC m=+0.183110109 container remove 7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:03:27 np0005539563 systemd[1]: libpod-conmon-7e2e6fb1f8bfa094e555c5194382a97784aacbe3be9989aedf6693c03abb774e.scope: Deactivated successfully.
Nov 29 04:03:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:27 np0005539563 podman[407068]: 2025-11-29 09:03:27.687875239 +0000 UTC m=+0.052290297 container create 388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 04:03:27 np0005539563 systemd[1]: Started libpod-conmon-388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237.scope.
Nov 29 04:03:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:03:27 np0005539563 podman[407068]: 2025-11-29 09:03:27.66652774 +0000 UTC m=+0.030942778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc65fb63911ce343930a5fbe628dc6c2b47862d49427e61a9a6af27a933247a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc65fb63911ce343930a5fbe628dc6c2b47862d49427e61a9a6af27a933247a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc65fb63911ce343930a5fbe628dc6c2b47862d49427e61a9a6af27a933247a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc65fb63911ce343930a5fbe628dc6c2b47862d49427e61a9a6af27a933247a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:27 np0005539563 podman[407068]: 2025-11-29 09:03:27.778841841 +0000 UTC m=+0.143256879 container init 388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hermann, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 04:03:27 np0005539563 podman[407068]: 2025-11-29 09:03:27.787860025 +0000 UTC m=+0.152275043 container start 388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 04:03:27 np0005539563 podman[407068]: 2025-11-29 09:03:27.791143185 +0000 UTC m=+0.155558203 container attach 388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:03:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:28.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]: {
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:        "osd_id": 0,
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:        "type": "bluestore"
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]:    }
Nov 29 04:03:28 np0005539563 agitated_hermann[407085]: }
Nov 29 04:03:28 np0005539563 systemd[1]: libpod-388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237.scope: Deactivated successfully.
Nov 29 04:03:28 np0005539563 podman[407068]: 2025-11-29 09:03:28.638530878 +0000 UTC m=+1.002945896 container died 388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 04:03:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2bc65fb63911ce343930a5fbe628dc6c2b47862d49427e61a9a6af27a933247a-merged.mount: Deactivated successfully.
Nov 29 04:03:28 np0005539563 podman[407068]: 2025-11-29 09:03:28.696267352 +0000 UTC m=+1.060682370 container remove 388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hermann, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 04:03:28 np0005539563 systemd[1]: libpod-conmon-388d1e384ac024d9e7d42f7506e3a2f4031984e0d69c923ccbc933129f90f237.scope: Deactivated successfully.
Nov 29 04:03:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:03:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:03:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ed288323-c1a5-4037-bc0c-358d783bd7a7 does not exist
Nov 29 04:03:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9b6bd933-ab9f-46a5-95db-b53247c06a18 does not exist
Nov 29 04:03:28 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 91072752-f3a2-4b35-9f65-7aae27840879 does not exist
Nov 29 04:03:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:03:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:30 np0005539563 nova_compute[252253]: 2025-11-29 09:03:30.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:32 np0005539563 nova_compute[252253]: 2025-11-29 09:03:32.194 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:32.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:33.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:33 np0005539563 nova_compute[252253]: 2025-11-29 09:03:33.882 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:34.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:35.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:35 np0005539563 nova_compute[252253]: 2025-11-29 09:03:35.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:36.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:37 np0005539563 nova_compute[252253]: 2025-11-29 09:03:37.196 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:37.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:37 np0005539563 podman[407175]: 2025-11-29 09:03:37.521487513 +0000 UTC m=+0.073955893 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:03:37 np0005539563 podman[407176]: 2025-11-29 09:03:37.539839641 +0000 UTC m=+0.086420061 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 04:03:37 np0005539563 podman[407177]: 2025-11-29 09:03:37.55351534 +0000 UTC m=+0.101979162 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:03:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:38 np0005539563 nova_compute[252253]: 2025-11-29 09:03:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:38 np0005539563 nova_compute[252253]: 2025-11-29 09:03:38.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:39.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:40.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:40 np0005539563 nova_compute[252253]: 2025-11-29 09:03:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:40 np0005539563 nova_compute[252253]: 2025-11-29 09:03:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:40 np0005539563 nova_compute[252253]: 2025-11-29 09:03:40.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:03:40 np0005539563 nova_compute[252253]: 2025-11-29 09:03:40.784 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:41.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:42 np0005539563 nova_compute[252253]: 2025-11-29 09:03:42.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:42 np0005539563 nova_compute[252253]: 2025-11-29 09:03:42.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:42 np0005539563 nova_compute[252253]: 2025-11-29 09:03:42.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:03:42 np0005539563 nova_compute[252253]: 2025-11-29 09:03:42.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:03:42 np0005539563 nova_compute[252253]: 2025-11-29 09:03:42.697 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:43.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 819 KiB/s wr, 0 op/s
Nov 29 04:03:43 np0005539563 ovn_controller[148841]: 2025-11-29T09:03:43Z|00929|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 04:03:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:44 np0005539563 nova_compute[252253]: 2025-11-29 09:03:44.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:44 np0005539563 nova_compute[252253]: 2025-11-29 09:03:44.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:44 np0005539563 nova_compute[252253]: 2025-11-29 09:03:44.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:44 np0005539563 nova_compute[252253]: 2025-11-29 09:03:44.703 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:44 np0005539563 nova_compute[252253]: 2025-11-29 09:03:44.704 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:03:44 np0005539563 nova_compute[252253]: 2025-11-29 09:03:44.704 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1274209132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.176 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.328 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.330 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4125MB free_disk=20.978904724121094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.330 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.331 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:03:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:45.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.533 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.533 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.556 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.579 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.579 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.594 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:03:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.628 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.663 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:03:45 np0005539563 nova_compute[252253]: 2025-11-29 09:03:45.839 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394626300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:46 np0005539563 nova_compute[252253]: 2025-11-29 09:03:46.149 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:03:46 np0005539563 nova_compute[252253]: 2025-11-29 09:03:46.154 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:03:46 np0005539563 nova_compute[252253]: 2025-11-29 09:03:46.172 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:03:46 np0005539563 nova_compute[252253]: 2025-11-29 09:03:46.174 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:03:46 np0005539563 nova_compute[252253]: 2025-11-29 09:03:46.174 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:03:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:46.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:47 np0005539563 nova_compute[252253]: 2025-11-29 09:03:47.199 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:47.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.731933) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027732063, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2023, "num_deletes": 251, "total_data_size": 3708420, "memory_usage": 3755688, "flush_reason": "Manual Compaction"}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027763787, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 3632560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77042, "largest_seqno": 79064, "table_properties": {"data_size": 3623349, "index_size": 5768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18720, "raw_average_key_size": 20, "raw_value_size": 3605074, "raw_average_value_size": 3905, "num_data_blocks": 253, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406818, "oldest_key_time": 1764406818, "file_creation_time": 1764407027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 31960 microseconds, and 10819 cpu microseconds.
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.763953) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 3632560 bytes OK
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.764010) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.765678) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.765717) EVENT_LOG_v1 {"time_micros": 1764407027765708, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.765770) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 3700224, prev total WAL file size 3700224, number of live WAL files 2.
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.767480) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(3547KB)], [176(10MB)]
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027767549, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 14462246, "oldest_snapshot_seqno": -1}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10958 keys, 12466146 bytes, temperature: kUnknown
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027874695, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 12466146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12397774, "index_size": 39843, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290148, "raw_average_key_size": 26, "raw_value_size": 12208235, "raw_average_value_size": 1114, "num_data_blocks": 1503, "num_entries": 10958, "num_filter_entries": 10958, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.875007) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 12466146 bytes
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.909985) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.8 rd, 116.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.3 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 11479, records dropped: 521 output_compression: NoCompression
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.910026) EVENT_LOG_v1 {"time_micros": 1764407027910010, "job": 110, "event": "compaction_finished", "compaction_time_micros": 107265, "compaction_time_cpu_micros": 28802, "output_level": 6, "num_output_files": 1, "total_output_size": 12466146, "num_input_records": 11479, "num_output_records": 10958, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027910970, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407027913022, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.767340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.913120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.913125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.913127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.913129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:03:47.913130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:48.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:49.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 04:03:50 np0005539563 nova_compute[252253]: 2025-11-29 09:03:50.175 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:03:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:50.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:50 np0005539563 nova_compute[252253]: 2025-11-29 09:03:50.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:51.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 29 04:03:52 np0005539563 nova_compute[252253]: 2025-11-29 09:03:52.201 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:52.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:53.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 29 04:03:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:54.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:55.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1010 KiB/s wr, 81 op/s
Nov 29 04:03:55 np0005539563 nova_compute[252253]: 2025-11-29 09:03:55.877 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:57 np0005539563 nova_compute[252253]: 2025-11-29 09:03:57.202 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:03:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:57.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 55 op/s
Nov 29 04:03:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:03:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:03:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:03:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 04:04:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:00 np0005539563 nova_compute[252253]: 2025-11-29 09:04:00.879 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:01.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 04:04:02 np0005539563 nova_compute[252253]: 2025-11-29 09:04:02.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:02.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:03.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Nov 29 04:04:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:04:04.976 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:04:04.977 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:04:04.977 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:05.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 432 KiB/s wr, 63 op/s
Nov 29 04:04:05 np0005539563 nova_compute[252253]: 2025-11-29 09:04:05.881 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:07 np0005539563 nova_compute[252253]: 2025-11-29 09:04:07.207 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:07.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 432 KiB/s wr, 24 op/s
Nov 29 04:04:08 np0005539563 podman[407393]: 2025-11-29 09:04:08.497513609 +0000 UTC m=+0.050384095 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:04:08 np0005539563 podman[407394]: 2025-11-29 09:04:08.505560917 +0000 UTC m=+0.056579183 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:04:08 np0005539563 podman[407395]: 2025-11-29 09:04:08.529798584 +0000 UTC m=+0.078139337 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 04:04:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:09.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 783 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Nov 29 04:04:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:10 np0005539563 nova_compute[252253]: 2025-11-29 09:04:10.883 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:11.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 199 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 04:04:12 np0005539563 nova_compute[252253]: 2025-11-29 09:04:12.209 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:04:13
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data']
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:13.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 04:04:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 04:04:15 np0005539563 nova_compute[252253]: 2025-11-29 09:04:15.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:15 np0005539563 nova_compute[252253]: 2025-11-29 09:04:15.887 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:16.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:04:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:04:17 np0005539563 nova_compute[252253]: 2025-11-29 09:04:17.212 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:17.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Nov 29 04:04:17 np0005539563 nova_compute[252253]: 2025-11-29 09:04:17.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:19.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Nov 29 04:04:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:20 np0005539563 nova_compute[252253]: 2025-11-29 09:04:20.891 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 180 KiB/s rd, 637 KiB/s wr, 33 op/s
Nov 29 04:04:22 np0005539563 nova_compute[252253]: 2025-11-29 09:04:22.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:22.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:23.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 39 KiB/s wr, 5 op/s
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002170865903592034 of space, bias 1.0, pg target 0.6512597710776102 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:04:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:24.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Nov 29 04:04:25 np0005539563 nova_compute[252253]: 2025-11-29 09:04:25.936 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:27 np0005539563 nova_compute[252253]: 2025-11-29 09:04:27.216 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:27.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 04:04:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:28.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:29.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5178436d-6daf-449a-a6f3-848950de8cb6 does not exist
Nov 29 04:04:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 656cca18-62e9-472a-9a2f-dd055b6e3034 does not exist
Nov 29 04:04:30 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 708a3e5b-2e69-428c-975b-331d6adeb735 does not exist
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:04:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:30.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:30 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:04:30 np0005539563 podman[407790]: 2025-11-29 09:04:30.93548672 +0000 UTC m=+0.055774361 container create f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 04:04:30 np0005539563 nova_compute[252253]: 2025-11-29 09:04:30.938 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:30 np0005539563 systemd[1]: Started libpod-conmon-f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26.scope.
Nov 29 04:04:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:04:30 np0005539563 podman[407790]: 2025-11-29 09:04:30.90336384 +0000 UTC m=+0.023651601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:04:31 np0005539563 podman[407790]: 2025-11-29 09:04:31.015963099 +0000 UTC m=+0.136250790 container init f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 04:04:31 np0005539563 podman[407790]: 2025-11-29 09:04:31.023175284 +0000 UTC m=+0.143462935 container start f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sinoussi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:04:31 np0005539563 gracious_sinoussi[407806]: 167 167
Nov 29 04:04:31 np0005539563 systemd[1]: libpod-f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26.scope: Deactivated successfully.
Nov 29 04:04:31 np0005539563 podman[407790]: 2025-11-29 09:04:31.030875833 +0000 UTC m=+0.151163484 container attach f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 04:04:31 np0005539563 podman[407790]: 2025-11-29 09:04:31.031784227 +0000 UTC m=+0.152071868 container died f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sinoussi, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:04:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ce00f324405425b6ac897ea3e939832e3327d6a99181c4fe0a0e6fe28395c29c-merged.mount: Deactivated successfully.
Nov 29 04:04:31 np0005539563 podman[407790]: 2025-11-29 09:04:31.074155894 +0000 UTC m=+0.194443555 container remove f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_sinoussi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 04:04:31 np0005539563 systemd[1]: libpod-conmon-f821e8a841b3a74aeec173e3687a3e9aa6d535e81a675c071e5b912927b91b26.scope: Deactivated successfully.
Nov 29 04:04:31 np0005539563 podman[407834]: 2025-11-29 09:04:31.257951891 +0000 UTC m=+0.051180837 container create 5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:04:31 np0005539563 systemd[1]: Started libpod-conmon-5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f.scope.
Nov 29 04:04:31 np0005539563 podman[407834]: 2025-11-29 09:04:31.235896314 +0000 UTC m=+0.029125310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:04:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:04:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848ba2e0e2bbc83773dbfcb6ea63017a937cf5264e9a1a6e23e9896e16afe25a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848ba2e0e2bbc83773dbfcb6ea63017a937cf5264e9a1a6e23e9896e16afe25a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848ba2e0e2bbc83773dbfcb6ea63017a937cf5264e9a1a6e23e9896e16afe25a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848ba2e0e2bbc83773dbfcb6ea63017a937cf5264e9a1a6e23e9896e16afe25a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848ba2e0e2bbc83773dbfcb6ea63017a937cf5264e9a1a6e23e9896e16afe25a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:31 np0005539563 podman[407834]: 2025-11-29 09:04:31.358237796 +0000 UTC m=+0.151466742 container init 5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:04:31 np0005539563 podman[407834]: 2025-11-29 09:04:31.365588545 +0000 UTC m=+0.158817491 container start 5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:04:31 np0005539563 podman[407834]: 2025-11-29 09:04:31.368488884 +0000 UTC m=+0.161717860 container attach 5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 04:04:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Nov 29 04:04:32 np0005539563 nova_compute[252253]: 2025-11-29 09:04:32.219 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:32 np0005539563 stupefied_wilson[407852]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:04:32 np0005539563 stupefied_wilson[407852]: --> relative data size: 1.0
Nov 29 04:04:32 np0005539563 stupefied_wilson[407852]: --> All data devices are unavailable
Nov 29 04:04:32 np0005539563 systemd[1]: libpod-5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f.scope: Deactivated successfully.
Nov 29 04:04:32 np0005539563 podman[407834]: 2025-11-29 09:04:32.288252658 +0000 UTC m=+1.081481604 container died 5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:04:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-848ba2e0e2bbc83773dbfcb6ea63017a937cf5264e9a1a6e23e9896e16afe25a-merged.mount: Deactivated successfully.
Nov 29 04:04:32 np0005539563 podman[407834]: 2025-11-29 09:04:32.350724849 +0000 UTC m=+1.143953815 container remove 5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:04:32 np0005539563 systemd[1]: libpod-conmon-5b0709df2dcde7ecf63422f8ee237a96776f94058d09c921540e123e80d4694f.scope: Deactivated successfully.
Nov 29 04:04:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:33.028176122 +0000 UTC m=+0.061378683 container create 3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hermann, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:04:33 np0005539563 systemd[1]: Started libpod-conmon-3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209.scope.
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:32.999894796 +0000 UTC m=+0.033097437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:04:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:33.11308808 +0000 UTC m=+0.146290661 container init 3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:33.12559927 +0000 UTC m=+0.158801841 container start 3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:33.130046199 +0000 UTC m=+0.163248780 container attach 3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hermann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:04:33 np0005539563 pedantic_hermann[408038]: 167 167
Nov 29 04:04:33 np0005539563 systemd[1]: libpod-3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209.scope: Deactivated successfully.
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:33.133271387 +0000 UTC m=+0.166473958 container died 3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:04:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b622b3c5526b8ebb423d96105103da86e1ea01dc22209c28591237540dd49814-merged.mount: Deactivated successfully.
Nov 29 04:04:33 np0005539563 podman[408022]: 2025-11-29 09:04:33.187377262 +0000 UTC m=+0.220579863 container remove 3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:04:33 np0005539563 systemd[1]: libpod-conmon-3abd89cc9624319dacac2cd1e0571f19ff093fbd2018c8d51ae62efaddb65209.scope: Deactivated successfully.
Nov 29 04:04:33 np0005539563 podman[408065]: 2025-11-29 09:04:33.399154246 +0000 UTC m=+0.049407058 container create defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:04:33 np0005539563 systemd[1]: Started libpod-conmon-defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72.scope.
Nov 29 04:04:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:04:33 np0005539563 podman[408065]: 2025-11-29 09:04:33.375124366 +0000 UTC m=+0.025377178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:04:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f498ecc8ebae74563b0af23d5a8155fef35ec3779e462be4a66e8275b5217f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f498ecc8ebae74563b0af23d5a8155fef35ec3779e462be4a66e8275b5217f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f498ecc8ebae74563b0af23d5a8155fef35ec3779e462be4a66e8275b5217f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:33 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f498ecc8ebae74563b0af23d5a8155fef35ec3779e462be4a66e8275b5217f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:33 np0005539563 podman[408065]: 2025-11-29 09:04:33.48683346 +0000 UTC m=+0.137086242 container init defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_einstein, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:04:33 np0005539563 podman[408065]: 2025-11-29 09:04:33.502009601 +0000 UTC m=+0.152262373 container start defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:04:33 np0005539563 podman[408065]: 2025-11-29 09:04:33.505863885 +0000 UTC m=+0.156116647 container attach defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_einstein, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:04:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:33.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 04:04:34 np0005539563 objective_einstein[408081]: {
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:    "0": [
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:        {
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "devices": [
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "/dev/loop3"
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            ],
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "lv_name": "ceph_lv0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "lv_size": "7511998464",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "name": "ceph_lv0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "tags": {
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.cluster_name": "ceph",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.crush_device_class": "",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.encrypted": "0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.osd_id": "0",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.type": "block",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:                "ceph.vdo": "0"
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            },
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "type": "block",
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:            "vg_name": "ceph_vg0"
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:        }
Nov 29 04:04:34 np0005539563 objective_einstein[408081]:    ]
Nov 29 04:04:34 np0005539563 objective_einstein[408081]: }
Nov 29 04:04:34 np0005539563 systemd[1]: libpod-defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72.scope: Deactivated successfully.
Nov 29 04:04:34 np0005539563 podman[408065]: 2025-11-29 09:04:34.330599796 +0000 UTC m=+0.980852578 container died defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:04:34 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5f498ecc8ebae74563b0af23d5a8155fef35ec3779e462be4a66e8275b5217f3-merged.mount: Deactivated successfully.
Nov 29 04:04:34 np0005539563 podman[408065]: 2025-11-29 09:04:34.383149899 +0000 UTC m=+1.033402661 container remove defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 04:04:34 np0005539563 systemd[1]: libpod-conmon-defc3215db04165ac428eac47429cd758972e094641e1b63e0b6648a859d7b72.scope: Deactivated successfully.
Nov 29 04:04:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:34 np0005539563 nova_compute[252253]: 2025-11-29 09:04:34.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.031938205 +0000 UTC m=+0.038890364 container create 1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:04:35 np0005539563 systemd[1]: Started libpod-conmon-1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2.scope.
Nov 29 04:04:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.0147608 +0000 UTC m=+0.021712989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.115084057 +0000 UTC m=+0.122036226 container init 1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.124582574 +0000 UTC m=+0.131534733 container start 1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.128054078 +0000 UTC m=+0.135006257 container attach 1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 04:04:35 np0005539563 hardcore_tu[408255]: 167 167
Nov 29 04:04:35 np0005539563 systemd[1]: libpod-1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2.scope: Deactivated successfully.
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.13296146 +0000 UTC m=+0.139913629 container died 1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:04:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bfab93b3cee363b05c275b1d1b52eb5c93abf1c7dee39f4ad33212dec3ab8f35-merged.mount: Deactivated successfully.
Nov 29 04:04:35 np0005539563 podman[408239]: 2025-11-29 09:04:35.169785417 +0000 UTC m=+0.176737586 container remove 1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:04:35 np0005539563 systemd[1]: libpod-conmon-1a43af7e02d4c031b344fe0335221bbab2a2c8eb05398b4a86b1645b27de5aa2.scope: Deactivated successfully.
Nov 29 04:04:35 np0005539563 podman[408279]: 2025-11-29 09:04:35.343421589 +0000 UTC m=+0.041492514 container create e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hopper, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:04:35 np0005539563 systemd[1]: Started libpod-conmon-e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c.scope.
Nov 29 04:04:35 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:04:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b67e4e206f482620b5998998b084ccfcf8bc2395adeae87b961065b81987e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b67e4e206f482620b5998998b084ccfcf8bc2395adeae87b961065b81987e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b67e4e206f482620b5998998b084ccfcf8bc2395adeae87b961065b81987e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:35 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b67e4e206f482620b5998998b084ccfcf8bc2395adeae87b961065b81987e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:04:35 np0005539563 podman[408279]: 2025-11-29 09:04:35.326296115 +0000 UTC m=+0.024367060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:04:35 np0005539563 podman[408279]: 2025-11-29 09:04:35.426086727 +0000 UTC m=+0.124157732 container init e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 04:04:35 np0005539563 podman[408279]: 2025-11-29 09:04:35.433456727 +0000 UTC m=+0.131527692 container start e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:04:35 np0005539563 podman[408279]: 2025-11-29 09:04:35.43726067 +0000 UTC m=+0.135331595 container attach e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 04:04:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:35.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 6 op/s
Nov 29 04:04:35 np0005539563 nova_compute[252253]: 2025-11-29 09:04:35.940 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:36 np0005539563 nova_compute[252253]: 2025-11-29 09:04:36.278 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:04:36.280 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:04:36 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:04:36.281 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]: {
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:        "osd_id": 0,
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:        "type": "bluestore"
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]:    }
Nov 29 04:04:36 np0005539563 optimistic_hopper[408296]: }
Nov 29 04:04:36 np0005539563 systemd[1]: libpod-e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c.scope: Deactivated successfully.
Nov 29 04:04:36 np0005539563 conmon[408296]: conmon e88deda047814212d2e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c.scope/container/memory.events
Nov 29 04:04:36 np0005539563 podman[408279]: 2025-11-29 09:04:36.322981361 +0000 UTC m=+1.021052297 container died e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hopper, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:04:36 np0005539563 systemd[1]: var-lib-containers-storage-overlay-24b67e4e206f482620b5998998b084ccfcf8bc2395adeae87b961065b81987e3-merged.mount: Deactivated successfully.
Nov 29 04:04:36 np0005539563 podman[408279]: 2025-11-29 09:04:36.382303987 +0000 UTC m=+1.080374902 container remove e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hopper, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:04:36 np0005539563 systemd[1]: libpod-conmon-e88deda047814212d2e6a6d552dc5b0be2aaf2129d4e2b7c137b546324a38e7c.scope: Deactivated successfully.
Nov 29 04:04:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:04:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:36.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:04:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 643365a3-0fed-4cb3-8a7d-5118c82cbf22 does not exist
Nov 29 04:04:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0b052ba1-abfb-4956-b922-37141057f34a does not exist
Nov 29 04:04:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 973d55f1-06a8-43c4-b903-0bbeb84ca286 does not exist
Nov 29 04:04:37 np0005539563 nova_compute[252253]: 2025-11-29 09:04:37.221 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:37.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 1023 B/s wr, 6 op/s
Nov 29 04:04:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:04:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:38.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:38 np0005539563 nova_compute[252253]: 2025-11-29 09:04:38.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:39.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:39 np0005539563 podman[408383]: 2025-11-29 09:04:39.547613082 +0000 UTC m=+0.083776139 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 04:04:39 np0005539563 podman[408384]: 2025-11-29 09:04:39.548241549 +0000 UTC m=+0.089030491 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:04:39 np0005539563 podman[408385]: 2025-11-29 09:04:39.573589125 +0000 UTC m=+0.116581707 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 04:04:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 KiB/s wr, 7 op/s
Nov 29 04:04:40 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:04:40.283 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:04:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:40.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:40 np0005539563 nova_compute[252253]: 2025-11-29 09:04:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:40 np0005539563 nova_compute[252253]: 2025-11-29 09:04:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:40 np0005539563 nova_compute[252253]: 2025-11-29 09:04:40.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:40 np0005539563 nova_compute[252253]: 2025-11-29 09:04:40.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:04:40 np0005539563 nova_compute[252253]: 2025-11-29 09:04:40.944 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:41.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 04:04:42 np0005539563 nova_compute[252253]: 2025-11-29 09:04:42.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:42.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:42 np0005539563 nova_compute[252253]: 2025-11-29 09:04:42.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:42 np0005539563 nova_compute[252253]: 2025-11-29 09:04:42.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:04:42 np0005539563 nova_compute[252253]: 2025-11-29 09:04:42.681 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:43 np0005539563 nova_compute[252253]: 2025-11-29 09:04:43.547 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:04:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 597 B/s wr, 16 op/s
Nov 29 04:04:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 29 04:04:45 np0005539563 nova_compute[252253]: 2025-11-29 09:04:45.946 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:46 np0005539563 nova_compute[252253]: 2025-11-29 09:04:46.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:46 np0005539563 nova_compute[252253]: 2025-11-29 09:04:46.862 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:46 np0005539563 nova_compute[252253]: 2025-11-29 09:04:46.862 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:46 np0005539563 nova_compute[252253]: 2025-11-29 09:04:46.863 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:46 np0005539563 nova_compute[252253]: 2025-11-29 09:04:46.863 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:04:46 np0005539563 nova_compute[252253]: 2025-11-29 09:04:46.863 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.223 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540003379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.364 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.556 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.557 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4105MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.557 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.558 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:47.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.872 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.872 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:04:47 np0005539563 nova_compute[252253]: 2025-11-29 09:04:47.950 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316468941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:48 np0005539563 nova_compute[252253]: 2025-11-29 09:04:48.425 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:48 np0005539563 nova_compute[252253]: 2025-11-29 09:04:48.431 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:04:48 np0005539563 nova_compute[252253]: 2025-11-29 09:04:48.484 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:04:48 np0005539563 nova_compute[252253]: 2025-11-29 09:04:48.485 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:04:48 np0005539563 nova_compute[252253]: 2025-11-29 09:04:48.486 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:49.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 29 04:04:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:50 np0005539563 nova_compute[252253]: 2025-11-29 09:04:50.948 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:51.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 938 B/s wr, 23 op/s
Nov 29 04:04:52 np0005539563 nova_compute[252253]: 2025-11-29 09:04:52.225 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:52 np0005539563 nova_compute[252253]: 2025-11-29 09:04:52.486 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:04:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:52.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:04:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:53.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 938 B/s wr, 16 op/s
Nov 29 04:04:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:54.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:55.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 597 B/s wr, 13 op/s
Nov 29 04:04:55 np0005539563 nova_compute[252253]: 2025-11-29 09:04:55.976 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:57 np0005539563 nova_compute[252253]: 2025-11-29 09:04:57.226 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 04:04:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:04:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:59.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 04:05:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:00.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:00 np0005539563 nova_compute[252253]: 2025-11-29 09:05:00.979 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:01.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:02 np0005539563 nova_compute[252253]: 2025-11-29 09:05:02.228 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:03.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:04.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:04.978 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:04.979 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:04.979 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:05 np0005539563 nova_compute[252253]: 2025-11-29 09:05:05.982 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:06.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:07 np0005539563 nova_compute[252253]: 2025-11-29 09:05:07.229 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:07.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:08.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:10 np0005539563 podman[408604]: 2025-11-29 09:05:10.510638914 +0000 UTC m=+0.059673506 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:05:10 np0005539563 podman[408603]: 2025-11-29 09:05:10.527640374 +0000 UTC m=+0.071011793 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 04:05:10 np0005539563 podman[408605]: 2025-11-29 09:05:10.536671209 +0000 UTC m=+0.080427918 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 04:05:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:10.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:11 np0005539563 nova_compute[252253]: 2025-11-29 09:05:11.024 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:11.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:12 np0005539563 nova_compute[252253]: 2025-11-29 09:05:12.230 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:05:13
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes']
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:13.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:15 np0005539563 nova_compute[252253]: 2025-11-29 09:05:15.674 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:16 np0005539563 nova_compute[252253]: 2025-11-29 09:05:16.078 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:05:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:05:17 np0005539563 nova_compute[252253]: 2025-11-29 09:05:17.233 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:17.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:19.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:21 np0005539563 nova_compute[252253]: 2025-11-29 09:05:21.081 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:21.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:22 np0005539563 nova_compute[252253]: 2025-11-29 09:05:22.235 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:22.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:05:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:24.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:25.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:26 np0005539563 nova_compute[252253]: 2025-11-29 09:05:26.082 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:26.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:27 np0005539563 nova_compute[252253]: 2025-11-29 09:05:27.237 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:28.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:31 np0005539563 nova_compute[252253]: 2025-11-29 09:05:31.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:31.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:32 np0005539563 nova_compute[252253]: 2025-11-29 09:05:32.240 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:34.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:35.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:36 np0005539563 nova_compute[252253]: 2025-11-29 09:05:36.159 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:36 np0005539563 nova_compute[252253]: 2025-11-29 09:05:36.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:36.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:37 np0005539563 nova_compute[252253]: 2025-11-29 09:05:37.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:37.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:05:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:38.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:39.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:39 np0005539563 nova_compute[252253]: 2025-11-29 09:05:39.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.197 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.197 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.216 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:05:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:05:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:05:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:05:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.555 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.555 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.562 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.562 252257 INFO nova.compute.claims [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 04:05:40 np0005539563 nova_compute[252253]: 2025-11-29 09:05:40.674 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554130387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.126 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.135 252257 DEBUG nova.compute.provider_tree [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.156 252257 DEBUG nova.scheduler.client.report [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.181 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.182 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.202 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:41 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6ebabdb0-5ac7-458e-98f0-281dd3849b23 does not exist
Nov 29 04:05:41 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ad039050-32d9-40ca-bc5d-8043305da1c4 does not exist
Nov 29 04:05:41 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 74a8b012-2b35-4c17-b64f-5de1a2039f4a does not exist
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:05:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.243 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.244 252257 DEBUG nova.network.neutron [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.267 252257 INFO nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.286 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:05:41 np0005539563 podman[409026]: 2025-11-29 09:05:41.368564024 +0000 UTC m=+0.069257466 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 29 04:05:41 np0005539563 podman[409027]: 2025-11-29 09:05:41.371854093 +0000 UTC m=+0.070871139 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.386 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.387 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.387 252257 INFO nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Creating image(s)#033[00m
Nov 29 04:05:41 np0005539563 podman[409028]: 2025-11-29 09:05:41.392694318 +0000 UTC m=+0.087719817 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.414 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.444 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.469 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.473 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.545 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.546 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.547 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.547 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "9b6c4a62e987670abc3ce4c57f88bd403b2af8bf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.578 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.582 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:41.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.789592884 +0000 UTC m=+0.044107455 container create 628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:41 np0005539563 systemd[1]: Started libpod-conmon-628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5.scope.
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.768690658 +0000 UTC m=+0.023205249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.874 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9b6c4a62e987670abc3ce4c57f88bd403b2af8bf 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.89467844 +0000 UTC m=+0.149193031 container init 628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_tesla, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.903046086 +0000 UTC m=+0.157560647 container start 628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_tesla, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.906284984 +0000 UTC m=+0.160799565 container attach 628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 04:05:41 np0005539563 great_tesla[409314]: 167 167
Nov 29 04:05:41 np0005539563 systemd[1]: libpod-628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5.scope: Deactivated successfully.
Nov 29 04:05:41 np0005539563 conmon[409314]: conmon 628ab6167c1944d094ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5.scope/container/memory.events
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.911114035 +0000 UTC m=+0.165628606 container died 628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ba6bf278b202608c836fb56d29356c14ff7dc8f11804959f46d42eab87696781-merged.mount: Deactivated successfully.
Nov 29 04:05:41 np0005539563 podman[409298]: 2025-11-29 09:05:41.956069172 +0000 UTC m=+0.210583753 container remove 628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:05:41 np0005539563 nova_compute[252253]: 2025-11-29 09:05:41.958 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] resizing rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 04:05:41 np0005539563 systemd[1]: libpod-conmon-628ab6167c1944d094aebb3a85696710c0db42cbc9c7931a261e980a5e00b9b5.scope: Deactivated successfully.
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.077 252257 DEBUG nova.objects.instance [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f8e25c4-545a-420f-bd34-aef4da2b27d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.115 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.116 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Ensure instance console log exists: /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.117 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.117 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.118 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:42 np0005539563 podman[409408]: 2025-11-29 09:05:42.154600107 +0000 UTC m=+0.080934232 container create 432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 04:05:42 np0005539563 podman[409408]: 2025-11-29 09:05:42.096778041 +0000 UTC m=+0.023112186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:42 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:42 np0005539563 systemd[1]: Started libpod-conmon-432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546.scope.
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.327 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dc1855cd7f0d340483626eac6956e2a3a2aef34e1dd161f00cb4bcd9e192cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dc1855cd7f0d340483626eac6956e2a3a2aef34e1dd161f00cb4bcd9e192cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dc1855cd7f0d340483626eac6956e2a3a2aef34e1dd161f00cb4bcd9e192cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dc1855cd7f0d340483626eac6956e2a3a2aef34e1dd161f00cb4bcd9e192cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dc1855cd7f0d340483626eac6956e2a3a2aef34e1dd161f00cb4bcd9e192cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:42 np0005539563 podman[409408]: 2025-11-29 09:05:42.360308057 +0000 UTC m=+0.286642262 container init 432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:42 np0005539563 podman[409408]: 2025-11-29 09:05:42.366310499 +0000 UTC m=+0.292644634 container start 432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 04:05:42 np0005539563 podman[409408]: 2025-11-29 09:05:42.370852783 +0000 UTC m=+0.297186938 container attach 432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 04:05:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:42 np0005539563 nova_compute[252253]: 2025-11-29 09:05:42.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:42.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.063 252257 DEBUG nova.network.neutron [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Successfully created port: 12ddaf65-2dcb-4830-9742-863386b8c30f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:43 np0005539563 lucid_poincare[409425]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:05:43 np0005539563 lucid_poincare[409425]: --> relative data size: 1.0
Nov 29 04:05:43 np0005539563 lucid_poincare[409425]: --> All data devices are unavailable
Nov 29 04:05:43 np0005539563 systemd[1]: libpod-432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546.scope: Deactivated successfully.
Nov 29 04:05:43 np0005539563 podman[409408]: 2025-11-29 09:05:43.306672981 +0000 UTC m=+1.233007106 container died 432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:05:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-19dc1855cd7f0d340483626eac6956e2a3a2aef34e1dd161f00cb4bcd9e192cf-merged.mount: Deactivated successfully.
Nov 29 04:05:43 np0005539563 podman[409408]: 2025-11-29 09:05:43.376554233 +0000 UTC m=+1.302888368 container remove 432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:05:43 np0005539563 systemd[1]: libpod-conmon-432d4a0adc0a84be0e7451545ff4172477394ffc2fa4354a89b20b7ed1cc0546.scope: Deactivated successfully.
Nov 29 04:05:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:43.472 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:05:43 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:43.473 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.525 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 94 KiB/s wr, 0 op/s
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.697 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.698 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.712 252257 DEBUG nova.network.neutron [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Successfully updated port: 12ddaf65-2dcb-4830-9742-863386b8c30f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.728 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "refresh_cache-6f8e25c4-545a-420f-bd34-aef4da2b27d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.729 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquired lock "refresh_cache-6f8e25c4-545a-420f-bd34-aef4da2b27d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.729 252257 DEBUG nova.network.neutron [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.823 252257 DEBUG nova.compute.manager [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-changed-12ddaf65-2dcb-4830-9742-863386b8c30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.823 252257 DEBUG nova.compute.manager [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Refreshing instance network info cache due to event network-changed-12ddaf65-2dcb-4830-9742-863386b8c30f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:05:43 np0005539563 nova_compute[252253]: 2025-11-29 09:05:43.823 252257 DEBUG oslo_concurrency.lockutils [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-6f8e25c4-545a-420f-bd34-aef4da2b27d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.081343286 +0000 UTC m=+0.059015730 container create 3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:05:44 np0005539563 systemd[1]: Started libpod-conmon-3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a.scope.
Nov 29 04:05:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.062762293 +0000 UTC m=+0.040434787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.158395922 +0000 UTC m=+0.136068416 container init 3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.164832906 +0000 UTC m=+0.142505360 container start 3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.168586078 +0000 UTC m=+0.146258532 container attach 3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:44 np0005539563 tender_booth[409661]: 167 167
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.170502809 +0000 UTC m=+0.148175273 container died 3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 04:05:44 np0005539563 systemd[1]: libpod-3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a.scope: Deactivated successfully.
Nov 29 04:05:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-50454bdf806b482767dee3b81da0647337b3f22a44dea3a129174eb3e7a5f1c2-merged.mount: Deactivated successfully.
Nov 29 04:05:44 np0005539563 podman[409645]: 2025-11-29 09:05:44.210691538 +0000 UTC m=+0.188363982 container remove 3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_booth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:05:44 np0005539563 systemd[1]: libpod-conmon-3d2272068812010106e8ece1bb9ddf1581878c6337cffeca02e72aed5a72c23a.scope: Deactivated successfully.
Nov 29 04:05:44 np0005539563 nova_compute[252253]: 2025-11-29 09:05:44.307 252257 DEBUG nova.network.neutron [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:05:44 np0005539563 podman[409686]: 2025-11-29 09:05:44.410770205 +0000 UTC m=+0.048242687 container create 82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:44 np0005539563 systemd[1]: Started libpod-conmon-82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0.scope.
Nov 29 04:05:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18ca8e2745b413aab4dd9630af12bf65450ab272227b269b6f627f8819125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18ca8e2745b413aab4dd9630af12bf65450ab272227b269b6f627f8819125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18ca8e2745b413aab4dd9630af12bf65450ab272227b269b6f627f8819125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:44 np0005539563 podman[409686]: 2025-11-29 09:05:44.391679178 +0000 UTC m=+0.029151640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:44 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea18ca8e2745b413aab4dd9630af12bf65450ab272227b269b6f627f8819125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:44 np0005539563 podman[409686]: 2025-11-29 09:05:44.500113734 +0000 UTC m=+0.137586186 container init 82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:05:44 np0005539563 podman[409686]: 2025-11-29 09:05:44.507883775 +0000 UTC m=+0.145356237 container start 82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 04:05:44 np0005539563 podman[409686]: 2025-11-29 09:05:44.511445471 +0000 UTC m=+0.148917923 container attach 82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 04:05:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:44.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.095 252257 DEBUG nova.network.neutron [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Updating instance_info_cache with network_info: [{"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.144 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Releasing lock "refresh_cache-6f8e25c4-545a-420f-bd34-aef4da2b27d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.145 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Instance network_info: |[{"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.145 252257 DEBUG oslo_concurrency.lockutils [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-6f8e25c4-545a-420f-bd34-aef4da2b27d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.146 252257 DEBUG nova.network.neutron [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Refreshing network info cache for port 12ddaf65-2dcb-4830-9742-863386b8c30f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.150 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Start _get_guest_xml network_info=[{"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'image_id': '1be11678-cfa4-4dee-b54c-6c7e547e5a6a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.158 252257 WARNING nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.164 252257 DEBUG nova.virt.libvirt.host [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.165 252257 DEBUG nova.virt.libvirt.host [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.168 252257 DEBUG nova.virt.libvirt.host [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.169 252257 DEBUG nova.virt.libvirt.host [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.170 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.171 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:39:56Z,direct_url=<?>,disk_format='qcow2',id=1be11678-cfa4-4dee-b54c-6c7e547e5a6a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='532b69b8d9eb42e8a1aed36b5ddb038a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:40:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.171 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.171 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.172 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.172 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.172 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.173 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.173 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.173 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.174 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.174 252257 DEBUG nova.virt.hardware [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.178 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]: {
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:    "0": [
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:        {
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "devices": [
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "/dev/loop3"
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            ],
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "lv_name": "ceph_lv0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "lv_size": "7511998464",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "name": "ceph_lv0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "tags": {
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.cluster_name": "ceph",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.crush_device_class": "",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.encrypted": "0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.osd_id": "0",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.type": "block",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:                "ceph.vdo": "0"
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            },
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "type": "block",
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:            "vg_name": "ceph_vg0"
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:        }
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]:    ]
Nov 29 04:05:45 np0005539563 recursing_chebyshev[409703]: }
Nov 29 04:05:45 np0005539563 systemd[1]: libpod-82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0.scope: Deactivated successfully.
Nov 29 04:05:45 np0005539563 podman[409686]: 2025-11-29 09:05:45.347724035 +0000 UTC m=+0.985196477 container died 82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:45 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3ea18ca8e2745b413aab4dd9630af12bf65450ab272227b269b6f627f8819125-merged.mount: Deactivated successfully.
Nov 29 04:05:45 np0005539563 podman[409686]: 2025-11-29 09:05:45.404534743 +0000 UTC m=+1.042007185 container remove 82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 04:05:45 np0005539563 systemd[1]: libpod-conmon-82e669266cd71ebed495418a5f1f11c55bea0607ba9092bb1c6a8556c59238d0.scope: Deactivated successfully.
Nov 29 04:05:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:45.474 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.617 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:45.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.666 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:45 np0005539563 nova_compute[252253]: 2025-11-29 09:05:45.671 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 3.2 MiB/s wr, 38 op/s
Nov 29 04:05:45 np0005539563 podman[409928]: 2025-11-29 09:05:45.99058146 +0000 UTC m=+0.038530924 container create 58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lederberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:46 np0005539563 systemd[1]: Started libpod-conmon-58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900.scope.
Nov 29 04:05:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:46 np0005539563 podman[409928]: 2025-11-29 09:05:46.054887251 +0000 UTC m=+0.102836735 container init 58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:05:46 np0005539563 podman[409928]: 2025-11-29 09:05:46.061024537 +0000 UTC m=+0.108973991 container start 58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:05:46 np0005539563 podman[409928]: 2025-11-29 09:05:46.064344507 +0000 UTC m=+0.112294161 container attach 58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:05:46 np0005539563 friendly_lederberg[409944]: 167 167
Nov 29 04:05:46 np0005539563 systemd[1]: libpod-58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900.scope: Deactivated successfully.
Nov 29 04:05:46 np0005539563 conmon[409944]: conmon 58af80018528cdc5eb30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900.scope/container/memory.events
Nov 29 04:05:46 np0005539563 podman[409928]: 2025-11-29 09:05:46.068421088 +0000 UTC m=+0.116370552 container died 58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lederberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:05:46 np0005539563 podman[409928]: 2025-11-29 09:05:45.974873794 +0000 UTC m=+0.022823278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:05:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/816665252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:05:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d3a7dab7b4570b6b2155023568de4c63c3d031a7401ee40517efefa066feb5ab-merged.mount: Deactivated successfully.
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.109 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.113 252257 DEBUG nova.virt.libvirt.vif [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-876372140',display_name='tempest-TestServerMultinode-server-876372140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-876372140',id=215,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-ko0e83tg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:05:41Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=6f8e25c4-545a-420f-bd34-aef4da2b27d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.113 252257 DEBUG nova.network.os_vif_util [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:05:46 np0005539563 podman[409928]: 2025-11-29 09:05:46.114162026 +0000 UTC m=+0.162111500 container remove 58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lederberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.114 252257 DEBUG nova.network.os_vif_util [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.116 252257 DEBUG nova.objects.instance [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f8e25c4-545a-420f-bd34-aef4da2b27d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:05:46 np0005539563 systemd[1]: libpod-conmon-58af80018528cdc5eb30f2f23f815149c177e6da3d0f10a6cd85f90baab34900.scope: Deactivated successfully.
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.140 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <uuid>6f8e25c4-545a-420f-bd34-aef4da2b27d7</uuid>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <name>instance-000000d7</name>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestServerMultinode-server-876372140</nova:name>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 09:05:45</nova:creationTime>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:user uuid="1ef789b2d4084ff99c58ebaccf153280">tempest-TestServerMultinode-1741703404-project-admin</nova:user>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:project uuid="8c7919c45c334cfb95f0fdc69027c245">tempest-TestServerMultinode-1741703404</nova:project>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:root type="image" uuid="1be11678-cfa4-4dee-b54c-6c7e547e5a6a"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <nova:port uuid="12ddaf65-2dcb-4830-9742-863386b8c30f">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <system>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <entry name="serial">6f8e25c4-545a-420f-bd34-aef4da2b27d7</entry>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <entry name="uuid">6f8e25c4-545a-420f-bd34-aef4da2b27d7</entry>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </system>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <os>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </os>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <features>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </features>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </clock>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  <devices>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk.config">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:b9:01:3f"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <target dev="tap12ddaf65-2d"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </interface>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/console.log" append="off"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </serial>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <video>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </video>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </rng>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 04:05:46 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 04:05:46 np0005539563 nova_compute[252253]:  </devices>
Nov 29 04:05:46 np0005539563 nova_compute[252253]: </domain>
Nov 29 04:05:46 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.142 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Preparing to wait for external event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.142 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.143 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.143 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.144 252257 DEBUG nova.virt.libvirt.vif [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-876372140',display_name='tempest-TestServerMultinode-server-876372140',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-876372140',id=215,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-ko0e83tg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:05:41Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=6f8e25c4-545a-420f-bd34-aef4da2b27d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.144 252257 DEBUG nova.network.os_vif_util [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.145 252257 DEBUG nova.network.os_vif_util [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.145 252257 DEBUG os_vif [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.146 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.147 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.151 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.151 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12ddaf65-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.152 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12ddaf65-2d, col_values=(('external_ids', {'iface-id': '12ddaf65-2dcb-4830-9742-863386b8c30f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:01:3f', 'vm-uuid': '6f8e25c4-545a-420f-bd34-aef4da2b27d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.153 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:46 np0005539563 NetworkManager[48981]: <info>  [1764407146.1547] manager: (tap12ddaf65-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.156 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.162 252257 INFO os_vif [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d')#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.209 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.210 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.210 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] No VIF found with MAC fa:16:3e:b9:01:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.210 252257 INFO nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Using config drive#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.240 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:46 np0005539563 podman[409986]: 2025-11-29 09:05:46.277238831 +0000 UTC m=+0.040534578 container create e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:46 np0005539563 systemd[1]: Started libpod-conmon-e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e.scope.
Nov 29 04:05:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4c4ce63cce4eea5c37d3f1f299da4c250f35e8188b0168481b2fb56b7e0cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4c4ce63cce4eea5c37d3f1f299da4c250f35e8188b0168481b2fb56b7e0cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4c4ce63cce4eea5c37d3f1f299da4c250f35e8188b0168481b2fb56b7e0cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4c4ce63cce4eea5c37d3f1f299da4c250f35e8188b0168481b2fb56b7e0cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:46 np0005539563 podman[409986]: 2025-11-29 09:05:46.33107071 +0000 UTC m=+0.094366477 container init e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:46 np0005539563 podman[409986]: 2025-11-29 09:05:46.339298303 +0000 UTC m=+0.102594050 container start e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:05:46 np0005539563 podman[409986]: 2025-11-29 09:05:46.343321001 +0000 UTC m=+0.106616748 container attach e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:05:46 np0005539563 podman[409986]: 2025-11-29 09:05:46.260543409 +0000 UTC m=+0.023839186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.552 252257 DEBUG nova.network.neutron [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Updated VIF entry in instance network info cache for port 12ddaf65-2dcb-4830-9742-863386b8c30f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.553 252257 DEBUG nova.network.neutron [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Updating instance_info_cache with network_info: [{"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.574 252257 DEBUG oslo_concurrency.lockutils [req-93a08104-879b-47e2-a4f0-c16bfc331745 req-b021973a-c228-4467-abcf-2e17265437d7 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-6f8e25c4-545a-420f-bd34-aef4da2b27d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.672 252257 INFO nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Creating config drive at /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/disk.config#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.677 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqoivyjjq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:46.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.827 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqoivyjjq" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.869 252257 DEBUG nova.storage.rbd_utils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] rbd image 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:05:46 np0005539563 nova_compute[252253]: 2025-11-29 09:05:46.874 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/disk.config 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.041 252257 DEBUG oslo_concurrency.processutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/disk.config 6f8e25c4-545a-420f-bd34-aef4da2b27d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.042 252257 INFO nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Deleting local config drive /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7/disk.config because it was imported into RBD.#033[00m
Nov 29 04:05:47 np0005539563 kernel: tap12ddaf65-2d: entered promiscuous mode
Nov 29 04:05:47 np0005539563 NetworkManager[48981]: <info>  [1764407147.1017] manager: (tap12ddaf65-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Nov 29 04:05:47 np0005539563 systemd-udevd[410074]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.141 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.148 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 ovn_controller[148841]: 2025-11-29T09:05:47Z|00930|binding|INFO|Claiming lport 12ddaf65-2dcb-4830-9742-863386b8c30f for this chassis.
Nov 29 04:05:47 np0005539563 ovn_controller[148841]: 2025-11-29T09:05:47Z|00931|binding|INFO|12ddaf65-2dcb-4830-9742-863386b8c30f: Claiming fa:16:3e:b9:01:3f 10.100.0.12
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]: {
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:05:47 np0005539563 NetworkManager[48981]: <info>  [1764407147.1544] device (tap12ddaf65-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:        "osd_id": 0,
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:        "type": "bluestore"
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]:    }
Nov 29 04:05:47 np0005539563 suspicious_herschel[410006]: }
Nov 29 04:05:47 np0005539563 NetworkManager[48981]: <info>  [1764407147.1551] device (tap12ddaf65-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:05:47 np0005539563 systemd-machined[213024]: New machine qemu-103-instance-000000d7.
Nov 29 04:05:47 np0005539563 podman[409986]: 2025-11-29 09:05:47.188852465 +0000 UTC m=+0.952148232 container died e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:05:47 np0005539563 systemd[1]: Started Virtual Machine qemu-103-instance-000000d7.
Nov 29 04:05:47 np0005539563 systemd[1]: libpod-e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e.scope: Deactivated successfully.
Nov 29 04:05:47 np0005539563 ovn_controller[148841]: 2025-11-29T09:05:47Z|00932|binding|INFO|Setting lport 12ddaf65-2dcb-4830-9742-863386b8c30f ovn-installed in OVS
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.207 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5da4c4ce63cce4eea5c37d3f1f299da4c250f35e8188b0168481b2fb56b7e0cf-merged.mount: Deactivated successfully.
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.331 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 ovn_controller[148841]: 2025-11-29T09:05:47Z|00933|binding|INFO|Setting lport 12ddaf65-2dcb-4830-9742-863386b8c30f up in Southbound
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.369 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:01:3f 10.100.0.12'], port_security=['fa:16:3e:b9:01:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6f8e25c4-545a-420f-bd34-aef4da2b27d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c7919c45c334cfb95f0fdc69027c245', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64bf80fe-f6f5-45b2-bd8e-9bcbdb5e2a9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=194d050b-f997-4b45-91e1-9c8d251911a1, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=12ddaf65-2dcb-4830-9742-863386b8c30f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.370 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 12ddaf65-2dcb-4830-9742-863386b8c30f in datapath 7f61907c-426d-40db-9f88-8bc5f33db1b9 bound to our chassis#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.371 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f61907c-426d-40db-9f88-8bc5f33db1b9#033[00m
Nov 29 04:05:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:47 np0005539563 podman[409986]: 2025-11-29 09:05:47.386762643 +0000 UTC m=+1.150058390 container remove e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.392 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[062aced7-cf31-476a-af07-16941a8b4a7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.393 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f61907c-41 in ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.396 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f61907c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.396 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9f643f94-f730-4cb4-9e01-69e4c012634d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 systemd[1]: libpod-conmon-e5c1ec5be249c313cee8d166eb4558766b71396900ad1e16b60b7f866e81d20e.scope: Deactivated successfully.
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.397 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[69626222-5cf1-4043-9f30-9c10a8e0350a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.412 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[af922626-fdb8-455d-9dff-28a4221f631e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.437 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7825b419-5799-4864-87c3-9e379cc38871]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.467 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[53cf3eaf-ef04-4704-a48f-d59050bc11fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.472 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[614a7745-4a5e-45ce-b466-274e05630609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 NetworkManager[48981]: <info>  [1764407147.4740] manager: (tap7f61907c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Nov 29 04:05:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.505 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a0416925-1ade-4c8b-a88c-72c70ab2ba99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.508 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[1de1e04d-7dd0-4be3-96af-5c2d89a1e3a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 NetworkManager[48981]: <info>  [1764407147.5275] device (tap7f61907c-40): carrier: link connected
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.531 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c69151-e370-4cb9-a697-91a67c80bfec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.547 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f58504-0254-4acb-86eb-6e2ea06fef8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f61907c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:f8:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1011530, 'reachable_time': 25357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410162, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.563 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce929db-4746-48b2-9d3e-4c884221d6d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:f80e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1011530, 'tstamp': 1011530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410166, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.586 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d32c25b1-ca12-416a-a3d2-5437457d6427]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f61907c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:f8:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1011530, 'reachable_time': 25357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 410167, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 05d0a457-817a-4641-afe5-b92d7fe8819b does not exist
Nov 29 04:05:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c78846d8-4d92-4128-9d24-0fd729008515 does not exist
Nov 29 04:05:47 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 7df4e1fd-a0fe-4db3-9348-0191c5b5852e does not exist
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.618 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[55c0fc39-9566-4eee-986d-4ebb0c38c929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.631 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407147.6305346, 6f8e25c4-545a-420f-bd34-aef4da2b27d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.631 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] VM Started (Lifecycle Event)#033[00m
Nov 29 04:05:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:47.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 3.2 MiB/s wr, 38 op/s
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.681 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e7af2c7d-cabe-4406-9b02-c3594d1fe862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.683 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f61907c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.683 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.683 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f61907c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:47 np0005539563 NetworkManager[48981]: <info>  [1764407147.6856] manager: (tap7f61907c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 kernel: tap7f61907c-40: entered promiscuous mode
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.689 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f61907c-40, col_values=(('external_ids', {'iface-id': 'f4d00aa1-326b-4003-b66e-9a8340a19429'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.690 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 ovn_controller[148841]: 2025-11-29T09:05:47Z|00934|binding|INFO|Releasing lport f4d00aa1-326b-4003-b66e-9a8340a19429 from this chassis (sb_readonly=0)
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.703 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.705 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.706 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[d08fe189-5033-42b3-bca6-e7894805e961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.706 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/7f61907c-426d-40db-9f88-8bc5f33db1b9.pid.haproxy
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 7f61907c-426d-40db-9f88-8bc5f33db1b9
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:05:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:05:47.707 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'env', 'PROCESS_TAG=haproxy-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f61907c-426d-40db-9f88-8bc5f33db1b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.722 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.725 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407147.6306775, 6f8e25c4-545a-420f-bd34-aef4da2b27d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.725 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.754 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.757 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:05:47 np0005539563 nova_compute[252253]: 2025-11-29 09:05:47.784 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:05:48 np0005539563 podman[410249]: 2025-11-29 09:05:48.060910506 +0000 UTC m=+0.051537426 container create 2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 04:05:48 np0005539563 systemd[1]: Started libpod-conmon-2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291.scope.
Nov 29 04:05:48 np0005539563 podman[410249]: 2025-11-29 09:05:48.035170359 +0000 UTC m=+0.025797329 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:05:48 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:05:48 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ea8f3d8365acb215141c95a28ca3677d45704038747f53d6a0976b09a8824f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:48 np0005539563 podman[410249]: 2025-11-29 09:05:48.155829006 +0000 UTC m=+0.146455966 container init 2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 04:05:48 np0005539563 podman[410249]: 2025-11-29 09:05:48.161707155 +0000 UTC m=+0.152334085 container start 2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:05:48 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [NOTICE]   (410268) : New worker (410270) forked
Nov 29 04:05:48 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [NOTICE]   (410268) : Loading success.
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.375 252257 DEBUG nova.compute.manager [req-65cf4ce4-b69c-4f5f-a54c-00b6e95ac53c req-1f2b82f1-b966-4d83-9a39-b5752cd0e943 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.377 252257 DEBUG oslo_concurrency.lockutils [req-65cf4ce4-b69c-4f5f-a54c-00b6e95ac53c req-1f2b82f1-b966-4d83-9a39-b5752cd0e943 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.377 252257 DEBUG oslo_concurrency.lockutils [req-65cf4ce4-b69c-4f5f-a54c-00b6e95ac53c req-1f2b82f1-b966-4d83-9a39-b5752cd0e943 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.378 252257 DEBUG oslo_concurrency.lockutils [req-65cf4ce4-b69c-4f5f-a54c-00b6e95ac53c req-1f2b82f1-b966-4d83-9a39-b5752cd0e943 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.379 252257 DEBUG nova.compute.manager [req-65cf4ce4-b69c-4f5f-a54c-00b6e95ac53c req-1f2b82f1-b966-4d83-9a39-b5752cd0e943 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Processing event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.380 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.386 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407148.3864565, 6f8e25c4-545a-420f-bd34-aef4da2b27d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.387 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.390 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.395 252257 INFO nova.virt.libvirt.driver [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Instance spawned successfully.#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.396 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.417 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.423 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.427 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.427 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.428 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.428 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.428 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.429 252257 DEBUG nova.virt.libvirt.driver [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.458 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.501 252257 INFO nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Took 7.12 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.502 252257 DEBUG nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.568 252257 INFO nova.compute.manager [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Took 8.14 seconds to build instance.#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.589 252257 DEBUG oslo_concurrency.lockutils [None req-0f14d382-b0f9-4a7c-92ad-e617edc72b98 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.708 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:05:48 np0005539563 nova_compute[252253]: 2025-11-29 09:05:48.709 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:05:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:05:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:05:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641842313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.601 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.892s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:49.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.674 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.675 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000d7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:05:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 234 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 4.2 MiB/s wr, 90 op/s
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.825 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.827 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=20.95053482055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.827 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.827 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.911 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 6f8e25c4-545a-420f-bd34-aef4da2b27d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.911 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.912 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:05:49 np0005539563 nova_compute[252253]: 2025-11-29 09:05:49.995 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:05:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682494736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.517 252257 DEBUG nova.compute.manager [req-9a44ba6f-b62e-48a0-9064-cf84d91d7c7b req-aa383940-996e-4628-9ea4-3bb982f18c28 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.518 252257 DEBUG oslo_concurrency.lockutils [req-9a44ba6f-b62e-48a0-9064-cf84d91d7c7b req-aa383940-996e-4628-9ea4-3bb982f18c28 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.519 252257 DEBUG oslo_concurrency.lockutils [req-9a44ba6f-b62e-48a0-9064-cf84d91d7c7b req-aa383940-996e-4628-9ea4-3bb982f18c28 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.519 252257 DEBUG oslo_concurrency.lockutils [req-9a44ba6f-b62e-48a0-9064-cf84d91d7c7b req-aa383940-996e-4628-9ea4-3bb982f18c28 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.519 252257 DEBUG nova.compute.manager [req-9a44ba6f-b62e-48a0-9064-cf84d91d7c7b req-aa383940-996e-4628-9ea4-3bb982f18c28 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] No waiting events found dispatching network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.519 252257 WARNING nova.compute.manager [req-9a44ba6f-b62e-48a0-9064-cf84d91d7c7b req-aa383940-996e-4628-9ea4-3bb982f18c28 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received unexpected event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f for instance with vm_state active and task_state None.#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.520 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.527 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.553 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.577 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:05:50 np0005539563 nova_compute[252253]: 2025-11-29 09:05:50.578 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:05:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:51 np0005539563 nova_compute[252253]: 2025-11-29 09:05:51.154 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 137 op/s
Nov 29 04:05:52 np0005539563 nova_compute[252253]: 2025-11-29 09:05:52.332 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:52 np0005539563 nova_compute[252253]: 2025-11-29 09:05:52.580 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:05:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:53.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 155 op/s
Nov 29 04:05:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:54.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:55.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 247 op/s
Nov 29 04:05:56 np0005539563 nova_compute[252253]: 2025-11-29 09:05:56.157 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:56.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:57 np0005539563 nova_compute[252253]: 2025-11-29 09:05:57.335 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:05:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 209 op/s
Nov 29 04:05:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:05:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:05:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:05:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 239 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 250 op/s
Nov 29 04:06:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:00.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:01 np0005539563 nova_compute[252253]: 2025-11-29 09:06:01.214 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.2 MiB/s wr, 238 op/s
Nov 29 04:06:02 np0005539563 nova_compute[252253]: 2025-11-29 09:06:02.336 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:02 np0005539563 ovn_controller[148841]: 2025-11-29T09:06:02Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:01:3f 10.100.0.12
Nov 29 04:06:02 np0005539563 ovn_controller[148841]: 2025-11-29T09:06:02Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:01:3f 10.100.0.12
Nov 29 04:06:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:02.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:03.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 206 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 374 KiB/s wr, 226 op/s
Nov 29 04:06:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:04.980 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:04.980 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:04.981 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.413 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.414 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.414 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.414 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.415 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.416 252257 INFO nova.compute.manager [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Terminating instance#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.417 252257 DEBUG nova.compute.manager [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:06:05 np0005539563 kernel: tap12ddaf65-2d (unregistering): left promiscuous mode
Nov 29 04:06:05 np0005539563 NetworkManager[48981]: <info>  [1764407165.4772] device (tap12ddaf65-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:06:05 np0005539563 ovn_controller[148841]: 2025-11-29T09:06:05Z|00935|binding|INFO|Releasing lport 12ddaf65-2dcb-4830-9742-863386b8c30f from this chassis (sb_readonly=0)
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.486 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539563 ovn_controller[148841]: 2025-11-29T09:06:05Z|00936|binding|INFO|Setting lport 12ddaf65-2dcb-4830-9742-863386b8c30f down in Southbound
Nov 29 04:06:05 np0005539563 ovn_controller[148841]: 2025-11-29T09:06:05Z|00937|binding|INFO|Removing iface tap12ddaf65-2d ovn-installed in OVS
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.493 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:01:3f 10.100.0.12'], port_security=['fa:16:3e:b9:01:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6f8e25c4-545a-420f-bd34-aef4da2b27d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c7919c45c334cfb95f0fdc69027c245', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64bf80fe-f6f5-45b2-bd8e-9bcbdb5e2a9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=194d050b-f997-4b45-91e1-9c8d251911a1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=12ddaf65-2dcb-4830-9742-863386b8c30f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.495 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 12ddaf65-2dcb-4830-9742-863386b8c30f in datapath 7f61907c-426d-40db-9f88-8bc5f33db1b9 unbound from our chassis#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.496 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f61907c-426d-40db-9f88-8bc5f33db1b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.497 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fea6fef3-9520-4758-ae37-0c0fc94681c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.498 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 namespace which is not needed anymore#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.523 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539563 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Nov 29 04:06:05 np0005539563 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d7.scope: Consumed 13.514s CPU time.
Nov 29 04:06:05 np0005539563 systemd-machined[213024]: Machine qemu-103-instance-000000d7 terminated.
Nov 29 04:06:05 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [NOTICE]   (410268) : haproxy version is 2.8.14-c23fe91
Nov 29 04:06:05 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [NOTICE]   (410268) : path to executable is /usr/sbin/haproxy
Nov 29 04:06:05 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [WARNING]  (410268) : Exiting Master process...
Nov 29 04:06:05 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [WARNING]  (410268) : Exiting Master process...
Nov 29 04:06:05 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [ALERT]    (410268) : Current worker (410270) exited with code 143 (Terminated)
Nov 29 04:06:05 np0005539563 neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9[410264]: [WARNING]  (410268) : All workers exited. Exiting... (0)
Nov 29 04:06:05 np0005539563 systemd[1]: libpod-2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291.scope: Deactivated successfully.
Nov 29 04:06:05 np0005539563 podman[410409]: 2025-11-29 09:06:05.6430895 +0000 UTC m=+0.051420143 container died 2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.654 252257 INFO nova.virt.libvirt.driver [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Instance destroyed successfully.#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.654 252257 DEBUG nova.objects.instance [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lazy-loading 'resources' on Instance uuid 6f8e25c4-545a-420f-bd34-aef4da2b27d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.672 252257 DEBUG nova.virt.libvirt.vif [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-876372140',display_name='tempest-TestServerMultinode-server-876372140',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-876372140',id=215,image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:05:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c7919c45c334cfb95f0fdc69027c245',ramdisk_id='',reservation_id='r-ko0e83tg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='1be11678-cfa4-4dee-b54c-6c7e547e5a6a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1741703404',owner_user_name='tempest-TestServerMultinode-1741703404-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:05:48Z,user_data=None,user_id='1ef789b2d4084ff99c58ebaccf153280',uuid=6f8e25c4-545a-420f-bd34-aef4da2b27d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.673 252257 DEBUG nova.network.os_vif_util [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converting VIF {"id": "12ddaf65-2dcb-4830-9742-863386b8c30f", "address": "fa:16:3e:b9:01:3f", "network": {"id": "7f61907c-426d-40db-9f88-8bc5f33db1b9", "bridge": "br-int", "label": "tempest-TestServerMultinode-1860154618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "309bb9682d8741cb96a008986d8d01dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12ddaf65-2d", "ovs_interfaceid": "12ddaf65-2dcb-4830-9742-863386b8c30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.674 252257 DEBUG nova.network.os_vif_util [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.674 252257 DEBUG os_vif [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:06:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.676 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12ddaf65-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:06:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:05.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291-userdata-shm.mount: Deactivated successfully.
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f7ea8f3d8365acb215141c95a28ca3677d45704038747f53d6a0976b09a8824f-merged.mount: Deactivated successfully.
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.684 252257 INFO os_vif [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:01:3f,bridge_name='br-int',has_traffic_filtering=True,id=12ddaf65-2dcb-4830-9742-863386b8c30f,network=Network(7f61907c-426d-40db-9f88-8bc5f33db1b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12ddaf65-2d')#033[00m
Nov 29 04:06:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 259 op/s
Nov 29 04:06:05 np0005539563 podman[410409]: 2025-11-29 09:06:05.693239968 +0000 UTC m=+0.101570601 container cleanup 2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:06:05 np0005539563 systemd[1]: libpod-conmon-2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291.scope: Deactivated successfully.
Nov 29 04:06:05 np0005539563 podman[410464]: 2025-11-29 09:06:05.751187217 +0000 UTC m=+0.037805585 container remove 2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.756 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[72fa4471-963a-4b14-8bfe-559dac5716f1]: (4, ('Sat Nov 29 09:06:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 (2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291)\n2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291\nSat Nov 29 09:06:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 (2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291)\n2be3c3050ec469d81b6f4a7d2b589bda23e3ffd935c3ac8c027de9abdbc28291\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.758 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[76b168d2-8087-45a0-a8a5-74ba3c65a169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.758 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f61907c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.760 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539563 kernel: tap7f61907c-40: left promiscuous mode
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.773 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.775 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2512daa4-f72b-4bbf-9c4a-bcbeb9666e40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.792 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8e113ed2-7ebb-49f2-b57d-db4267b23a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.793 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f5a7ab-e7f5-4268-97d1-c2754f0ab7f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.806 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7dc45a-e9d1-4606-8f71-390829dccceb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1011523, 'reachable_time': 15168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410481, 'error': None, 'target': 'ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.808 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f61907c-426d-40db-9f88-8bc5f33db1b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:06:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:05.808 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[99a763ee-9aef-47a9-b591-cf59813dc101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:06:05 np0005539563 systemd[1]: run-netns-ovnmeta\x2d7f61907c\x2d426d\x2d40db\x2d9f88\x2d8bc5f33db1b9.mount: Deactivated successfully.
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.838 252257 DEBUG nova.compute.manager [req-f4b9146f-4733-454b-b1d0-81f3ee571099 req-16e00fab-b7d7-48a8-b868-74abc6d1947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-vif-unplugged-12ddaf65-2dcb-4830-9742-863386b8c30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.839 252257 DEBUG oslo_concurrency.lockutils [req-f4b9146f-4733-454b-b1d0-81f3ee571099 req-16e00fab-b7d7-48a8-b868-74abc6d1947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.841 252257 DEBUG oslo_concurrency.lockutils [req-f4b9146f-4733-454b-b1d0-81f3ee571099 req-16e00fab-b7d7-48a8-b868-74abc6d1947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.841 252257 DEBUG oslo_concurrency.lockutils [req-f4b9146f-4733-454b-b1d0-81f3ee571099 req-16e00fab-b7d7-48a8-b868-74abc6d1947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.841 252257 DEBUG nova.compute.manager [req-f4b9146f-4733-454b-b1d0-81f3ee571099 req-16e00fab-b7d7-48a8-b868-74abc6d1947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] No waiting events found dispatching network-vif-unplugged-12ddaf65-2dcb-4830-9742-863386b8c30f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:06:05 np0005539563 nova_compute[252253]: 2025-11-29 09:06:05.841 252257 DEBUG nova.compute.manager [req-f4b9146f-4733-454b-b1d0-81f3ee571099 req-16e00fab-b7d7-48a8-b868-74abc6d1947b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-vif-unplugged-12ddaf65-2dcb-4830-9742-863386b8c30f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:06:06 np0005539563 nova_compute[252253]: 2025-11-29 09:06:06.138 252257 INFO nova.virt.libvirt.driver [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Deleting instance files /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7_del#033[00m
Nov 29 04:06:06 np0005539563 nova_compute[252253]: 2025-11-29 09:06:06.143 252257 INFO nova.virt.libvirt.driver [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Deletion of /var/lib/nova/instances/6f8e25c4-545a-420f-bd34-aef4da2b27d7_del complete#033[00m
Nov 29 04:06:06 np0005539563 nova_compute[252253]: 2025-11-29 09:06:06.212 252257 INFO nova.compute.manager [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:06:06 np0005539563 nova_compute[252253]: 2025-11-29 09:06:06.213 252257 DEBUG oslo.service.loopingcall [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:06:06 np0005539563 nova_compute[252253]: 2025-11-29 09:06:06.213 252257 DEBUG nova.compute.manager [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:06:06 np0005539563 nova_compute[252253]: 2025-11-29 09:06:06.213 252257 DEBUG nova.network.neutron [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:06:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.397 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:07.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.701 252257 DEBUG nova.network.neutron [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.717 252257 INFO nova.compute.manager [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Took 1.50 seconds to deallocate network for instance.#033[00m
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.764 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.765 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.836 252257 DEBUG oslo_concurrency.processutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:06:07 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.997 252257 DEBUG nova.compute.manager [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:07.999 252257 DEBUG oslo_concurrency.lockutils [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.000 252257 DEBUG oslo_concurrency.lockutils [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.000 252257 DEBUG oslo_concurrency.lockutils [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.001 252257 DEBUG nova.compute.manager [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] No waiting events found dispatching network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.001 252257 WARNING nova.compute.manager [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received unexpected event network-vif-plugged-12ddaf65-2dcb-4830-9742-863386b8c30f for instance with vm_state deleted and task_state None.
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.001 252257 DEBUG nova.compute.manager [req-d827ddeb-64ee-4547-a36c-a50507c4c601 req-ef23120a-f81c-4ee7-99cb-320cc8150796 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Received event network-vif-deleted-12ddaf65-2dcb-4830-9742-863386b8c30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 04:06:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312814041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.341 252257 DEBUG oslo_concurrency.processutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.347 252257 DEBUG nova.compute.provider_tree [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.379 252257 DEBUG nova.scheduler.client.report [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.423 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.448 252257 INFO nova.scheduler.client.report [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Deleted allocations for instance 6f8e25c4-545a-420f-bd34-aef4da2b27d7
Nov 29 04:06:08 np0005539563 nova_compute[252253]: 2025-11-29 09:06:08.506 252257 DEBUG oslo_concurrency.lockutils [None req-16d9171b-6367-42cc-8643-fbcf23833a43 1ef789b2d4084ff99c58ebaccf153280 8c7919c45c334cfb95f0fdc69027c245 - - default default] Lock "6f8e25c4-545a-420f-bd34-aef4da2b27d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:06:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:08.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:09.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 155 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 176 op/s
Nov 29 04:06:10 np0005539563 nova_compute[252253]: 2025-11-29 09:06:10.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:11 np0005539563 podman[410509]: 2025-11-29 09:06:11.539689935 +0000 UTC m=+0.088170028 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 04:06:11 np0005539563 podman[410508]: 2025-11-29 09:06:11.539165981 +0000 UTC m=+0.082786612 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:06:11 np0005539563 podman[410510]: 2025-11-29 09:06:11.595481416 +0000 UTC m=+0.130079303 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 04:06:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:11.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Nov 29 04:06:11 np0005539563 nova_compute[252253]: 2025-11-29 09:06:11.759 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:12 np0005539563 nova_compute[252253]: 2025-11-29 09:06:12.449 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:12.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:06:13
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', '.rgw.root']
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:13.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 29 04:06:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:15 np0005539563 nova_compute[252253]: 2025-11-29 09:06:15.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:15 np0005539563 nova_compute[252253]: 2025-11-29 09:06:15.685 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 04:06:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:15.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:16.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:06:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.401750) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177401897, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1526, "num_deletes": 256, "total_data_size": 2711421, "memory_usage": 2752960, "flush_reason": "Manual Compaction"}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177423080, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 2659495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79065, "largest_seqno": 80590, "table_properties": {"data_size": 2652279, "index_size": 4222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14829, "raw_average_key_size": 19, "raw_value_size": 2637990, "raw_average_value_size": 3545, "num_data_blocks": 184, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407028, "oldest_key_time": 1764407028, "file_creation_time": 1764407177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 21329 microseconds, and 8446 cpu microseconds.
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.423149) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 2659495 bytes OK
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.423181) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.429478) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.429502) EVENT_LOG_v1 {"time_micros": 1764407177429497, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.429521) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2704911, prev total WAL file size 2704911, number of live WAL files 2.
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.430768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323733' seq:72057594037927935, type:22 .. '6C6F676D0033353235' seq:0, type:0; will stop at (end)
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(2597KB)], [179(11MB)]
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177430861, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 15125641, "oldest_snapshot_seqno": -1}
Nov 29 04:06:17 np0005539563 nova_compute[252253]: 2025-11-29 09:06:17.453 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 11173 keys, 14981226 bytes, temperature: kUnknown
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177591676, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 14981226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14908685, "index_size": 43500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 295582, "raw_average_key_size": 26, "raw_value_size": 14712908, "raw_average_value_size": 1316, "num_data_blocks": 1657, "num_entries": 11173, "num_filter_entries": 11173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.592642) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14981226 bytes
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.594793) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.6 rd, 92.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.9 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 11702, records dropped: 529 output_compression: NoCompression
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.594824) EVENT_LOG_v1 {"time_micros": 1764407177594812, "job": 112, "event": "compaction_finished", "compaction_time_micros": 161521, "compaction_time_cpu_micros": 47396, "output_level": 6, "num_output_files": 1, "total_output_size": 14981226, "num_input_records": 11702, "num_output_records": 11173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177595332, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407177597495, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.430530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.597650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.597658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.597660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.597662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:06:17.597664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:06:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:06:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:17.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:18 np0005539563 nova_compute[252253]: 2025-11-29 09:06:18.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:18.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:06:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:20 np0005539563 nova_compute[252253]: 2025-11-29 09:06:20.651 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407165.6500769, 6f8e25c4-545a-420f-bd34-aef4da2b27d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:06:20 np0005539563 nova_compute[252253]: 2025-11-29 09:06:20.651 252257 INFO nova.compute.manager [-] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:06:20 np0005539563 nova_compute[252253]: 2025-11-29 09:06:20.686 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:20.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:21 np0005539563 nova_compute[252253]: 2025-11-29 09:06:21.659 252257 DEBUG nova.compute.manager [None req-7162ff1f-e8fb-4a7f-bb06-a3555a245b36 - - - - - -] [instance: 6f8e25c4-545a-420f-bd34-aef4da2b27d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:06:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Nov 29 04:06:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:21.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:22 np0005539563 nova_compute[252253]: 2025-11-29 09:06:22.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:23.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:06:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:25 np0005539563 nova_compute[252253]: 2025-11-29 09:06:25.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:25.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:27 np0005539563 nova_compute[252253]: 2025-11-29 09:06:27.456 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:27.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:30 np0005539563 nova_compute[252253]: 2025-11-29 09:06:30.690 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:32 np0005539563 nova_compute[252253]: 2025-11-29 09:06:32.458 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:34.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:35 np0005539563 nova_compute[252253]: 2025-11-29 09:06:35.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:36 np0005539563 nova_compute[252253]: 2025-11-29 09:06:36.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:37 np0005539563 nova_compute[252253]: 2025-11-29 09:06:37.460 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:37.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:06:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:38.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:06:39 np0005539563 nova_compute[252253]: 2025-11-29 09:06:39.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:40 np0005539563 nova_compute[252253]: 2025-11-29 09:06:40.693 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:40.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:42 np0005539563 nova_compute[252253]: 2025-11-29 09:06:42.463 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:42 np0005539563 podman[410636]: 2025-11-29 09:06:42.500469721 +0000 UTC m=+0.059678397 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:06:42 np0005539563 podman[410637]: 2025-11-29 09:06:42.510435991 +0000 UTC m=+0.065632979 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:06:42 np0005539563 podman[410638]: 2025-11-29 09:06:42.55289162 +0000 UTC m=+0.098499428 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true)
Nov 29 04:06:42 np0005539563 nova_compute[252253]: 2025-11-29 09:06:42.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:42 np0005539563 nova_compute[252253]: 2025-11-29 09:06:42.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:42 np0005539563 nova_compute[252253]: 2025-11-29 09:06:42.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:06:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:42.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:43 np0005539563 nova_compute[252253]: 2025-11-29 09:06:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:43 np0005539563 nova_compute[252253]: 2025-11-29 09:06:43.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:06:43 np0005539563 nova_compute[252253]: 2025-11-29 09:06:43.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:06:43 np0005539563 nova_compute[252253]: 2025-11-29 09:06:43.694 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:06:43 np0005539563 nova_compute[252253]: 2025-11-29 09:06:43.694 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:44.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:45 np0005539563 nova_compute[252253]: 2025-11-29 09:06:45.695 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:45.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:47 np0005539563 nova_compute[252253]: 2025-11-29 09:06:47.463 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:47.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:47.959 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:06:47 np0005539563 nova_compute[252253]: 2025-11-29 09:06:47.960 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:47 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:47.961 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:06:48 np0005539563 nova_compute[252253]: 2025-11-29 09:06:48.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:48 np0005539563 nova_compute[252253]: 2025-11-29 09:06:48.706 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:48 np0005539563 nova_compute[252253]: 2025-11-29 09:06:48.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:48 np0005539563 nova_compute[252253]: 2025-11-29 09:06:48.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:48 np0005539563 nova_compute[252253]: 2025-11-29 09:06:48.707 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:06:48 np0005539563 nova_compute[252253]: 2025-11-29 09:06:48.708 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096468359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.131 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 22d59e89-13cf-4a55-9acf-053e80286261 does not exist
Nov 29 04:06:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5bbdaa42-0dfe-430f-970e-47d69b3f8d0e does not exist
Nov 29 04:06:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b8f3c239-60c5-48fc-b58a-907bed3f6f2c does not exist
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.303 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.306 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4124MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.307 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.307 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.400 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.400 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.416 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.725980518 +0000 UTC m=+0.037097445 container create 9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:49.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:49 np0005539563 systemd[1]: Started libpod-conmon-9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9.scope.
Nov 29 04:06:49 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.80284946 +0000 UTC m=+0.113966407 container init 9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.708836884 +0000 UTC m=+0.019953831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.810759534 +0000 UTC m=+0.121876461 container start 9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.815354738 +0000 UTC m=+0.126471665 container attach 9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:06:49 np0005539563 objective_curran[411083]: 167 167
Nov 29 04:06:49 np0005539563 systemd[1]: libpod-9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9.scope: Deactivated successfully.
Nov 29 04:06:49 np0005539563 conmon[411083]: conmon 9523614b109b5258a5d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9.scope/container/memory.events
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.818971567 +0000 UTC m=+0.130088494 container died 9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655567624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:49 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1d1cb0399e784f6d246cbcce530fb593efaea9445211c6d448409053abdfafa2-merged.mount: Deactivated successfully.
Nov 29 04:06:49 np0005539563 podman[411067]: 2025-11-29 09:06:49.861346574 +0000 UTC m=+0.172463491 container remove 9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.861 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.869 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:06:49 np0005539563 systemd[1]: libpod-conmon-9523614b109b5258a5d68d6342a51b5737a3120fe075028b4b92d3202c2168d9.scope: Deactivated successfully.
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.891 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.924 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:06:49 np0005539563 nova_compute[252253]: 2025-11-29 09:06:49.925 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:50 np0005539563 podman[411112]: 2025-11-29 09:06:50.029379384 +0000 UTC m=+0.044662321 container create 72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:06:50 np0005539563 systemd[1]: Started libpod-conmon-72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725.scope.
Nov 29 04:06:50 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:06:50 np0005539563 podman[411112]: 2025-11-29 09:06:50.009618669 +0000 UTC m=+0.024901626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bfb825032758c9ecfe60850027d1d06e4ab3f65ca9ed91d0ce48e18ee3062aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bfb825032758c9ecfe60850027d1d06e4ab3f65ca9ed91d0ce48e18ee3062aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bfb825032758c9ecfe60850027d1d06e4ab3f65ca9ed91d0ce48e18ee3062aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bfb825032758c9ecfe60850027d1d06e4ab3f65ca9ed91d0ce48e18ee3062aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:50 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bfb825032758c9ecfe60850027d1d06e4ab3f65ca9ed91d0ce48e18ee3062aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:50 np0005539563 podman[411112]: 2025-11-29 09:06:50.119602476 +0000 UTC m=+0.134885423 container init 72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 04:06:50 np0005539563 podman[411112]: 2025-11-29 09:06:50.127343356 +0000 UTC m=+0.142626283 container start 72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 04:06:50 np0005539563 podman[411112]: 2025-11-29 09:06:50.130933383 +0000 UTC m=+0.146216310 container attach 72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:50 np0005539563 nova_compute[252253]: 2025-11-29 09:06:50.697 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:50 np0005539563 infallible_hamilton[411128]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:06:50 np0005539563 infallible_hamilton[411128]: --> relative data size: 1.0
Nov 29 04:06:50 np0005539563 infallible_hamilton[411128]: --> All data devices are unavailable
Nov 29 04:06:50 np0005539563 systemd[1]: libpod-72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725.scope: Deactivated successfully.
Nov 29 04:06:51 np0005539563 podman[411112]: 2025-11-29 09:06:51.00095055 +0000 UTC m=+1.016233477 container died 72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 04:06:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0bfb825032758c9ecfe60850027d1d06e4ab3f65ca9ed91d0ce48e18ee3062aa-merged.mount: Deactivated successfully.
Nov 29 04:06:51 np0005539563 podman[411112]: 2025-11-29 09:06:51.068810297 +0000 UTC m=+1.084093224 container remove 72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 04:06:51 np0005539563 systemd[1]: libpod-conmon-72cda21e7fe65b3a1253c9b218cd0a7eaa6fbbb493774e8a50532d62fe644725.scope: Deactivated successfully.
Nov 29 04:06:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:51.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.808993728 +0000 UTC m=+0.043149959 container create eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 04:06:51 np0005539563 systemd[1]: Started libpod-conmon-eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec.scope.
Nov 29 04:06:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.793775037 +0000 UTC m=+0.027931288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.902725816 +0000 UTC m=+0.136882057 container init eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brahmagupta, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.912339957 +0000 UTC m=+0.146496208 container start eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brahmagupta, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.91689775 +0000 UTC m=+0.151054021 container attach eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:06:51 np0005539563 recursing_brahmagupta[411314]: 167 167
Nov 29 04:06:51 np0005539563 systemd[1]: libpod-eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec.scope: Deactivated successfully.
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.918315378 +0000 UTC m=+0.152471609 container died eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:06:51 np0005539563 nova_compute[252253]: 2025-11-29 09:06:51.926 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4dcf0979f903230059e13fbe7a4f5dbe8b3fd6c1eb55fadd38c0442de4466a72-merged.mount: Deactivated successfully.
Nov 29 04:06:51 np0005539563 podman[411297]: 2025-11-29 09:06:51.952073952 +0000 UTC m=+0.186230193 container remove eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:06:51 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:06:51.962 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:06:51 np0005539563 systemd[1]: libpod-conmon-eaae99f9da4299cc0f2cf4b08080c8e4e790e8b9cd0e97c21f50450bb1a6e5ec.scope: Deactivated successfully.
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.115116976 +0000 UTC m=+0.048186935 container create 53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 04:06:52 np0005539563 systemd[1]: Started libpod-conmon-53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734.scope.
Nov 29 04:06:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:06:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44d43c97ac00643428522439e882b77702912fb1177d9ee18b76cd384b08dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44d43c97ac00643428522439e882b77702912fb1177d9ee18b76cd384b08dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44d43c97ac00643428522439e882b77702912fb1177d9ee18b76cd384b08dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44d43c97ac00643428522439e882b77702912fb1177d9ee18b76cd384b08dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.185333758 +0000 UTC m=+0.118403737 container init 53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.094502949 +0000 UTC m=+0.027572958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.191865715 +0000 UTC m=+0.124935674 container start 53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.195204996 +0000 UTC m=+0.128274955 container attach 53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:06:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:52 np0005539563 nova_compute[252253]: 2025-11-29 09:06:52.464 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]: {
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:    "0": [
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:        {
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "devices": [
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "/dev/loop3"
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            ],
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "lv_name": "ceph_lv0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "lv_size": "7511998464",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "name": "ceph_lv0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "tags": {
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.cluster_name": "ceph",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.crush_device_class": "",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.encrypted": "0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.osd_id": "0",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.type": "block",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:                "ceph.vdo": "0"
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            },
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "type": "block",
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:            "vg_name": "ceph_vg0"
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:        }
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]:    ]
Nov 29 04:06:52 np0005539563 frosty_gagarin[411355]: }
Nov 29 04:06:52 np0005539563 systemd[1]: libpod-53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734.scope: Deactivated successfully.
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.935293434 +0000 UTC m=+0.868363433 container died 53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:06:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f44d43c97ac00643428522439e882b77702912fb1177d9ee18b76cd384b08dda-merged.mount: Deactivated successfully.
Nov 29 04:06:52 np0005539563 podman[411339]: 2025-11-29 09:06:52.983117089 +0000 UTC m=+0.916187048 container remove 53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:06:52 np0005539563 systemd[1]: libpod-conmon-53da222b7a8ea0c539ca230894d2f435538b68b54728958715fa5220ed049734.scope: Deactivated successfully.
Nov 29 04:06:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.727388451 +0000 UTC m=+0.040227301 container create 30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 04:06:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:53.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:53 np0005539563 systemd[1]: Started libpod-conmon-30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380.scope.
Nov 29 04:06:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.709244869 +0000 UTC m=+0.022083749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.81266539 +0000 UTC m=+0.125504260 container init 30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.82007235 +0000 UTC m=+0.132911210 container start 30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.824015227 +0000 UTC m=+0.136854077 container attach 30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:06:53 np0005539563 systemd[1]: libpod-30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380.scope: Deactivated successfully.
Nov 29 04:06:53 np0005539563 great_mclean[411531]: 167 167
Nov 29 04:06:53 np0005539563 conmon[411531]: conmon 30dab4db2fe49ccb55db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380.scope/container/memory.events
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.829173107 +0000 UTC m=+0.142011957 container died 30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:06:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9167ab0f932b934f2ce23234a67126bf653b20001a58b8449ed3370c9a4fdb2b-merged.mount: Deactivated successfully.
Nov 29 04:06:53 np0005539563 podman[411516]: 2025-11-29 09:06:53.864184465 +0000 UTC m=+0.177023315 container remove 30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:06:53 np0005539563 systemd[1]: libpod-conmon-30dab4db2fe49ccb55dbc62128019c01aa765b516f76fea502dd22c769c5a380.scope: Deactivated successfully.
Nov 29 04:06:54 np0005539563 podman[411553]: 2025-11-29 09:06:54.016603552 +0000 UTC m=+0.040318493 container create 73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:06:54 np0005539563 systemd[1]: Started libpod-conmon-73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d.scope.
Nov 29 04:06:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:06:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c30ac56a94979cce83093a567df55302a5507f43c682077192bc21c6744b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c30ac56a94979cce83093a567df55302a5507f43c682077192bc21c6744b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c30ac56a94979cce83093a567df55302a5507f43c682077192bc21c6744b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c30ac56a94979cce83093a567df55302a5507f43c682077192bc21c6744b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:54 np0005539563 podman[411553]: 2025-11-29 09:06:54.094246714 +0000 UTC m=+0.117961675 container init 73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:06:54 np0005539563 podman[411553]: 2025-11-29 09:06:53.999257452 +0000 UTC m=+0.022972413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:54 np0005539563 podman[411553]: 2025-11-29 09:06:54.100813642 +0000 UTC m=+0.124528583 container start 73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 04:06:54 np0005539563 podman[411553]: 2025-11-29 09:06:54.103748401 +0000 UTC m=+0.127463362 container attach 73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:06:54 np0005539563 ovn_controller[148841]: 2025-11-29T09:06:54Z|00938|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 04:06:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]: {
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:        "osd_id": 0,
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:        "type": "bluestore"
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]:    }
Nov 29 04:06:54 np0005539563 youthful_dhawan[411571]: }
Nov 29 04:06:54 np0005539563 systemd[1]: libpod-73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d.scope: Deactivated successfully.
Nov 29 04:06:55 np0005539563 podman[411592]: 2025-11-29 09:06:55.001463398 +0000 UTC m=+0.031516485 container died 73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:06:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b09c30ac56a94979cce83093a567df55302a5507f43c682077192bc21c6744b3-merged.mount: Deactivated successfully.
Nov 29 04:06:55 np0005539563 podman[411592]: 2025-11-29 09:06:55.057259039 +0000 UTC m=+0.087312146 container remove 73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dhawan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 04:06:55 np0005539563 systemd[1]: libpod-conmon-73c3884a8994133c6f62cba2a93a3790995692d1384bfea7cd6beac1b03e997d.scope: Deactivated successfully.
Nov 29 04:06:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:06:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:06:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 19d31829-625e-43dc-ba75-4f8651c14f77 does not exist
Nov 29 04:06:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5a60464b-81fd-4f11-80b6-cb1c2c95d696 does not exist
Nov 29 04:06:55 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6149889c-2c5c-4b6f-9c51-f5027ae2b89b does not exist
Nov 29 04:06:55 np0005539563 nova_compute[252253]: 2025-11-29 09:06:55.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:55.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:56 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:06:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:56.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:57 np0005539563 nova_compute[252253]: 2025-11-29 09:06:57.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:06:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:06:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3590118869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:06:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:00 np0005539563 nova_compute[252253]: 2025-11-29 09:07:00.704 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:02 np0005539563 nova_compute[252253]: 2025-11-29 09:07:02.532 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:02.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:04.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:04.981 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:04.982 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:04.982 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:05 np0005539563 nova_compute[252253]: 2025-11-29 09:07:05.706 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:05.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:06.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:07 np0005539563 nova_compute[252253]: 2025-11-29 09:07:07.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:07.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:09.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:10 np0005539563 nova_compute[252253]: 2025-11-29 09:07:10.708 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:10.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:12 np0005539563 nova_compute[252253]: 2025-11-29 09:07:12.599 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:12.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:07:13
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'vms', '.rgw.root']
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:13 np0005539563 podman[411719]: 2025-11-29 09:07:13.521466414 +0000 UTC m=+0.071854336 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 04:07:13 np0005539563 podman[411718]: 2025-11-29 09:07:13.534803104 +0000 UTC m=+0.088119105 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 04:07:13 np0005539563 podman[411720]: 2025-11-29 09:07:13.56932041 +0000 UTC m=+0.109037863 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:07:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:13.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:14.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:15 np0005539563 nova_compute[252253]: 2025-11-29 09:07:15.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:16 np0005539563 nova_compute[252253]: 2025-11-29 09:07:16.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:07:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:07:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:16.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:17 np0005539563 nova_compute[252253]: 2025-11-29 09:07:17.634 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:17.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:18.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:20 np0005539563 nova_compute[252253]: 2025-11-29 09:07:20.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:20.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:22 np0005539563 nova_compute[252253]: 2025-11-29 09:07:22.636 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 341 B/s wr, 1 op/s
Nov 29 04:07:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6485334659408152 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
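The pg_autoscaler lines above print enough to recheck their own arithmetic: each pool's `pg target` is its used fraction of the root, times the pool bias, times a cluster-wide PG budget. A sketch; the budget of 300 is my inference (e.g. `mon_target_pg_per_osd` of 100 across 3 OSDs), not something stated in the log:

```python
import math

# Recompute 'pg target' from the pg_autoscaler fields logged above.
# Assumption: the cluster PG budget of 300 is mon_target_pg_per_osd
# (default 100) times 3 OSDs; inferred from the numbers, not logged.
PG_BUDGET = 100 * 3

def raw_pg_target(used_fraction, bias):
    """'pg target' as printed, before power-of-two quantization."""
    return used_fraction * bias * PG_BUDGET

# Pool 'volumes': using 0.0021617782198027173 of space, bias 1.0
assert math.isclose(raw_pg_target(0.0021617782198027173, 1.0),
                    0.6485334659408152)
# Pool 'cephfs.cephfs.meta': using 1.4540294062907128e-06, bias 4.0
assert math.isclose(raw_pg_target(1.4540294062907128e-06, 4.0),
                    0.0017448352875488555)
```

The `quantized to N` value then rounds to a power of two and is clamped by per-pool minimums and the autoscaler's 3x change threshold, which is why sub-1 raw targets still report 16 or 32 and no pool is resized here.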
Nov 29 04:07:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:25 np0005539563 nova_compute[252253]: 2025-11-29 09:07:25.714 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 04:07:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:27 np0005539563 nova_compute[252253]: 2025-11-29 09:07:27.638 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 04:07:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:27.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:07:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35439533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:07:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:07:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35439533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:07:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 04:07:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:29.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539563 nova_compute[252253]: 2025-11-29 09:07:30.567 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:30 np0005539563 nova_compute[252253]: 2025-11-29 09:07:30.716 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:30.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 04:07:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.111541) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252111602, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 849, "num_deletes": 251, "total_data_size": 1301324, "memory_usage": 1328000, "flush_reason": "Manual Compaction"}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252127020, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1287117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80592, "largest_seqno": 81439, "table_properties": {"data_size": 1282808, "index_size": 2024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9448, "raw_average_key_size": 19, "raw_value_size": 1274216, "raw_average_value_size": 2649, "num_data_blocks": 90, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407177, "oldest_key_time": 1764407177, "file_creation_time": 1764407252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15543 microseconds, and 7457 cpu microseconds.
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.127083) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1287117 bytes OK
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.127118) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.128988) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.129014) EVENT_LOG_v1 {"time_micros": 1764407252129005, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.129039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1297254, prev total WAL file size 1297254, number of live WAL files 2.
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.129958) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1256KB)], [182(14MB)]
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252129993, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 16268343, "oldest_snapshot_seqno": -1}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 11139 keys, 14302855 bytes, temperature: kUnknown
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252253108, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 14302855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14231137, "index_size": 42741, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27909, "raw_key_size": 295568, "raw_average_key_size": 26, "raw_value_size": 14036251, "raw_average_value_size": 1260, "num_data_blocks": 1620, "num_entries": 11139, "num_filter_entries": 11139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.253502) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 14302855 bytes
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.255189) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.0 rd, 116.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.3 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(23.8) write-amplify(11.1) OK, records in: 11654, records dropped: 515 output_compression: NoCompression
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.255219) EVENT_LOG_v1 {"time_micros": 1764407252255205, "job": 114, "event": "compaction_finished", "compaction_time_micros": 123233, "compaction_time_cpu_micros": 32170, "output_level": 6, "num_output_files": 1, "total_output_size": 14302855, "num_input_records": 11654, "num_output_records": 11139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252255765, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407252260768, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.129861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.260873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.260880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.260883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.260886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:07:32.260889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
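The amplification and throughput figures in the JOB 114 compaction summary above follow directly from the byte counts logged by the EVENT_LOG_v1 entries; a sketch that recomputes them:

```python
# Recompute the JOB 114 compaction summary figures from the logged counts.
l0_in = 1_287_117        # flushed L0 input, file 184 ("file_size")
total_in = 16_268_343    # "input_data_size" (L0 file 184 + L6 file 182)
out = 14_302_855         # compacted L6 output, file 185 ("total_output_size")
secs = 123_233 / 1e6     # "compaction_time_micros"

write_amplify = out / l0_in             # ~11.1, as logged
rw_amplify = (total_in + out) / l0_in   # ~23.8, as logged
rd_mb_s = total_in / secs / 1e6         # ~132.0 MB/sec rd, as logged
wr_mb_s = out / secs / 1e6              # ~116.1 MB/sec wr, as logged

assert round(write_amplify, 1) == 11.1
assert round(rw_amplify, 1) == 23.8
assert round(rd_mb_s, 1) == 132.0
assert round(wr_mb_s, 1) == 116.1
```

The high write amplification is expected for a manual L0-to-L6 compaction of a small monitor store: a 1.2 MB flush forces a rewrite of the whole 14 MB bottom-level file.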
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.342 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.343 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.399 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 04:07:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.561 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.562 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.573 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.574 252257 INFO nova.compute.claims [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Claim successful on node compute-0.ctlplane.example.com
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.677 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:32 np0005539563 nova_compute[252253]: 2025-11-29 09:07:32.938 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:07:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/916988230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.380 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.388 252257 DEBUG nova.compute.provider_tree [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.495 252257 DEBUG nova.scheduler.client.report [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.596 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.597 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
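During the resource claim above, Nova shells out to `ceph df --format=json` to size the RBD-backed disk inventory. A minimal sketch of consuming that output; the JSON shape and the sample numbers are illustrative assumptions chosen to match the pgmap lines (1.5 GiB used, 19 GiB / 21 GiB avail), not captured from this host:

```python
import json

# Assumed shape of `ceph df --format=json` output (sketch): cluster-wide
# totals under "stats", per-pool figures under "pools". Sample numbers
# are illustrative, picked to match the pgmap lines in this log.
sample = json.loads("""
{"stats": {"total_bytes": 22548578304,
           "total_used_bytes": 1610612736,
           "total_avail_bytes": 20401094656},
 "pools": [{"name": "volumes", "id": 4,
            "stats": {"bytes_used": 48734208, "max_avail": 6442450944}}]}
""")

total_gib = sample["stats"]["total_bytes"] / 2**30
avail_gib = sample["stats"]["total_avail_bytes"] / 2**30
print(f"{avail_gib:.0f} GiB / {total_gib:.0f} GiB avail")
```

Running the probe as `client.openstack` is also what produces the paired `handle_command mon_command({"prefix": "df", ...})` audit lines on the monitor.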
Nov 29 04:07:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.739 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.740 252257 DEBUG nova.network.neutron [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 04:07:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.877 252257 INFO nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 04:07:33 np0005539563 nova_compute[252253]: 2025-11-29 09:07:33.924 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.353 252257 DEBUG nova.policy [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.658 252257 INFO nova.virt.block_device [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Booting with volume f02bbf94-8afe-4780-9c81-24b2a1512122 at /dev/vda
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.796 252257 DEBUG os_brick.utils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.798 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.807 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.808 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[4c04fe0c-b07f-4a0c-9030-f20c2aa9fa55]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.809 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.816 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.816 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[73111bd4-262a-4270-8a6f-7c8995a11d98]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.818 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.824 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.825 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[733bdf9e-92f1-4bed-b2cf-58708090e7a1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.826 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[d8382e38-bcd5-4cde-9d9e-180271aefe39]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.826 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.858 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.863 252257 DEBUG os_brick.initiator.connectors.lightos [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.863 252257 DEBUG os_brick.initiator.connectors.lightos [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.864 252257 DEBUG os_brick.initiator.connectors.lightos [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.865 252257 DEBUG os_brick.utils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 04:07:34 np0005539563 nova_compute[252253]: 2025-11-29 09:07:34.866 252257 DEBUG nova.virt.block_device [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Updating existing volume attachment record: b2e14335-0515-425b-923e-a26445434c69 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 04:07:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:34.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:35 np0005539563 nova_compute[252253]: 2025-11-29 09:07:35.718 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 29 04:07:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:35.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.509 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.510 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.511 252257 INFO nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Creating image(s)#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.511 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.511 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Ensure instance console log exists: /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.512 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.512 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.512 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.513 252257 DEBUG nova.network.neutron [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Successfully created port: 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:07:36 np0005539563 nova_compute[252253]: 2025-11-29 09:07:36.693 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:07:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:36.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.560 252257 DEBUG nova.network.neutron [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Successfully updated port: 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.582 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-15d81344-77fd-48da-8fd6-97b50ae519d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.582 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-15d81344-77fd-48da-8fd6-97b50ae519d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.583 252257 DEBUG nova.network.neutron [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.708 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.786 252257 DEBUG nova.compute.manager [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-changed-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.786 252257 DEBUG nova.compute.manager [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Refreshing instance network info cache due to event network-changed-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.786 252257 DEBUG oslo_concurrency.lockutils [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-15d81344-77fd-48da-8fd6-97b50ae519d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:07:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:37 np0005539563 nova_compute[252253]: 2025-11-29 09:07:37.872 252257 DEBUG nova.network.neutron [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:07:38 np0005539563 nova_compute[252253]: 2025-11-29 09:07:38.664 252257 DEBUG nova.network.neutron [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Updating instance_info_cache with network_info: [{"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:07:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:38.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.015 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-15d81344-77fd-48da-8fd6-97b50ae519d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.016 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Instance network_info: |[{"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.016 252257 DEBUG oslo_concurrency.lockutils [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-15d81344-77fd-48da-8fd6-97b50ae519d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.017 252257 DEBUG nova.network.neutron [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Refreshing network info cache for port 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.020 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Start _get_guest_xml network_info=[{"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f02bbf94-8afe-4780-9c81-24b2a1512122', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f02bbf94-8afe-4780-9c81-24b2a1512122', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '15d81344-77fd-48da-8fd6-97b50ae519d8', 'attached_at': '', 'detached_at': '', 'volume_id': 'f02bbf94-8afe-4780-9c81-24b2a1512122', 'serial': 'f02bbf94-8afe-4780-9c81-24b2a1512122'}, 'attachment_id': 'b2e14335-0515-425b-923e-a26445434c69', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.025 252257 WARNING nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.030 252257 DEBUG nova.virt.libvirt.host [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.031 252257 DEBUG nova.virt.libvirt.host [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.033 252257 DEBUG nova.virt.libvirt.host [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.034 252257 DEBUG nova.virt.libvirt.host [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.035 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.035 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.036 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.036 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.036 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.036 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.037 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.037 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.037 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.037 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.037 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.038 252257 DEBUG nova.virt.hardware [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.070 252257 DEBUG nova.storage.rbd_utils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 15d81344-77fd-48da-8fd6-97b50ae519d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.075 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:07:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2538792099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.551 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.803 252257 DEBUG os_brick.encryptors [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd358f63b-9db0-4a4f-856d-06365a5bcd6e', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f02bbf94-8afe-4780-9c81-24b2a1512122', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f02bbf94-8afe-4780-9c81-24b2a1512122', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '15d81344-77fd-48da-8fd6-97b50ae519d8', 'attached_at': '', 'detached_at': '', 'volume_id': 'f02bbf94-8afe-4780-9c81-24b2a1512122', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.808 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Nov 29 04:07:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:39.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.832 252257 DEBUG barbicanclient.v1.secrets [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.833 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.858 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.858 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.928 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.929 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.980 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:39 np0005539563 nova_compute[252253]: 2025-11-29 09:07:39.980 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.043 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.044 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.137 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.139 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.174 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.175 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.224 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.225 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.269 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.270 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.303 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.304 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.357 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.357 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.381 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.382 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.459 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.459 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.600 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.600 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.633 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.633 252257 INFO barbicanclient.base [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Calculated Secrets uuid ref: secrets/d358f63b-9db0-4a4f-856d-06365a5bcd6e#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.673 252257 DEBUG barbicanclient.client [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.674 252257 DEBUG nova.virt.libvirt.host [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 04:07:40 np0005539563 nova_compute[252253]:  <usage type="volume">
Nov 29 04:07:40 np0005539563 nova_compute[252253]:    <volume>f02bbf94-8afe-4780-9c81-24b2a1512122</volume>
Nov 29 04:07:40 np0005539563 nova_compute[252253]:  </usage>
Nov 29 04:07:40 np0005539563 nova_compute[252253]: </secret>
Nov 29 04:07:40 np0005539563 nova_compute[252253]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.721 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.890 252257 DEBUG nova.virt.libvirt.vif [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-561284136',display_name='tempest-TestVolumeBootPattern-server-561284136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-561284136',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-l3d1y9v1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:07:34Z,user_data=None,user_id='5
ff561a95dc44b9fb9f7fd8fee80f589',uuid=15d81344-77fd-48da-8fd6-97b50ae519d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.891 252257 DEBUG nova.network.os_vif_util [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.893 252257 DEBUG nova.network.os_vif_util [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:07:40 np0005539563 nova_compute[252253]: 2025-11-29 09:07:40.897 252257 DEBUG nova.objects.instance [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 15d81344-77fd-48da-8fd6-97b50ae519d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:07:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:40.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.077 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <uuid>15d81344-77fd-48da-8fd6-97b50ae519d8</uuid>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <name>instance-000000da</name>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestVolumeBootPattern-server-561284136</nova:name>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 09:07:39</nova:creationTime>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <nova:port uuid="6f74bad2-bc6a-4f93-b881-e1b6d6ee0220">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <system>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <entry name="serial">15d81344-77fd-48da-8fd6-97b50ae519d8</entry>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <entry name="uuid">15d81344-77fd-48da-8fd6-97b50ae519d8</entry>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </system>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <os>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </os>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <features>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </features>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </clock>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  <devices>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/15d81344-77fd-48da-8fd6-97b50ae519d8_disk.config">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-f02bbf94-8afe-4780-9c81-24b2a1512122">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <serial>f02bbf94-8afe-4780-9c81-24b2a1512122</serial>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <encryption format="luks">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:        <secret type="passphrase" uuid="12568ef4-24bc-415b-84b5-db2c6b5add3e"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      </encryption>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:a6:a9:75"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <target dev="tap6f74bad2-bc"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </interface>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/console.log" append="off"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </serial>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <video>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </video>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </rng>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 04:07:41 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 04:07:41 np0005539563 nova_compute[252253]:  </devices>
Nov 29 04:07:41 np0005539563 nova_compute[252253]: </domain>
Nov 29 04:07:41 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.078 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Preparing to wait for external event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.078 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.079 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.079 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.080 252257 DEBUG nova.virt.libvirt.vif [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-561284136',display_name='tempest-TestVolumeBootPattern-server-561284136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-561284136',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-l3d1y9v1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:07:34Z,user_data=None,
user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=15d81344-77fd-48da-8fd6-97b50ae519d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.081 252257 DEBUG nova.network.os_vif_util [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.082 252257 DEBUG nova.network.os_vif_util [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.083 252257 DEBUG os_vif [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.084 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.084 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.085 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.089 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.089 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f74bad2-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.090 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f74bad2-bc, col_values=(('external_ids', {'iface-id': '6f74bad2-bc6a-4f93-b881-e1b6d6ee0220', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:a9:75', 'vm-uuid': '15d81344-77fd-48da-8fd6-97b50ae519d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:41 np0005539563 NetworkManager[48981]: <info>  [1764407261.0934] manager: (tap6f74bad2-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.103 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.105 252257 INFO os_vif [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc')#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.203 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.203 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.203 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:a6:a9:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.204 252257 INFO nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Using config drive#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.233 252257 DEBUG nova.storage.rbd_utils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 15d81344-77fd-48da-8fd6-97b50ae519d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:07:41 np0005539563 nova_compute[252253]: 2025-11-29 09:07:41.693 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:41.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:42 np0005539563 nova_compute[252253]: 2025-11-29 09:07:42.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:42 np0005539563 nova_compute[252253]: 2025-11-29 09:07:42.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:07:42 np0005539563 nova_compute[252253]: 2025-11-29 09:07:42.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:43 np0005539563 nova_compute[252253]: 2025-11-29 09:07:43.512 252257 DEBUG nova.network.neutron [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Updated VIF entry in instance network info cache for port 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:07:43 np0005539563 nova_compute[252253]: 2025-11-29 09:07:43.512 252257 DEBUG nova.network.neutron [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Updating instance_info_cache with network_info: [{"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:07:43 np0005539563 nova_compute[252253]: 2025-11-29 09:07:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:43 np0005539563 nova_compute[252253]: 2025-11-29 09:07:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:43 np0005539563 nova_compute[252253]: 2025-11-29 09:07:43.806 252257 DEBUG oslo_concurrency.lockutils [req-bb6d8728-9883-41ca-83bb-fde309a7ef94 req-2182a5a2-5a2e-424a-8127-d328b57972f5 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-15d81344-77fd-48da-8fd6-97b50ae519d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:07:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:43.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:43 np0005539563 podman[411959]: 2025-11-29 09:07:43.873148607 +0000 UTC m=+0.055577226 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 04:07:43 np0005539563 podman[411960]: 2025-11-29 09:07:43.889566662 +0000 UTC m=+0.067983012 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:07:43 np0005539563 podman[411961]: 2025-11-29 09:07:43.939820452 +0000 UTC m=+0.118132369 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 04:07:44 np0005539563 nova_compute[252253]: 2025-11-29 09:07:44.342 252257 INFO nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Creating config drive at /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/disk.config#033[00m
Nov 29 04:07:44 np0005539563 nova_compute[252253]: 2025-11-29 09:07:44.347 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpazv10tha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:44 np0005539563 nova_compute[252253]: 2025-11-29 09:07:44.482 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpazv10tha" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:44 np0005539563 nova_compute[252253]: 2025-11-29 09:07:44.517 252257 DEBUG nova.storage.rbd_utils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 15d81344-77fd-48da-8fd6-97b50ae519d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:07:44 np0005539563 nova_compute[252253]: 2025-11-29 09:07:44.521 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/disk.config 15d81344-77fd-48da-8fd6-97b50ae519d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.448 252257 DEBUG oslo_concurrency.processutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/disk.config 15d81344-77fd-48da-8fd6-97b50ae519d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.927s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.450 252257 INFO nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Deleting local config drive /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8/disk.config because it was imported into RBD.#033[00m
Nov 29 04:07:45 np0005539563 kernel: tap6f74bad2-bc: entered promiscuous mode
Nov 29 04:07:45 np0005539563 NetworkManager[48981]: <info>  [1764407265.5157] manager: (tap6f74bad2-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 29 04:07:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:45Z|00939|binding|INFO|Claiming lport 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 for this chassis.
Nov 29 04:07:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:45Z|00940|binding|INFO|6f74bad2-bc6a-4f93-b881-e1b6d6ee0220: Claiming fa:16:3e:a6:a9:75 10.100.0.14
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.523 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.527 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.534 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:a9:75 10.100.0.14'], port_security=['fa:16:3e:a6:a9:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '15d81344-77fd-48da-8fd6-97b50ae519d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbe29fc0-1435-473c-891a-fae6e52fd8dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.535 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.537 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.549 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[88f27b4f-9200-431b-b338-b1e7d031b579]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.550 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.553 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.553 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c67bde-8804-4250-97bf-41e6f43f17a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.554 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0543dd8c-2de7-41b5-a677-b06386028b60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 systemd-udevd[412103]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:07:45 np0005539563 NetworkManager[48981]: <info>  [1764407265.5748] device (tap6f74bad2-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.574 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[25020453-4605-4554-b93c-e860c6f752ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 systemd-machined[213024]: New machine qemu-104-instance-000000da.
Nov 29 04:07:45 np0005539563 NetworkManager[48981]: <info>  [1764407265.5759] device (tap6f74bad2-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.584 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:45Z|00941|binding|INFO|Setting lport 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 ovn-installed in OVS
Nov 29 04:07:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:45Z|00942|binding|INFO|Setting lport 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 up in Southbound
Nov 29 04:07:45 np0005539563 systemd[1]: Started Virtual Machine qemu-104-instance-000000da.
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.589 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.601 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9811258f-3ff2-48ea-a5ce-ec05664e26a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.636 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[20800320-f0fc-4f78-a944-f9cb7226b53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 NetworkManager[48981]: <info>  [1764407265.6437] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.644 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[03091009-c0ff-4951-bef6-be9102dbd809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.675 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2328f240-14f7-48ab-9635-5fa79d98ff1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.678 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2d1364-526e-4739-8f79-8b5f4e4648d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 NetworkManager[48981]: <info>  [1764407265.7031] device (tap8aaf4606-90): carrier: link connected
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.703 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.703 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.708 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[4464308f-c273-4a70-b7fa-ab979fa57708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.726 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c4d23c-8a7f-496e-8e66-786cd2ae720a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1023347, 'reachable_time': 43236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412136, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.744 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[64ae404d-5ddc-4a0f-b371-2ccef29b954f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1023347, 'tstamp': 1023347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412137, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.764 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba82c76-54ef-4dc3-aa5a-1db966c5302b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1023347, 'reachable_time': 43236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412138, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.801 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4c2b8e-7bef-46cc-87bf-8f90495968fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:45.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.873 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7efe1dec-daf7-4891-a5e4-23c34605c208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.875 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.875 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.876 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.905 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 04:07:45 np0005539563 NetworkManager[48981]: <info>  [1764407265.9069] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.909 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.911 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.912 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:45Z|00943|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.913 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.913 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.914 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[22918dc9-7e1b-46d7-a237-73f8ef3c580a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.915 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:07:45 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:45.916 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:07:45 np0005539563 nova_compute[252253]: 2025-11-29 09:07:45.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:46 np0005539563 nova_compute[252253]: 2025-11-29 09:07:46.092 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:46 np0005539563 podman[412170]: 2025-11-29 09:07:46.328387536 +0000 UTC m=+0.044173688 container create 091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 04:07:46 np0005539563 systemd[1]: Started libpod-conmon-091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090.scope.
Nov 29 04:07:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:07:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86646b01ff1799eac31c24cac854e7d04899b3221b66254681c0eb98cb4fd637/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:46 np0005539563 podman[412170]: 2025-11-29 09:07:46.303923363 +0000 UTC m=+0.019709525 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:07:46 np0005539563 podman[412170]: 2025-11-29 09:07:46.405292067 +0000 UTC m=+0.121078229 container init 091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 04:07:46 np0005539563 podman[412170]: 2025-11-29 09:07:46.412221215 +0000 UTC m=+0.128007367 container start 091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 04:07:46 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [NOTICE]   (412190) : New worker (412192) forked
Nov 29 04:07:46 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [NOTICE]   (412190) : Loading success.
Nov 29 04:07:46 np0005539563 nova_compute[252253]: 2025-11-29 09:07:46.844 252257 DEBUG nova.compute.manager [req-c57fc5dd-e44c-4756-b01d-b650fc278464 req-18a7edff-8694-4265-ad91-da4db8001a68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:07:46 np0005539563 nova_compute[252253]: 2025-11-29 09:07:46.844 252257 DEBUG oslo_concurrency.lockutils [req-c57fc5dd-e44c-4756-b01d-b650fc278464 req-18a7edff-8694-4265-ad91-da4db8001a68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:46 np0005539563 nova_compute[252253]: 2025-11-29 09:07:46.845 252257 DEBUG oslo_concurrency.lockutils [req-c57fc5dd-e44c-4756-b01d-b650fc278464 req-18a7edff-8694-4265-ad91-da4db8001a68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:46 np0005539563 nova_compute[252253]: 2025-11-29 09:07:46.845 252257 DEBUG oslo_concurrency.lockutils [req-c57fc5dd-e44c-4756-b01d-b650fc278464 req-18a7edff-8694-4265-ad91-da4db8001a68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:46 np0005539563 nova_compute[252253]: 2025-11-29 09:07:46.845 252257 DEBUG nova.compute.manager [req-c57fc5dd-e44c-4756-b01d-b650fc278464 req-18a7edff-8694-4265-ad91-da4db8001a68 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Processing event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:07:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:47 np0005539563 nova_compute[252253]: 2025-11-29 09:07:47.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 29 04:07:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:48.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.020 252257 DEBUG nova.compute.manager [req-487127a1-b8b5-4002-8b5f-b13392b70cca req-39bbdf85-d96b-45e2-b2a0-9d85518a031a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.020 252257 DEBUG oslo_concurrency.lockutils [req-487127a1-b8b5-4002-8b5f-b13392b70cca req-39bbdf85-d96b-45e2-b2a0-9d85518a031a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.021 252257 DEBUG oslo_concurrency.lockutils [req-487127a1-b8b5-4002-8b5f-b13392b70cca req-39bbdf85-d96b-45e2-b2a0-9d85518a031a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.021 252257 DEBUG oslo_concurrency.lockutils [req-487127a1-b8b5-4002-8b5f-b13392b70cca req-39bbdf85-d96b-45e2-b2a0-9d85518a031a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.022 252257 DEBUG nova.compute.manager [req-487127a1-b8b5-4002-8b5f-b13392b70cca req-39bbdf85-d96b-45e2-b2a0-9d85518a031a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] No waiting events found dispatching network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.022 252257 WARNING nova.compute.manager [req-487127a1-b8b5-4002-8b5f-b13392b70cca req-39bbdf85-d96b-45e2-b2a0-9d85518a031a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received unexpected event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.249 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.250 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407269.2488978, 15d81344-77fd-48da-8fd6-97b50ae519d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.250 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] VM Started (Lifecycle Event)#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.255 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.259 252257 INFO nova.virt.libvirt.driver [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Instance spawned successfully.#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.259 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.297 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.303 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.306 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.307 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.307 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.307 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.308 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.308 252257 DEBUG nova.virt.libvirt.driver [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.360 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.360 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407269.2521574, 15d81344-77fd-48da-8fd6-97b50ae519d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.360 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.409 252257 INFO nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Took 12.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.410 252257 DEBUG nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.417 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.419 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407269.2543206, 15d81344-77fd-48da-8fd6-97b50ae519d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.419 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.457 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.459 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.486 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.509 252257 INFO nova.compute.manager [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Took 16.99 seconds to build instance.#033[00m
Nov 29 04:07:49 np0005539563 nova_compute[252253]: 2025-11-29 09:07:49.531 252257 DEBUG oslo_concurrency.lockutils [None req-3f3e44d9-f34f-415f-bb54-9f498ffe2b9f 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 9 op/s
Nov 29 04:07:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.709 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.710 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.711 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.711 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:07:50 np0005539563 nova_compute[252253]: 2025-11-29 09:07:50.712 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:50.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.094 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697145045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.138 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.215 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.215 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.361 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.362 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4066MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.362 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.363 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.433 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 15d81344-77fd-48da-8fd6-97b50ae519d8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.434 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.434 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.485 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 04:07:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:51.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753362989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.902 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.911 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.939 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.970 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:07:51 np0005539563 nova_compute[252253]: 2025-11-29 09:07:51.971 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:52 np0005539563 nova_compute[252253]: 2025-11-29 09:07:52.714 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:52.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.580 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.580 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.580 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.581 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.581 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.583 252257 INFO nova.compute.manager [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Terminating instance#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.584 252257 DEBUG nova.compute.manager [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:07:53 np0005539563 kernel: tap6f74bad2-bc (unregistering): left promiscuous mode
Nov 29 04:07:53 np0005539563 NetworkManager[48981]: <info>  [1764407273.6331] device (tap6f74bad2-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.647 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:53 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:53Z|00944|binding|INFO|Releasing lport 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 from this chassis (sb_readonly=0)
Nov 29 04:07:53 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:53Z|00945|binding|INFO|Setting lport 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 down in Southbound
Nov 29 04:07:53 np0005539563 ovn_controller[148841]: 2025-11-29T09:07:53Z|00946|binding|INFO|Removing iface tap6f74bad2-bc ovn-installed in OVS
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.664 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:a9:75 10.100.0.14'], port_security=['fa:16:3e:a6:a9:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '15d81344-77fd-48da-8fd6-97b50ae519d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe29fc0-1435-473c-891a-fae6e52fd8dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.665 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.666 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.667 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.668 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f96fe6b0-8b93-43f5-8295-a64cdd7db1bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.669 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore#033[00m
Nov 29 04:07:53 np0005539563 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000da.scope: Deactivated successfully.
Nov 29 04:07:53 np0005539563 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000da.scope: Consumed 4.122s CPU time.
Nov 29 04:07:53 np0005539563 systemd-machined[213024]: Machine qemu-104-instance-000000da terminated.
Nov 29 04:07:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 29 04:07:53 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [NOTICE]   (412190) : haproxy version is 2.8.14-c23fe91
Nov 29 04:07:53 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [NOTICE]   (412190) : path to executable is /usr/sbin/haproxy
Nov 29 04:07:53 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [WARNING]  (412190) : Exiting Master process...
Nov 29 04:07:53 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [ALERT]    (412190) : Current worker (412192) exited with code 143 (Terminated)
Nov 29 04:07:53 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[412186]: [WARNING]  (412190) : All workers exited. Exiting... (0)
Nov 29 04:07:53 np0005539563 systemd[1]: libpod-091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090.scope: Deactivated successfully.
Nov 29 04:07:53 np0005539563 podman[412316]: 2025-11-29 09:07:53.80235914 +0000 UTC m=+0.046608553 container died 091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.817 252257 INFO nova.virt.libvirt.driver [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Instance destroyed successfully.#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.819 252257 DEBUG nova.objects.instance [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid 15d81344-77fd-48da-8fd6-97b50ae519d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:07:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090-userdata-shm.mount: Deactivated successfully.
Nov 29 04:07:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-86646b01ff1799eac31c24cac854e7d04899b3221b66254681c0eb98cb4fd637-merged.mount: Deactivated successfully.
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.839 252257 DEBUG nova.virt.libvirt.vif [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-561284136',display_name='tempest-TestVolumeBootPattern-server-561284136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-561284136',id=218,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:07:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-l3d1y9v1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:07:49Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=15d81344-77fd-48da-8fd6-97b50ae519d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.840 252257 DEBUG nova.network.os_vif_util [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "address": "fa:16:3e:a6:a9:75", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f74bad2-bc", "ovs_interfaceid": "6f74bad2-bc6a-4f93-b881-e1b6d6ee0220", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.841 252257 DEBUG nova.network.os_vif_util [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.841 252257 DEBUG os_vif [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:07:53 np0005539563 podman[412316]: 2025-11-29 09:07:53.84224599 +0000 UTC m=+0.086495403 container cleanup 091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.843 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.844 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f74bad2-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:53.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.846 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.849 252257 INFO os_vif [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:a9:75,bridge_name='br-int',has_traffic_filtering=True,id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f74bad2-bc')#033[00m
Nov 29 04:07:53 np0005539563 systemd[1]: libpod-conmon-091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090.scope: Deactivated successfully.
Nov 29 04:07:53 np0005539563 podman[412353]: 2025-11-29 09:07:53.903135599 +0000 UTC m=+0.038912175 container remove 091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.909 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[12d4ce21-3b3d-4e03-9857-773a500ce4cf]: (4, ('Sat Nov 29 09:07:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090)\n091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090\nSat Nov 29 09:07:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090)\n091703f64d97ed3e6f6ecac8aee4ead486dbceb79736ffc68b0b1aec55670090\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.911 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e65188b8-762c-4e85-9cf7-5f9a04ff4756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.912 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.913 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:53 np0005539563 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 04:07:53 np0005539563 nova_compute[252253]: 2025-11-29 09:07:53.937 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.940 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb009d7-662f-48fc-8f2c-4a47465cb70b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.957 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5c7abe-60c3-47be-b36b-bcda86ceecc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.958 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7f47c14d-1971-4551-a014-27a016f02017]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.978 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1e69cd-e8af-4a08-b066-6488d5d1a958]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1023340, 'reachable_time': 35289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412386, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.981 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:07:53 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:53.981 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[c2faf717-7833-48af-bf49-a860aabe5b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:07:53 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.073 252257 INFO nova.virt.libvirt.driver [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Deleting instance files /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8_del#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.073 252257 INFO nova.virt.libvirt.driver [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Deletion of /var/lib/nova/instances/15d81344-77fd-48da-8fd6-97b50ae519d8_del complete#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.146 252257 INFO nova.compute.manager [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.147 252257 DEBUG oslo.service.loopingcall [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.147 252257 DEBUG nova.compute.manager [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.147 252257 DEBUG nova.network.neutron [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.737 252257 DEBUG nova.compute.manager [req-aa32029f-f5fa-45f6-8418-b64d287119b8 req-79d54831-396a-4a38-bc10-8df4f406e399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-vif-unplugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.738 252257 DEBUG oslo_concurrency.lockutils [req-aa32029f-f5fa-45f6-8418-b64d287119b8 req-79d54831-396a-4a38-bc10-8df4f406e399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.739 252257 DEBUG oslo_concurrency.lockutils [req-aa32029f-f5fa-45f6-8418-b64d287119b8 req-79d54831-396a-4a38-bc10-8df4f406e399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.739 252257 DEBUG oslo_concurrency.lockutils [req-aa32029f-f5fa-45f6-8418-b64d287119b8 req-79d54831-396a-4a38-bc10-8df4f406e399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.739 252257 DEBUG nova.compute.manager [req-aa32029f-f5fa-45f6-8418-b64d287119b8 req-79d54831-396a-4a38-bc10-8df4f406e399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] No waiting events found dispatching network-vif-unplugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:07:54 np0005539563 nova_compute[252253]: 2025-11-29 09:07:54.740 252257 DEBUG nova.compute.manager [req-aa32029f-f5fa-45f6-8418-b64d287119b8 req-79d54831-396a-4a38-bc10-8df4f406e399 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-vif-unplugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:07:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:54.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 22 op/s
Nov 29 04:07:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:07:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 04:07:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 04:07:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 04:07:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 04:07:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:07:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:56.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.717 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 21 op/s
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.767 252257 DEBUG nova.compute.manager [req-997bbd69-269c-4e2e-8de5-7da4800b6ff7 req-cfb6f310-a04d-42de-b71f-1fc4a5b6ab1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.768 252257 DEBUG oslo_concurrency.lockutils [req-997bbd69-269c-4e2e-8de5-7da4800b6ff7 req-cfb6f310-a04d-42de-b71f-1fc4a5b6ab1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.768 252257 DEBUG oslo_concurrency.lockutils [req-997bbd69-269c-4e2e-8de5-7da4800b6ff7 req-cfb6f310-a04d-42de-b71f-1fc4a5b6ab1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.768 252257 DEBUG oslo_concurrency.lockutils [req-997bbd69-269c-4e2e-8de5-7da4800b6ff7 req-cfb6f310-a04d-42de-b71f-1fc4a5b6ab1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.769 252257 DEBUG nova.compute.manager [req-997bbd69-269c-4e2e-8de5-7da4800b6ff7 req-cfb6f310-a04d-42de-b71f-1fc4a5b6ab1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] No waiting events found dispatching network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:07:57 np0005539563 nova_compute[252253]: 2025-11-29 09:07:57.769 252257 WARNING nova.compute.manager [req-997bbd69-269c-4e2e-8de5-7da4800b6ff7 req-cfb6f310-a04d-42de-b71f-1fc4a5b6ab1d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received unexpected event network-vif-plugged-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:57 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:57.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f8ab7ac1-8035-4281-b3c7-8e1eec21cbf8 does not exist
Nov 29 04:07:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 56d86e53-390c-43ce-bdeb-3029615a55c9 does not exist
Nov 29 04:07:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5341f396-8b4f-4e7f-9e64-8f8411627e66 does not exist
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:07:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:58.573 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:07:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:07:58.574 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.618 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.723 252257 DEBUG nova.network.neutron [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.755 252257 DEBUG nova.compute.manager [req-ac994826-dfa8-4e13-a92a-bc6257ab0010 req-f3228282-fa19-4e9b-8138-eb9abdecfd26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Received event network-vif-deleted-6f74bad2-bc6a-4f93-b881-e1b6d6ee0220 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.756 252257 INFO nova.compute.manager [req-ac994826-dfa8-4e13-a92a-bc6257ab0010 req-f3228282-fa19-4e9b-8138-eb9abdecfd26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Neutron deleted interface 6f74bad2-bc6a-4f93-b881-e1b6d6ee0220; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.756 252257 DEBUG nova.network.neutron [req-ac994826-dfa8-4e13-a92a-bc6257ab0010 req-f3228282-fa19-4e9b-8138-eb9abdecfd26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.765 252257 INFO nova.compute.manager [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Took 4.62 seconds to deallocate network for instance.#033[00m
Nov 29 04:07:58 np0005539563 nova_compute[252253]: 2025-11-29 09:07:58.846 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:07:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:58.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.11596456 +0000 UTC m=+0.042397699 container create 18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_buck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 04:07:59 np0005539563 nova_compute[252253]: 2025-11-29 09:07:59.141 252257 DEBUG nova.compute.manager [req-ac994826-dfa8-4e13-a92a-bc6257ab0010 req-f3228282-fa19-4e9b-8138-eb9abdecfd26 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Detach interface failed, port_id=6f74bad2-bc6a-4f93-b881-e1b6d6ee0220, reason: Instance 15d81344-77fd-48da-8fd6-97b50ae519d8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 04:07:59 np0005539563 systemd[1]: Started libpod-conmon-18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a.scope.
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.094480399 +0000 UTC m=+0.020913518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.213375018 +0000 UTC m=+0.139808157 container init 18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.223952105 +0000 UTC m=+0.150385204 container start 18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.22823705 +0000 UTC m=+0.154670159 container attach 18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_buck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:07:59 np0005539563 ecstatic_buck[412681]: 167 167
Nov 29 04:07:59 np0005539563 systemd[1]: libpod-18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a.scope: Deactivated successfully.
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.235698982 +0000 UTC m=+0.162132101 container died 18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_buck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:59 np0005539563 nova_compute[252253]: 2025-11-29 09:07:59.237 252257 INFO nova.compute.manager [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Took 0.47 seconds to detach 1 volumes for instance.#033[00m
Nov 29 04:07:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c5562390a720786ecef9409705b9fd89ad2d90ebead1c6ac21e5592f250c393c-merged.mount: Deactivated successfully.
Nov 29 04:07:59 np0005539563 podman[412664]: 2025-11-29 09:07:59.281323818 +0000 UTC m=+0.207756927 container remove 18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:07:59 np0005539563 systemd[1]: libpod-conmon-18e998e7ad8ea3c5ded7e2530bdd331b361f5c350ee02ba289628923a4d3156a.scope: Deactivated successfully.
Nov 29 04:07:59 np0005539563 podman[412705]: 2025-11-29 09:07:59.449862042 +0000 UTC m=+0.055772291 container create 99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:07:59 np0005539563 systemd[1]: Started libpod-conmon-99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35.scope.
Nov 29 04:07:59 np0005539563 podman[412705]: 2025-11-29 09:07:59.41692614 +0000 UTC m=+0.022836399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:07:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5ab9f44647c51cb75a4106b140fc73f870dd382bef2d53fdee94ae52791254/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5ab9f44647c51cb75a4106b140fc73f870dd382bef2d53fdee94ae52791254/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5ab9f44647c51cb75a4106b140fc73f870dd382bef2d53fdee94ae52791254/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5ab9f44647c51cb75a4106b140fc73f870dd382bef2d53fdee94ae52791254/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5ab9f44647c51cb75a4106b140fc73f870dd382bef2d53fdee94ae52791254/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:59 np0005539563 podman[412705]: 2025-11-29 09:07:59.566033747 +0000 UTC m=+0.171943976 container init 99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:07:59 np0005539563 nova_compute[252253]: 2025-11-29 09:07:59.576 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:07:59 np0005539563 nova_compute[252253]: 2025-11-29 09:07:59.577 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:07:59 np0005539563 podman[412705]: 2025-11-29 09:07:59.579632025 +0000 UTC m=+0.185542244 container start 99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 04:07:59 np0005539563 podman[412705]: 2025-11-29 09:07:59.583867979 +0000 UTC m=+0.189778228 container attach 99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:07:59 np0005539563 nova_compute[252253]: 2025-11-29 09:07:59.638 252257 DEBUG oslo_concurrency.processutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:07:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 21 op/s
Nov 29 04:07:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:07:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968188615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:00 np0005539563 nova_compute[252253]: 2025-11-29 09:08:00.128 252257 DEBUG oslo_concurrency.processutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:00 np0005539563 nova_compute[252253]: 2025-11-29 09:08:00.133 252257 DEBUG nova.compute.provider_tree [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:08:00 np0005539563 eager_mendel[412721]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:08:00 np0005539563 eager_mendel[412721]: --> relative data size: 1.0
Nov 29 04:08:00 np0005539563 eager_mendel[412721]: --> All data devices are unavailable
Nov 29 04:08:00 np0005539563 systemd[1]: libpod-99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35.scope: Deactivated successfully.
Nov 29 04:08:00 np0005539563 conmon[412721]: conmon 99383d657ff4d6ace3d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35.scope/container/memory.events
Nov 29 04:08:00 np0005539563 podman[412758]: 2025-11-29 09:08:00.430584956 +0000 UTC m=+0.029154371 container died 99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1f5ab9f44647c51cb75a4106b140fc73f870dd382bef2d53fdee94ae52791254-merged.mount: Deactivated successfully.
Nov 29 04:08:00 np0005539563 podman[412758]: 2025-11-29 09:08:00.487574248 +0000 UTC m=+0.086143593 container remove 99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:08:00 np0005539563 systemd[1]: libpod-conmon-99383d657ff4d6ace3d694556dca7f53899b9b68d9839b3cef30a53fbbd89c35.scope: Deactivated successfully.
Nov 29 04:08:00 np0005539563 nova_compute[252253]: 2025-11-29 09:08:00.595 252257 DEBUG nova.scheduler.client.report [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:08:00 np0005539563 nova_compute[252253]: 2025-11-29 09:08:00.768 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:00 np0005539563 nova_compute[252253]: 2025-11-29 09:08:00.811 252257 INFO nova.scheduler.client.report [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance 15d81344-77fd-48da-8fd6-97b50ae519d8#033[00m
Nov 29 04:08:00 np0005539563 nova_compute[252253]: 2025-11-29 09:08:00.884 252257 DEBUG oslo_concurrency.lockutils [None req-c4c62b38-15da-4d8e-a8c9-8249d1f8e098 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "15d81344-77fd-48da-8fd6-97b50ae519d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.057397526 +0000 UTC m=+0.035412609 container create 206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:08:01 np0005539563 systemd[1]: Started libpod-conmon-206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd.scope.
Nov 29 04:08:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.114690048 +0000 UTC m=+0.092705151 container init 206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.120905296 +0000 UTC m=+0.098920369 container start 206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.123932008 +0000 UTC m=+0.101947121 container attach 206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 04:08:01 np0005539563 vigorous_blackwell[412930]: 167 167
Nov 29 04:08:01 np0005539563 systemd[1]: libpod-206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd.scope: Deactivated successfully.
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.126174269 +0000 UTC m=+0.104189352 container died 206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.041420245 +0000 UTC m=+0.019435348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e11acd90b0b7d46efa89e82a4ca928a73922868f20390f73a5f548e5663eeee7-merged.mount: Deactivated successfully.
Nov 29 04:08:01 np0005539563 podman[412914]: 2025-11-29 09:08:01.164163427 +0000 UTC m=+0.142178510 container remove 206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 04:08:01 np0005539563 systemd[1]: libpod-conmon-206a1f7453b41d65f40a75cfae786607a4c4de9368133026f5d929d48a63a9cd.scope: Deactivated successfully.
Nov 29 04:08:01 np0005539563 podman[412955]: 2025-11-29 09:08:01.328751864 +0000 UTC m=+0.051623729 container create 0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:08:01 np0005539563 systemd[1]: Started libpod-conmon-0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1.scope.
Nov 29 04:08:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:08:01 np0005539563 podman[412955]: 2025-11-29 09:08:01.301817615 +0000 UTC m=+0.024689570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53e3e8ffc7d73e81275f1827d50e1346418fd577a21ee9e2dd1e130b7c5609f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53e3e8ffc7d73e81275f1827d50e1346418fd577a21ee9e2dd1e130b7c5609f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53e3e8ffc7d73e81275f1827d50e1346418fd577a21ee9e2dd1e130b7c5609f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53e3e8ffc7d73e81275f1827d50e1346418fd577a21ee9e2dd1e130b7c5609f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:01 np0005539563 podman[412955]: 2025-11-29 09:08:01.413232341 +0000 UTC m=+0.136104296 container init 0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:08:01 np0005539563 podman[412955]: 2025-11-29 09:08:01.419865691 +0000 UTC m=+0.142737556 container start 0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:08:01 np0005539563 podman[412955]: 2025-11-29 09:08:01.424159687 +0000 UTC m=+0.147031592 container attach 0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:08:01 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:08:01.577 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:08:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Nov 29 04:08:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]: {
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:    "0": [
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:        {
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "devices": [
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "/dev/loop3"
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            ],
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "lv_name": "ceph_lv0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "lv_size": "7511998464",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "name": "ceph_lv0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "tags": {
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.cluster_name": "ceph",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.crush_device_class": "",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.encrypted": "0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.osd_id": "0",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.type": "block",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:                "ceph.vdo": "0"
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            },
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "type": "block",
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:            "vg_name": "ceph_vg0"
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:        }
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]:    ]
Nov 29 04:08:02 np0005539563 romantic_ishizaka[412973]: }
Nov 29 04:08:02 np0005539563 systemd[1]: libpod-0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1.scope: Deactivated successfully.
Nov 29 04:08:02 np0005539563 podman[412955]: 2025-11-29 09:08:02.170947558 +0000 UTC m=+0.893819443 container died 0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:08:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a53e3e8ffc7d73e81275f1827d50e1346418fd577a21ee9e2dd1e130b7c5609f-merged.mount: Deactivated successfully.
Nov 29 04:08:02 np0005539563 podman[412955]: 2025-11-29 09:08:02.218025012 +0000 UTC m=+0.940896877 container remove 0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:08:02 np0005539563 systemd[1]: libpod-conmon-0e2f46be9bf5d8ee005275ddb5f0ea22af74b4f99bf6fd59a7e7c2bf197597b1.scope: Deactivated successfully.
Nov 29 04:08:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:02 np0005539563 nova_compute[252253]: 2025-11-29 09:08:02.719 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:08:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1557411659' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:08:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:08:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1557411659' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:08:02 np0005539563 podman[413137]: 2025-11-29 09:08:02.85686308 +0000 UTC m=+0.052612536 container create ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:08:02 np0005539563 systemd[1]: Started libpod-conmon-ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f.scope.
Nov 29 04:08:02 np0005539563 podman[413137]: 2025-11-29 09:08:02.830367412 +0000 UTC m=+0.026116948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:08:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:02.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:02 np0005539563 podman[413137]: 2025-11-29 09:08:02.949389225 +0000 UTC m=+0.145138671 container init ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:08:02 np0005539563 podman[413137]: 2025-11-29 09:08:02.956108097 +0000 UTC m=+0.151857543 container start ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:08:02 np0005539563 podman[413137]: 2025-11-29 09:08:02.959364715 +0000 UTC m=+0.155114191 container attach ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:08:02 np0005539563 practical_snyder[413153]: 167 167
Nov 29 04:08:02 np0005539563 systemd[1]: libpod-ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f.scope: Deactivated successfully.
Nov 29 04:08:02 np0005539563 conmon[413153]: conmon ff91a27e06b24468fcde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f.scope/container/memory.events
Nov 29 04:08:02 np0005539563 podman[413137]: 2025-11-29 09:08:02.963265621 +0000 UTC m=+0.159015057 container died ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:08:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-98086f4472353dc9e5c6bed465404d9c936d435e5c43a0184266c4ee0c3f9943-merged.mount: Deactivated successfully.
Nov 29 04:08:03 np0005539563 podman[413137]: 2025-11-29 09:08:03.000131469 +0000 UTC m=+0.195880925 container remove ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 04:08:03 np0005539563 systemd[1]: libpod-conmon-ff91a27e06b24468fcde538ba3acd05c297ffe15e94a481bcdcbcc15d11b895f.scope: Deactivated successfully.
Nov 29 04:08:03 np0005539563 podman[413177]: 2025-11-29 09:08:03.165173078 +0000 UTC m=+0.039851460 container create 0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:08:03 np0005539563 systemd[1]: Started libpod-conmon-0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae.scope.
Nov 29 04:08:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:08:03 np0005539563 podman[413177]: 2025-11-29 09:08:03.147977112 +0000 UTC m=+0.022655504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6667b64993a258a0e99f8382b28293bbc4ee1ebb5fda9926724256804d088de7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6667b64993a258a0e99f8382b28293bbc4ee1ebb5fda9926724256804d088de7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6667b64993a258a0e99f8382b28293bbc4ee1ebb5fda9926724256804d088de7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6667b64993a258a0e99f8382b28293bbc4ee1ebb5fda9926724256804d088de7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:03 np0005539563 podman[413177]: 2025-11-29 09:08:03.261143696 +0000 UTC m=+0.135822108 container init 0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_darwin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:08:03 np0005539563 podman[413177]: 2025-11-29 09:08:03.275507315 +0000 UTC m=+0.150185697 container start 0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:08:03 np0005539563 podman[413177]: 2025-11-29 09:08:03.279219656 +0000 UTC m=+0.153898058 container attach 0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Nov 29 04:08:03 np0005539563 nova_compute[252253]: 2025-11-29 09:08:03.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]: {
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:        "osd_id": 0,
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:        "type": "bluestore"
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]:    }
Nov 29 04:08:04 np0005539563 nervous_darwin[413195]: }
Nov 29 04:08:04 np0005539563 systemd[1]: libpod-0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae.scope: Deactivated successfully.
Nov 29 04:08:04 np0005539563 podman[413177]: 2025-11-29 09:08:04.096768661 +0000 UTC m=+0.971447043 container died 0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6667b64993a258a0e99f8382b28293bbc4ee1ebb5fda9926724256804d088de7-merged.mount: Deactivated successfully.
Nov 29 04:08:04 np0005539563 podman[413177]: 2025-11-29 09:08:04.167299292 +0000 UTC m=+1.041977704 container remove 0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 04:08:04 np0005539563 systemd[1]: libpod-conmon-0d8c17430d29200d71ab460a8a20673618169a6527b7cdecf3963c09cffb6bae.scope: Deactivated successfully.
Nov 29 04:08:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:08:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:08:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:08:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:04.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:08:04.984 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:08:04.985 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:08:04.985 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:08:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f6c08aa6-11cb-4e86-84fb-84b1dc6ee5fe does not exist
Nov 29 04:08:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 41094217-d341-429e-9e84-514258d23976 does not exist
Nov 29 04:08:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0b91deec-01a2-4e85-bf87-59ca71fe8cad does not exist
Nov 29 04:08:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 29 04:08:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:05.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:08:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:08:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:06.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:07 np0005539563 nova_compute[252253]: 2025-11-29 09:08:07.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 04:08:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:08 np0005539563 nova_compute[252253]: 2025-11-29 09:08:08.817 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407273.8162608, 15d81344-77fd-48da-8fd6-97b50ae519d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:08:08 np0005539563 nova_compute[252253]: 2025-11-29 09:08:08.818 252257 INFO nova.compute.manager [-] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:08:08 np0005539563 nova_compute[252253]: 2025-11-29 09:08:08.904 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:08.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 04:08:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:09.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:10.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 04:08:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:12 np0005539563 nova_compute[252253]: 2025-11-29 09:08:12.724 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:12.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:08:13
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'vms', '.mgr', 'images']
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 04:08:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:13.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:13 np0005539563 nova_compute[252253]: 2025-11-29 09:08:13.944 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:14 np0005539563 podman[413335]: 2025-11-29 09:08:14.537218178 +0000 UTC m=+0.085877056 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 04:08:14 np0005539563 podman[413334]: 2025-11-29 09:08:14.544640809 +0000 UTC m=+0.097588793 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 04:08:14 np0005539563 podman[413336]: 2025-11-29 09:08:14.57348115 +0000 UTC m=+0.118862719 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:08:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:14.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 255 B/s wr, 11 op/s
Nov 29 04:08:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:08:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 18K writes, 81K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1480 writes, 6584 keys, 1480 commit groups, 1.0 writes per commit group, ingest: 10.14 MB, 0.02 MB/s#012Interval WAL: 1480 writes, 1480 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     18.7      5.94              0.39        57    0.104       0      0       0.0       0.0#012  L6      1/0   13.64 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     40.9     35.2     17.00              1.83        56    0.304    463K    30K       0.0       0.0#012 Sum      1/0   13.64 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     30.3     30.9     22.94              2.22       113    0.203    463K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    100.1    101.8      0.88              0.25        12    0.073     68K   3119       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     40.9     35.2     17.00              1.83        56    0.304    463K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     18.7      5.94              0.39        56    0.106       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.108, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.69 GB write, 0.10 MB/s write, 0.68 GB read, 0.10 MB/s read, 22.9 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 78.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000702 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4323,75.00 MB,24.6704%) FilterBlock(114,1.24 MB,0.40887%) IndexBlock(114,2.02 MB,0.666081%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 04:08:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:16.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:17 np0005539563 nova_compute[252253]: 2025-11-29 09:08:17.727 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:17.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:18.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:18 np0005539563 nova_compute[252253]: 2025-11-29 09:08:18.996 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:19.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:19 np0005539563 nova_compute[252253]: 2025-11-29 09:08:19.967 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:21.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:22 np0005539563 nova_compute[252253]: 2025-11-29 09:08:22.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:22 np0005539563 nova_compute[252253]: 2025-11-29 09:08:22.764 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:23 np0005539563 nova_compute[252253]: 2025-11-29 09:08:23.812 252257 DEBUG nova.compute.manager [None req-d2024486-e6c3-44c1-865c-20806e4033c8 - - - - - -] [instance: 15d81344-77fd-48da-8fd6-97b50ae519d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 04:08:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:24 np0005539563 nova_compute[252253]: 2025-11-29 09:08:24.037 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:08:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:24.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:26 np0005539563 nova_compute[252253]: 2025-11-29 09:08:26.008 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:26 np0005539563 nova_compute[252253]: 2025-11-29 09:08:26.008 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 04:08:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:27 np0005539563 nova_compute[252253]: 2025-11-29 09:08:27.270 252257 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.53 sec
Nov 29 04:08:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:27 np0005539563 nova_compute[252253]: 2025-11-29 09:08:27.808 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:27.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:28 np0005539563 ovn_controller[148841]: 2025-11-29T09:08:28Z|00947|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Nov 29 04:08:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:29 np0005539563 nova_compute[252253]: 2025-11-29 09:08:29.082 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 684 KiB/s rd, 2 op/s
Nov 29 04:08:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:29.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3 op/s
Nov 29 04:08:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:31.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:32 np0005539563 nova_compute[252253]: 2025-11-29 09:08:32.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:32 np0005539563 nova_compute[252253]: 2025-11-29 09:08:32.807 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 04:08:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:33.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:34 np0005539563 nova_compute[252253]: 2025-11-29 09:08:34.124 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 04:08:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:35.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 29 04:08:37 np0005539563 nova_compute[252253]: 2025-11-29 09:08:37.854 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:39 np0005539563 nova_compute[252253]: 2025-11-29 09:08:39.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.226547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319226699, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 833, "num_deletes": 250, "total_data_size": 1173047, "memory_usage": 1198640, "flush_reason": "Manual Compaction"}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319343979, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 761991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81441, "largest_seqno": 82272, "table_properties": {"data_size": 758458, "index_size": 1312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9579, "raw_average_key_size": 20, "raw_value_size": 750852, "raw_average_value_size": 1639, "num_data_blocks": 57, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407252, "oldest_key_time": 1764407252, "file_creation_time": 1764407319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 117435 microseconds, and 3729 cpu microseconds.
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.344054) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 761991 bytes OK
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.344091) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.347071) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.347108) EVENT_LOG_v1 {"time_micros": 1764407319347099, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.347129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 1168994, prev total WAL file size 1185037, number of live WAL files 2.
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.348017) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303138' seq:72057594037927935, type:22 .. '6D6772737461740033323639' seq:0, type:0; will stop at (end)
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(744KB)], [185(13MB)]
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319348094, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 15064846, "oldest_snapshot_seqno": -1}
Nov 29 04:08:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 11105 keys, 11615171 bytes, temperature: kUnknown
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319814249, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 11615171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11547634, "index_size": 38688, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 295058, "raw_average_key_size": 26, "raw_value_size": 11357461, "raw_average_value_size": 1022, "num_data_blocks": 1452, "num_entries": 11105, "num_filter_entries": 11105, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.814665) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11615171 bytes
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.817572) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.3 rd, 24.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.6 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(35.0) write-amplify(15.2) OK, records in: 11597, records dropped: 492 output_compression: NoCompression
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.817624) EVENT_LOG_v1 {"time_micros": 1764407319817605, "job": 116, "event": "compaction_finished", "compaction_time_micros": 466273, "compaction_time_cpu_micros": 35241, "output_level": 6, "num_output_files": 1, "total_output_size": 11615171, "num_input_records": 11597, "num_output_records": 11105, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319818522, "job": 116, "event": "table_file_deletion", "file_number": 187}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407319821406, "job": 116, "event": "table_file_deletion", "file_number": 185}
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.347793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.821589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.821597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.821599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.821601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:08:39.821603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:08:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 170 B/s wr, 4 op/s
Nov 29 04:08:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:41.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:42 np0005539563 nova_compute[252253]: 2025-11-29 09:08:42.856 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 685 KiB/s rd, 255 B/s wr, 4 op/s
Nov 29 04:08:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:44 np0005539563 nova_compute[252253]: 2025-11-29 09:08:44.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:44.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:45 np0005539563 podman[413517]: 2025-11-29 09:08:45.525620607 +0000 UTC m=+0.069226805 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:45 np0005539563 podman[413518]: 2025-11-29 09:08:45.534033695 +0000 UTC m=+0.078727632 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:08:45 np0005539563 podman[413519]: 2025-11-29 09:08:45.557384357 +0000 UTC m=+0.101723225 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:08:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 04:08:45 np0005539563 nova_compute[252253]: 2025-11-29 09:08:45.861 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:45 np0005539563 nova_compute[252253]: 2025-11-29 09:08:45.861 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:45 np0005539563 nova_compute[252253]: 2025-11-29 09:08:45.861 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:45 np0005539563 nova_compute[252253]: 2025-11-29 09:08:45.861 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:45 np0005539563 nova_compute[252253]: 2025-11-29 09:08:45.862 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:45 np0005539563 nova_compute[252253]: 2025-11-29 09:08:45.862 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:08:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:46.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:47 np0005539563 nova_compute[252253]: 2025-11-29 09:08:47.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:47 np0005539563 nova_compute[252253]: 2025-11-29 09:08:47.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:08:47 np0005539563 nova_compute[252253]: 2025-11-29 09:08:47.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:08:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 04:08:47 np0005539563 nova_compute[252253]: 2025-11-29 09:08:47.858 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:47.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:48.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:49 np0005539563 nova_compute[252253]: 2025-11-29 09:08:49.133 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Nov 29 04:08:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:50 np0005539563 nova_compute[252253]: 2025-11-29 09:08:50.469 252257 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 3.20 sec#033[00m
Nov 29 04:08:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:08:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:50.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:08:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Nov 29 04:08:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:52 np0005539563 nova_compute[252253]: 2025-11-29 09:08:52.856 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:08:52 np0005539563 nova_compute[252253]: 2025-11-29 09:08:52.856 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:52 np0005539563 nova_compute[252253]: 2025-11-29 09:08:52.857 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:08:52 np0005539563 nova_compute[252253]: 2025-11-29 09:08:52.914 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 255 B/s wr, 0 op/s
Nov 29 04:08:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:54 np0005539563 nova_compute[252253]: 2025-11-29 09:08:54.136 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:55.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 170 B/s wr, 0 op/s
Nov 29 04:08:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:56 np0005539563 nova_compute[252253]: 2025-11-29 09:08:56.759 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:56 np0005539563 nova_compute[252253]: 2025-11-29 09:08:56.760 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:56 np0005539563 nova_compute[252253]: 2025-11-29 09:08:56.760 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:08:56 np0005539563 nova_compute[252253]: 2025-11-29 09:08:56.760 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:08:56 np0005539563 nova_compute[252253]: 2025-11-29 09:08:56.760 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:08:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:57.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903914103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:57 np0005539563 nova_compute[252253]: 2025-11-29 09:08:57.230 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:08:57 np0005539563 nova_compute[252253]: 2025-11-29 09:08:57.392 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:08:57 np0005539563 nova_compute[252253]: 2025-11-29 09:08:57.394 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4129MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:08:57 np0005539563 nova_compute[252253]: 2025-11-29 09:08:57.395 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:08:57 np0005539563 nova_compute[252253]: 2025-11-29 09:08:57.395 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:08:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:57.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:57 np0005539563 nova_compute[252253]: 2025-11-29 09:08:57.946 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:59.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:59 np0005539563 nova_compute[252253]: 2025-11-29 09:08:59.178 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:08:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:08:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:59.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Nov 29 04:09:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:01.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:03 np0005539563 nova_compute[252253]: 2025-11-29 09:09:02.999 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Nov 29 04:09:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:03.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:04 np0005539563 nova_compute[252253]: 2025-11-29 09:09:04.181 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:09:04.985 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:09:04.985 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:09:04.985 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:05.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 KiB/s rd, 341 B/s wr, 11 op/s
Nov 29 04:09:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:09:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:09:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:09:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:09:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:07.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:07 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 27b2925c-90ce-427e-a52a-8386621258cd does not exist
Nov 29 04:09:07 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 45eabb7c-69f9-4254-8314-88306fdfa14b does not exist
Nov 29 04:09:07 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4ae7f8f6-8dfd-4cbf-8563-1632e1d26da8 does not exist
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:09:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 678 KiB/s wr, 13 op/s
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.790252906 +0000 UTC m=+0.049742988 container create a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 04:09:07 np0005539563 systemd[1]: Started libpod-conmon-a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69.scope.
Nov 29 04:09:07 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.76897913 +0000 UTC m=+0.028469262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.877518479 +0000 UTC m=+0.137008571 container init a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.885151826 +0000 UTC m=+0.144641908 container start a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.888503856 +0000 UTC m=+0.147993938 container attach a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swanson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 04:09:07 np0005539563 distracted_swanson[413948]: 167 167
Nov 29 04:09:07 np0005539563 systemd[1]: libpod-a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69.scope: Deactivated successfully.
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.892371051 +0000 UTC m=+0.151861133 container died a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swanson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:09:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-62ff09f88d0d8e80ac59fea3af876ebde7ab17d0c2dc4072354608633435db90-merged.mount: Deactivated successfully.
Nov 29 04:09:07 np0005539563 podman[413931]: 2025-11-29 09:09:07.935604022 +0000 UTC m=+0.195094104 container remove a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:09:07 np0005539563 systemd[1]: libpod-conmon-a799485d7172ed36c0d5e0670b4f12fc434f7761082ca0b3631c8094a4a14c69.scope: Deactivated successfully.
Nov 29 04:09:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:07.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.000 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:08 np0005539563 podman[413971]: 2025-11-29 09:09:08.106699074 +0000 UTC m=+0.055582656 container create 77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.132 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.132 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:09:08 np0005539563 systemd[1]: Started libpod-conmon-77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395.scope.
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.163 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:09:08 np0005539563 podman[413971]: 2025-11-29 09:09:08.080514625 +0000 UTC m=+0.029398217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:09:08 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:09:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70705d6b247c4c5f00f23d97513711c96105c6f7ba2ea7608f25021d1de9350c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.185 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.185 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:09:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70705d6b247c4c5f00f23d97513711c96105c6f7ba2ea7608f25021d1de9350c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70705d6b247c4c5f00f23d97513711c96105c6f7ba2ea7608f25021d1de9350c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70705d6b247c4c5f00f23d97513711c96105c6f7ba2ea7608f25021d1de9350c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:08 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70705d6b247c4c5f00f23d97513711c96105c6f7ba2ea7608f25021d1de9350c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:08 np0005539563 podman[413971]: 2025-11-29 09:09:08.204690628 +0000 UTC m=+0.153574270 container init 77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_einstein, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.211 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:09:08 np0005539563 podman[413971]: 2025-11-29 09:09:08.21624973 +0000 UTC m=+0.165133312 container start 77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:09:08 np0005539563 podman[413971]: 2025-11-29 09:09:08.221608606 +0000 UTC m=+0.170492168 container attach 77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.234 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.253 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:09:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:09:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:09:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660982933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.685 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:08 np0005539563 nova_compute[252253]: 2025-11-29 09:09:08.692 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:09:09 np0005539563 interesting_einstein[413987]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:09:09 np0005539563 interesting_einstein[413987]: --> relative data size: 1.0
Nov 29 04:09:09 np0005539563 interesting_einstein[413987]: --> All data devices are unavailable
Nov 29 04:09:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:09.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:09 np0005539563 systemd[1]: libpod-77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395.scope: Deactivated successfully.
Nov 29 04:09:09 np0005539563 podman[413971]: 2025-11-29 09:09:09.034475545 +0000 UTC m=+0.983359087 container died 77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:09:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-70705d6b247c4c5f00f23d97513711c96105c6f7ba2ea7608f25021d1de9350c-merged.mount: Deactivated successfully.
Nov 29 04:09:09 np0005539563 podman[413971]: 2025-11-29 09:09:09.09302699 +0000 UTC m=+1.041910532 container remove 77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_einstein, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:09:09 np0005539563 systemd[1]: libpod-conmon-77969aff1750a70da241352e281719e0ccaa0c7caf899a37dbf02e3d0b8b1395.scope: Deactivated successfully.
Nov 29 04:09:09 np0005539563 nova_compute[252253]: 2025-11-29 09:09:09.182 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:09 np0005539563 podman[414176]: 2025-11-29 09:09:09.619391422 +0000 UTC m=+0.022455849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:09:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 04:09:09 np0005539563 podman[414176]: 2025-11-29 09:09:09.942822999 +0000 UTC m=+0.345887406 container create 81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:09:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:09.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:10 np0005539563 systemd[1]: Started libpod-conmon-81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3.scope.
Nov 29 04:09:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:09:10 np0005539563 podman[414176]: 2025-11-29 09:09:10.541413306 +0000 UTC m=+0.944477743 container init 81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 29 04:09:10 np0005539563 podman[414176]: 2025-11-29 09:09:10.547587463 +0000 UTC m=+0.950651870 container start 81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:09:10 np0005539563 friendly_herschel[414193]: 167 167
Nov 29 04:09:10 np0005539563 systemd[1]: libpod-81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3.scope: Deactivated successfully.
Nov 29 04:09:10 np0005539563 podman[414176]: 2025-11-29 09:09:10.561220033 +0000 UTC m=+0.964284440 container attach 81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:09:10 np0005539563 podman[414176]: 2025-11-29 09:09:10.561644724 +0000 UTC m=+0.964709131 container died 81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:09:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c520cbf50b30b9e531c9389573583c69cf9b68d6a850dc5c7f7ff638ad754528-merged.mount: Deactivated successfully.
Nov 29 04:09:10 np0005539563 podman[414176]: 2025-11-29 09:09:10.599848409 +0000 UTC m=+1.002912816 container remove 81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:09:10 np0005539563 systemd[1]: libpod-conmon-81009e1814ecec83b74d694b9e15630ff145621568a6357491a1e5bc6e1f76e3.scope: Deactivated successfully.
Nov 29 04:09:10 np0005539563 podman[414218]: 2025-11-29 09:09:10.761964758 +0000 UTC m=+0.044674180 container create c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 04:09:10 np0005539563 systemd[1]: Started libpod-conmon-c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5.scope.
Nov 29 04:09:10 np0005539563 podman[414218]: 2025-11-29 09:09:10.741181365 +0000 UTC m=+0.023890797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:09:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:09:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5b16046bcc43ac9a8162bb1540aa1c8acc6ccb7808bc233e276a9b71c4a7e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5b16046bcc43ac9a8162bb1540aa1c8acc6ccb7808bc233e276a9b71c4a7e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5b16046bcc43ac9a8162bb1540aa1c8acc6ccb7808bc233e276a9b71c4a7e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5b16046bcc43ac9a8162bb1540aa1c8acc6ccb7808bc233e276a9b71c4a7e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:10 np0005539563 podman[414218]: 2025-11-29 09:09:10.853587198 +0000 UTC m=+0.136296630 container init c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:09:10 np0005539563 podman[414218]: 2025-11-29 09:09:10.868851582 +0000 UTC m=+0.151560994 container start c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:09:10 np0005539563 podman[414218]: 2025-11-29 09:09:10.872621854 +0000 UTC m=+0.155331286 container attach c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:09:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:11 np0005539563 serene_raman[414234]: {
Nov 29 04:09:11 np0005539563 serene_raman[414234]:    "0": [
Nov 29 04:09:11 np0005539563 serene_raman[414234]:        {
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "devices": [
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "/dev/loop3"
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            ],
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "lv_name": "ceph_lv0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "lv_size": "7511998464",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "name": "ceph_lv0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "tags": {
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.cluster_name": "ceph",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.crush_device_class": "",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.encrypted": "0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.osd_id": "0",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.type": "block",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:                "ceph.vdo": "0"
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            },
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "type": "block",
Nov 29 04:09:11 np0005539563 serene_raman[414234]:            "vg_name": "ceph_vg0"
Nov 29 04:09:11 np0005539563 serene_raman[414234]:        }
Nov 29 04:09:11 np0005539563 serene_raman[414234]:    ]
Nov 29 04:09:11 np0005539563 serene_raman[414234]: }
Nov 29 04:09:11 np0005539563 systemd[1]: libpod-c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5.scope: Deactivated successfully.
Nov 29 04:09:11 np0005539563 podman[414218]: 2025-11-29 09:09:11.582382211 +0000 UTC m=+0.865091623 container died c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:09:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2c5b16046bcc43ac9a8162bb1540aa1c8acc6ccb7808bc233e276a9b71c4a7e8-merged.mount: Deactivated successfully.
Nov 29 04:09:11 np0005539563 podman[414218]: 2025-11-29 09:09:11.634472252 +0000 UTC m=+0.917181664 container remove c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:09:11 np0005539563 systemd[1]: libpod-conmon-c78dfe4596c7a23a16ad02ba3da2398be8bad9644f0aff2604afa86e4251efb5.scope: Deactivated successfully.
Nov 29 04:09:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 04:09:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:11.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.305883151 +0000 UTC m=+0.050566240 container create 529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sammet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 04:09:12 np0005539563 systemd[1]: Started libpod-conmon-529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee.scope.
Nov 29 04:09:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.280824753 +0000 UTC m=+0.025507882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.386919645 +0000 UTC m=+0.131602754 container init 529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sammet, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.395155308 +0000 UTC m=+0.139838397 container start 529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sammet, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.399487285 +0000 UTC m=+0.144170384 container attach 529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sammet, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:09:12 np0005539563 infallible_sammet[414416]: 167 167
Nov 29 04:09:12 np0005539563 systemd[1]: libpod-529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee.scope: Deactivated successfully.
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.401638034 +0000 UTC m=+0.146321133 container died 529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:09:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-27e21e25b1ee0ca55334b7a88d6b646bd2c0bcf8291e779ec16c0198e6ca7362-merged.mount: Deactivated successfully.
Nov 29 04:09:12 np0005539563 podman[414399]: 2025-11-29 09:09:12.441484232 +0000 UTC m=+0.186167321 container remove 529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 04:09:12 np0005539563 systemd[1]: libpod-conmon-529447010ffb1a5029d8cabd2e220072d563a24deaa1e69446681c4e376a66ee.scope: Deactivated successfully.
Nov 29 04:09:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:12 np0005539563 podman[414439]: 2025-11-29 09:09:12.638547208 +0000 UTC m=+0.045690768 container create 0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meninsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:09:12 np0005539563 systemd[1]: Started libpod-conmon-0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9.scope.
Nov 29 04:09:12 np0005539563 podman[414439]: 2025-11-29 09:09:12.616297465 +0000 UTC m=+0.023441065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:09:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:09:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca02d6ace9b6bd4e6b3f96c5b97df7d578b671195fc848f31c29d0a0f9eefbe6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca02d6ace9b6bd4e6b3f96c5b97df7d578b671195fc848f31c29d0a0f9eefbe6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca02d6ace9b6bd4e6b3f96c5b97df7d578b671195fc848f31c29d0a0f9eefbe6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca02d6ace9b6bd4e6b3f96c5b97df7d578b671195fc848f31c29d0a0f9eefbe6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:09:12 np0005539563 podman[414439]: 2025-11-29 09:09:12.979869149 +0000 UTC m=+0.387012699 container init 0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:09:12 np0005539563 podman[414439]: 2025-11-29 09:09:12.986858099 +0000 UTC m=+0.394001649 container start 0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:09:13 np0005539563 nova_compute[252253]: 2025-11-29 09:09:13.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:09:13
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.log', '.rgw.root', '.mgr']
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:09:13 np0005539563 podman[414439]: 2025-11-29 09:09:13.696361609 +0000 UTC m=+1.103505179 container attach 0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:09:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]: {
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:        "osd_id": 0,
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:        "type": "bluestore"
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]:    }
Nov 29 04:09:13 np0005539563 epic_meninsky[414455]: }
Nov 29 04:09:13 np0005539563 systemd[1]: libpod-0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9.scope: Deactivated successfully.
Nov 29 04:09:13 np0005539563 podman[414479]: 2025-11-29 09:09:13.884045731 +0000 UTC m=+0.023784245 container died 0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:09:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:13.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ca02d6ace9b6bd4e6b3f96c5b97df7d578b671195fc848f31c29d0a0f9eefbe6-merged.mount: Deactivated successfully.
Nov 29 04:09:14 np0005539563 podman[414479]: 2025-11-29 09:09:14.149686573 +0000 UTC m=+0.289425067 container remove 0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meninsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:09:14 np0005539563 systemd[1]: libpod-conmon-0aefb991b9d79b9ca7822029d38596c95fd79685b29fedfe4984f26e2eeb07f9.scope: Deactivated successfully.
Nov 29 04:09:14 np0005539563 nova_compute[252253]: 2025-11-29 09:09:14.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:09:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:09:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 04:09:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:15 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 395c356a-83b5-411d-98fa-71503e4d0883 does not exist
Nov 29 04:09:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 55a511f2-7b45-4798-9040-3d092f38c988 does not exist
Nov 29 04:09:15 np0005539563 ceph-mgr[74636]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2945860420
Nov 29 04:09:15 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f8764f2a-c557-44ff-8f06-9541d651d945 does not exist
Nov 29 04:09:15 np0005539563 nova_compute[252253]: 2025-11-29 09:09:15.943 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:09:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:15.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:15 np0005539563 podman[414520]: 2025-11-29 09:09:15.977942274 +0000 UTC m=+0.079966126 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:09:16 np0005539563 podman[414519]: 2025-11-29 09:09:16.007556806 +0000 UTC m=+0.103216516 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 29 04:09:16 np0005539563 podman[414521]: 2025-11-29 09:09:16.015692497 +0000 UTC m=+0.114673286 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:09:16 np0005539563 nova_compute[252253]: 2025-11-29 09:09:16.044 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:09:16 np0005539563 nova_compute[252253]: 2025-11-29 09:09:16.044 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 18.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Nov 29 04:09:16 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:09:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:09:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 04:09:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Nov 29 04:09:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Nov 29 04:09:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:18 np0005539563 nova_compute[252253]: 2025-11-29 09:09:18.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:19.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:19 np0005539563 nova_compute[252253]: 2025-11-29 09:09:19.186 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.0 KiB/s rd, 409 B/s wr, 7 op/s
Nov 29 04:09:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:21.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 8 op/s
Nov 29 04:09:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:21.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:23.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:23 np0005539563 nova_compute[252253]: 2025-11-29 09:09:23.045 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 8 op/s
Nov 29 04:09:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:23.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:24 np0005539563 nova_compute[252253]: 2025-11-29 09:09:24.249 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031503364624046156 of space, bias 1.0, pg target 0.9451009387213847 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:09:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:09:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 511 B/s wr, 8 op/s
Nov 29 04:09:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:25.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 476 B/s wr, 7 op/s
Nov 29 04:09:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:27.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:28 np0005539563 nova_compute[252253]: 2025-11-29 09:09:28.047 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:29.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:29 np0005539563 nova_compute[252253]: 2025-11-29 09:09:29.252 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 KiB/s rd, 426 B/s wr, 6 op/s
Nov 29 04:09:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:29.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 29 04:09:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:31.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:33 np0005539563 nova_compute[252253]: 2025-11-29 09:09:33.051 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:33.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:33.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:34 np0005539563 nova_compute[252253]: 2025-11-29 09:09:34.295 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:35.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:36.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:38.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:38 np0005539563 nova_compute[252253]: 2025-11-29 09:09:38.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:39 np0005539563 nova_compute[252253]: 2025-11-29 09:09:39.297 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:40.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 511 B/s rd, 0 op/s
Nov 29 04:09:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:42.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:43 np0005539563 nova_compute[252253]: 2025-11-29 09:09:43.055 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:43.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:09:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 341 B/s wr, 6 op/s
Nov 29 04:09:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:44.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:44 np0005539563 nova_compute[252253]: 2025-11-29 09:09:44.300 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 597 B/s wr, 7 op/s
Nov 29 04:09:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:46.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:46 np0005539563 podman[414719]: 2025-11-29 09:09:46.493529467 +0000 UTC m=+0.049199024 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:09:46 np0005539563 podman[414720]: 2025-11-29 09:09:46.494926034 +0000 UTC m=+0.048450382 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:09:46 np0005539563 podman[414721]: 2025-11-29 09:09:46.526585691 +0000 UTC m=+0.075887466 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:09:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:09:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:09:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 04:09:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.038 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.039 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.039 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.039 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.039 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.039 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.057 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.369 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.370 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:09:48 np0005539563 nova_compute[252253]: 2025-11-29 09:09:48.642 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:09:48.643 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:09:48 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:09:48.644 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:09:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:49 np0005539563 nova_compute[252253]: 2025-11-29 09:09:49.351 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 04:09:50 np0005539563 nova_compute[252253]: 2025-11-29 09:09:50.009 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:50 np0005539563 nova_compute[252253]: 2025-11-29 09:09:50.010 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:09:50 np0005539563 nova_compute[252253]: 2025-11-29 09:09:50.010 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:09:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:50.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:50 np0005539563 nova_compute[252253]: 2025-11-29 09:09:50.050 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:09:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:51.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 04:09:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:52.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:52 np0005539563 nova_compute[252253]: 2025-11-29 09:09:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.060 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:53.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.322 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.323 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.323 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.323 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.324 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:09:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807548406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:09:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 938 B/s wr, 10 op/s
Nov 29 04:09:53 np0005539563 nova_compute[252253]: 2025-11-29 09:09:53.812 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.011 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.013 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4108MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.013 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.014 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:09:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:54.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.212 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.213 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.247 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.399 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:09:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450250076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.747 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.754 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.771 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.773 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:09:54 np0005539563 nova_compute[252253]: 2025-11-29 09:09:54.773 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:09:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:09:55.646 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:09:55 np0005539563 nova_compute[252253]: 2025-11-29 09:09:55.774 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:09:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 597 B/s wr, 4 op/s
Nov 29 04:09:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:56.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:09:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:09:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 04:09:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:58 np0005539563 nova_compute[252253]: 2025-11-29 09:09:58.062 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:09:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:59.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:59 np0005539563 nova_compute[252253]: 2025-11-29 09:09:59.401 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:09:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:10:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 04:10:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:00.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:00 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 04:10:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:01.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 KiB/s rd, 12 KiB/s wr, 8 op/s
Nov 29 04:10:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:02.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:03 np0005539563 nova_compute[252253]: 2025-11-29 09:10:03.064 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:03.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 708 KiB/s rd, 12 KiB/s wr, 35 op/s
Nov 29 04:10:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:04.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:04 np0005539563 nova_compute[252253]: 2025-11-29 09:10:04.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:10:04.986 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:10:04.987 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:10:04.987 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 49 op/s
Nov 29 04:10:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:06.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 29 04:10:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:08.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:08 np0005539563 nova_compute[252253]: 2025-11-29 09:10:08.066 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:09 np0005539563 nova_compute[252253]: 2025-11-29 09:10:09.436 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 29 04:10:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:10.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Nov 29 04:10:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:12.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:13 np0005539563 nova_compute[252253]: 2025-11-29 09:10:13.068 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:10:13
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'vms']
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:10:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:10:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 80 op/s
Nov 29 04:10:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:14 np0005539563 nova_compute[252253]: 2025-11-29 09:10:14.439 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:15.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 597 B/s wr, 53 op/s
Nov 29 04:10:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:16 np0005539563 podman[414967]: 2025-11-29 09:10:16.652158623 +0000 UTC m=+0.061936328 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 04:10:16 np0005539563 podman[414968]: 2025-11-29 09:10:16.665929696 +0000 UTC m=+0.075272189 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:16 np0005539563 podman[414969]: 2025-11-29 09:10:16.723587237 +0000 UTC m=+0.130273598 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:10:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:10:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3719c53c-f1c4-4dfe-956c-78c6de393654 does not exist
Nov 29 04:10:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bf8ab676-1052-4b45-a293-d07995d933a7 does not exist
Nov 29 04:10:17 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 232bbdfa-f591-4a9c-93d6-13e1c3f11561 does not exist
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 856 KiB/s rd, 597 B/s wr, 39 op/s
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:10:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:18 np0005539563 nova_compute[252253]: 2025-11-29 09:10:18.069 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.097859077 +0000 UTC m=+0.052296397 container create 7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:10:18 np0005539563 systemd[1]: Started libpod-conmon-7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480.scope.
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.068072341 +0000 UTC m=+0.022509761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:10:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.199836908 +0000 UTC m=+0.154274258 container init 7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.211330239 +0000 UTC m=+0.165767559 container start 7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.215101691 +0000 UTC m=+0.169539061 container attach 7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:18 np0005539563 musing_bouman[415244]: 167 167
Nov 29 04:10:18 np0005539563 systemd[1]: libpod-7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480.scope: Deactivated successfully.
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.217211719 +0000 UTC m=+0.171649049 container died 7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 04:10:18 np0005539563 systemd[1]: var-lib-containers-storage-overlay-2047ec0b82ba4d767a54c85bb59197e4221934ae46bba675ec20ab01b153876d-merged.mount: Deactivated successfully.
Nov 29 04:10:18 np0005539563 podman[415228]: 2025-11-29 09:10:18.269225666 +0000 UTC m=+0.223663026 container remove 7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:10:18 np0005539563 systemd[1]: libpod-conmon-7f346cb5bba5a29bd24653da5c4b7d29294acb81945039c0fdb49238ca3d1480.scope: Deactivated successfully.
Nov 29 04:10:18 np0005539563 podman[415268]: 2025-11-29 09:10:18.484305 +0000 UTC m=+0.057061456 container create 727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:10:18 np0005539563 systemd[1]: Started libpod-conmon-727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182.scope.
Nov 29 04:10:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:10:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf3ed4611e69bb61da19d8019b9684664b63617a208199746ee9793cba6c0f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf3ed4611e69bb61da19d8019b9684664b63617a208199746ee9793cba6c0f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf3ed4611e69bb61da19d8019b9684664b63617a208199746ee9793cba6c0f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:18 np0005539563 podman[415268]: 2025-11-29 09:10:18.466161489 +0000 UTC m=+0.038917955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:10:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf3ed4611e69bb61da19d8019b9684664b63617a208199746ee9793cba6c0f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:18 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf3ed4611e69bb61da19d8019b9684664b63617a208199746ee9793cba6c0f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:18 np0005539563 podman[415268]: 2025-11-29 09:10:18.574352638 +0000 UTC m=+0.147109114 container init 727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_albattani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:10:18 np0005539563 podman[415268]: 2025-11-29 09:10:18.583838065 +0000 UTC m=+0.156594511 container start 727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:10:18 np0005539563 podman[415268]: 2025-11-29 09:10:18.587396552 +0000 UTC m=+0.160153018 container attach 727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:19 np0005539563 cranky_albattani[415285]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:10:19 np0005539563 cranky_albattani[415285]: --> relative data size: 1.0
Nov 29 04:10:19 np0005539563 cranky_albattani[415285]: --> All data devices are unavailable
Nov 29 04:10:19 np0005539563 systemd[1]: libpod-727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182.scope: Deactivated successfully.
Nov 29 04:10:19 np0005539563 podman[415268]: 2025-11-29 09:10:19.434560909 +0000 UTC m=+1.007317365 container died 727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_albattani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:10:19 np0005539563 nova_compute[252253]: 2025-11-29 09:10:19.442 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-eaf3ed4611e69bb61da19d8019b9684664b63617a208199746ee9793cba6c0f4-merged.mount: Deactivated successfully.
Nov 29 04:10:19 np0005539563 podman[415268]: 2025-11-29 09:10:19.506924808 +0000 UTC m=+1.079681264 container remove 727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:10:19 np0005539563 systemd[1]: libpod-conmon-727d53fa17d9867c834e7e75457ec0dffb3a326533f2e1bf299889e75d9f2182.scope: Deactivated successfully.
Nov 29 04:10:19 np0005539563 nova_compute[252253]: 2025-11-29 09:10:19.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 144 KiB/s rd, 597 B/s wr, 16 op/s
Nov 29 04:10:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:20.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.116109123 +0000 UTC m=+0.039081749 container create 9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:10:20 np0005539563 systemd[1]: Started libpod-conmon-9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899.scope.
Nov 29 04:10:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.099839862 +0000 UTC m=+0.022812508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.202564504 +0000 UTC m=+0.125537190 container init 9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.216007087 +0000 UTC m=+0.138979733 container start 9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:10:20 np0005539563 dazzling_heisenberg[415469]: 167 167
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.221027383 +0000 UTC m=+0.144000029 container attach 9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:10:20 np0005539563 systemd[1]: libpod-9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899.scope: Deactivated successfully.
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.222296258 +0000 UTC m=+0.145268884 container died 9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:20 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3a1dcba436352767e5774aa0f932f08ef1533c9fbea32b37c9ce039cae1a727a-merged.mount: Deactivated successfully.
Nov 29 04:10:20 np0005539563 podman[415454]: 2025-11-29 09:10:20.263487863 +0000 UTC m=+0.186460489 container remove 9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:10:20 np0005539563 systemd[1]: libpod-conmon-9afb8eeb27b3cf54799e485697bb9e7525c915128d961a9862451cc4edda7899.scope: Deactivated successfully.
Nov 29 04:10:20 np0005539563 podman[415493]: 2025-11-29 09:10:20.422709904 +0000 UTC m=+0.042577673 container create cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:10:20 np0005539563 systemd[1]: Started libpod-conmon-cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64.scope.
Nov 29 04:10:20 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:10:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eff47c04c8f40c7823fd6d719127921f83ca4e598e66bb9dc5b1dbc5a2a44a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eff47c04c8f40c7823fd6d719127921f83ca4e598e66bb9dc5b1dbc5a2a44a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eff47c04c8f40c7823fd6d719127921f83ca4e598e66bb9dc5b1dbc5a2a44a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:20 np0005539563 podman[415493]: 2025-11-29 09:10:20.405800237 +0000 UTC m=+0.025668036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:10:20 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eff47c04c8f40c7823fd6d719127921f83ca4e598e66bb9dc5b1dbc5a2a44a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:20 np0005539563 podman[415493]: 2025-11-29 09:10:20.506894754 +0000 UTC m=+0.126762533 container init cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 04:10:20 np0005539563 podman[415493]: 2025-11-29 09:10:20.515906747 +0000 UTC m=+0.135774516 container start cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:10:20 np0005539563 podman[415493]: 2025-11-29 09:10:20.518814496 +0000 UTC m=+0.138682265 container attach cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:10:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:21 np0005539563 focused_morse[415509]: {
Nov 29 04:10:21 np0005539563 focused_morse[415509]:    "0": [
Nov 29 04:10:21 np0005539563 focused_morse[415509]:        {
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "devices": [
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "/dev/loop3"
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            ],
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "lv_name": "ceph_lv0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "lv_size": "7511998464",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "name": "ceph_lv0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "tags": {
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.cluster_name": "ceph",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.crush_device_class": "",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.encrypted": "0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.osd_id": "0",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.type": "block",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:                "ceph.vdo": "0"
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            },
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "type": "block",
Nov 29 04:10:21 np0005539563 focused_morse[415509]:            "vg_name": "ceph_vg0"
Nov 29 04:10:21 np0005539563 focused_morse[415509]:        }
Nov 29 04:10:21 np0005539563 focused_morse[415509]:    ]
Nov 29 04:10:21 np0005539563 focused_morse[415509]: }
Nov 29 04:10:21 np0005539563 systemd[1]: libpod-cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64.scope: Deactivated successfully.
Nov 29 04:10:21 np0005539563 podman[415493]: 2025-11-29 09:10:21.268028952 +0000 UTC m=+0.887896711 container died cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:10:21 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5eff47c04c8f40c7823fd6d719127921f83ca4e598e66bb9dc5b1dbc5a2a44a9-merged.mount: Deactivated successfully.
Nov 29 04:10:21 np0005539563 podman[415493]: 2025-11-29 09:10:21.319839335 +0000 UTC m=+0.939707104 container remove cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_morse, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:10:21 np0005539563 systemd[1]: libpod-conmon-cf6703cd783fdf423d7e4f1bd8c153a36abdf925f4343bde9026d11755522d64.scope: Deactivated successfully.
Nov 29 04:10:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Nov 29 04:10:21 np0005539563 podman[415673]: 2025-11-29 09:10:21.975643591 +0000 UTC m=+0.063302984 container create d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 04:10:22 np0005539563 systemd[1]: Started libpod-conmon-d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869.scope.
Nov 29 04:10:22 np0005539563 podman[415673]: 2025-11-29 09:10:21.948804965 +0000 UTC m=+0.036464438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:10:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:10:22 np0005539563 podman[415673]: 2025-11-29 09:10:22.06606817 +0000 UTC m=+0.153727583 container init d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 04:10:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:22.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:22 np0005539563 podman[415673]: 2025-11-29 09:10:22.072648488 +0000 UTC m=+0.160307901 container start d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 04:10:22 np0005539563 podman[415673]: 2025-11-29 09:10:22.076307407 +0000 UTC m=+0.163966900 container attach d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 04:10:22 np0005539563 clever_bohr[415689]: 167 167
Nov 29 04:10:22 np0005539563 podman[415673]: 2025-11-29 09:10:22.078716072 +0000 UTC m=+0.166375505 container died d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bohr, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 04:10:22 np0005539563 systemd[1]: libpod-d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869.scope: Deactivated successfully.
Nov 29 04:10:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bcbd2b2107aa208f26edea96af0f3c5a9d82ae004a9f006834961d2347fd9780-merged.mount: Deactivated successfully.
Nov 29 04:10:22 np0005539563 podman[415673]: 2025-11-29 09:10:22.126407043 +0000 UTC m=+0.214066436 container remove d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bohr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:10:22 np0005539563 systemd[1]: libpod-conmon-d88901f0c03cc3f790ad438473eed0d2c60b1066426b0eeea2da50240759a869.scope: Deactivated successfully.
Nov 29 04:10:22 np0005539563 podman[415712]: 2025-11-29 09:10:22.2869322 +0000 UTC m=+0.049767278 container create e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 04:10:22 np0005539563 systemd[1]: Started libpod-conmon-e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac.scope.
Nov 29 04:10:22 np0005539563 podman[415712]: 2025-11-29 09:10:22.262608422 +0000 UTC m=+0.025443510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:10:22 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:10:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8711924c012cd59ea88d0ac0f4c522957884b539ae1782210b9e9afbacb8035a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8711924c012cd59ea88d0ac0f4c522957884b539ae1782210b9e9afbacb8035a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8711924c012cd59ea88d0ac0f4c522957884b539ae1782210b9e9afbacb8035a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:22 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8711924c012cd59ea88d0ac0f4c522957884b539ae1782210b9e9afbacb8035a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:10:22 np0005539563 podman[415712]: 2025-11-29 09:10:22.378138999 +0000 UTC m=+0.140974057 container init e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 04:10:22 np0005539563 podman[415712]: 2025-11-29 09:10:22.390627328 +0000 UTC m=+0.153462366 container start e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 04:10:22 np0005539563 podman[415712]: 2025-11-29 09:10:22.396529808 +0000 UTC m=+0.159364846 container attach e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:10:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:23 np0005539563 nova_compute[252253]: 2025-11-29 09:10:23.071 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:23.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]: {
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:        "osd_id": 0,
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:        "type": "bluestore"
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]:    }
Nov 29 04:10:23 np0005539563 condescending_wiles[415729]: }
Nov 29 04:10:23 np0005539563 systemd[1]: libpod-e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac.scope: Deactivated successfully.
Nov 29 04:10:23 np0005539563 podman[415712]: 2025-11-29 09:10:23.300228636 +0000 UTC m=+1.063063674 container died e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:10:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-8711924c012cd59ea88d0ac0f4c522957884b539ae1782210b9e9afbacb8035a-merged.mount: Deactivated successfully.
Nov 29 04:10:23 np0005539563 podman[415712]: 2025-11-29 09:10:23.351627918 +0000 UTC m=+1.114462956 container remove e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:10:23 np0005539563 systemd[1]: libpod-conmon-e94e0904fd513d801bed5ec5f2860c45c0edefecff5c53e3c7329cb1d43572ac.scope: Deactivated successfully.
Nov 29 04:10:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:10:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:10:23 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:23 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e083628e-f683-487d-a04b-8477972e8ba3 does not exist
Nov 29 04:10:23 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4839f667-4b4a-4b17-aecf-9109cf0fcb63 does not exist
Nov 29 04:10:23 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 013f271c-2468-4883-ab92-c02b0f37f58e does not exist
Nov 29 04:10:23 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 682 B/s wr, 8 op/s
Nov 29 04:10:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031506999697561884 of space, bias 1.0, pg target 0.9452099909268565 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:10:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:10:24 np0005539563 nova_compute[252253]: 2025-11-29 09:10:24.445 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:24 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:10:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 KiB/s rd, 597 B/s wr, 9 op/s
Nov 29 04:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Nov 29 04:10:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 04:10:25 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Nov 29 04:10:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:26.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:27.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:27 np0005539563 nova_compute[252253]: 2025-11-29 09:10:27.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 511 B/s wr, 21 op/s
Nov 29 04:10:28 np0005539563 nova_compute[252253]: 2025-11-29 09:10:28.072 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:28.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:29.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:29 np0005539563 nova_compute[252253]: 2025-11-29 09:10:29.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 135 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.9 KiB/s wr, 49 op/s
Nov 29 04:10:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.9 KiB/s wr, 48 op/s
Nov 29 04:10:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:32.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Nov 29 04:10:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 04:10:32 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Nov 29 04:10:33 np0005539563 nova_compute[252253]: 2025-11-29 09:10:33.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.9 KiB/s wr, 57 op/s
Nov 29 04:10:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:34.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:34 np0005539563 nova_compute[252253]: 2025-11-29 09:10:34.450 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:35.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 865 KiB/s rd, 1.5 KiB/s wr, 47 op/s
Nov 29 04:10:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:36.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:37.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 KiB/s wr, 37 op/s
Nov 29 04:10:38 np0005539563 nova_compute[252253]: 2025-11-29 09:10:38.077 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:38.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:38 np0005539563 nova_compute[252253]: 2025-11-29 09:10:38.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:39.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:39 np0005539563 nova_compute[252253]: 2025-11-29 09:10:39.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 133 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 573 KiB/s wr, 19 op/s
Nov 29 04:10:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:40.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:41.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 32 op/s
Nov 29 04:10:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:42.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:42 np0005539563 nova_compute[252253]: 2025-11-29 09:10:42.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:42 np0005539563 nova_compute[252253]: 2025-11-29 09:10:42.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:10:43 np0005539563 nova_compute[252253]: 2025-11-29 09:10:43.080 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:43.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:10:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 44 op/s
Nov 29 04:10:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:10:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 61K writes, 229K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 61K writes, 22K syncs, 2.68 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2012 writes, 5478 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 5.20 MB, 0.01 MB/s#012Interval WAL: 2012 writes, 901 syncs, 2.23 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:10:44 np0005539563 nova_compute[252253]: 2025-11-29 09:10:44.457 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:45.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:45 np0005539563 ovn_controller[148841]: 2025-11-29T09:10:45Z|00948|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 29 04:10:45 np0005539563 nova_compute[252253]: 2025-11-29 09:10:45.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:45 np0005539563 nova_compute[252253]: 2025-11-29 09:10:45.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 29 04:10:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:46 np0005539563 nova_compute[252253]: 2025-11-29 09:10:46.606 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:10:46.606 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:10:46 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:10:46.607 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:10:46 np0005539563 nova_compute[252253]: 2025-11-29 09:10:46.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:47.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:47 np0005539563 podman[415926]: 2025-11-29 09:10:47.532558229 +0000 UTC m=+0.068271363 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:47 np0005539563 podman[415927]: 2025-11-29 09:10:47.537447142 +0000 UTC m=+0.073175966 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:10:47 np0005539563 podman[415928]: 2025-11-29 09:10:47.560473106 +0000 UTC m=+0.093599179 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:10:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 04:10:48 np0005539563 nova_compute[252253]: 2025-11-29 09:10:48.083 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:48.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:49 np0005539563 nova_compute[252253]: 2025-11-29 09:10:49.460 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 704 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 04:10:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:50.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:50 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:10:50.609 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:10:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:51 np0005539563 nova_compute[252253]: 2025-11-29 09:10:51.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:51 np0005539563 nova_compute[252253]: 2025-11-29 09:10:51.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:10:51 np0005539563 nova_compute[252253]: 2025-11-29 09:10:51.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:10:51 np0005539563 nova_compute[252253]: 2025-11-29 09:10:51.698 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:10:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Nov 29 04:10:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:52.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:53 np0005539563 nova_compute[252253]: 2025-11-29 09:10:53.122 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 794 KiB/s wr, 13 op/s
Nov 29 04:10:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:54.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.462 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.853 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.853 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.854 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.854 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:10:54 np0005539563 nova_compute[252253]: 2025-11-29 09:10:54.855 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:55.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 04:10:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107028670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.314 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.467 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.468 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.470 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.471 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4149MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.471 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.471 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.549 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:10:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.847 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.850 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.850 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.850 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:10:55 np0005539563 nova_compute[252253]: 2025-11-29 09:10:55.892 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:10:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:56.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:10:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441298852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.303 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.308 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.475 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.477 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.477 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.477 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.487 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:10:56 np0005539563 nova_compute[252253]: 2025-11-29 09:10:56.488 252257 INFO nova.compute.claims [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 04:10:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:57.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:57 np0005539563 nova_compute[252253]: 2025-11-29 09:10:57.195 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:10:57 np0005539563 nova_compute[252253]: 2025-11-29 09:10:57.478 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:10:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:10:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:10:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950441658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:10:57 np0005539563 nova_compute[252253]: 2025-11-29 09:10:57.616 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:10:57 np0005539563 nova_compute[252253]: 2025-11-29 09:10:57.625 252257 DEBUG nova.compute.provider_tree [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:10:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:10:58 np0005539563 nova_compute[252253]: 2025-11-29 09:10:58.124 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:10:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:10:58.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:10:58 np0005539563 nova_compute[252253]: 2025-11-29 09:10:58.196 252257 DEBUG nova.scheduler.client.report [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:10:58 np0005539563 nova_compute[252253]: 2025-11-29 09:10:58.603 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:10:58 np0005539563 nova_compute[252253]: 2025-11-29 09:10:58.604 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:10:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:10:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:10:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:10:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:10:59 np0005539563 nova_compute[252253]: 2025-11-29 09:10:59.254 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:10:59 np0005539563 nova_compute[252253]: 2025-11-29 09:10:59.255 252257 DEBUG nova.network.neutron [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:10:59 np0005539563 nova_compute[252253]: 2025-11-29 09:10:59.465 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:10:59 np0005539563 nova_compute[252253]: 2025-11-29 09:10:59.586 252257 INFO nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:10:59 np0005539563 nova_compute[252253]: 2025-11-29 09:10:59.675 252257 DEBUG nova.policy [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 04:10:59 np0005539563 nova_compute[252253]: 2025-11-29 09:10:59.728 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:10:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:11:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.383 252257 INFO nova.virt.block_device [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Booting with volume 3367e57a-b6f9-477a-b507-12360b177bbc at /dev/vda#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.734 252257 DEBUG os_brick.utils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.736 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.746 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.746 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[6649218a-01f4-4770-b62b-7377438a0a23]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.748 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.756 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.757 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[38fdb19c-55bc-48f6-97df-ec45ffed8512]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.758 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.766 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.767 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5fa352-5f51-4d4f-ad5b-b0d1c404992e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.768 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[df7a289a-085a-440f-bb08-db584ecc1789]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.768 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.805 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.807 252257 DEBUG os_brick.initiator.connectors.lightos [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.807 252257 DEBUG os_brick.initiator.connectors.lightos [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.807 252257 DEBUG os_brick.initiator.connectors.lightos [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.808 252257 DEBUG os_brick.utils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 04:11:00 np0005539563 nova_compute[252253]: 2025-11-29 09:11:00.808 252257 DEBUG nova.virt.block_device [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating existing volume attachment record: 409b1012-f240-4088-ba56-c6946fd65938 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 04:11:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:01.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.013 252257 DEBUG nova.network.neutron [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Successfully created port: 03e18ca1-a94c-48e8-a149-d4c144215d18 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:11:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:02.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.695 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.696 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.697 252257 INFO nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Creating image(s)#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.698 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.698 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Ensure instance console log exists: /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.698 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.699 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:02 np0005539563 nova_compute[252253]: 2025-11-29 09:11:02.699 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:03.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.679 252257 DEBUG nova.network.neutron [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Successfully updated port: 03e18ca1-a94c-48e8-a149-d4c144215d18 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:11:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.875 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.876 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.876 252257 DEBUG nova.network.neutron [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.911 252257 DEBUG nova.compute.manager [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-changed-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.912 252257 DEBUG nova.compute.manager [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Refreshing instance network info cache due to event network-changed-03e18ca1-a94c-48e8-a149-d4c144215d18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:11:03 np0005539563 nova_compute[252253]: 2025-11-29 09:11:03.912 252257 DEBUG oslo_concurrency.lockutils [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:11:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:04 np0005539563 nova_compute[252253]: 2025-11-29 09:11:04.467 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:04 np0005539563 nova_compute[252253]: 2025-11-29 09:11:04.606 252257 DEBUG nova.network.neutron [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:04.988 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:04.988 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:04.989 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:05.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.810 252257 DEBUG nova.network.neutron [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating instance_info_cache with network_info: [{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:11:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.840 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.840 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Instance network_info: |[{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.840 252257 DEBUG oslo_concurrency.lockutils [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.841 252257 DEBUG nova.network.neutron [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Refreshing network info cache for port 03e18ca1-a94c-48e8-a149-d4c144215d18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.843 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Start _get_guest_xml network_info=[{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3367e57a-b6f9-477a-b507-12360b177bbc', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3367e57a-b6f9-477a-b507-12360b177bbc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1', 'attached_at': '', 'detached_at': '', 'volume_id': '3367e57a-b6f9-477a-b507-12360b177bbc', 'serial': '3367e57a-b6f9-477a-b507-12360b177bbc'}, 'attachment_id': '409b1012-f240-4088-ba56-c6946fd65938', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': True, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.848 252257 WARNING nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.852 252257 DEBUG nova.virt.libvirt.host [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.852 252257 DEBUG nova.virt.libvirt.host [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.856 252257 DEBUG nova.virt.libvirt.host [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.857 252257 DEBUG nova.virt.libvirt.host [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.858 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.858 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.859 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.859 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.859 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.859 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.860 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.860 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.860 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.860 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.860 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.861 252257 DEBUG nova.virt.hardware [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.888 252257 DEBUG nova.storage.rbd_utils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:11:05 np0005539563 nova_compute[252253]: 2025-11-29 09:11:05.892 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:11:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:11:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:11:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294026258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:11:06 np0005539563 nova_compute[252253]: 2025-11-29 09:11:06.445 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.151 252257 DEBUG nova.network.neutron [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updated VIF entry in instance network info cache for port 03e18ca1-a94c-48e8-a149-d4c144215d18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.151 252257 DEBUG nova.network.neutron [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating instance_info_cache with network_info: [{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:11:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:07.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.191 252257 DEBUG nova.virt.libvirt.vif [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:10:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-516170309',display_name='tempest-TestVolumeBootPattern-volume-backed-server-516170309',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-516170309',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPL2OUU5K/teyQFPNqTZqLgT9O73zt6IMvn2ncnkLwXCm+6FT01omnbIj1FDDJyN8ZJFe1DRGxLAfym3zMJehf/kJ2C2SwXTfIQxTBENVQyqhaLmtpLmKddio66bQ4PMmQ==',key_name='tempest-keypair-1412035180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-htdy8f9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:11:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.191 252257 DEBUG nova.network.os_vif_util [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.192 252257 DEBUG nova.network.os_vif_util [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.193 252257 DEBUG nova.objects.instance [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.412 252257 DEBUG oslo_concurrency.lockutils [req-780a2df0-82d7-4aba-9859-575ac7083d62 req-eb9e77f5-b76a-40b8-b76a-3dd66a195ed3 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.415 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <uuid>98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1</uuid>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <name>instance-000000dc</name>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-516170309</nova:name>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 09:11:05</nova:creationTime>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <nova:port uuid="03e18ca1-a94c-48e8-a149-d4c144215d18">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <system>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <entry name="serial">98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1</entry>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <entry name="uuid">98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1</entry>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </system>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <os>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </os>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <features>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </features>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </clock>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  <devices>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_disk.config">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-3367e57a-b6f9-477a-b507-12360b177bbc">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <serial>3367e57a-b6f9-477a-b507-12360b177bbc</serial>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:aa:23:7f"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <target dev="tap03e18ca1-a9"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </interface>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/console.log" append="off"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </serial>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <video>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </video>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </rng>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 04:11:07 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 04:11:07 np0005539563 nova_compute[252253]:  </devices>
Nov 29 04:11:07 np0005539563 nova_compute[252253]: </domain>
Nov 29 04:11:07 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.416 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Preparing to wait for external event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.416 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.416 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.416 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.417 252257 DEBUG nova.virt.libvirt.vif [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:10:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-516170309',display_name='tempest-TestVolumeBootPattern-volume-backed-server-516170309',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-516170309',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPL2OUU5K/teyQFPNqTZqLgT9O73zt6IMvn2ncnkLwXCm+6FT01omnbIj1FDDJyN8ZJFe1DRGxLAfym3zMJehf/kJ2C2SwXTfIQxTBENVQyqhaLmtpLmKddio66bQ4PMmQ==',key_name='tempest-keypair-1412035180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-htdy8f9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:11:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.417 252257 DEBUG nova.network.os_vif_util [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.418 252257 DEBUG nova.network.os_vif_util [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.418 252257 DEBUG os_vif [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.419 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.419 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.420 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.423 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.423 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03e18ca1-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.423 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03e18ca1-a9, col_values=(('external_ids', {'iface-id': '03e18ca1-a94c-48e8-a149-d4c144215d18', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:23:7f', 'vm-uuid': '98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.425 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:07 np0005539563 NetworkManager[48981]: <info>  [1764407467.4262] manager: (tap03e18ca1-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.429 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.432 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:07 np0005539563 nova_compute[252253]: 2025-11-29 09:11:07.433 252257 INFO os_vif [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9')#033[00m
Nov 29 04:11:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.104 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.104 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.105 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:aa:23:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.106 252257 INFO nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Using config drive#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.131 252257 DEBUG nova.storage.rbd_utils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.138 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:08.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.785 252257 INFO nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Creating config drive at /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/disk.config#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.792 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuq3ymikk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.930 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuq3ymikk" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.960 252257 DEBUG nova.storage.rbd_utils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:11:08 np0005539563 nova_compute[252253]: 2025-11-29 09:11:08.964 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/disk.config 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.604 252257 DEBUG oslo_concurrency.processutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/disk.config 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.605 252257 INFO nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Deleting local config drive /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1/disk.config because it was imported into RBD.#033[00m
Nov 29 04:11:09 np0005539563 kernel: tap03e18ca1-a9: entered promiscuous mode
Nov 29 04:11:09 np0005539563 NetworkManager[48981]: <info>  [1764407469.6696] manager: (tap03e18ca1-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.671 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:09 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:09Z|00949|binding|INFO|Claiming lport 03e18ca1-a94c-48e8-a149-d4c144215d18 for this chassis.
Nov 29 04:11:09 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:09Z|00950|binding|INFO|03e18ca1-a94c-48e8-a149-d4c144215d18: Claiming fa:16:3e:aa:23:7f 10.100.0.12
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.689 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:23:7f 10.100.0.12'], port_security=['fa:16:3e:aa:23:7f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0596b743-cfc2-4d80-898c-031d919a5afd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=03e18ca1-a94c-48e8-a149-d4c144215d18) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.691 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 03e18ca1-a94c-48e8-a149-d4c144215d18 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.693 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad#033[00m
Nov 29 04:11:09 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:09Z|00951|binding|INFO|Setting lport 03e18ca1-a94c-48e8-a149-d4c144215d18 ovn-installed in OVS
Nov 29 04:11:09 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:09Z|00952|binding|INFO|Setting lport 03e18ca1-a94c-48e8-a149-d4c144215d18 up in Southbound
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.697 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.701 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:09 np0005539563 systemd-udevd[416236]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.705 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[594a8039-74b6-48e4-8b4a-a0dd9ac9b3d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.706 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.708 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.708 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a3361459-2979-4923-b527-ea3b4f119294]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.709 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[68e2d41c-0559-4750-a300-7e9effd20614]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 systemd-machined[213024]: New machine qemu-105-instance-000000dc.
Nov 29 04:11:09 np0005539563 NetworkManager[48981]: <info>  [1764407469.7181] device (tap03e18ca1-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:11:09 np0005539563 NetworkManager[48981]: <info>  [1764407469.7189] device (tap03e18ca1-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.722 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[3fda7a10-804b-459a-9d2f-18db02563681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 systemd[1]: Started Virtual Machine qemu-105-instance-000000dc.
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.736 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c30a4cce-b7af-4fc5-8163-27f3d18190ac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.770 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[0bba659b-c25c-415a-96dd-c369faf5bbbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 NetworkManager[48981]: <info>  [1764407469.7782] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Nov 29 04:11:09 np0005539563 systemd-udevd[416240]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.777 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab4a515-15cf-4142-9a9e-876319f3d553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.810 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fd3b8f-d3db-4de5-92ad-e8a52956c354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.813 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[5381a610-201e-450d-b0df-d857cccb3797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 NetworkManager[48981]: <info>  [1764407469.8348] device (tap8aaf4606-90): carrier: link connected
Nov 29 04:11:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.840 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[171a9782-5969-4cca-a16a-c64d65ef2964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.855 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4411d75f-bb51-40c6-8caf-817196e053da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 279], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1043760, 'reachable_time': 17881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416269, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.872 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9a25f3e5-cf08-4244-9964-ac3161138879]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1043760, 'tstamp': 1043760}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416270, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.890 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8335e531-541c-45ad-94fe-80d27714413d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 279], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1043760, 'reachable_time': 17881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416271, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.919 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9321f8e5-f812-4f21-994a-60af3234019d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.973 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[3d58de26-f4ae-491c-a27b-2cc466458927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.975 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.975 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.975 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:11:09 np0005539563 NetworkManager[48981]: <info>  [1764407469.9777] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.977 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:09 np0005539563 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.979 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:09 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:09Z|00953|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:11:09 np0005539563 nova_compute[252253]: 2025-11-29 09:11:09.994 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.995 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.996 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[adfab1fb-ee49-4f45-9e8b-1faac151ae2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.996 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:11:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:11:09.998 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:11:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:10.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:10 np0005539563 podman[416303]: 2025-11-29 09:11:10.357699645 +0000 UTC m=+0.047083848 container create 7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:11:10 np0005539563 systemd[1]: Started libpod-conmon-7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15.scope.
Nov 29 04:11:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:10 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36f592cec8b31f29dd6c0d284d6d86c32afbf701b0e27258311128d7327f15db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:10 np0005539563 podman[416303]: 2025-11-29 09:11:10.331913116 +0000 UTC m=+0.021297339 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:11:10 np0005539563 podman[416303]: 2025-11-29 09:11:10.438417145 +0000 UTC m=+0.127801368 container init 7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 04:11:10 np0005539563 podman[416303]: 2025-11-29 09:11:10.445924869 +0000 UTC m=+0.135309072 container start 7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:11:10 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [NOTICE]   (416322) : New worker (416324) forked
Nov 29 04:11:10 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [NOTICE]   (416322) : Loading success.
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.819 252257 DEBUG nova.compute.manager [req-80b96ea5-9693-4ad1-b89e-c314e07228c5 req-c3be45a4-f653-4c72-9abe-6dba8f7f73a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.819 252257 DEBUG oslo_concurrency.lockutils [req-80b96ea5-9693-4ad1-b89e-c314e07228c5 req-c3be45a4-f653-4c72-9abe-6dba8f7f73a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.819 252257 DEBUG oslo_concurrency.lockutils [req-80b96ea5-9693-4ad1-b89e-c314e07228c5 req-c3be45a4-f653-4c72-9abe-6dba8f7f73a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.820 252257 DEBUG oslo_concurrency.lockutils [req-80b96ea5-9693-4ad1-b89e-c314e07228c5 req-c3be45a4-f653-4c72-9abe-6dba8f7f73a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.820 252257 DEBUG nova.compute.manager [req-80b96ea5-9693-4ad1-b89e-c314e07228c5 req-c3be45a4-f653-4c72-9abe-6dba8f7f73a2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Processing event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.935 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407470.9352598, 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.936 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] VM Started (Lifecycle Event)#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.938 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.940 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.943 252257 INFO nova.virt.libvirt.driver [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Instance spawned successfully.#033[00m
Nov 29 04:11:10 np0005539563 nova_compute[252253]: 2025-11-29 09:11:10.944 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:11:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.556 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.561 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.773 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.774 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.774 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.775 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.775 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.776 252257 DEBUG nova.virt.libvirt.driver [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:11:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 255 B/s wr, 5 op/s
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.983 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.983 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407470.9374537, 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:11:11 np0005539563 nova_compute[252253]: 2025-11-29 09:11:11.983 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:11:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.278 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.281 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407470.9401495, 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.282 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.427 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.663 252257 INFO nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Took 9.97 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.663 252257 DEBUG nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.675 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.679 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.723 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.785 252257 INFO nova.compute.manager [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Took 17.02 seconds to build instance.#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.815 252257 DEBUG oslo_concurrency.lockutils [None req-2ca8e6f8-02e1-4780-a137-375c91c0cdbc 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.948 252257 DEBUG nova.compute.manager [req-b151be17-49e3-4c51-b5fc-01ad8665ac7b req-e25cdc11-22bb-4f15-a02d-a2d2e13afb24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.949 252257 DEBUG oslo_concurrency.lockutils [req-b151be17-49e3-4c51-b5fc-01ad8665ac7b req-e25cdc11-22bb-4f15-a02d-a2d2e13afb24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.949 252257 DEBUG oslo_concurrency.lockutils [req-b151be17-49e3-4c51-b5fc-01ad8665ac7b req-e25cdc11-22bb-4f15-a02d-a2d2e13afb24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.949 252257 DEBUG oslo_concurrency.lockutils [req-b151be17-49e3-4c51-b5fc-01ad8665ac7b req-e25cdc11-22bb-4f15-a02d-a2d2e13afb24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.949 252257 DEBUG nova.compute.manager [req-b151be17-49e3-4c51-b5fc-01ad8665ac7b req-e25cdc11-22bb-4f15-a02d-a2d2e13afb24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] No waiting events found dispatching network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:11:12 np0005539563 nova_compute[252253]: 2025-11-29 09:11:12.949 252257 WARNING nova.compute.manager [req-b151be17-49e3-4c51-b5fc-01ad8665ac7b req-e25cdc11-22bb-4f15-a02d-a2d2e13afb24 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received unexpected event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 for instance with vm_state active and task_state None.#033[00m
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:11:13
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'images', 'vms']
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:11:13 np0005539563 nova_compute[252253]: 2025-11-29 09:11:13.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:13.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:11:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 805 KiB/s rd, 511 B/s wr, 34 op/s
Nov 29 04:11:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 55 op/s
Nov 29 04:11:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:16.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:11:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:11:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:17.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:17 np0005539563 nova_compute[252253]: 2025-11-29 09:11:17.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 04:11:18 np0005539563 nova_compute[252253]: 2025-11-29 09:11:18.130 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.166210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478166346, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1564, "num_deletes": 252, "total_data_size": 2701636, "memory_usage": 2746456, "flush_reason": "Manual Compaction"}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Nov 29 04:11:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:18.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:18 np0005539563 podman[416380]: 2025-11-29 09:11:18.514805258 +0000 UTC m=+0.060291437 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 04:11:18 np0005539563 podman[416381]: 2025-11-29 09:11:18.535204991 +0000 UTC m=+0.076926897 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:11:18 np0005539563 podman[416379]: 2025-11-29 09:11:18.536465136 +0000 UTC m=+0.084321278 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478573707, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 2658815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82273, "largest_seqno": 83836, "table_properties": {"data_size": 2651603, "index_size": 4218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15405, "raw_average_key_size": 20, "raw_value_size": 2637041, "raw_average_value_size": 3474, "num_data_blocks": 186, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407319, "oldest_key_time": 1764407319, "file_creation_time": 1764407478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 407550 microseconds, and 6958 cpu microseconds.
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.573814) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 2658815 bytes OK
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.573843) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.580679) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.580712) EVENT_LOG_v1 {"time_micros": 1764407478580705, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.580747) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 2694984, prev total WAL file size 2695265, number of live WAL files 2.
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.581679) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(2596KB)], [188(11MB)]
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478581832, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 14273986, "oldest_snapshot_seqno": -1}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 11343 keys, 12314225 bytes, temperature: kUnknown
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478761837, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 12314225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12244381, "index_size": 40355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300799, "raw_average_key_size": 26, "raw_value_size": 12049470, "raw_average_value_size": 1062, "num_data_blocks": 1517, "num_entries": 11343, "num_filter_entries": 11343, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.762136) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 12314225 bytes
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.925301) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 79.3 rd, 68.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 11864, records dropped: 521 output_compression: NoCompression
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.925390) EVENT_LOG_v1 {"time_micros": 1764407478925343, "job": 118, "event": "compaction_finished", "compaction_time_micros": 180092, "compaction_time_cpu_micros": 29100, "output_level": 6, "num_output_files": 1, "total_output_size": 12314225, "num_input_records": 11864, "num_output_records": 11343, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478926004, "job": 118, "event": "table_file_deletion", "file_number": 190}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407478927598, "job": 118, "event": "table_file_deletion", "file_number": 188}
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.581504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.927642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.927647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.927649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.927652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:18 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:11:18.927654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:11:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:19.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:19 np0005539563 nova_compute[252253]: 2025-11-29 09:11:19.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:19 np0005539563 nova_compute[252253]: 2025-11-29 09:11:19.799 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:19 np0005539563 NetworkManager[48981]: <info>  [1764407479.8134] manager: (patch-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 29 04:11:19 np0005539563 NetworkManager[48981]: <info>  [1764407479.8153] manager: (patch-br-int-to-provnet-a082b8bd-d08a-4199-ba2d-38dbc451f37e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 29 04:11:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 04:11:19 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:19Z|00954|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:11:19 np0005539563 nova_compute[252253]: 2025-11-29 09:11:19.903 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:19 np0005539563 nova_compute[252253]: 2025-11-29 09:11:19.912 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:20 np0005539563 nova_compute[252253]: 2025-11-29 09:11:20.186 252257 DEBUG nova.compute.manager [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-changed-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:11:20 np0005539563 nova_compute[252253]: 2025-11-29 09:11:20.186 252257 DEBUG nova.compute.manager [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Refreshing instance network info cache due to event network-changed-03e18ca1-a94c-48e8-a149-d4c144215d18. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:11:20 np0005539563 nova_compute[252253]: 2025-11-29 09:11:20.187 252257 DEBUG oslo_concurrency.lockutils [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:11:20 np0005539563 nova_compute[252253]: 2025-11-29 09:11:20.187 252257 DEBUG oslo_concurrency.lockutils [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:11:20 np0005539563 nova_compute[252253]: 2025-11-29 09:11:20.187 252257 DEBUG nova.network.neutron [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Refreshing network info cache for port 03e18ca1-a94c-48e8-a149-d4c144215d18 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:11:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:21.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 04:11:22 np0005539563 nova_compute[252253]: 2025-11-29 09:11:22.026 252257 DEBUG nova.network.neutron [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updated VIF entry in instance network info cache for port 03e18ca1-a94c-48e8-a149-d4c144215d18. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:11:22 np0005539563 nova_compute[252253]: 2025-11-29 09:11:22.027 252257 DEBUG nova.network.neutron [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating instance_info_cache with network_info: [{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:11:22 np0005539563 nova_compute[252253]: 2025-11-29 09:11:22.057 252257 DEBUG oslo_concurrency.lockutils [req-a946fd18-fd1b-4c1e-82c1-b392350f98a2 req-1db5379c-bca9-414d-a24c-174edda72d11 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:11:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:22 np0005539563 nova_compute[252253]: 2025-11-29 09:11:22.433 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:23 np0005539563 nova_compute[252253]: 2025-11-29 09:11:23.169 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:23.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 510 KiB/s wr, 73 op/s
Nov 29 04:11:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:24.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0034262385422482783 of space, bias 1.0, pg target 1.0278715626744834 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:11:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 04:11:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:24Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:23:7f 10.100.0.12
Nov 29 04:11:24 np0005539563 ovn_controller[148841]: 2025-11-29T09:11:24Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:23:7f 10.100.0.12
Nov 29 04:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:11:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:11:24 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:11:24 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:11:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:25.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 173 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 895 KiB/s wr, 62 op/s
Nov 29 04:11:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:26.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 62623408-f3e4-45fe-a604-9b78340ec914 does not exist
Nov 29 04:11:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3d340257-1836-41cd-8937-866b2273b9e2 does not exist
Nov 29 04:11:26 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 98721065-3fb6-452b-a9b5-47a2a1bb3eb7 does not exist
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:11:26 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:11:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:27.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.423693584 +0000 UTC m=+0.038189426 container create ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:11:27 np0005539563 nova_compute[252253]: 2025-11-29 09:11:27.435 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:27 np0005539563 systemd[1]: Started libpod-conmon-ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8.scope.
Nov 29 04:11:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.406911919 +0000 UTC m=+0.021407781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.506678236 +0000 UTC m=+0.121174098 container init ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.514529229 +0000 UTC m=+0.129025071 container start ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.518068765 +0000 UTC m=+0.132564607 container attach ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:11:27 np0005539563 trusting_kapitsa[416786]: 167 167
Nov 29 04:11:27 np0005539563 systemd[1]: libpod-ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8.scope: Deactivated successfully.
Nov 29 04:11:27 np0005539563 conmon[416786]: conmon ca7f8fab8b6c905f26e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8.scope/container/memory.events
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.522961037 +0000 UTC m=+0.137456879 container died ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kapitsa, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 04:11:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-13e3a4905680f742e7d5491b10b692f1c17885a53c3bbd92277e1639d13ed09a-merged.mount: Deactivated successfully.
Nov 29 04:11:27 np0005539563 podman[416769]: 2025-11-29 09:11:27.559116148 +0000 UTC m=+0.173611990 container remove ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 04:11:27 np0005539563 systemd[1]: libpod-conmon-ca7f8fab8b6c905f26e447e0d18af70bd1c4577af662cbab667e0629b2e82de8.scope: Deactivated successfully.
Nov 29 04:11:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:27 np0005539563 podman[416810]: 2025-11-29 09:11:27.739781508 +0000 UTC m=+0.045570076 container create 8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:11:27 np0005539563 systemd[1]: Started libpod-conmon-8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa.scope.
Nov 29 04:11:27 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:27 np0005539563 podman[416810]: 2025-11-29 09:11:27.717996337 +0000 UTC m=+0.023784925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fb5fe0f91d1ea8eeb2d0e4790fc45eabffeb0b5654eb898cfdab7484de457f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fb5fe0f91d1ea8eeb2d0e4790fc45eabffeb0b5654eb898cfdab7484de457f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fb5fe0f91d1ea8eeb2d0e4790fc45eabffeb0b5654eb898cfdab7484de457f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fb5fe0f91d1ea8eeb2d0e4790fc45eabffeb0b5654eb898cfdab7484de457f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:27 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fb5fe0f91d1ea8eeb2d0e4790fc45eabffeb0b5654eb898cfdab7484de457f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:27 np0005539563 podman[416810]: 2025-11-29 09:11:27.831104836 +0000 UTC m=+0.136893454 container init 8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 04:11:27 np0005539563 podman[416810]: 2025-11-29 09:11:27.840877931 +0000 UTC m=+0.146666509 container start 8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:11:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 769 KiB/s rd, 1.7 MiB/s wr, 63 op/s
Nov 29 04:11:28 np0005539563 podman[416810]: 2025-11-29 09:11:28.092824945 +0000 UTC m=+0.398613543 container attach 8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 04:11:28 np0005539563 nova_compute[252253]: 2025-11-29 09:11:28.171 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:28.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:11:28 np0005539563 charming_torvalds[416826]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:11:28 np0005539563 charming_torvalds[416826]: --> relative data size: 1.0
Nov 29 04:11:28 np0005539563 charming_torvalds[416826]: --> All data devices are unavailable
Nov 29 04:11:28 np0005539563 systemd[1]: libpod-8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa.scope: Deactivated successfully.
Nov 29 04:11:28 np0005539563 podman[416810]: 2025-11-29 09:11:28.712924125 +0000 UTC m=+1.018712693 container died 8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:11:28 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f3fb5fe0f91d1ea8eeb2d0e4790fc45eabffeb0b5654eb898cfdab7484de457f-merged.mount: Deactivated successfully.
Nov 29 04:11:28 np0005539563 podman[416810]: 2025-11-29 09:11:28.76585135 +0000 UTC m=+1.071639918 container remove 8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:11:28 np0005539563 systemd[1]: libpod-conmon-8f5058053a1c5cf1d11d31a8a62f419a15412f0720d0131cdbf2ca4895978eaa.scope: Deactivated successfully.
Nov 29 04:11:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:29.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.345072222 +0000 UTC m=+0.037063226 container create b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:11:29 np0005539563 systemd[1]: Started libpod-conmon-b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7.scope.
Nov 29 04:11:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.409987593 +0000 UTC m=+0.101978617 container init b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.418236407 +0000 UTC m=+0.110227411 container start b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.421019872 +0000 UTC m=+0.113010906 container attach b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:11:29 np0005539563 focused_jackson[417010]: 167 167
Nov 29 04:11:29 np0005539563 systemd[1]: libpod-b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7.scope: Deactivated successfully.
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.329443958 +0000 UTC m=+0.021434982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.424661961 +0000 UTC m=+0.116652965 container died b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:11:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-49cb7557857e0613c2c0eae9b91ef48c804e1474c88b0817237b1c1777826a0b-merged.mount: Deactivated successfully.
Nov 29 04:11:29 np0005539563 podman[416994]: 2025-11-29 09:11:29.456207507 +0000 UTC m=+0.148198511 container remove b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 04:11:29 np0005539563 systemd[1]: libpod-conmon-b77774b48592297e525c1ce8df9caffb4598405a27479c303708b396c3096ed7.scope: Deactivated successfully.
Nov 29 04:11:29 np0005539563 podman[417034]: 2025-11-29 09:11:29.664549438 +0000 UTC m=+0.054036826 container create 858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 04:11:29 np0005539563 systemd[1]: Started libpod-conmon-858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842.scope.
Nov 29 04:11:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158c120f4402bed47a7a5c8a80ddc3d871153eccc8b4e6eb764594beae0d7822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158c120f4402bed47a7a5c8a80ddc3d871153eccc8b4e6eb764594beae0d7822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158c120f4402bed47a7a5c8a80ddc3d871153eccc8b4e6eb764594beae0d7822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:29 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158c120f4402bed47a7a5c8a80ddc3d871153eccc8b4e6eb764594beae0d7822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:29 np0005539563 podman[417034]: 2025-11-29 09:11:29.731183936 +0000 UTC m=+0.120671334 container init 858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:11:29 np0005539563 podman[417034]: 2025-11-29 09:11:29.640554628 +0000 UTC m=+0.030042026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:11:29 np0005539563 podman[417034]: 2025-11-29 09:11:29.741010772 +0000 UTC m=+0.130498130 container start 858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:11:29 np0005539563 podman[417034]: 2025-11-29 09:11:29.746870831 +0000 UTC m=+0.136358219 container attach 858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:11:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 04:11:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:30.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]: {
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:    "0": [
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:        {
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "devices": [
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "/dev/loop3"
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            ],
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "lv_name": "ceph_lv0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "lv_size": "7511998464",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "name": "ceph_lv0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "tags": {
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.cluster_name": "ceph",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.crush_device_class": "",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.encrypted": "0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.osd_id": "0",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.type": "block",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:                "ceph.vdo": "0"
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            },
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "type": "block",
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:            "vg_name": "ceph_vg0"
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:        }
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]:    ]
Nov 29 04:11:30 np0005539563 gifted_franklin[417050]: }
Nov 29 04:11:30 np0005539563 systemd[1]: libpod-858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842.scope: Deactivated successfully.
Nov 29 04:11:30 np0005539563 podman[417034]: 2025-11-29 09:11:30.516779395 +0000 UTC m=+0.906266743 container died 858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:11:30 np0005539563 systemd[1]: var-lib-containers-storage-overlay-158c120f4402bed47a7a5c8a80ddc3d871153eccc8b4e6eb764594beae0d7822-merged.mount: Deactivated successfully.
Nov 29 04:11:30 np0005539563 podman[417034]: 2025-11-29 09:11:30.567555422 +0000 UTC m=+0.957042790 container remove 858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 04:11:30 np0005539563 systemd[1]: libpod-conmon-858599e33b7128eb75b074e9e0509b52ef4d2fca18da3bbb56d00a95dcd25842.scope: Deactivated successfully.
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.145273943 +0000 UTC m=+0.036097170 container create 42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 04:11:31 np0005539563 systemd[1]: Started libpod-conmon-42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe.scope.
Nov 29 04:11:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:31.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.211504719 +0000 UTC m=+0.102327956 container init 42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.220672239 +0000 UTC m=+0.111495466 container start 42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.224245915 +0000 UTC m=+0.115069162 container attach 42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.130017469 +0000 UTC m=+0.020840716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:11:31 np0005539563 kind_wing[417228]: 167 167
Nov 29 04:11:31 np0005539563 systemd[1]: libpod-42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe.scope: Deactivated successfully.
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.226672811 +0000 UTC m=+0.117496048 container died 42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 04:11:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-50dd375fdd9965f5939aa4817a6e6229806d7b2a9a2a6a98644fb14260d6700d-merged.mount: Deactivated successfully.
Nov 29 04:11:31 np0005539563 podman[417210]: 2025-11-29 09:11:31.263949282 +0000 UTC m=+0.154772509 container remove 42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wing, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:11:31 np0005539563 systemd[1]: libpod-conmon-42d2f050e13dd92124673885ec4900bf9aac974735cd435aa4c02cf2097ca0fe.scope: Deactivated successfully.
Nov 29 04:11:31 np0005539563 podman[417252]: 2025-11-29 09:11:31.449704601 +0000 UTC m=+0.038694120 container create b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:11:31 np0005539563 systemd[1]: Started libpod-conmon-b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06.scope.
Nov 29 04:11:31 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:11:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c34a17649d232d47cedd651d08443dfde1917e92568778b20ee1962d13e157/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c34a17649d232d47cedd651d08443dfde1917e92568778b20ee1962d13e157/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c34a17649d232d47cedd651d08443dfde1917e92568778b20ee1962d13e157/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:31 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79c34a17649d232d47cedd651d08443dfde1917e92568778b20ee1962d13e157/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:11:31 np0005539563 podman[417252]: 2025-11-29 09:11:31.431706023 +0000 UTC m=+0.020695552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:11:31 np0005539563 podman[417252]: 2025-11-29 09:11:31.53186391 +0000 UTC m=+0.120853439 container init b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:11:31 np0005539563 podman[417252]: 2025-11-29 09:11:31.538684984 +0000 UTC m=+0.127674493 container start b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 04:11:31 np0005539563 podman[417252]: 2025-11-29 09:11:31.541679396 +0000 UTC m=+0.130668925 container attach b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:11:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 04:11:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:32.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]: {
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:        "osd_id": 0,
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:        "type": "bluestore"
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]:    }
Nov 29 04:11:32 np0005539563 relaxed_lamarr[417268]: }
Nov 29 04:11:32 np0005539563 systemd[1]: libpod-b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06.scope: Deactivated successfully.
Nov 29 04:11:32 np0005539563 podman[417252]: 2025-11-29 09:11:32.364064573 +0000 UTC m=+0.953054082 container died b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 04:11:32 np0005539563 nova_compute[252253]: 2025-11-29 09:11:32.438 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-79c34a17649d232d47cedd651d08443dfde1917e92568778b20ee1962d13e157-merged.mount: Deactivated successfully.
Nov 29 04:11:33 np0005539563 nova_compute[252253]: 2025-11-29 09:11:33.173 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:33.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:33 np0005539563 podman[417252]: 2025-11-29 09:11:33.645437181 +0000 UTC m=+2.234426690 container remove b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:11:33 np0005539563 systemd[1]: libpod-conmon-b4529e8f7d03a57567a49bfa522e305bdccfe76e86de23b48cbd3b14dccb7e06.scope: Deactivated successfully.
Nov 29 04:11:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:11:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:11:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4ebcbacb-5d50-4dde-b891-df5cf90a1d20 does not exist
Nov 29 04:11:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 276c4cdd-608e-453a-babb-faab87f713b7 does not exist
Nov 29 04:11:33 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 98489ba6-3cd0-4cb7-85c7-26edc05d019d does not exist
Nov 29 04:11:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 29 04:11:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:34.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:34 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:11:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:35.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 367 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Nov 29 04:11:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:36.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:37.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:37 np0005539563 nova_compute[252253]: 2025-11-29 09:11:37.443 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 29 04:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Nov 29 04:11:38 np0005539563 nova_compute[252253]: 2025-11-29 09:11:38.175 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 04:11:38 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Nov 29 04:11:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:38.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:38 np0005539563 nova_compute[252253]: 2025-11-29 09:11:38.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:39.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 16 KiB/s wr, 11 op/s
Nov 29 04:11:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:40.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:41.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 13 op/s
Nov 29 04:11:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:42 np0005539563 nova_compute[252253]: 2025-11-29 09:11:42.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:43 np0005539563 nova_compute[252253]: 2025-11-29 09:11:43.178 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:11:43 np0005539563 nova_compute[252253]: 2025-11-29 09:11:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:43 np0005539563 nova_compute[252253]: 2025-11-29 09:11:43.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:11:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 3.5 KiB/s wr, 17 op/s
Nov 29 04:11:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:44.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:45.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:45 np0005539563 nova_compute[252253]: 2025-11-29 09:11:45.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 3.6 KiB/s wr, 20 op/s
Nov 29 04:11:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:46.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:46 np0005539563 nova_compute[252253]: 2025-11-29 09:11:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:46 np0005539563 nova_compute[252253]: 2025-11-29 09:11:46.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:47 np0005539563 nova_compute[252253]: 2025-11-29 09:11:47.451 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 3.6 KiB/s wr, 19 op/s
Nov 29 04:11:48 np0005539563 nova_compute[252253]: 2025-11-29 09:11:48.179 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:48.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:49 np0005539563 podman[417413]: 2025-11-29 09:11:49.508582567 +0000 UTC m=+0.061599112 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:11:49 np0005539563 podman[417414]: 2025-11-29 09:11:49.51275241 +0000 UTC m=+0.063756361 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 04:11:49 np0005539563 podman[417415]: 2025-11-29 09:11:49.542766724 +0000 UTC m=+0.090300950 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 04:11:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 3.0 KiB/s wr, 17 op/s
Nov 29 04:11:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:11:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:50.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:11:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:51.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:11:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 2.8 KiB/s wr, 13 op/s
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.933 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.933 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.933 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:11:51 np0005539563 nova_compute[252253]: 2025-11-29 09:11:51.934 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:11:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:52.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:52 np0005539563 nova_compute[252253]: 2025-11-29 09:11:52.455 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:53 np0005539563 nova_compute[252253]: 2025-11-29 09:11:53.181 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:53.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.2 KiB/s rd, 2.5 KiB/s wr, 12 op/s
Nov 29 04:11:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:54.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:55 np0005539563 nova_compute[252253]: 2025-11-29 09:11:55.063 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating instance_info_cache with network_info: [{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:11:55 np0005539563 nova_compute[252253]: 2025-11-29 09:11:55.208 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:11:55 np0005539563 nova_compute[252253]: 2025-11-29 09:11:55.208 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:11:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:55.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 1023 B/s wr, 12 op/s
Nov 29 04:11:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:11:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:56.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:11:56 np0005539563 nova_compute[252253]: 2025-11-29 09:11:56.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:56 np0005539563 nova_compute[252253]: 2025-11-29 09:11:56.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:11:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:57 np0005539563 nova_compute[252253]: 2025-11-29 09:11:57.458 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:11:57 np0005539563 nova_compute[252253]: 2025-11-29 09:11:57.804 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:57 np0005539563 nova_compute[252253]: 2025-11-29 09:11:57.804 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:57 np0005539563 nova_compute[252253]: 2025-11-29 09:11:57.804 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:11:57 np0005539563 nova_compute[252253]: 2025-11-29 09:11:57.805 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:11:57 np0005539563 nova_compute[252253]: 2025-11-29 09:11:57.805 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 1.7 KiB/s wr, 12 op/s
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.183 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:11:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:11:58.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:11:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028295833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.301 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.644 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.645 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.912 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.914 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3892MB free_disk=20.988109588623047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.915 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:11:58 np0005539563 nova_compute[252253]: 2025-11-29 09:11:58.915 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.206 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.206 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.206 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:11:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:11:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:11:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:11:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.240 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:11:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:11:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215405446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.660 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.666 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:11:59 np0005539563 nova_compute[252253]: 2025-11-29 09:11:59.726 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:11:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 3.7 KiB/s wr, 13 op/s
Nov 29 04:12:00 np0005539563 nova_compute[252253]: 2025-11-29 09:12:00.164 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:12:00 np0005539563 nova_compute[252253]: 2025-11-29 09:12:00.165 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:00.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:01.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 3.7 KiB/s wr, 10 op/s
Nov 29 04:12:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:02.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:02 np0005539563 nova_compute[252253]: 2025-11-29 09:12:02.462 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:03 np0005539563 nova_compute[252253]: 2025-11-29 09:12:03.185 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:03.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s rd, 3.4 KiB/s wr, 7 op/s
Nov 29 04:12:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:04.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:04.989 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:04.990 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:05.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 3.3 KiB/s wr, 6 op/s
Nov 29 04:12:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:06.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:07.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:07 np0005539563 nova_compute[252253]: 2025-11-29 09:12:07.465 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 2.9 KiB/s wr, 2 op/s
Nov 29 04:12:08 np0005539563 nova_compute[252253]: 2025-11-29 09:12:08.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:09.182 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:12:09 np0005539563 nova_compute[252253]: 2025-11-29 09:12:09.183 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:09 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:09.184 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:12:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:09.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 17 KiB/s wr, 13 op/s
Nov 29 04:12:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:10.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 652 KiB/s rd, 15 KiB/s wr, 34 op/s
Nov 29 04:12:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:12 np0005539563 nova_compute[252253]: 2025-11-29 09:12:12.469 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:12:13
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:12:13 np0005539563 nova_compute[252253]: 2025-11-29 09:12:13.189 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:12:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 61 op/s
Nov 29 04:12:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:14.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:15.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Nov 29 04:12:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:16.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:12:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:12:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:17 np0005539563 nova_compute[252253]: 2025-11-29 09:12:17.472 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Nov 29 04:12:18 np0005539563 nova_compute[252253]: 2025-11-29 09:12:18.191 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:19.186 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Nov 29 04:12:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:20.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:20 np0005539563 podman[417587]: 2025-11-29 09:12:20.504616058 +0000 UTC m=+0.051601930 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 04:12:20 np0005539563 podman[417586]: 2025-11-29 09:12:20.53119905 +0000 UTC m=+0.078185192 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 04:12:20 np0005539563 podman[417588]: 2025-11-29 09:12:20.560645118 +0000 UTC m=+0.105261266 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:12:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:21.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.8 KiB/s wr, 65 op/s
Nov 29 04:12:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:22 np0005539563 nova_compute[252253]: 2025-11-29 09:12:22.476 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:23 np0005539563 nova_compute[252253]: 2025-11-29 09:12:23.220 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:23.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 208 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 215 KiB/s wr, 55 op/s
Nov 29 04:12:24 np0005539563 nova_compute[252253]: 2025-11-29 09:12:24.161 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.617607714498418e-05 of space, bias 1.0, pg target 0.004852823143495254 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004444059126651778 of space, bias 1.0, pg target 1.3332177379955334 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:12:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 04:12:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:25.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 240 KiB/s wr, 50 op/s
Nov 29 04:12:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:27.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:27 np0005539563 nova_compute[252253]: 2025-11-29 09:12:27.480 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 937 KiB/s rd, 493 KiB/s wr, 48 op/s
Nov 29 04:12:28 np0005539563 nova_compute[252253]: 2025-11-29 09:12:28.222 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 529 KiB/s wr, 56 op/s
Nov 29 04:12:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:31.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:31 np0005539563 nova_compute[252253]: 2025-11-29 09:12:31.671 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 529 KiB/s wr, 56 op/s
Nov 29 04:12:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:32 np0005539563 nova_compute[252253]: 2025-11-29 09:12:32.482 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:33 np0005539563 nova_compute[252253]: 2025-11-29 09:12:33.224 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 539 KiB/s wr, 57 op/s
Nov 29 04:12:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:34.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:34 np0005539563 podman[417880]: 2025-11-29 09:12:34.930493985 +0000 UTC m=+0.055069255 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 04:12:35 np0005539563 podman[417880]: 2025-11-29 09:12:35.061411696 +0000 UTC m=+0.185986966 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:12:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:12:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 333 KiB/s wr, 46 op/s
Nov 29 04:12:35 np0005539563 podman[418036]: 2025-11-29 09:12:35.904810973 +0000 UTC m=+0.217461279 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 04:12:35 np0005539563 podman[418036]: 2025-11-29 09:12:35.947232694 +0000 UTC m=+0.259882960 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 04:12:36 np0005539563 podman[418102]: 2025-11-29 09:12:36.197964635 +0000 UTC m=+0.060522302 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, release=1793, version=2.2.4, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Nov 29 04:12:36 np0005539563 podman[418102]: 2025-11-29 09:12:36.207867654 +0000 UTC m=+0.070425311 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 04:12:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:12:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:36.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:12:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:12:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:37.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:37 np0005539563 nova_compute[252253]: 2025-11-29 09:12:37.486 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 308 KiB/s wr, 24 op/s
Nov 29 04:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:38 np0005539563 nova_compute[252253]: 2025-11-29 09:12:38.274 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:12:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:39.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:39 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 65c0e5a6-f907-4b1f-9a30-ae36ca629b19 does not exist
Nov 29 04:12:39 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 26c73a0b-c7fb-47cb-9469-a19849e1cf92 does not exist
Nov 29 04:12:39 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 431b3610-55e8-4258-8b04-3c6f3acac7ca does not exist
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:12:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:12:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 124 KiB/s rd, 50 KiB/s wr, 9 op/s
Nov 29 04:12:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.415510657 +0000 UTC m=+0.044152539 container create b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:12:40 np0005539563 systemd[1]: Started libpod-conmon-b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19.scope.
Nov 29 04:12:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.393613943 +0000 UTC m=+0.022255825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.500694467 +0000 UTC m=+0.129336349 container init b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.506875605 +0000 UTC m=+0.135517467 container start b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:12:40 np0005539563 elastic_ganguly[418544]: 167 167
Nov 29 04:12:40 np0005539563 systemd[1]: libpod-b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19.scope: Deactivated successfully.
Nov 29 04:12:40 np0005539563 conmon[418544]: conmon b06b064d51cd4602d42b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19.scope/container/memory.events
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.515131239 +0000 UTC m=+0.143773131 container attach b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.515760467 +0000 UTC m=+0.144402339 container died b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:12:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a74d0c686457e5ac8d0094a7404e96ebfb39e05702f1242653fee38c333d690b-merged.mount: Deactivated successfully.
Nov 29 04:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:40 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:12:40 np0005539563 podman[418528]: 2025-11-29 09:12:40.556361567 +0000 UTC m=+0.185003429 container remove b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:12:40 np0005539563 systemd[1]: libpod-conmon-b06b064d51cd4602d42b92331ca0ddd5f890e5005c5f31b40499fc1d4ce88a19.scope: Deactivated successfully.
Nov 29 04:12:40 np0005539563 nova_compute[252253]: 2025-11-29 09:12:40.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:40 np0005539563 podman[418568]: 2025-11-29 09:12:40.73857723 +0000 UTC m=+0.061908890 container create 9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:12:40 np0005539563 systemd[1]: Started libpod-conmon-9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877.scope.
Nov 29 04:12:40 np0005539563 podman[418568]: 2025-11-29 09:12:40.702269515 +0000 UTC m=+0.025601265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:12:40 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:12:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334be9a6d8bdfaf5c9ddee61891792dd93ebd41c468955892bb075fb5b4310f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334be9a6d8bdfaf5c9ddee61891792dd93ebd41c468955892bb075fb5b4310f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334be9a6d8bdfaf5c9ddee61891792dd93ebd41c468955892bb075fb5b4310f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334be9a6d8bdfaf5c9ddee61891792dd93ebd41c468955892bb075fb5b4310f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:40 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334be9a6d8bdfaf5c9ddee61891792dd93ebd41c468955892bb075fb5b4310f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:40 np0005539563 podman[418568]: 2025-11-29 09:12:40.841130542 +0000 UTC m=+0.164462222 container init 9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:12:40 np0005539563 podman[418568]: 2025-11-29 09:12:40.852434479 +0000 UTC m=+0.175766139 container start 9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:12:40 np0005539563 podman[418568]: 2025-11-29 09:12:40.855703897 +0000 UTC m=+0.179035557 container attach 9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 04:12:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:41 np0005539563 nova_compute[252253]: 2025-11-29 09:12:41.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:41 np0005539563 nova_compute[252253]: 2025-11-29 09:12:41.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:12:41 np0005539563 beautiful_williams[418585]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:12:41 np0005539563 beautiful_williams[418585]: --> relative data size: 1.0
Nov 29 04:12:41 np0005539563 beautiful_williams[418585]: --> All data devices are unavailable
Nov 29 04:12:41 np0005539563 systemd[1]: libpod-9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877.scope: Deactivated successfully.
Nov 29 04:12:41 np0005539563 podman[418601]: 2025-11-29 09:12:41.835397652 +0000 UTC m=+0.030098128 container died 9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:12:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-5334be9a6d8bdfaf5c9ddee61891792dd93ebd41c468955892bb075fb5b4310f-merged.mount: Deactivated successfully.
Nov 29 04:12:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 1 op/s
Nov 29 04:12:41 np0005539563 podman[418601]: 2025-11-29 09:12:41.90132142 +0000 UTC m=+0.096021876 container remove 9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_williams, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:12:41 np0005539563 systemd[1]: libpod-conmon-9888cc88d7be2bf3b8f0525d39f7f58f064d487be1f8e5587579c94a1ce85877.scope: Deactivated successfully.
Nov 29 04:12:42 np0005539563 nova_compute[252253]: 2025-11-29 09:12:42.025 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:12:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:42.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:42 np0005539563 nova_compute[252253]: 2025-11-29 09:12:42.490 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.756043444 +0000 UTC m=+0.066729151 container create 8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lumiere, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 04:12:42 np0005539563 systemd[1]: Started libpod-conmon-8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05.scope.
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.73415263 +0000 UTC m=+0.044838317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:12:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.862953624 +0000 UTC m=+0.173639311 container init 8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.871140726 +0000 UTC m=+0.181826403 container start 8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lumiere, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.875790682 +0000 UTC m=+0.186476389 container attach 8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:12:42 np0005539563 practical_lumiere[418773]: 167 167
Nov 29 04:12:42 np0005539563 systemd[1]: libpod-8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05.scope: Deactivated successfully.
Nov 29 04:12:42 np0005539563 conmon[418773]: conmon 8f68925ec59a3666ab07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05.scope/container/memory.events
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.880283645 +0000 UTC m=+0.190969322 container died 8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:12:42 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d448138db3a7f51435744bd8329519ad2931ca29b105c4f53b1ab5e5f57b9bfc-merged.mount: Deactivated successfully.
Nov 29 04:12:42 np0005539563 podman[418757]: 2025-11-29 09:12:42.936495529 +0000 UTC m=+0.247181196 container remove 8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lumiere, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:12:42 np0005539563 systemd[1]: libpod-conmon-8f68925ec59a3666ab07d10de3ea032a5fad820dd5e2962d78ca127d64c38f05.scope: Deactivated successfully.
Nov 29 04:12:43 np0005539563 podman[418798]: 2025-11-29 09:12:43.203606465 +0000 UTC m=+0.071193472 container create 20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:12:43 np0005539563 systemd[1]: Started libpod-conmon-20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283.scope.
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:12:43 np0005539563 podman[418798]: 2025-11-29 09:12:43.182969125 +0000 UTC m=+0.050556172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:12:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:43 np0005539563 nova_compute[252253]: 2025-11-29 09:12:43.277 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:43 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e5fbaf16997f7591d1305bdbd9ab9f0d75b291a3f9922c0bbcd508322d61e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e5fbaf16997f7591d1305bdbd9ab9f0d75b291a3f9922c0bbcd508322d61e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e5fbaf16997f7591d1305bdbd9ab9f0d75b291a3f9922c0bbcd508322d61e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:43 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e5fbaf16997f7591d1305bdbd9ab9f0d75b291a3f9922c0bbcd508322d61e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:43 np0005539563 podman[418798]: 2025-11-29 09:12:43.31884076 +0000 UTC m=+0.186427797 container init 20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:12:43 np0005539563 podman[418798]: 2025-11-29 09:12:43.327218527 +0000 UTC m=+0.194805534 container start 20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_knuth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:12:43 np0005539563 podman[418798]: 2025-11-29 09:12:43.330584008 +0000 UTC m=+0.198171045 container attach 20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:12:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 16 KiB/s wr, 1 op/s
Nov 29 04:12:44 np0005539563 nova_compute[252253]: 2025-11-29 09:12:44.026 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:44 np0005539563 nova_compute[252253]: 2025-11-29 09:12:44.026 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]: {
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:    "0": [
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:        {
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "devices": [
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "/dev/loop3"
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            ],
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "lv_name": "ceph_lv0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "lv_size": "7511998464",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "name": "ceph_lv0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "tags": {
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.cluster_name": "ceph",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.crush_device_class": "",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.encrypted": "0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.osd_id": "0",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.type": "block",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:                "ceph.vdo": "0"
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            },
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "type": "block",
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:            "vg_name": "ceph_vg0"
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:        }
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]:    ]
Nov 29 04:12:44 np0005539563 heuristic_knuth[418815]: }
Nov 29 04:12:44 np0005539563 systemd[1]: libpod-20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283.scope: Deactivated successfully.
Nov 29 04:12:44 np0005539563 podman[418798]: 2025-11-29 09:12:44.143485389 +0000 UTC m=+1.011072496 container died 20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 04:12:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-155e5fbaf16997f7591d1305bdbd9ab9f0d75b291a3f9922c0bbcd508322d61e-merged.mount: Deactivated successfully.
Nov 29 04:12:44 np0005539563 podman[418798]: 2025-11-29 09:12:44.204811052 +0000 UTC m=+1.072398059 container remove 20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_knuth, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 04:12:44 np0005539563 systemd[1]: libpod-conmon-20fa81b54c0210c901bd97bc8b7117bb009336dc61f663bbcf6b30de49934283.scope: Deactivated successfully.
Nov 29 04:12:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:44 np0005539563 ovn_controller[148841]: 2025-11-29T09:12:44Z|00955|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.858081843 +0000 UTC m=+0.035881065 container create fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:12:44 np0005539563 systemd[1]: Started libpod-conmon-fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff.scope.
Nov 29 04:12:44 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.938073942 +0000 UTC m=+0.115873184 container init fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.841905984 +0000 UTC m=+0.019705226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.944322581 +0000 UTC m=+0.122121803 container start fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.947613081 +0000 UTC m=+0.125412313 container attach fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 04:12:44 np0005539563 elastic_goldberg[418992]: 167 167
Nov 29 04:12:44 np0005539563 systemd[1]: libpod-fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff.scope: Deactivated successfully.
Nov 29 04:12:44 np0005539563 conmon[418992]: conmon fa3683d6a1cfe37a3d95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff.scope/container/memory.events
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.951397644 +0000 UTC m=+0.129196866 container died fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:12:44 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1f175e256ef40494a9e6f5a91036cddac8d8dcdf4f6cd2075f6dc25cfb3379a4-merged.mount: Deactivated successfully.
Nov 29 04:12:44 np0005539563 podman[418975]: 2025-11-29 09:12:44.991631835 +0000 UTC m=+0.169431057 container remove fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:12:45 np0005539563 systemd[1]: libpod-conmon-fa3683d6a1cfe37a3d95b0e5cd0aaf21d23461887f346422dfe3f1138b4912ff.scope: Deactivated successfully.
Nov 29 04:12:45 np0005539563 podman[419015]: 2025-11-29 09:12:45.159080027 +0000 UTC m=+0.042155234 container create 429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:12:45 np0005539563 systemd[1]: Started libpod-conmon-429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710.scope.
Nov 29 04:12:45 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:12:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f0719efb33bc241b3469bf420b09cb758d07d58835bd0b10f6961ab68ea9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f0719efb33bc241b3469bf420b09cb758d07d58835bd0b10f6961ab68ea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f0719efb33bc241b3469bf420b09cb758d07d58835bd0b10f6961ab68ea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:45 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb27f0719efb33bc241b3469bf420b09cb758d07d58835bd0b10f6961ab68ea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:12:45 np0005539563 podman[419015]: 2025-11-29 09:12:45.13964202 +0000 UTC m=+0.022717207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:12:45 np0005539563 podman[419015]: 2025-11-29 09:12:45.244855384 +0000 UTC m=+0.127930561 container init 429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:12:45 np0005539563 podman[419015]: 2025-11-29 09:12:45.26020121 +0000 UTC m=+0.143276377 container start 429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:12:45 np0005539563 podman[419015]: 2025-11-29 09:12:45.264924848 +0000 UTC m=+0.148000055 container attach 429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 04:12:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 6.0 KiB/s wr, 0 op/s
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]: {
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:        "osd_id": 0,
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:        "type": "bluestore"
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]:    }
Nov 29 04:12:46 np0005539563 wizardly_shannon[419032]: }
Nov 29 04:12:46 np0005539563 systemd[1]: libpod-429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710.scope: Deactivated successfully.
Nov 29 04:12:46 np0005539563 podman[419015]: 2025-11-29 09:12:46.123398754 +0000 UTC m=+1.006473951 container died 429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:12:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-bb27f0719efb33bc241b3469bf420b09cb758d07d58835bd0b10f6961ab68ea9-merged.mount: Deactivated successfully.
Nov 29 04:12:46 np0005539563 podman[419015]: 2025-11-29 09:12:46.180645477 +0000 UTC m=+1.063720644 container remove 429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:12:46 np0005539563 systemd[1]: libpod-conmon-429b468b702a8bc8591732ace4e73e2218e8ecc460dcf91a37141f8252f9b710.scope: Deactivated successfully.
Nov 29 04:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:12:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:46.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:12:46 np0005539563 nova_compute[252253]: 2025-11-29 09:12:46.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:46 np0005539563 nova_compute[252253]: 2025-11-29 09:12:46.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fd5a9178-3557-4afe-8eb8-e537f5fe2932 does not exist
Nov 29 04:12:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a8c4f80f-1848-4fde-a51f-1b3024410e07 does not exist
Nov 29 04:12:46 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4aab2bb4-47bf-4851-a47e-689aa1ec027d does not exist
Nov 29 04:12:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:47 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:12:47 np0005539563 nova_compute[252253]: 2025-11-29 09:12:47.493 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 6.0 KiB/s wr, 0 op/s
Nov 29 04:12:48 np0005539563 nova_compute[252253]: 2025-11-29 09:12:48.279 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:48 np0005539563 nova_compute[252253]: 2025-11-29 09:12:48.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 1 op/s
Nov 29 04:12:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:50.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:51 np0005539563 podman[419172]: 2025-11-29 09:12:51.566974194 +0000 UTC m=+0.107005744 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:12:51 np0005539563 podman[419171]: 2025-11-29 09:12:51.571451475 +0000 UTC m=+0.115040381 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 04:12:51 np0005539563 podman[419173]: 2025-11-29 09:12:51.58670788 +0000 UTC m=+0.124057957 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:12:51 np0005539563 nova_compute[252253]: 2025-11-29 09:12:51.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:51 np0005539563 nova_compute[252253]: 2025-11-29 09:12:51.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:12:51 np0005539563 nova_compute[252253]: 2025-11-29 09:12:51.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:12:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 2.3 KiB/s wr, 1 op/s
Nov 29 04:12:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:12:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:12:52 np0005539563 nova_compute[252253]: 2025-11-29 09:12:52.548 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:52 np0005539563 nova_compute[252253]: 2025-11-29 09:12:52.834 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:12:52 np0005539563 nova_compute[252253]: 2025-11-29 09:12:52.835 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:12:52 np0005539563 nova_compute[252253]: 2025-11-29 09:12:52.835 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:12:52 np0005539563 nova_compute[252253]: 2025-11-29 09:12:52.836 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:12:53 np0005539563 nova_compute[252253]: 2025-11-29 09:12:53.281 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:53.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 2.7 KiB/s wr, 17 op/s
Nov 29 04:12:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:54.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:55.500 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:12:55 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:55.501 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:12:55 np0005539563 nova_compute[252253]: 2025-11-29 09:12:55.502 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 938 B/s wr, 53 op/s
Nov 29 04:12:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:12:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:56.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:12:56 np0005539563 nova_compute[252253]: 2025-11-29 09:12:56.349 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating instance_info_cache with network_info: [{"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:12:56 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:12:56.503 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:12:56 np0005539563 nova_compute[252253]: 2025-11-29 09:12:56.517 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:12:56 np0005539563 nova_compute[252253]: 2025-11-29 09:12:56.518 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:12:56 np0005539563 nova_compute[252253]: 2025-11-29 09:12:56.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:57.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:12:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1599357342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:12:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:12:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1599357342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:12:57 np0005539563 nova_compute[252253]: 2025-11-29 09:12:57.551 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:12:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 596 B/s wr, 53 op/s
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:12:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:12:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.700 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.702 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.702 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:12:58 np0005539563 nova_compute[252253]: 2025-11-29 09:12:58.702 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Nov 29 04:12:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 04:12:58 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Nov 29 04:12:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:12:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464795899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.198 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.277 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.278 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:12:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:12:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:12:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:12:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.438 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.439 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=20.988109588623047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.439 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.440 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.507 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.508 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.508 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.546 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:12:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 205 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 KiB/s wr, 225 op/s
Nov 29 04:12:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:12:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961603926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.972 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:12:59 np0005539563 nova_compute[252253]: 2025-11-29 09:12:59.979 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.000 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.003 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.003 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.174 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.174 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.175 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.175 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.175 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.177 252257 INFO nova.compute.manager [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Terminating instance#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.178 252257 DEBUG nova.compute.manager [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:13:00 np0005539563 kernel: tap03e18ca1-a9 (unregistering): left promiscuous mode
Nov 29 04:13:00 np0005539563 NetworkManager[48981]: <info>  [1764407580.2376] device (tap03e18ca1-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:13:00 np0005539563 ovn_controller[148841]: 2025-11-29T09:13:00Z|00956|binding|INFO|Releasing lport 03e18ca1-a94c-48e8-a149-d4c144215d18 from this chassis (sb_readonly=0)
Nov 29 04:13:00 np0005539563 ovn_controller[148841]: 2025-11-29T09:13:00Z|00957|binding|INFO|Setting lport 03e18ca1-a94c-48e8-a149-d4c144215d18 down in Southbound
Nov 29 04:13:00 np0005539563 ovn_controller[148841]: 2025-11-29T09:13:00Z|00958|binding|INFO|Removing iface tap03e18ca1-a9 ovn-installed in OVS
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.294 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.300 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:23:7f 10.100.0.12'], port_security=['fa:16:3e:aa:23:7f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0596b743-cfc2-4d80-898c-031d919a5afd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=03e18ca1-a94c-48e8-a149-d4c144215d18) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.302 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 03e18ca1-a94c-48e8-a149-d4c144215d18 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.302 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.304 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[5c34f68b-5b9a-43b1-a0e7-e2f581d3a49e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.304 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.308 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:00 np0005539563 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Nov 29 04:13:00 np0005539563 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000dc.scope: Consumed 18.351s CPU time.
Nov 29 04:13:00 np0005539563 systemd-machined[213024]: Machine qemu-105-instance-000000dc terminated.
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.419 252257 INFO nova.virt.libvirt.driver [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Instance destroyed successfully.#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.421 252257 DEBUG nova.objects.instance [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:13:00 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [NOTICE]   (416322) : haproxy version is 2.8.14-c23fe91
Nov 29 04:13:00 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [NOTICE]   (416322) : path to executable is /usr/sbin/haproxy
Nov 29 04:13:00 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [WARNING]  (416322) : Exiting Master process...
Nov 29 04:13:00 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [WARNING]  (416322) : Exiting Master process...
Nov 29 04:13:00 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [ALERT]    (416322) : Current worker (416324) exited with code 143 (Terminated)
Nov 29 04:13:00 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[416318]: [WARNING]  (416322) : All workers exited. Exiting... (0)
Nov 29 04:13:00 np0005539563 systemd[1]: libpod-7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15.scope: Deactivated successfully.
Nov 29 04:13:00 np0005539563 podman[419309]: 2025-11-29 09:13:00.434908947 +0000 UTC m=+0.045847764 container died 7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.437 252257 DEBUG nova.virt.libvirt.vif [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:10:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-516170309',display_name='tempest-TestVolumeBootPattern-volume-backed-server-516170309',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-516170309',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPL2OUU5K/teyQFPNqTZqLgT9O73zt6IMvn2ncnkLwXCm+6FT01omnbIj1FDDJyN8ZJFe1DRGxLAfym3zMJehf/kJ2C2SwXTfIQxTBENVQyqhaLmtpLmKddio66bQ4PMmQ==',key_name='tempest-keypair-1412035180',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:11:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-htdy8f9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:11:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.437 252257 DEBUG nova.network.os_vif_util [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "03e18ca1-a94c-48e8-a149-d4c144215d18", "address": "fa:16:3e:aa:23:7f", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03e18ca1-a9", "ovs_interfaceid": "03e18ca1-a94c-48e8-a149-d4c144215d18", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.438 252257 DEBUG nova.network.os_vif_util [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.438 252257 DEBUG os_vif [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.440 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.440 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03e18ca1-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.443 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.445 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.448 252257 INFO os_vif [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:23:7f,bridge_name='br-int',has_traffic_filtering=True,id=03e18ca1-a94c-48e8-a149-d4c144215d18,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03e18ca1-a9')#033[00m
Nov 29 04:13:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15-userdata-shm.mount: Deactivated successfully.
Nov 29 04:13:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-36f592cec8b31f29dd6c0d284d6d86c32afbf701b0e27258311128d7327f15db-merged.mount: Deactivated successfully.
Nov 29 04:13:00 np0005539563 podman[419309]: 2025-11-29 09:13:00.481107351 +0000 UTC m=+0.092046158 container cleanup 7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:13:00 np0005539563 systemd[1]: libpod-conmon-7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15.scope: Deactivated successfully.
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.505 252257 DEBUG nova.compute.manager [req-f287d84c-74cc-49d4-b741-fc1c31807cd8 req-18d19612-1775-4421-ab2b-7c7b40fbcae4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-vif-unplugged-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.505 252257 DEBUG oslo_concurrency.lockutils [req-f287d84c-74cc-49d4-b741-fc1c31807cd8 req-18d19612-1775-4421-ab2b-7c7b40fbcae4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.505 252257 DEBUG oslo_concurrency.lockutils [req-f287d84c-74cc-49d4-b741-fc1c31807cd8 req-18d19612-1775-4421-ab2b-7c7b40fbcae4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.506 252257 DEBUG oslo_concurrency.lockutils [req-f287d84c-74cc-49d4-b741-fc1c31807cd8 req-18d19612-1775-4421-ab2b-7c7b40fbcae4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.506 252257 DEBUG nova.compute.manager [req-f287d84c-74cc-49d4-b741-fc1c31807cd8 req-18d19612-1775-4421-ab2b-7c7b40fbcae4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] No waiting events found dispatching network-vif-unplugged-03e18ca1-a94c-48e8-a149-d4c144215d18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.506 252257 DEBUG nova.compute.manager [req-f287d84c-74cc-49d4-b741-fc1c31807cd8 req-18d19612-1775-4421-ab2b-7c7b40fbcae4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-vif-unplugged-03e18ca1-a94c-48e8-a149-d4c144215d18 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:13:00 np0005539563 podman[419362]: 2025-11-29 09:13:00.540995585 +0000 UTC m=+0.040052057 container remove 7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.546 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b975a252-8d80-4630-a012-abe622390f62]: (4, ('Sat Nov 29 09:13:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15)\n7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15\nSat Nov 29 09:13:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15)\n7a03370208e5b4d879fa61c5da49f43d3af2da3cdbfd02971f966ad5a856de15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.548 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7eb995-0bf2-4d53-931f-e699bc53e619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.549 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:13:00 np0005539563 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.550 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.563 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.567 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[e734e76e-d312-492a-9240-e1ef3536cb6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.590 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[0685c2ed-170a-4d0b-aac1-f969b43f9754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.598 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[c16ef0d5-a18b-400a-b538-4d31756d9591]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.612 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[19286f6c-a599-47d6-a60a-04bf9378a2d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1043753, 'reachable_time': 38895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419381, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.616 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:13:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:00.616 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[41e49cc5-0f46-40f1-a95b-352096337375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.682 252257 INFO nova.virt.libvirt.driver [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Deleting instance files /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_del#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.683 252257 INFO nova.virt.libvirt.driver [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Deletion of /var/lib/nova/instances/98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1_del complete#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.732 252257 INFO nova.compute.manager [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.733 252257 DEBUG oslo.service.loopingcall [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.733 252257 DEBUG nova.compute.manager [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:13:00 np0005539563 nova_compute[252253]: 2025-11-29 09:13:00.733 252257 DEBUG nova.network.neutron [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:13:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 177 KiB/s rd, 2.3 KiB/s wr, 283 op/s
Nov 29 04:13:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.004000107s ======
Nov 29 04:13:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.614 252257 DEBUG nova.compute.manager [req-6dddd9a7-1e7e-44da-bb1a-39a23b7e42b5 req-6a5900e7-5c9c-46df-9f55-0a1a3363147a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.615 252257 DEBUG oslo_concurrency.lockutils [req-6dddd9a7-1e7e-44da-bb1a-39a23b7e42b5 req-6a5900e7-5c9c-46df-9f55-0a1a3363147a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.616 252257 DEBUG oslo_concurrency.lockutils [req-6dddd9a7-1e7e-44da-bb1a-39a23b7e42b5 req-6a5900e7-5c9c-46df-9f55-0a1a3363147a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.616 252257 DEBUG oslo_concurrency.lockutils [req-6dddd9a7-1e7e-44da-bb1a-39a23b7e42b5 req-6a5900e7-5c9c-46df-9f55-0a1a3363147a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.616 252257 DEBUG nova.compute.manager [req-6dddd9a7-1e7e-44da-bb1a-39a23b7e42b5 req-6a5900e7-5c9c-46df-9f55-0a1a3363147a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] No waiting events found dispatching network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.617 252257 WARNING nova.compute.manager [req-6dddd9a7-1e7e-44da-bb1a-39a23b7e42b5 req-6a5900e7-5c9c-46df-9f55-0a1a3363147a 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received unexpected event network-vif-plugged-03e18ca1-a94c-48e8-a149-d4c144215d18 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 04:13:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.714 252257 DEBUG nova.network.neutron [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.781 252257 INFO nova.compute.manager [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Took 2.05 seconds to deallocate network for instance.#033[00m
Nov 29 04:13:02 np0005539563 nova_compute[252253]: 2025-11-29 09:13:02.819 252257 DEBUG nova.compute.manager [req-f9ee2cfd-1df5-49ce-a595-140aaa431bb5 req-994e5186-37f1-4b71-82ff-0b180bd009b8 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Received event network-vif-deleted-03e18ca1-a94c-48e8-a149-d4c144215d18 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.122 252257 INFO nova.compute.manager [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Took 0.34 seconds to detach 1 volumes for instance.#033[00m
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.123 252257 DEBUG nova.compute.manager [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Deleting volume: 3367e57a-b6f9-477a-b507-12360b177bbc _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.288 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.420 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.420 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.468 252257 DEBUG oslo_concurrency.processutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 169 KiB/s rd, 2.3 KiB/s wr, 272 op/s
Nov 29 04:13:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:13:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039828435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.979 252257 DEBUG oslo_concurrency.processutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:03 np0005539563 nova_compute[252253]: 2025-11-29 09:13:03.988 252257 DEBUG nova.compute.provider_tree [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:13:04 np0005539563 nova_compute[252253]: 2025-11-29 09:13:04.046 252257 DEBUG nova.scheduler.client.report [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:13:04 np0005539563 nova_compute[252253]: 2025-11-29 09:13:04.155 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:04 np0005539563 nova_compute[252253]: 2025-11-29 09:13:04.220 252257 INFO nova.scheduler.client.report [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1#033[00m
Nov 29 04:13:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:04.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:04 np0005539563 nova_compute[252253]: 2025-11-29 09:13:04.703 252257 DEBUG oslo_concurrency.lockutils [None req-c0d51dec-bfdc-4b96-adea-9f3175543879 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:04.990 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:13:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:05 np0005539563 nova_compute[252253]: 2025-11-29 09:13:05.502 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 157 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 3.0 KiB/s wr, 260 op/s
Nov 29 04:13:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:06.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Nov 29 04:13:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 04:13:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Nov 29 04:13:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 157 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 3.3 KiB/s wr, 195 op/s
Nov 29 04:13:08 np0005539563 nova_compute[252253]: 2025-11-29 09:13:08.293 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:08.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Nov 29 04:13:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 04:13:09 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Nov 29 04:13:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Nov 29 04:13:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:10 np0005539563 nova_compute[252253]: 2025-11-29 09:13:10.506 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:11.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 29 04:13:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:12.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:13:13
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', '.mgr']
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:13:13 np0005539563 nova_compute[252253]: 2025-11-29 09:13:13.292 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.1 KiB/s wr, 28 op/s
Nov 29 04:13:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:14.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:15.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:15 np0005539563 nova_compute[252253]: 2025-11-29 09:13:15.418 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407580.4165459, 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:13:15 np0005539563 nova_compute[252253]: 2025-11-29 09:13:15.418 252257 INFO nova.compute.manager [-] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:13:15 np0005539563 nova_compute[252253]: 2025-11-29 09:13:15.460 252257 DEBUG nova.compute.manager [None req-6d3a652c-ea41-440d-8463-3388b1c686d0 - - - - - -] [instance: 98f7fc6b-a484-4f84-87e3-5cc3f25bb7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:13:15 np0005539563 nova_compute[252253]: 2025-11-29 09:13:15.510 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 31 op/s
Nov 29 04:13:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:16.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:13:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:13:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Nov 29 04:13:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 04:13:17 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Nov 29 04:13:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.0 KiB/s wr, 27 op/s
Nov 29 04:13:18 np0005539563 nova_compute[252253]: 2025-11-29 09:13:18.294 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:18.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:19.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 921 B/s wr, 22 op/s
Nov 29 04:13:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:20.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:20 np0005539563 nova_compute[252253]: 2025-11-29 09:13:20.514 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:21.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 921 B/s wr, 21 op/s
Nov 29 04:13:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:22.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:22 np0005539563 podman[419467]: 2025-11-29 09:13:22.543907495 +0000 UTC m=+0.078158861 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 04:13:22 np0005539563 podman[419466]: 2025-11-29 09:13:22.562601072 +0000 UTC m=+0.097017743 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 04:13:22 np0005539563 podman[419468]: 2025-11-29 09:13:22.575629556 +0000 UTC m=+0.109507592 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:13:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:23.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:23 np0005539563 nova_compute[252253]: 2025-11-29 09:13:23.344 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 826 KiB/s rd, 102 B/s wr, 9 op/s
Nov 29 04:13:23 np0005539563 nova_compute[252253]: 2025-11-29 09:13:23.998 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:13:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:24.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:13:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:13:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:25.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:25 np0005539563 nova_compute[252253]: 2025-11-29 09:13:25.518 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 8 op/s
Nov 29 04:13:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:26.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:27.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 199 B/s wr, 8 op/s
Nov 29 04:13:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:13:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184447687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:13:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:13:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184447687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:13:28 np0005539563 nova_compute[252253]: 2025-11-29 09:13:28.346 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:28.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:29.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Nov 29 04:13:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:30.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:30 np0005539563 nova_compute[252253]: 2025-11-29 09:13:30.522 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:31.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 771 KiB/s wr, 29 op/s
Nov 29 04:13:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:32.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:32 np0005539563 nova_compute[252253]: 2025-11-29 09:13:32.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:32 np0005539563 nova_compute[252253]: 2025-11-29 09:13:32.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:13:32 np0005539563 ovn_controller[148841]: 2025-11-29T09:13:32Z|00959|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 29 04:13:33 np0005539563 nova_compute[252253]: 2025-11-29 09:13:33.347 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 36 op/s
Nov 29 04:13:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:34.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:34 np0005539563 nova_compute[252253]: 2025-11-29 09:13:34.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:35.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:35 np0005539563 nova_compute[252253]: 2025-11-29 09:13:35.526 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 04:13:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:37.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 04:13:38 np0005539563 nova_compute[252253]: 2025-11-29 09:13:38.350 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 29 04:13:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:40.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:40 np0005539563 nova_compute[252253]: 2025-11-29 09:13:40.530 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 04:13:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.644857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622644988, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1532, "num_deletes": 257, "total_data_size": 2726226, "memory_usage": 2757168, "flush_reason": "Manual Compaction"}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622686521, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 2662688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83838, "largest_seqno": 85368, "table_properties": {"data_size": 2655422, "index_size": 4272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15104, "raw_average_key_size": 20, "raw_value_size": 2640926, "raw_average_value_size": 3516, "num_data_blocks": 187, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407478, "oldest_key_time": 1764407478, "file_creation_time": 1764407622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 41665 microseconds, and 6834 cpu microseconds.
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.686578) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 2662688 bytes OK
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.686603) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.689347) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.689367) EVENT_LOG_v1 {"time_micros": 1764407622689362, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.689383) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2719637, prev total WAL file size 2719637, number of live WAL files 2.
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.690197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353234' seq:72057594037927935, type:22 .. '6C6F676D0033373735' seq:0, type:0; will stop at (end)
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(2600KB)], [191(11MB)]
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622690266, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 14976913, "oldest_snapshot_seqno": -1}
Nov 29 04:13:42 np0005539563 nova_compute[252253]: 2025-11-29 09:13:42.701 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 11561 keys, 14845485 bytes, temperature: kUnknown
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622894238, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 14845485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14771418, "index_size": 44066, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28933, "raw_key_size": 306354, "raw_average_key_size": 26, "raw_value_size": 14569767, "raw_average_value_size": 1260, "num_data_blocks": 1674, "num_entries": 11561, "num_filter_entries": 11561, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.894571) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14845485 bytes
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.896158) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.4 rd, 72.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.7 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 12094, records dropped: 533 output_compression: NoCompression
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.896179) EVENT_LOG_v1 {"time_micros": 1764407622896169, "job": 120, "event": "compaction_finished", "compaction_time_micros": 204088, "compaction_time_cpu_micros": 32787, "output_level": 6, "num_output_files": 1, "total_output_size": 14845485, "num_input_records": 12094, "num_output_records": 11561, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622896847, "job": 120, "event": "table_file_deletion", "file_number": 193}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407622899617, "job": 120, "event": "table_file_deletion", "file_number": 191}
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.690062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.899724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.899765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.899767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.899769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:42 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:13:42.899771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:13:43 np0005539563 nova_compute[252253]: 2025-11-29 09:13:43.352 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.0 MiB/s wr, 12 op/s
Nov 29 04:13:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:44 np0005539563 nova_compute[252253]: 2025-11-29 09:13:44.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:44 np0005539563 nova_compute[252253]: 2025-11-29 09:13:44.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:13:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:45 np0005539563 nova_compute[252253]: 2025-11-29 09:13:45.535 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 679 KiB/s wr, 8 op/s
Nov 29 04:13:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:46.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:46 np0005539563 nova_compute[252253]: 2025-11-29 09:13:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:47.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:47 np0005539563 nova_compute[252253]: 2025-11-29 09:13:47.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 4 op/s
Nov 29 04:13:48 np0005539563 nova_compute[252253]: 2025-11-29 09:13:48.354 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:48.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:48 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:49.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:49 np0005539563 nova_compute[252253]: 2025-11-29 09:13:49.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 43 op/s
Nov 29 04:13:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:50.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:50 np0005539563 nova_compute[252253]: 2025-11-29 09:13:50.539 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f79150b5-b32f-4ac4-858e-c0a8a75db62e does not exist
Nov 29 04:13:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 735b8efa-5fb1-4b60-a840-d02bdb5ee503 does not exist
Nov 29 04:13:51 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 6e9eece2-9a68-46ec-a0ba-42701f3235b5 does not exist
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:13:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:13:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:51.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:51 np0005539563 podman[419917]: 2025-11-29 09:13:51.81868861 +0000 UTC m=+0.060491021 container create ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:13:51 np0005539563 systemd[1]: Started libpod-conmon-ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf.scope.
Nov 29 04:13:51 np0005539563 podman[419917]: 2025-11-29 09:13:51.787924886 +0000 UTC m=+0.029727377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:13:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:13:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:13:51 np0005539563 podman[419917]: 2025-11-29 09:13:51.937415701 +0000 UTC m=+0.179218102 container init ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 04:13:51 np0005539563 podman[419917]: 2025-11-29 09:13:51.948623725 +0000 UTC m=+0.190426156 container start ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 04:13:51 np0005539563 podman[419917]: 2025-11-29 09:13:51.954190566 +0000 UTC m=+0.195992987 container attach ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 04:13:51 np0005539563 boring_poitras[419934]: 167 167
Nov 29 04:13:51 np0005539563 systemd[1]: libpod-ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf.scope: Deactivated successfully.
Nov 29 04:13:51 np0005539563 podman[419917]: 2025-11-29 09:13:51.95841884 +0000 UTC m=+0.200221251 container died ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 04:13:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-dc4d487505359d1b93f0c7fdff7572424fa011e43c4de78278167fcdb00d793d-merged.mount: Deactivated successfully.
Nov 29 04:13:52 np0005539563 podman[419917]: 2025-11-29 09:13:52.006372921 +0000 UTC m=+0.248175322 container remove ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:13:52 np0005539563 systemd[1]: libpod-conmon-ea69847ef918f895d2d8197cfc2ca7628e5788017d9e5fd19f4414a5f44bf7bf.scope: Deactivated successfully.
Nov 29 04:13:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:13:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:52 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:13:52 np0005539563 podman[419959]: 2025-11-29 09:13:52.187959387 +0000 UTC m=+0.046143702 container create 9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:13:52 np0005539563 systemd[1]: Started libpod-conmon-9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6.scope.
Nov 29 04:13:52 np0005539563 podman[419959]: 2025-11-29 09:13:52.169936898 +0000 UTC m=+0.028121233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:13:52 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:13:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e3ee757f6daa2cbda66a53090bf970ad6f21e60d336dd18b7dafce8231b0c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e3ee757f6daa2cbda66a53090bf970ad6f21e60d336dd18b7dafce8231b0c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e3ee757f6daa2cbda66a53090bf970ad6f21e60d336dd18b7dafce8231b0c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e3ee757f6daa2cbda66a53090bf970ad6f21e60d336dd18b7dafce8231b0c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:52 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e3ee757f6daa2cbda66a53090bf970ad6f21e60d336dd18b7dafce8231b0c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:52 np0005539563 podman[419959]: 2025-11-29 09:13:52.296503431 +0000 UTC m=+0.154687796 container init 9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 29 04:13:52 np0005539563 podman[419959]: 2025-11-29 09:13:52.304675473 +0000 UTC m=+0.162859788 container start 9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:13:52 np0005539563 podman[419959]: 2025-11-29 09:13:52.308220349 +0000 UTC m=+0.166404684 container attach 9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 04:13:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:13:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:13:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:52 np0005539563 nova_compute[252253]: 2025-11-29 09:13:52.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:13:52 np0005539563 nova_compute[252253]: 2025-11-29 09:13:52.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:13:52 np0005539563 nova_compute[252253]: 2025-11-29 09:13:52.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:13:52 np0005539563 nova_compute[252253]: 2025-11-29 09:13:52.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:13:53 np0005539563 focused_margulis[419976]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:13:53 np0005539563 focused_margulis[419976]: --> relative data size: 1.0
Nov 29 04:13:53 np0005539563 focused_margulis[419976]: --> All data devices are unavailable
Nov 29 04:13:53 np0005539563 systemd[1]: libpod-9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6.scope: Deactivated successfully.
Nov 29 04:13:53 np0005539563 podman[419959]: 2025-11-29 09:13:53.178215288 +0000 UTC m=+1.036399603 container died 9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:13:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-09e3ee757f6daa2cbda66a53090bf970ad6f21e60d336dd18b7dafce8231b0c4-merged.mount: Deactivated successfully.
Nov 29 04:13:53 np0005539563 podman[419959]: 2025-11-29 09:13:53.266372959 +0000 UTC m=+1.124557284 container remove 9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:13:53 np0005539563 systemd[1]: libpod-conmon-9db8649d7bbaa551a74457388563752413b59053b1a8497e069fc1d609a5ffc6.scope: Deactivated successfully.
Nov 29 04:13:53 np0005539563 podman[419992]: 2025-11-29 09:13:53.283657118 +0000 UTC m=+0.060699837 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:13:53 np0005539563 podman[420000]: 2025-11-29 09:13:53.294576764 +0000 UTC m=+0.081258665 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:13:53 np0005539563 podman[420001]: 2025-11-29 09:13:53.355916118 +0000 UTC m=+0.142626050 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 04:13:53 np0005539563 nova_compute[252253]: 2025-11-29 09:13:53.357 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:53.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:13:53 np0005539563 podman[420205]: 2025-11-29 09:13:53.9558225 +0000 UTC m=+0.053624155 container create c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:13:54 np0005539563 systemd[1]: Started libpod-conmon-c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927.scope.
Nov 29 04:13:54 np0005539563 podman[420205]: 2025-11-29 09:13:53.925150688 +0000 UTC m=+0.022952363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:13:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:13:54 np0005539563 podman[420205]: 2025-11-29 09:13:54.060455229 +0000 UTC m=+0.158256914 container init c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 04:13:54 np0005539563 podman[420205]: 2025-11-29 09:13:54.068833267 +0000 UTC m=+0.166634922 container start c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:13:54 np0005539563 podman[420205]: 2025-11-29 09:13:54.073648837 +0000 UTC m=+0.171450492 container attach c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:13:54 np0005539563 quizzical_shamir[420221]: 167 167
Nov 29 04:13:54 np0005539563 systemd[1]: libpod-c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927.scope: Deactivated successfully.
Nov 29 04:13:54 np0005539563 podman[420205]: 2025-11-29 09:13:54.081734146 +0000 UTC m=+0.179535801 container died c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:13:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3f342b52e24c651646308c256b8d10232dfe4903ea80afe1c2d5d3815b6cfb8c-merged.mount: Deactivated successfully.
Nov 29 04:13:54 np0005539563 podman[420205]: 2025-11-29 09:13:54.12983527 +0000 UTC m=+0.227636925 container remove c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 04:13:54 np0005539563 systemd[1]: libpod-conmon-c90a18717c8fe1131558b1165bf4804f155ab8b0ceb6ced289cf35aba5cf2927.scope: Deactivated successfully.
Nov 29 04:13:54 np0005539563 podman[420246]: 2025-11-29 09:13:54.321073379 +0000 UTC m=+0.055624740 container create 1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cohen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 04:13:54 np0005539563 systemd[1]: Started libpod-conmon-1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719.scope.
Nov 29 04:13:54 np0005539563 podman[420246]: 2025-11-29 09:13:54.294795725 +0000 UTC m=+0.029347106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:13:54 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:13:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446725148fada3ffd8c11fb5ebc05296e808e8cc356b1c289cb1bdb2e122df03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446725148fada3ffd8c11fb5ebc05296e808e8cc356b1c289cb1bdb2e122df03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446725148fada3ffd8c11fb5ebc05296e808e8cc356b1c289cb1bdb2e122df03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:54 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446725148fada3ffd8c11fb5ebc05296e808e8cc356b1c289cb1bdb2e122df03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:54 np0005539563 podman[420246]: 2025-11-29 09:13:54.421554373 +0000 UTC m=+0.156105824 container init 1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cohen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:13:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:54 np0005539563 podman[420246]: 2025-11-29 09:13:54.437237589 +0000 UTC m=+0.171788950 container start 1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cohen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 04:13:54 np0005539563 podman[420246]: 2025-11-29 09:13:54.441262099 +0000 UTC m=+0.175813500 container attach 1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]: {
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:    "0": [
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:        {
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "devices": [
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "/dev/loop3"
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            ],
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "lv_name": "ceph_lv0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "lv_size": "7511998464",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "name": "ceph_lv0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "tags": {
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.cluster_name": "ceph",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.crush_device_class": "",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.encrypted": "0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.osd_id": "0",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.type": "block",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:                "ceph.vdo": "0"
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            },
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "type": "block",
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:            "vg_name": "ceph_vg0"
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:        }
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]:    ]
Nov 29 04:13:55 np0005539563 youthful_cohen[420264]: }
Nov 29 04:13:55 np0005539563 systemd[1]: libpod-1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719.scope: Deactivated successfully.
Nov 29 04:13:55 np0005539563 podman[420246]: 2025-11-29 09:13:55.203894545 +0000 UTC m=+0.938445916 container died 1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cohen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 04:13:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-446725148fada3ffd8c11fb5ebc05296e808e8cc356b1c289cb1bdb2e122df03-merged.mount: Deactivated successfully.
Nov 29 04:13:55 np0005539563 podman[420246]: 2025-11-29 09:13:55.257026606 +0000 UTC m=+0.991577967 container remove 1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cohen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 04:13:55 np0005539563 systemd[1]: libpod-conmon-1111936f43861d97e0e3ab291f755c3b257fbd49d4235d046bcd3d18fda24719.scope: Deactivated successfully.
Nov 29 04:13:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:55.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:55 np0005539563 nova_compute[252253]: 2025-11-29 09:13:55.543 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:13:55 np0005539563 podman[420426]: 2025-11-29 09:13:55.813422558 +0000 UTC m=+0.022502461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:13:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:13:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:13:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:56.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:13:56 np0005539563 podman[420426]: 2025-11-29 09:13:56.436708406 +0000 UTC m=+0.645788289 container create 911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 04:13:56 np0005539563 systemd[1]: Started libpod-conmon-911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba.scope.
Nov 29 04:13:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:13:56 np0005539563 podman[420426]: 2025-11-29 09:13:56.503075276 +0000 UTC m=+0.712155189 container init 911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:13:56 np0005539563 podman[420426]: 2025-11-29 09:13:56.509241013 +0000 UTC m=+0.718320896 container start 911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:13:56 np0005539563 podman[420426]: 2025-11-29 09:13:56.513273692 +0000 UTC m=+0.722353595 container attach 911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 04:13:56 np0005539563 quirky_khorana[420443]: 167 167
Nov 29 04:13:56 np0005539563 systemd[1]: libpod-911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba.scope: Deactivated successfully.
Nov 29 04:13:56 np0005539563 conmon[420443]: conmon 911cc58179117cadacaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba.scope/container/memory.events
Nov 29 04:13:56 np0005539563 podman[420426]: 2025-11-29 09:13:56.517907128 +0000 UTC m=+0.726987021 container died 911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:13:56 np0005539563 systemd[1]: var-lib-containers-storage-overlay-71ce92a6c115233eee337a276a1735e7ed58d95c3e91d8c9f07560b4de1f1ce2-merged.mount: Deactivated successfully.
Nov 29 04:13:56 np0005539563 podman[420426]: 2025-11-29 09:13:56.558305654 +0000 UTC m=+0.767385537 container remove 911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khorana, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:13:56 np0005539563 systemd[1]: libpod-conmon-911cc58179117cadacaf771c3af75e0f6ac1d1fee98f33a39f1113bf43e227ba.scope: Deactivated successfully.
Nov 29 04:13:56 np0005539563 podman[420468]: 2025-11-29 09:13:56.716860625 +0000 UTC m=+0.042210096 container create c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 04:13:56 np0005539563 systemd[1]: Started libpod-conmon-c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c.scope.
Nov 29 04:13:56 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:13:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/576813ee5f7ddc1ecc591d4dde74bd9e3ad3b55ab705362e26cf50979f1b1e6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/576813ee5f7ddc1ecc591d4dde74bd9e3ad3b55ab705362e26cf50979f1b1e6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/576813ee5f7ddc1ecc591d4dde74bd9e3ad3b55ab705362e26cf50979f1b1e6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:56 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/576813ee5f7ddc1ecc591d4dde74bd9e3ad3b55ab705362e26cf50979f1b1e6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:13:56 np0005539563 podman[420468]: 2025-11-29 09:13:56.698947598 +0000 UTC m=+0.024297099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:13:56 np0005539563 podman[420468]: 2025-11-29 09:13:56.798198691 +0000 UTC m=+0.123548202 container init c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 04:13:56 np0005539563 podman[420468]: 2025-11-29 09:13:56.809549329 +0000 UTC m=+0.134898800 container start c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:13:56 np0005539563 podman[420468]: 2025-11-29 09:13:56.813913627 +0000 UTC m=+0.139263128 container attach c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 04:13:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:57.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]: {
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:        "osd_id": 0,
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:        "type": "bluestore"
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]:    }
Nov 29 04:13:57 np0005539563 flamboyant_volhard[420485]: }
Nov 29 04:13:57 np0005539563 systemd[1]: libpod-c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c.scope: Deactivated successfully.
Nov 29 04:13:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:13:57 np0005539563 podman[420507]: 2025-11-29 09:13:57.668909309 +0000 UTC m=+0.021305869 container died c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:13:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-576813ee5f7ddc1ecc591d4dde74bd9e3ad3b55ab705362e26cf50979f1b1e6a-merged.mount: Deactivated successfully.
Nov 29 04:13:57 np0005539563 podman[420507]: 2025-11-29 09:13:57.726193813 +0000 UTC m=+0.078590353 container remove c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 04:13:57 np0005539563 systemd[1]: libpod-conmon-c1dbb3e9ff35fa6f50e08383e80f76911ba835762125577e78f138ffdc508c1c.scope: Deactivated successfully.
Nov 29 04:13:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:13:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:13:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev fddbb826-d22e-41fa-a0a0-24142f7f8cf6 does not exist
Nov 29 04:13:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4fbadf08-db00-4f8b-aca2-87caeb24e25e does not exist
Nov 29 04:13:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bf9db1d8-aafe-4e05-aee6-7f2af55eb468 does not exist
Nov 29 04:13:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.359 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:13:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:13:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.707 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.708 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.708 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:13:58 np0005539563 nova_compute[252253]: 2025-11-29 09:13:58.708 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:13:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:13:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1601604406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.137 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.304 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.305 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4069MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.306 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.306 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:13:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:13:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:13:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:13:59.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.385 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.386 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.404 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:13:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:13:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444134340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.894 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.901 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:13:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 101 op/s
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.933 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.991 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:13:59 np0005539563 nova_compute[252253]: 2025-11-29 09:13:59.992 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:00.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:00 np0005539563 nova_compute[252253]: 2025-11-29 09:14:00.548 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 194 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Nov 29 04:14:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:03 np0005539563 nova_compute[252253]: 2025-11-29 09:14:03.363 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:03.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 04:14:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:04.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:05.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:05 np0005539563 nova_compute[252253]: 2025-11-29 09:14:05.552 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 04:14:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:07.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:07.469 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:14:07 np0005539563 nova_compute[252253]: 2025-11-29 09:14:07.469 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:07 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:07.470 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:14:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 04:14:08 np0005539563 nova_compute[252253]: 2025-11-29 09:14:08.364 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 29 04:14:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:10.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:10 np0005539563 nova_compute[252253]: 2025-11-29 09:14:10.557 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 126 KiB/s rd, 523 KiB/s wr, 47 op/s
Nov 29 04:14:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:12.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:12 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:12.473 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:14:13
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'default.rgw.control', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:14:13 np0005539563 nova_compute[252253]: 2025-11-29 09:14:13.366 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:13 np0005539563 nova_compute[252253]: 2025-11-29 09:14:13.902 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:13 np0005539563 nova_compute[252253]: 2025-11-29 09:14:13.903 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 76 KiB/s wr, 18 op/s
Nov 29 04:14:13 np0005539563 nova_compute[252253]: 2025-11-29 09:14:13.929 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.025 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.025 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.033 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.033 252257 INFO nova.compute.claims [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.127 252257 DEBUG nova.scheduler.client.report [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.149 252257 DEBUG nova.scheduler.client.report [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.149 252257 DEBUG nova.compute.provider_tree [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.217 252257 DEBUG nova.scheduler.client.report [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.246 252257 DEBUG nova.scheduler.client.report [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.325 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:14.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:14:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437376576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.815 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.821 252257 DEBUG nova.compute.provider_tree [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.840 252257 DEBUG nova.scheduler.client.report [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.863 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.864 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.907 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.908 252257 DEBUG nova.network.neutron [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.926 252257 INFO nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 04:14:14 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.948 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:14.999 252257 INFO nova.virt.block_device [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Booting with volume 5acb1bf0-7995-4ac2-84e2-62745b9cdce6 at /dev/vda#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.132 252257 DEBUG nova.policy [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff561a95dc44b9fb9f7fd8fee80f589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.167 252257 DEBUG os_brick.utils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.169 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.186 268082 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.187 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[997fc915-c1d7-4760-9a4a-f41d6a91e247]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.188 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.203 268082 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.204 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[2eddff7b-3125-46f5-9b99-75939f546fad]: (4, ('InitiatorName=iqn.1994-05.com.redhat:b34daab51fdd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.206 268082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.217 268082 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.217 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5f23ec-5ca2-49d6-a565-343a9765ca2c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.219 268082 DEBUG oslo.privsep.daemon [-] privsep: reply[cd68bdaa-2469-4196-8b1b-9f41aee287fd]: (4, '9fe13708-3578-4487-abe9-9bea2dcb1209') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.220 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.256 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.258 252257 DEBUG os_brick.initiator.connectors.lightos [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.259 252257 DEBUG os_brick.initiator.connectors.lightos [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.259 252257 DEBUG os_brick.initiator.connectors.lightos [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.259 252257 DEBUG os_brick.utils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:b34daab51fdd', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9fe13708-3578-4487-abe9-9bea2dcb1209', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.259 252257 DEBUG nova.virt.block_device [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating existing volume attachment record: 02128a5d-2580-41d2-a022-994a8a38ebc5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 04:14:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.002000053s ======
Nov 29 04:14:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 29 04:14:15 np0005539563 nova_compute[252253]: 2025-11-29 09:14:15.560 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 31 KiB/s wr, 16 op/s
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.061 252257 DEBUG nova.network.neutron [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Successfully created port: 680dff60-ed18-4961-84dd-043dd06abd06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.282 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.283 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.283 252257 INFO nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Creating image(s)#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.284 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.284 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Ensure instance console log exists: /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.284 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.285 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:16 np0005539563 nova_compute[252253]: 2025-11-29 09:14:16.285 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:16.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:14:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:14:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.486 252257 DEBUG nova.network.neutron [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Successfully updated port: 680dff60-ed18-4961-84dd-043dd06abd06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.500 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.500 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquired lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.500 252257 DEBUG nova.network.neutron [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.582 252257 DEBUG nova.compute.manager [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-changed-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.582 252257 DEBUG nova.compute.manager [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Refreshing instance network info cache due to event network-changed-680dff60-ed18-4961-84dd-043dd06abd06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.583 252257 DEBUG oslo_concurrency.lockutils [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:14:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:17 np0005539563 nova_compute[252253]: 2025-11-29 09:14:17.676 252257 DEBUG nova.network.neutron [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 04:14:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 15 op/s
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.408 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:18.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.710 252257 DEBUG nova.network.neutron [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.733 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Releasing lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.733 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Instance network_info: |[{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.734 252257 DEBUG oslo_concurrency.lockutils [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.734 252257 DEBUG nova.network.neutron [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Refreshing network info cache for port 680dff60-ed18-4961-84dd-043dd06abd06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.737 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Start _get_guest_xml network_info=[{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5acb1bf0-7995-4ac2-84e2-62745b9cdce6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5acb1bf0-7995-4ac2-84e2-62745b9cdce6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ab760f9d-43e3-4bec-9987-df02dc30b9ef', 'attached_at': '', 'detached_at': '', 'volume_id': '5acb1bf0-7995-4ac2-84e2-62745b9cdce6', 'serial': '5acb1bf0-7995-4ac2-84e2-62745b9cdce6'}, 'attachment_id': '02128a5d-2580-41d2-a022-994a8a38ebc5', 'disk_bus': 'virtio', 'boot_index': 0, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.743 252257 WARNING nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.749 252257 DEBUG nova.virt.libvirt.host [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.750 252257 DEBUG nova.virt.libvirt.host [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.754 252257 DEBUG nova.virt.libvirt.host [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.754 252257 DEBUG nova.virt.libvirt.host [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.755 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.756 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:39:53Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b3f6a6d1-4abb-4332-8391-2e39c8fa168a',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.756 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.756 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.757 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.757 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.757 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.757 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.757 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.758 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.758 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.758 252257 DEBUG nova.virt.hardware [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.803 252257 DEBUG nova.storage.rbd_utils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image ab760f9d-43e3-4bec-9987-df02dc30b9ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:14:18 np0005539563 nova_compute[252253]: 2025-11-29 09:14:18.811 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:19 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 04:14:19 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810679674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.358 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.391 252257 DEBUG nova.virt.libvirt.vif [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-365044550',display_name='tempest-TestVolumeBootPattern-server-365044550',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-365044550',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-ls3d5wdz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:14:14Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=ab760f9d-43e3-4bec-9987-df02dc30b9ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.391 252257 DEBUG nova.network.os_vif_util [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.392 252257 DEBUG nova.network.os_vif_util [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.393 252257 DEBUG nova.objects.instance [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab760f9d-43e3-4bec-9987-df02dc30b9ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:14:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:19.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.423 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] End _get_guest_xml xml=<domain type="kvm">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <uuid>ab760f9d-43e3-4bec-9987-df02dc30b9ef</uuid>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <name>instance-000000df</name>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <memory>131072</memory>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <vcpu>1</vcpu>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <metadata>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <nova:name>tempest-TestVolumeBootPattern-server-365044550</nova:name>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <nova:creationTime>2025-11-29 09:14:18</nova:creationTime>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <nova:flavor name="m1.nano">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:memory>128</nova:memory>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:disk>1</nova:disk>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:swap>0</nova:swap>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:vcpus>1</nova:vcpus>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </nova:flavor>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <nova:owner>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:user uuid="5ff561a95dc44b9fb9f7fd8fee80f589">tempest-TestVolumeBootPattern-531976395-project-member</nova:user>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:project uuid="51af0a2ee11a460ab825a484e5c6f4a3">tempest-TestVolumeBootPattern-531976395</nova:project>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </nova:owner>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <nova:ports>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <nova:port uuid="680dff60-ed18-4961-84dd-043dd06abd06">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        </nova:port>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </nova:ports>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </nova:instance>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </metadata>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <sysinfo type="smbios">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <system>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <entry name="manufacturer">RDO</entry>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <entry name="product">OpenStack Compute</entry>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <entry name="serial">ab760f9d-43e3-4bec-9987-df02dc30b9ef</entry>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <entry name="uuid">ab760f9d-43e3-4bec-9987-df02dc30b9ef</entry>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <entry name="family">Virtual Machine</entry>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </system>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </sysinfo>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <os>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <boot dev="hd"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <smbios mode="sysinfo"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </os>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <features>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <acpi/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <apic/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <vmcoreinfo/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </features>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <clock offset="utc">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <timer name="hpet" present="no"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </clock>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <cpu mode="custom" match="exact">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <model>Nehalem</model>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </cpu>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  <devices>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <disk type="network" device="cdrom">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <driver type="raw" cache="none"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="vms/ab760f9d-43e3-4bec-9987-df02dc30b9ef_disk.config">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <target dev="sda" bus="sata"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <disk type="network" device="disk">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <source protocol="rbd" name="volumes/volume-5acb1bf0-7995-4ac2-84e2-62745b9cdce6">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.100" port="6789"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.102" port="6789"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <host name="192.168.122.101" port="6789"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </source>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <auth username="openstack">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:        <secret type="ceph" uuid="38a37ed2-442a-5e0d-a69a-881fdd186450"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      </auth>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <target dev="vda" bus="virtio"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <serial>5acb1bf0-7995-4ac2-84e2-62745b9cdce6</serial>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </disk>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <interface type="ethernet">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <mac address="fa:16:3e:7b:67:2a"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <mtu size="1442"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <target dev="tap680dff60-ed"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </interface>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <serial type="pty">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <log file="/var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/console.log" append="off"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </serial>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <video>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <model type="virtio"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </video>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <input type="tablet" bus="usb"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <rng model="virtio">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <backend model="random">/dev/urandom</backend>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </rng>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <controller type="usb" index="0"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    <memballoon model="virtio">
Nov 29 04:14:19 np0005539563 nova_compute[252253]:      <stats period="10"/>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:    </memballoon>
Nov 29 04:14:19 np0005539563 nova_compute[252253]:  </devices>
Nov 29 04:14:19 np0005539563 nova_compute[252253]: </domain>
Nov 29 04:14:19 np0005539563 nova_compute[252253]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.425 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Preparing to wait for external event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.425 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.426 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.426 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.427 252257 DEBUG nova.virt.libvirt.vif [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T09:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-365044550',display_name='tempest-TestVolumeBootPattern-server-365044550',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-365044550',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-ls3d5wdz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T09:14:14Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=ab760f9d-43e3-4bec-9987-df02dc30b9ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.427 252257 DEBUG nova.network.os_vif_util [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.428 252257 DEBUG nova.network.os_vif_util [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.429 252257 DEBUG os_vif [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.430 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.430 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.431 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.435 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.436 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap680dff60-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.436 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap680dff60-ed, col_values=(('external_ids', {'iface-id': '680dff60-ed18-4961-84dd-043dd06abd06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:67:2a', 'vm-uuid': 'ab760f9d-43e3-4bec-9987-df02dc30b9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:19 np0005539563 NetworkManager[48981]: <info>  [1764407659.4694] manager: (tap680dff60-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.468 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.471 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.476 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.478 252257 INFO os_vif [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed')#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.612 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.614 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.615 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] No VIF found with MAC fa:16:3e:7b:67:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.616 252257 INFO nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Using config drive#033[00m
Nov 29 04:14:19 np0005539563 nova_compute[252253]: 2025-11-29 09:14:19.645 252257 DEBUG nova.storage.rbd_utils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image ab760f9d-43e3-4bec-9987-df02dc30b9ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:14:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 15 op/s
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.148 252257 INFO nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Creating config drive at /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/disk.config#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.152 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiuh91425 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.193 252257 DEBUG nova.network.neutron [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updated VIF entry in instance network info cache for port 680dff60-ed18-4961-84dd-043dd06abd06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.194 252257 DEBUG nova.network.neutron [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.216 252257 DEBUG oslo_concurrency.lockutils [req-656afb2d-d736-4c66-a896-7a235b2791b5 req-49754f22-154b-4324-b15b-4d73fac7e62b 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.301 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiuh91425" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.331 252257 DEBUG nova.storage.rbd_utils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] rbd image ab760f9d-43e3-4bec-9987-df02dc30b9ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.335 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/disk.config ab760f9d-43e3-4bec-9987-df02dc30b9ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:14:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:14:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:20.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.540 252257 DEBUG oslo_concurrency.processutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/disk.config ab760f9d-43e3-4bec-9987-df02dc30b9ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.541 252257 INFO nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Deleting local config drive /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef/disk.config because it was imported into RBD.#033[00m
Nov 29 04:14:20 np0005539563 kernel: tap680dff60-ed: entered promiscuous mode
Nov 29 04:14:20 np0005539563 NetworkManager[48981]: <info>  [1764407660.6317] manager: (tap680dff60-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/429)
Nov 29 04:14:20 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:20Z|00960|binding|INFO|Claiming lport 680dff60-ed18-4961-84dd-043dd06abd06 for this chassis.
Nov 29 04:14:20 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:20Z|00961|binding|INFO|680dff60-ed18-4961-84dd-043dd06abd06: Claiming fa:16:3e:7b:67:2a 10.100.0.5
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.633 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.640 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:67:2a 10.100.0.5'], port_security=['fa:16:3e:7b:67:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ab760f9d-43e3-4bec-9987-df02dc30b9ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f464a39e-170e-4271-8e3e-71cb609233aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=680dff60-ed18-4961-84dd-043dd06abd06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.641 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 680dff60-ed18-4961-84dd-043dd06abd06 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad bound to our chassis#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.642 158990 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.650 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:20 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:20Z|00962|binding|INFO|Setting lport 680dff60-ed18-4961-84dd-043dd06abd06 ovn-installed in OVS
Nov 29 04:14:20 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:20Z|00963|binding|INFO|Setting lport 680dff60-ed18-4961-84dd-043dd06abd06 up in Southbound
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.652 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.656 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.656 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[06437842-3a20-4336-aa14-9d7239ddfda2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.657 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8aaf4606-91 in ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.660 261364 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8aaf4606-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.660 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[881c9f70-7967-4ef6-86ea-ff96e611c043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.661 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0066f8-53b8-4c2e-bc14-19602e04f04d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 systemd-machined[213024]: New machine qemu-106-instance-000000df.
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.683 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[2da89347-ca1d-4483-bd3f-f2b8d92195be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 systemd[1]: Started Virtual Machine qemu-106-instance-000000df.
Nov 29 04:14:20 np0005539563 systemd-udevd[420825]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.714 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae8f5a3-88fc-42af-8ddc-bb0dcba0228a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 NetworkManager[48981]: <info>  [1764407660.7189] device (tap680dff60-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:14:20 np0005539563 NetworkManager[48981]: <info>  [1764407660.7203] device (tap680dff60-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.754 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[c08942b7-493d-4e3f-ab9e-0ffbed368ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.759 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[50c57bdc-a752-4b85-9fb8-f22f928e806a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 NetworkManager[48981]: <info>  [1764407660.7608] manager: (tap8aaf4606-90): new Veth device (/org/freedesktop/NetworkManager/Devices/430)
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.803 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[21155e31-14e4-4d76-9cd5-8c9c645ac87a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.807 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[98b3dbb7-936c-4729-aca2-7c908fa5f03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 NetworkManager[48981]: <info>  [1764407660.8359] device (tap8aaf4606-90): carrier: link connected
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.846 261631 DEBUG oslo.privsep.daemon [-] privsep: reply[2739bb73-1182-4766-b324-a0e6f338dcca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.867 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9599fa-1804-46cb-a470-372824f036d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1062860, 'reachable_time': 35651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420855, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.889 252257 DEBUG nova.compute.manager [req-eda1ceab-6619-4f94-b409-4d45b71f45f3 req-4282b03d-2a06-4607-9a51-5b0962ab57d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.889 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[13b6495b-6d00-4c11-b650-1131fa1e12bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:8863'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1062860, 'tstamp': 1062860}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420863, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.890 252257 DEBUG oslo_concurrency.lockutils [req-eda1ceab-6619-4f94-b409-4d45b71f45f3 req-4282b03d-2a06-4607-9a51-5b0962ab57d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.891 252257 DEBUG oslo_concurrency.lockutils [req-eda1ceab-6619-4f94-b409-4d45b71f45f3 req-4282b03d-2a06-4607-9a51-5b0962ab57d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.892 252257 DEBUG oslo_concurrency.lockutils [req-eda1ceab-6619-4f94-b409-4d45b71f45f3 req-4282b03d-2a06-4607-9a51-5b0962ab57d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:20 np0005539563 nova_compute[252253]: 2025-11-29 09:14:20.892 252257 DEBUG nova.compute.manager [req-eda1ceab-6619-4f94-b409-4d45b71f45f3 req-4282b03d-2a06-4607-9a51-5b0962ab57d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Processing event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.914 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c226f0-23f8-439e-be5f-bd979af01379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8aaf4606-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:88:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 282], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1062860, 'reachable_time': 35651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420873, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:20.963 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[8a30f024-24a7-472a-9563-3af6eb2bfa23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.041 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ad7a33-8752-45ad-b21c-44f7718fa842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.043 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.043 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.043 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8aaf4606-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.046 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:21 np0005539563 kernel: tap8aaf4606-90: entered promiscuous mode
Nov 29 04:14:21 np0005539563 NetworkManager[48981]: <info>  [1764407661.0477] manager: (tap8aaf4606-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.051 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8aaf4606-90, col_values=(('external_ids', {'iface-id': 'dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:14:21 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:21Z|00964|binding|INFO|Releasing lport dcea3b5a-c3c6-4ea4-8c47-8c2337a9ad5a from this chassis (sb_readonly=0)
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.053 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.066 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.067 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407661.0652149, ab760f9d-43e3-4bec-9987-df02dc30b9ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.067 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] VM Started (Lifecycle Event)#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.071 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.077 252257 INFO nova.virt.libvirt.driver [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Instance spawned successfully.#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.077 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.083 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.085 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.085 158990 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.086 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9304be86-4918-4952-beb0-8844868b5377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.088 158990 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: global
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    log         /dev/log local0 debug
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    log-tag     haproxy-metadata-proxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    user        root
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    group       root
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    maxconn     1024
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    pidfile     /var/lib/neutron/external/pids/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.pid.haproxy
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    daemon
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: defaults
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    log global
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    mode http
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    option httplog
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    option dontlognull
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    option http-server-close
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    option forwardfor
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    retries                 3
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    timeout http-request    30s
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    timeout connect         30s
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    timeout client          32s
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    timeout server          32s
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    timeout http-keep-alive 30s
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: listen listener
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    bind 169.254.169.254:80
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]:    http-request add-header X-OVN-Network-ID 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 04:14:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:14:21.089 158990 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'env', 'PROCESS_TAG=haproxy-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.124 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.132 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.137 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.138 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.138 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.139 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.139 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.139 252257 DEBUG nova.virt.libvirt.driver [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.162 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.162 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407661.0697138, ab760f9d-43e3-4bec-9987-df02dc30b9ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.163 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] VM Paused (Lifecycle Event)#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.198 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.203 252257 DEBUG nova.virt.driver [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] Emitting event <LifecycleEvent: 1764407661.070161, ab760f9d-43e3-4bec-9987-df02dc30b9ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.203 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] VM Resumed (Lifecycle Event)#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.207 252257 INFO nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Took 4.92 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.207 252257 DEBUG nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.229 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.231 252257 DEBUG nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.257 252257 INFO nova.compute.manager [None req-ec4c8987-1f4b-4b23-b012-36815bb5b99f - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.277 252257 INFO nova.compute.manager [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Took 7.28 seconds to build instance.#033[00m
Nov 29 04:14:21 np0005539563 nova_compute[252253]: 2025-11-29 09:14:21.294 252257 DEBUG oslo_concurrency.lockutils [None req-95cbfee8-c78d-48a2-9740-197d792cd529 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:21 np0005539563 podman[420932]: 2025-11-29 09:14:21.490558678 +0000 UTC m=+0.056962136 container create d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 04:14:21 np0005539563 systemd[1]: Started libpod-conmon-d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1.scope.
Nov 29 04:14:21 np0005539563 podman[420932]: 2025-11-29 09:14:21.460611147 +0000 UTC m=+0.027014595 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 04:14:21 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:14:21 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02fc8d1efcf00702b21b30b97fabe5a9559a025fde4045848a8ace0611522c9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 04:14:21 np0005539563 podman[420932]: 2025-11-29 09:14:21.607075179 +0000 UTC m=+0.173478617 container init d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 04:14:21 np0005539563 podman[420932]: 2025-11-29 09:14:21.620230335 +0000 UTC m=+0.186633773 container start d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:14:21 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [NOTICE]   (420951) : New worker (420953) forked
Nov 29 04:14:21 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [NOTICE]   (420951) : Loading success.
Nov 29 04:14:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 KiB/s rd, 9.7 KiB/s wr, 12 op/s
Nov 29 04:14:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:22.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.041 252257 DEBUG nova.compute.manager [req-738414b9-a91f-4878-ae09-38cae23f098c req-322843b5-9803-4600-9262-59046b99f6d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.042 252257 DEBUG oslo_concurrency.lockutils [req-738414b9-a91f-4878-ae09-38cae23f098c req-322843b5-9803-4600-9262-59046b99f6d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.042 252257 DEBUG oslo_concurrency.lockutils [req-738414b9-a91f-4878-ae09-38cae23f098c req-322843b5-9803-4600-9262-59046b99f6d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.043 252257 DEBUG oslo_concurrency.lockutils [req-738414b9-a91f-4878-ae09-38cae23f098c req-322843b5-9803-4600-9262-59046b99f6d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.043 252257 DEBUG nova.compute.manager [req-738414b9-a91f-4878-ae09-38cae23f098c req-322843b5-9803-4600-9262-59046b99f6d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] No waiting events found dispatching network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.043 252257 WARNING nova.compute.manager [req-738414b9-a91f-4878-ae09-38cae23f098c req-322843b5-9803-4600-9262-59046b99f6d2 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received unexpected event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 for instance with vm_state active and task_state None.#033[00m
Nov 29 04:14:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.411 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:23 np0005539563 podman[420963]: 2025-11-29 09:14:23.533260747 +0000 UTC m=+0.079393574 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:14:23 np0005539563 podman[420964]: 2025-11-29 09:14:23.558779589 +0000 UTC m=+0.096685813 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:14:23 np0005539563 podman[420965]: 2025-11-29 09:14:23.569471809 +0000 UTC m=+0.106252433 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 04:14:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 16 op/s
Nov 29 04:14:23 np0005539563 nova_compute[252253]: 2025-11-29 09:14:23.988 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004327191513121161 of space, bias 1.0, pg target 1.2981574539363483 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:14:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 04:14:24 np0005539563 nova_compute[252253]: 2025-11-29 09:14:24.469 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:24.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:25.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:14:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:14:28 np0005539563 nova_compute[252253]: 2025-11-29 09:14:28.447 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:28.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:28 np0005539563 nova_compute[252253]: 2025-11-29 09:14:28.584 252257 DEBUG nova.compute.manager [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-changed-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:14:28 np0005539563 nova_compute[252253]: 2025-11-29 09:14:28.585 252257 DEBUG nova.compute.manager [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Refreshing instance network info cache due to event network-changed-680dff60-ed18-4961-84dd-043dd06abd06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:14:28 np0005539563 nova_compute[252253]: 2025-11-29 09:14:28.585 252257 DEBUG oslo_concurrency.lockutils [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:14:28 np0005539563 nova_compute[252253]: 2025-11-29 09:14:28.585 252257 DEBUG oslo_concurrency.lockutils [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:14:28 np0005539563 nova_compute[252253]: 2025-11-29 09:14:28.585 252257 DEBUG nova.network.neutron [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Refreshing network info cache for port 680dff60-ed18-4961-84dd-043dd06abd06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:14:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:29.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:29 np0005539563 nova_compute[252253]: 2025-11-29 09:14:29.473 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:14:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:30.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:31.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 04:14:32 np0005539563 nova_compute[252253]: 2025-11-29 09:14:32.362 252257 DEBUG nova.network.neutron [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updated VIF entry in instance network info cache for port 680dff60-ed18-4961-84dd-043dd06abd06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:14:32 np0005539563 nova_compute[252253]: 2025-11-29 09:14:32.363 252257 DEBUG nova.network.neutron [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:14:32 np0005539563 nova_compute[252253]: 2025-11-29 09:14:32.403 252257 DEBUG oslo_concurrency.lockutils [req-5fc0fa15-1ad8-44ef-97e7-22c4be124261 req-fbf9ebcc-e54c-4198-a0ff-fc5afa1b10d4 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:14:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:32.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:33 np0005539563 nova_compute[252253]: 2025-11-29 09:14:33.450 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 76 op/s
Nov 29 04:14:34 np0005539563 nova_compute[252253]: 2025-11-29 09:14:34.478 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:35.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:35 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:35Z|00122|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.5
Nov 29 04:14:35 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:35Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:7b:67:2a 10.100.0.5
Nov 29 04:14:35 np0005539563 nova_compute[252253]: 2025-11-29 09:14:35.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 94 op/s
Nov 29 04:14:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:36.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:37.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 461 KiB/s rd, 13 KiB/s wr, 37 op/s
Nov 29 04:14:38 np0005539563 nova_compute[252253]: 2025-11-29 09:14:38.452 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:38.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:39.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:39 np0005539563 nova_compute[252253]: 2025-11-29 09:14:39.481 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:39 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:39Z|00124|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.5
Nov 29 04:14:39 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:39Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:7b:67:2a 10.100.0.5
Nov 29 04:14:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 13 KiB/s wr, 44 op/s
Nov 29 04:14:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:40 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:40Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:67:2a 10.100.0.5
Nov 29 04:14:40 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:40Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:67:2a 10.100.0.5
Nov 29 04:14:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 29 04:14:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:14:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:43 np0005539563 nova_compute[252253]: 2025-11-29 09:14:43.454 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 26 KiB/s wr, 44 op/s
Nov 29 04:14:44 np0005539563 nova_compute[252253]: 2025-11-29 09:14:44.484 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:44 np0005539563 nova_compute[252253]: 2025-11-29 09:14:44.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:45 np0005539563 nova_compute[252253]: 2025-11-29 09:14:45.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:45 np0005539563 nova_compute[252253]: 2025-11-29 09:14:45.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:14:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 507 KiB/s rd, 26 KiB/s wr, 42 op/s
Nov 29 04:14:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:46.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:46 np0005539563 nova_compute[252253]: 2025-11-29 09:14:46.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 29 04:14:48 np0005539563 nova_compute[252253]: 2025-11-29 09:14:48.456 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:49.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:49 np0005539563 nova_compute[252253]: 2025-11-29 09:14:49.486 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:49 np0005539563 nova_compute[252253]: 2025-11-29 09:14:49.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 29 04:14:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:50.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.210863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691211023, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 880, "num_deletes": 251, "total_data_size": 1318627, "memory_usage": 1345160, "flush_reason": "Manual Compaction"}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691231836, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 1292982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85369, "largest_seqno": 86248, "table_properties": {"data_size": 1288530, "index_size": 2103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9930, "raw_average_key_size": 19, "raw_value_size": 1279631, "raw_average_value_size": 2564, "num_data_blocks": 91, "num_entries": 499, "num_filter_entries": 499, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407623, "oldest_key_time": 1764407623, "file_creation_time": 1764407691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 21004 microseconds, and 5931 cpu microseconds.
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.231934) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 1292982 bytes OK
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.231971) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.235722) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.235758) EVENT_LOG_v1 {"time_micros": 1764407691235752, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.235782) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1314424, prev total WAL file size 1314424, number of live WAL files 2.
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.236534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(1262KB)], [194(14MB)]
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691236620, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 16138467, "oldest_snapshot_seqno": -1}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 11541 keys, 14095999 bytes, temperature: kUnknown
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691353431, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 14095999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14022720, "index_size": 43342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 306606, "raw_average_key_size": 26, "raw_value_size": 13822276, "raw_average_value_size": 1197, "num_data_blocks": 1637, "num_entries": 11541, "num_filter_entries": 11541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.353804) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 14095999 bytes
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.355321) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.1 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.2 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(23.4) write-amplify(10.9) OK, records in: 12060, records dropped: 519 output_compression: NoCompression
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.355346) EVENT_LOG_v1 {"time_micros": 1764407691355334, "job": 122, "event": "compaction_finished", "compaction_time_micros": 116894, "compaction_time_cpu_micros": 34526, "output_level": 6, "num_output_files": 1, "total_output_size": 14095999, "num_input_records": 12060, "num_output_records": 11541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691355752, "job": 122, "event": "table_file_deletion", "file_number": 196}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407691358066, "job": 122, "event": "table_file_deletion", "file_number": 194}
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.236331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.358142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.358148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.358149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.358150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:14:51.358152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:14:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:51 np0005539563 nova_compute[252253]: 2025-11-29 09:14:51.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s wr, 0 op/s
Nov 29 04:14:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:53 np0005539563 nova_compute[252253]: 2025-11-29 09:14:53.459 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.530 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:54.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:54 np0005539563 podman[421142]: 2025-11-29 09:14:54.546788467 +0000 UTC m=+0.101324768 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:14:54 np0005539563 podman[421141]: 2025-11-29 09:14:54.546903151 +0000 UTC m=+0.101997997 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 04:14:54 np0005539563 podman[421143]: 2025-11-29 09:14:54.581763178 +0000 UTC m=+0.132045514 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.997 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.997 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.997 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:14:54 np0005539563 nova_compute[252253]: 2025-11-29 09:14:54.998 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab760f9d-43e3-4bec-9987-df02dc30b9ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:14:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s wr, 0 op/s
Nov 29 04:14:56 np0005539563 nova_compute[252253]: 2025-11-29 09:14:56.134 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:14:56 np0005539563 nova_compute[252253]: 2025-11-29 09:14:56.163 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:14:56 np0005539563 nova_compute[252253]: 2025-11-29 09:14:56.164 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:14:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:56.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:14:57 np0005539563 ovn_controller[148841]: 2025-11-29T09:14:57Z|00965|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 04:14:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s wr, 0 op/s
Nov 29 04:14:58 np0005539563 nova_compute[252253]: 2025-11-29 09:14:58.508 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:14:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:14:58.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:14:58 np0005539563 nova_compute[252253]: 2025-11-29 09:14:58.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:14:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:14:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:14:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:14:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:14:59 np0005539563 nova_compute[252253]: 2025-11-29 09:14:59.533 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.705660424 +0000 UTC m=+0.040500360 container create 96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_northcutt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 04:14:59 np0005539563 systemd[1]: Started libpod-conmon-96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042.scope.
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.688042655 +0000 UTC m=+0.022882611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:14:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.803893939 +0000 UTC m=+0.138733905 container init 96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_northcutt, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.812569554 +0000 UTC m=+0.147409490 container start 96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_northcutt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.815218396 +0000 UTC m=+0.150058362 container attach 96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 04:14:59 np0005539563 gallant_northcutt[421497]: 167 167
Nov 29 04:14:59 np0005539563 systemd[1]: libpod-96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042.scope: Deactivated successfully.
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.82016196 +0000 UTC m=+0.155001916 container died 96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_northcutt, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 04:14:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3385abcb0474fcabc63edc7f47896edb3fdcaa350d92d04f90a01381c6ca4e9e-merged.mount: Deactivated successfully.
Nov 29 04:14:59 np0005539563 podman[421481]: 2025-11-29 09:14:59.863093014 +0000 UTC m=+0.197932950 container remove 96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_northcutt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 04:14:59 np0005539563 systemd[1]: libpod-conmon-96d34015edd0bcb20a0d1e57103e01d1784091aac8147a271cac596a2b47b042.scope: Deactivated successfully.
Nov 29 04:14:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s wr, 0 op/s
Nov 29 04:15:00 np0005539563 podman[421520]: 2025-11-29 09:15:00.022914669 +0000 UTC m=+0.040772997 container create 141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_engelbart, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 04:15:00 np0005539563 systemd[1]: Started libpod-conmon-141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190.scope.
Nov 29 04:15:00 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d665829509a9835a6d17b384eb324a0ec155241a2208d823ac7229859b0008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d665829509a9835a6d17b384eb324a0ec155241a2208d823ac7229859b0008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d665829509a9835a6d17b384eb324a0ec155241a2208d823ac7229859b0008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d665829509a9835a6d17b384eb324a0ec155241a2208d823ac7229859b0008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:00 np0005539563 podman[421520]: 2025-11-29 09:15:00.004575931 +0000 UTC m=+0.022434289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:00 np0005539563 podman[421520]: 2025-11-29 09:15:00.111453401 +0000 UTC m=+0.129311749 container init 141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:15:00 np0005539563 podman[421520]: 2025-11-29 09:15:00.127154597 +0000 UTC m=+0.145012965 container start 141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_engelbart, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:15:00 np0005539563 podman[421520]: 2025-11-29 09:15:00.133837339 +0000 UTC m=+0.151695687 container attach 141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:15:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:00.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:00 np0005539563 nova_compute[252253]: 2025-11-29 09:15:00.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:00 np0005539563 nova_compute[252253]: 2025-11-29 09:15:00.698 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:15:00 np0005539563 nova_compute[252253]: 2025-11-29 09:15:00.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:15:00 np0005539563 nova_compute[252253]: 2025-11-29 09:15:00.699 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:15:00 np0005539563 nova_compute[252253]: 2025-11-29 09:15:00.699 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:15:00 np0005539563 nova_compute[252253]: 2025-11-29 09:15:00.700 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198527805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.162 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.256 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.257 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]: [
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:    {
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "available": false,
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "ceph_device": false,
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "lsm_data": {},
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "lvs": [],
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "path": "/dev/sr0",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "rejected_reasons": [
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "Insufficient space (<5GB)",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "Has a FileSystem"
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        ],
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        "sys_api": {
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "actuators": null,
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "device_nodes": "sr0",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "devname": "sr0",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "human_readable_size": "482.00 KB",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "id_bus": "ata",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "model": "QEMU DVD-ROM",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "nr_requests": "2",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "parent": "/dev/sr0",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "partitions": {},
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "path": "/dev/sr0",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "removable": "1",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "rev": "2.5+",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "ro": "0",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "rotational": "1",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "sas_address": "",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "sas_device_handle": "",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "scheduler_mode": "mq-deadline",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "sectors": 0,
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "sectorsize": "2048",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "size": 493568.0,
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "support_discard": "2048",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "type": "disk",
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:            "vendor": "QEMU"
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:        }
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]:    }
Nov 29 04:15:01 np0005539563 inspiring_engelbart[421537]: ]
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.423 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.424 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3856MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.424 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.424 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:15:01 np0005539563 systemd[1]: libpod-141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190.scope: Deactivated successfully.
Nov 29 04:15:01 np0005539563 systemd[1]: libpod-141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190.scope: Consumed 1.269s CPU time.
Nov 29 04:15:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:01.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:01 np0005539563 podman[422915]: 2025-11-29 09:15:01.478939885 +0000 UTC m=+0.024225799 container died 141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_engelbart, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.492 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance ab760f9d-43e3-4bec-9987-df02dc30b9ef actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.492 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.492 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:15:01 np0005539563 nova_compute[252253]: 2025-11-29 09:15:01.584 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:15:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-76d665829509a9835a6d17b384eb324a0ec155241a2208d823ac7229859b0008-merged.mount: Deactivated successfully.
Nov 29 04:15:01 np0005539563 podman[422915]: 2025-11-29 09:15:01.748314642 +0000 UTC m=+0.293600546 container remove 141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_engelbart, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:15:01 np0005539563 systemd[1]: libpod-conmon-141bc7c7dfe1738f806ce0a02a03979a4597cc4f7eb735dccca9c4b6b21c3190.scope: Deactivated successfully.
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 38657ddc-428e-4bc3-82e5-538dbe39131f does not exist
Nov 29 04:15:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1d02ce05-ee92-486e-93de-a81f35a68d30 does not exist
Nov 29 04:15:01 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 85e9160c-b071-4b85-bfd1-845789e8e9f1 does not exist
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:15:01 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:15:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253250129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:15:02 np0005539563 nova_compute[252253]: 2025-11-29 09:15:02.043 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:15:02 np0005539563 nova_compute[252253]: 2025-11-29 09:15:02.049 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:15:02 np0005539563 nova_compute[252253]: 2025-11-29 09:15:02.065 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:15:02 np0005539563 nova_compute[252253]: 2025-11-29 09:15:02.091 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:15:02 np0005539563 nova_compute[252253]: 2025-11-29 09:15:02.091 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.425185411 +0000 UTC m=+0.046530672 container create f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:15:02 np0005539563 systemd[1]: Started libpod-conmon-f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb.scope.
Nov 29 04:15:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.403034871 +0000 UTC m=+0.024380152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.507521225 +0000 UTC m=+0.128866486 container init f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.515182773 +0000 UTC m=+0.136528034 container start f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.518153464 +0000 UTC m=+0.139498755 container attach f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:15:02 np0005539563 distracted_sammet[423106]: 167 167
Nov 29 04:15:02 np0005539563 systemd[1]: libpod-f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb.scope: Deactivated successfully.
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.521362931 +0000 UTC m=+0.142708192 container died f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 04:15:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b9722250c6b14beb25fef80ab264d43a6291524c7dd85cbd4cdac136ba917ff2-merged.mount: Deactivated successfully.
Nov 29 04:15:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:02 np0005539563 podman[423090]: 2025-11-29 09:15:02.556964237 +0000 UTC m=+0.178309498 container remove f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 04:15:02 np0005539563 systemd[1]: libpod-conmon-f948c1b35ae824bb9860676fdf2ac271ba4438a7a98244bfc95bb2c48ede45eb.scope: Deactivated successfully.
Nov 29 04:15:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:02 np0005539563 podman[423129]: 2025-11-29 09:15:02.716329579 +0000 UTC m=+0.038277159 container create 972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:15:02 np0005539563 systemd[1]: Started libpod-conmon-972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e.scope.
Nov 29 04:15:02 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2a1bebd3b02e479fa681153c15367c7f426054474ffd0d38edbc7990930e44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2a1bebd3b02e479fa681153c15367c7f426054474ffd0d38edbc7990930e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2a1bebd3b02e479fa681153c15367c7f426054474ffd0d38edbc7990930e44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2a1bebd3b02e479fa681153c15367c7f426054474ffd0d38edbc7990930e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:02 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2a1bebd3b02e479fa681153c15367c7f426054474ffd0d38edbc7990930e44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:02 np0005539563 podman[423129]: 2025-11-29 09:15:02.699653676 +0000 UTC m=+0.021601276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:02 np0005539563 podman[423129]: 2025-11-29 09:15:02.801086768 +0000 UTC m=+0.123034368 container init 972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 04:15:02 np0005539563 podman[423129]: 2025-11-29 09:15:02.806705851 +0000 UTC m=+0.128653431 container start 972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:15:02 np0005539563 podman[423129]: 2025-11-29 09:15:02.809449305 +0000 UTC m=+0.131396905 container attach 972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:15:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:03 np0005539563 nova_compute[252253]: 2025-11-29 09:15:03.510 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:03 np0005539563 serene_ganguly[423145]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:15:03 np0005539563 serene_ganguly[423145]: --> relative data size: 1.0
Nov 29 04:15:03 np0005539563 serene_ganguly[423145]: --> All data devices are unavailable
Nov 29 04:15:03 np0005539563 systemd[1]: libpod-972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e.scope: Deactivated successfully.
Nov 29 04:15:03 np0005539563 podman[423129]: 2025-11-29 09:15:03.652721299 +0000 UTC m=+0.974668879 container died 972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:15:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6a2a1bebd3b02e479fa681153c15367c7f426054474ffd0d38edbc7990930e44-merged.mount: Deactivated successfully.
Nov 29 04:15:03 np0005539563 podman[423129]: 2025-11-29 09:15:03.712586213 +0000 UTC m=+1.034533793 container remove 972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 04:15:03 np0005539563 systemd[1]: libpod-conmon-972f89e3f3bba92ee62e42dedc70320f572461505331eef2fb23be1a33fd876e.scope: Deactivated successfully.
Nov 29 04:15:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 117 KiB/s rd, 4.5 KiB/s wr, 15 op/s
Nov 29 04:15:04 np0005539563 podman[423317]: 2025-11-29 09:15:04.385617969 +0000 UTC m=+0.036319917 container create 42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:15:04 np0005539563 systemd[1]: Started libpod-conmon-42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059.scope.
Nov 29 04:15:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:04 np0005539563 podman[423317]: 2025-11-29 09:15:04.370331414 +0000 UTC m=+0.021033382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:04 np0005539563 podman[423317]: 2025-11-29 09:15:04.468671162 +0000 UTC m=+0.119373140 container init 42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:15:04 np0005539563 podman[423317]: 2025-11-29 09:15:04.48076673 +0000 UTC m=+0.131468678 container start 42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_germain, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:15:04 np0005539563 nice_germain[423333]: 167 167
Nov 29 04:15:04 np0005539563 podman[423317]: 2025-11-29 09:15:04.485477278 +0000 UTC m=+0.136179226 container attach 42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_germain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:15:04 np0005539563 systemd[1]: libpod-42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059.scope: Deactivated successfully.
Nov 29 04:15:04 np0005539563 conmon[423333]: conmon 42177005a17fa636eb52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059.scope/container/memory.events
Nov 29 04:15:04 np0005539563 podman[423338]: 2025-11-29 09:15:04.529262236 +0000 UTC m=+0.025625307 container died 42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_germain, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 04:15:04 np0005539563 nova_compute[252253]: 2025-11-29 09:15:04.537 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-30ff110fe4bc5e24c82c9b686e56a504b631441a473f3de18726e49f7c17b011-merged.mount: Deactivated successfully.
Nov 29 04:15:04 np0005539563 podman[423338]: 2025-11-29 09:15:04.566175767 +0000 UTC m=+0.062538808 container remove 42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_germain, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:15:04 np0005539563 systemd[1]: libpod-conmon-42177005a17fa636eb527820c70d5e8e14510e30d999c7c8ccadf9b01eb5f059.scope: Deactivated successfully.
Nov 29 04:15:04 np0005539563 podman[423359]: 2025-11-29 09:15:04.759398338 +0000 UTC m=+0.042134024 container create d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:15:04 np0005539563 systemd[1]: Started libpod-conmon-d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148.scope.
Nov 29 04:15:04 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:04 np0005539563 podman[423359]: 2025-11-29 09:15:04.739808897 +0000 UTC m=+0.022544593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e658196f88bf79bc5ec03e6c29f4218834a779dbb820eca8072dc22980b01a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e658196f88bf79bc5ec03e6c29f4218834a779dbb820eca8072dc22980b01a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e658196f88bf79bc5ec03e6c29f4218834a779dbb820eca8072dc22980b01a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:04 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4e658196f88bf79bc5ec03e6c29f4218834a779dbb820eca8072dc22980b01a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:04 np0005539563 podman[423359]: 2025-11-29 09:15:04.86199338 +0000 UTC m=+0.144729046 container init d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 04:15:04 np0005539563 podman[423359]: 2025-11-29 09:15:04.868836416 +0000 UTC m=+0.151572082 container start d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:15:04 np0005539563 podman[423359]: 2025-11-29 09:15:04.872439094 +0000 UTC m=+0.155174760 container attach d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:15:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:04.991 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:15:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:04.994 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:15:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:04.996 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:15:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:05 np0005539563 angry_carson[423376]: {
Nov 29 04:15:05 np0005539563 angry_carson[423376]:    "0": [
Nov 29 04:15:05 np0005539563 angry_carson[423376]:        {
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "devices": [
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "/dev/loop3"
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            ],
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "lv_name": "ceph_lv0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "lv_size": "7511998464",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "name": "ceph_lv0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "tags": {
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.cluster_name": "ceph",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.crush_device_class": "",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.encrypted": "0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.osd_id": "0",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.type": "block",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:                "ceph.vdo": "0"
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            },
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "type": "block",
Nov 29 04:15:05 np0005539563 angry_carson[423376]:            "vg_name": "ceph_vg0"
Nov 29 04:15:05 np0005539563 angry_carson[423376]:        }
Nov 29 04:15:05 np0005539563 angry_carson[423376]:    ]
Nov 29 04:15:05 np0005539563 angry_carson[423376]: }
Nov 29 04:15:05 np0005539563 systemd[1]: libpod-d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148.scope: Deactivated successfully.
Nov 29 04:15:05 np0005539563 podman[423359]: 2025-11-29 09:15:05.701091281 +0000 UTC m=+0.983826947 container died d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 04:15:05 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f4e658196f88bf79bc5ec03e6c29f4218834a779dbb820eca8072dc22980b01a-merged.mount: Deactivated successfully.
Nov 29 04:15:05 np0005539563 podman[423359]: 2025-11-29 09:15:05.759860476 +0000 UTC m=+1.042596142 container remove d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:15:05 np0005539563 systemd[1]: libpod-conmon-d69ebc08f6805ccf6d420138b6103d7979ebe4c39a98c2f8b3a2ca93eeb4b148.scope: Deactivated successfully.
Nov 29 04:15:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 5.0 KiB/s wr, 29 op/s
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.387164451 +0000 UTC m=+0.043818049 container create 152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:15:06 np0005539563 systemd[1]: Started libpod-conmon-152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33.scope.
Nov 29 04:15:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.365012031 +0000 UTC m=+0.021665649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.466317418 +0000 UTC m=+0.122971046 container init 152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.47267024 +0000 UTC m=+0.129323838 container start 152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.475462796 +0000 UTC m=+0.132116394 container attach 152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:15:06 np0005539563 peaceful_cartwright[423555]: 167 167
Nov 29 04:15:06 np0005539563 systemd[1]: libpod-152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33.scope: Deactivated successfully.
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.478285443 +0000 UTC m=+0.134939051 container died 152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 04:15:06 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4dafe1b2f4e8c49358cd5928f5daa35b5b914588918d8f6872fabb5fdbde9016-merged.mount: Deactivated successfully.
Nov 29 04:15:06 np0005539563 podman[423539]: 2025-11-29 09:15:06.517782934 +0000 UTC m=+0.174436532 container remove 152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 04:15:06 np0005539563 systemd[1]: libpod-conmon-152fc897d7c15b05b8a0a3fa1574b066fa129978e1e745ee1ff5624fc76d4b33.scope: Deactivated successfully.
Nov 29 04:15:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:06 np0005539563 podman[423581]: 2025-11-29 09:15:06.681065214 +0000 UTC m=+0.042058992 container create b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 04:15:06 np0005539563 systemd[1]: Started libpod-conmon-b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237.scope.
Nov 29 04:15:06 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e62be89c85b1edded009ca6ff811e60d157e0be246e305dfd351ee16005db4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e62be89c85b1edded009ca6ff811e60d157e0be246e305dfd351ee16005db4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e62be89c85b1edded009ca6ff811e60d157e0be246e305dfd351ee16005db4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:06 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e62be89c85b1edded009ca6ff811e60d157e0be246e305dfd351ee16005db4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:15:06 np0005539563 podman[423581]: 2025-11-29 09:15:06.66432766 +0000 UTC m=+0.025321468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:15:06 np0005539563 podman[423581]: 2025-11-29 09:15:06.769632176 +0000 UTC m=+0.130625984 container init b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:15:06 np0005539563 podman[423581]: 2025-11-29 09:15:06.776172994 +0000 UTC m=+0.137166782 container start b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 29 04:15:06 np0005539563 podman[423581]: 2025-11-29 09:15:06.779992487 +0000 UTC m=+0.140986295 container attach b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 04:15:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:07.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]: {
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:        "osd_id": 0,
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:        "type": "bluestore"
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]:    }
Nov 29 04:15:07 np0005539563 quirky_chaum[423598]: }
Nov 29 04:15:07 np0005539563 systemd[1]: libpod-b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237.scope: Deactivated successfully.
Nov 29 04:15:07 np0005539563 podman[423581]: 2025-11-29 09:15:07.765343405 +0000 UTC m=+1.126337183 container died b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 04:15:07 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e7e62be89c85b1edded009ca6ff811e60d157e0be246e305dfd351ee16005db4-merged.mount: Deactivated successfully.
Nov 29 04:15:07 np0005539563 podman[423581]: 2025-11-29 09:15:07.824610772 +0000 UTC m=+1.185604550 container remove b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:15:07 np0005539563 systemd[1]: libpod-conmon-b0671e92569b15be621135da51dce7b407f78d3a8b853c596b4c68c27dc54237.scope: Deactivated successfully.
Nov 29 04:15:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:15:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:15:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 5.0 KiB/s wr, 29 op/s
Nov 29 04:15:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:08 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev c503bca3-a935-4b74-b492-46d81974d0d7 does not exist
Nov 29 04:15:08 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4e7319ef-be5d-40c6-b766-476bc80e0366 does not exist
Nov 29 04:15:08 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 290c5e85-bbf1-49c9-ae56-0ffd2b3ca0f0 does not exist
Nov 29 04:15:08 np0005539563 nova_compute[252253]: 2025-11-29 09:15:08.512 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:08.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:15:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:09.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:09 np0005539563 nova_compute[252253]: 2025-11-29 09:15:09.540 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 5.4 KiB/s wr, 31 op/s
Nov 29 04:15:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:11.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 6.8 KiB/s wr, 31 op/s
Nov 29 04:15:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:12.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:15:13
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'backups', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:15:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:13 np0005539563 nova_compute[252253]: 2025-11-29 09:15:13.513 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 6.0 KiB/s wr, 27 op/s
Nov 29 04:15:14 np0005539563 nova_compute[252253]: 2025-11-29 09:15:14.543 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Nov 29 04:15:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:15:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:15:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:17.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 KiB/s rd, 1.8 KiB/s wr, 2 op/s
Nov 29 04:15:18 np0005539563 nova_compute[252253]: 2025-11-29 09:15:18.515 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:19.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:19 np0005539563 nova_compute[252253]: 2025-11-29 09:15:19.588 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 14 KiB/s wr, 11 op/s
Nov 29 04:15:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:20.151 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:15:20 np0005539563 nova_compute[252253]: 2025-11-29 09:15:20.152 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:20.152 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:15:20 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:20.153 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:15:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 14 KiB/s wr, 10 op/s
Nov 29 04:15:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:23 np0005539563 nova_compute[252253]: 2025-11-29 09:15:23.517 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 441 KiB/s rd, 12 KiB/s wr, 27 op/s
Nov 29 04:15:24 np0005539563 nova_compute[252253]: 2025-11-29 09:15:24.087 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.399503303554811e-05 of space, bias 1.0, pg target 0.004198509910664433 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004332280616043179 of space, bias 1.0, pg target 1.2996841848129537 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:15:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 04:15:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:24 np0005539563 nova_compute[252253]: 2025-11-29 09:15:24.591 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:25 np0005539563 podman[423744]: 2025-11-29 09:15:25.508197242 +0000 UTC m=+0.062221649 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 04:15:25 np0005539563 podman[423743]: 2025-11-29 09:15:25.532848561 +0000 UTC m=+0.086344233 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 04:15:25 np0005539563 podman[423745]: 2025-11-29 09:15:25.539277045 +0000 UTC m=+0.089231081 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 04:15:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 29 04:15:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:27.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 29 04:15:28 np0005539563 nova_compute[252253]: 2025-11-29 09:15:28.561 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:29 np0005539563 nova_compute[252253]: 2025-11-29 09:15:29.593 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 29 04:15:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:31.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 68 op/s
Nov 29 04:15:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:33 np0005539563 nova_compute[252253]: 2025-11-29 09:15:33.624 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 411 KiB/s wr, 84 op/s
Nov 29 04:15:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:34 np0005539563 nova_compute[252253]: 2025-11-29 09:15:34.596 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 504 KiB/s wr, 102 op/s
Nov 29 04:15:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:15:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:15:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 54 op/s
Nov 29 04:15:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:38.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:38 np0005539563 nova_compute[252253]: 2025-11-29 09:15:38.627 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:39 np0005539563 nova_compute[252253]: 2025-11-29 09:15:39.658 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 517 KiB/s wr, 56 op/s
Nov 29 04:15:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 587 KiB/s wr, 57 op/s
Nov 29 04:15:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:42.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:15:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:43 np0005539563 nova_compute[252253]: 2025-11-29 09:15:43.629 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 587 KiB/s wr, 57 op/s
Nov 29 04:15:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:44.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:44 np0005539563 nova_compute[252253]: 2025-11-29 09:15:44.705 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 766 KiB/s rd, 179 KiB/s wr, 39 op/s
Nov 29 04:15:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:46.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:46 np0005539563 nova_compute[252253]: 2025-11-29 09:15:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:15:46 np0005539563 nova_compute[252253]: 2025-11-29 09:15:46.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:15:46 np0005539563 nova_compute[252253]: 2025-11-29 09:15:46.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:15:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:47 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4226: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 87 KiB/s wr, 3 op/s
Nov 29 04:15:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:48.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:48 np0005539563 nova_compute[252253]: 2025-11-29 09:15:48.632 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:48 np0005539563 nova_compute[252253]: 2025-11-29 09:15:48.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:15:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:49.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:49 np0005539563 nova_compute[252253]: 2025-11-29 09:15:49.708 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:15:49 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 87 KiB/s wr, 3 op/s
Nov 29 04:15:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:50.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:50 np0005539563 nova_compute[252253]: 2025-11-29 09:15:50.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:15:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:51.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:51 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 74 KiB/s wr, 1 op/s
Nov 29 04:15:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:52.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:15:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:15:53 np0005539563 nova_compute[252253]: 2025-11-29 09:15:53.636 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:53 np0005539563 ovn_controller[148841]: 2025-11-29T09:15:53Z|00966|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 29 04:15:53 np0005539563 nova_compute[252253]: 2025-11-29 09:15:53.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:53 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 43 KiB/s wr, 2 op/s
Nov 29 04:15:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:54.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:54 np0005539563 nova_compute[252253]: 2025-11-29 09:15:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:54 np0005539563 nova_compute[252253]: 2025-11-29 09:15:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:15:54 np0005539563 nova_compute[252253]: 2025-11-29 09:15:54.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:15:54 np0005539563 nova_compute[252253]: 2025-11-29 09:15:54.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:55 np0005539563 nova_compute[252253]: 2025-11-29 09:15:55.048 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:15:55 np0005539563 nova_compute[252253]: 2025-11-29 09:15:55.048 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquired lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:15:55 np0005539563 nova_compute[252253]: 2025-11-29 09:15:55.048 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 04:15:55 np0005539563 nova_compute[252253]: 2025-11-29 09:15:55.048 252257 DEBUG nova.objects.instance [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab760f9d-43e3-4bec-9987-df02dc30b9ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:15:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:55.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:55 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 53 KiB/s wr, 4 op/s
Nov 29 04:15:56 np0005539563 podman[423922]: 2025-11-29 09:15:56.500761273 +0000 UTC m=+0.054854730 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:15:56 np0005539563 podman[423923]: 2025-11-29 09:15:56.517728113 +0000 UTC m=+0.061284884 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:15:56 np0005539563 podman[423924]: 2025-11-29 09:15:56.536709717 +0000 UTC m=+0.084011210 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:15:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.003000080s ======
Nov 29 04:15:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:56.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Nov 29 04:15:57 np0005539563 nova_compute[252253]: 2025-11-29 09:15:57.412 252257 DEBUG nova.network.neutron [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:15:57 np0005539563 nova_compute[252253]: 2025-11-29 09:15:57.431 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Releasing lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:15:57 np0005539563 nova_compute[252253]: 2025-11-29 09:15:57.431 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 04:15:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:57.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:15:57 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 50 KiB/s wr, 4 op/s
Nov 29 04:15:58 np0005539563 nova_compute[252253]: 2025-11-29 09:15:58.466 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:58.466 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:15:58 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:15:58.468 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:15:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:15:58.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:58 np0005539563 nova_compute[252253]: 2025-11-29 09:15:58.636 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:58 np0005539563 nova_compute[252253]: 2025-11-29 09:15:58.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:15:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:15:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:15:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:15:59.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:15:59 np0005539563 nova_compute[252253]: 2025-11-29 09:15:59.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:15:59 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 188 KiB/s rd, 50 KiB/s wr, 19 op/s
Nov 29 04:16:00 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:00.470 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:16:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:00.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:01.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Nov 29 04:16:01 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 04:16:01 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Nov 29 04:16:01 np0005539563 nova_compute[252253]: 2025-11-29 09:16:01.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:01 np0005539563 nova_compute[252253]: 2025-11-29 09:16:01.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:01 np0005539563 nova_compute[252253]: 2025-11-29 09:16:01.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:01 np0005539563 nova_compute[252253]: 2025-11-29 09:16:01.701 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:01 np0005539563 nova_compute[252253]: 2025-11-29 09:16:01.701 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:16:01 np0005539563 nova_compute[252253]: 2025-11-29 09:16:01.702 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:16:01 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 60 KiB/s wr, 27 op/s
Nov 29 04:16:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:16:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940384204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.168 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.236 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.236 252257 DEBUG nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.392 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.393 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3933MB free_disk=20.988128662109375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.394 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.394 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.538 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Instance ab760f9d-43e3-4bec-9987-df02dc30b9ef actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.540 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.540 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:16:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:02 np0005539563 nova_compute[252253]: 2025-11-29 09:16:02.642 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:16:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:16:03 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677203671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.106 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.113 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.146 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.149 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.149 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:03.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.588 252257 DEBUG nova.compute.manager [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-changed-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.589 252257 DEBUG nova.compute.manager [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Refreshing instance network info cache due to event network-changed-680dff60-ed18-4961-84dd-043dd06abd06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.589 252257 DEBUG oslo_concurrency.lockutils [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.589 252257 DEBUG oslo_concurrency.lockutils [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquired lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.589 252257 DEBUG nova.network.neutron [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Refreshing network info cache for port 680dff60-ed18-4961-84dd-043dd06abd06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.676 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.695 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.696 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.696 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.696 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.696 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.697 252257 INFO nova.compute.manager [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Terminating instance#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.698 252257 DEBUG nova.compute.manager [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 04:16:03 np0005539563 kernel: tap680dff60-ed (unregistering): left promiscuous mode
Nov 29 04:16:03 np0005539563 NetworkManager[48981]: <info>  [1764407763.8112] device (tap680dff60-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 04:16:03 np0005539563 ovn_controller[148841]: 2025-11-29T09:16:03Z|00967|binding|INFO|Releasing lport 680dff60-ed18-4961-84dd-043dd06abd06 from this chassis (sb_readonly=0)
Nov 29 04:16:03 np0005539563 ovn_controller[148841]: 2025-11-29T09:16:03Z|00968|binding|INFO|Setting lport 680dff60-ed18-4961-84dd-043dd06abd06 down in Southbound
Nov 29 04:16:03 np0005539563 ovn_controller[148841]: 2025-11-29T09:16:03Z|00969|binding|INFO|Removing iface tap680dff60-ed ovn-installed in OVS
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.820 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.823 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:03.828 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:67:2a 10.100.0.5'], port_security=['fa:16:3e:7b:67:2a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ab760f9d-43e3-4bec-9987-df02dc30b9ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '51af0a2ee11a460ab825a484e5c6f4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f464a39e-170e-4271-8e3e-71cb609233aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26c70775-c49f-4c45-91d6-cdc9893e63eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>], logical_port=680dff60-ed18-4961-84dd-043dd06abd06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc96498760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:16:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:03.829 158990 INFO neutron.agent.ovn.metadata.agent [-] Port 680dff60-ed18-4961-84dd-043dd06abd06 in datapath 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad unbound from our chassis#033[00m
Nov 29 04:16:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:03.830 158990 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 04:16:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:03.832 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[ef521e60-6a3d-44d4-b817-4b5ed38faebf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:03 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:03.833 158990 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad namespace which is not needed anymore#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.839 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000df.scope: Deactivated successfully.
Nov 29 04:16:03 np0005539563 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000df.scope: Consumed 17.456s CPU time.
Nov 29 04:16:03 np0005539563 systemd-machined[213024]: Machine qemu-106-instance-000000df terminated.
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.918 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.923 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.935 252257 INFO nova.virt.libvirt.driver [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Instance destroyed successfully.#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.936 252257 DEBUG nova.objects.instance [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lazy-loading 'resources' on Instance uuid ab760f9d-43e3-4bec-9987-df02dc30b9ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.953 252257 DEBUG nova.virt.libvirt.vif [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T09:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-365044550',display_name='tempest-TestVolumeBootPattern-server-365044550',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-365044550',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNbG23j9M5o6eHfsJFAWGmFr+V1OMrrFRyvdXC6aXkLfRb952sNiXaohq8D2hzBatQ6UrGgr+Il3V8996CyOSEBo0EV82vq7jHKwJvSwjMwvkl///TChhoI2G24vyXx6sw==',key_name='tempest-TestVolumeBootPattern-692880462',keypairs=<?>,launch_index=0,launched_at=2025-11-29T09:14:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='51af0a2ee11a460ab825a484e5c6f4a3',ramdisk_id='',reservation_id='r-ls3d5wdz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-531976395',owner_user_name='tempest-TestVolumeBootPattern-531976395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T09:14:21Z,user_data=None,user_id='5ff561a95dc44b9fb9f7fd8fee80f589',uuid=ab760f9d-43e3-4bec-9987-df02dc30b9ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.954 252257 DEBUG nova.network.os_vif_util [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converting VIF {"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.955 252257 DEBUG nova.network.os_vif_util [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.955 252257 DEBUG os_vif [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.956 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.957 252257 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap680dff60-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.963 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.966 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 04:16:03 np0005539563 nova_compute[252253]: 2025-11-29 09:16:03.970 252257 INFO os_vif [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:67:2a,bridge_name='br-int',has_traffic_filtering=True,id=680dff60-ed18-4961-84dd-043dd06abd06,network=Network(8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap680dff60-ed')#033[00m
Nov 29 04:16:03 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [NOTICE]   (420951) : haproxy version is 2.8.14-c23fe91
Nov 29 04:16:03 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [NOTICE]   (420951) : path to executable is /usr/sbin/haproxy
Nov 29 04:16:03 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [WARNING]  (420951) : Exiting Master process...
Nov 29 04:16:03 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [ALERT]    (420951) : Current worker (420953) exited with code 143 (Terminated)
Nov 29 04:16:03 np0005539563 neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad[420947]: [WARNING]  (420951) : All workers exited. Exiting... (0)
Nov 29 04:16:03 np0005539563 systemd[1]: libpod-d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1.scope: Deactivated successfully.
Nov 29 04:16:03 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 278 KiB/s rd, 14 KiB/s wr, 43 op/s
Nov 29 04:16:03 np0005539563 podman[424062]: 2025-11-29 09:16:03.985197369 +0000 UTC m=+0.053624626 container died d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:16:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1-userdata-shm.mount: Deactivated successfully.
Nov 29 04:16:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-02fc8d1efcf00702b21b30b97fabe5a9559a025fde4045848a8ace0611522c9a-merged.mount: Deactivated successfully.
Nov 29 04:16:04 np0005539563 podman[424062]: 2025-11-29 09:16:04.035070632 +0000 UTC m=+0.103497889 container cleanup d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 04:16:04 np0005539563 systemd[1]: libpod-conmon-d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1.scope: Deactivated successfully.
Nov 29 04:16:04 np0005539563 podman[424115]: 2025-11-29 09:16:04.100749014 +0000 UTC m=+0.042783083 container remove d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.107 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[9c30280c-fa42-4a11-a2b8-b8538ba60b13]: (4, ('Sat Nov 29 09:16:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1)\nd1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1\nSat Nov 29 09:16:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad (d1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1)\nd1c8bd9d735f47109ec2ac3e08af11cf212ac215692061fcdb4505768b57e2d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.109 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[2051f425-8f22-4f65-83c2-b58aa08b745c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.110 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8aaf4606-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:16:04 np0005539563 kernel: tap8aaf4606-90: left promiscuous mode
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.113 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.128 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.131 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[7800bfe2-cf4b-4a94-ae72-ad6d2a5c144c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.147 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f9201a-8f50-4fc2-9cc5-4a9860a978ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.148 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[455d0124-bc8a-4be9-a715-3d58c9117ef4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.165 261364 DEBUG oslo.privsep.daemon [-] privsep: reply[a77b6c17-1659-4f5b-91f4-a724d132be72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1062852, 'reachable_time': 40942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424130, 'error': None, 'target': 'ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 systemd[1]: run-netns-ovnmeta\x2d8aaf4606\x2d9df9\x2d4ad5\x2d9ade\x2df48fdc6cfaad.mount: Deactivated successfully.
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.170 159106 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.170 159106 DEBUG oslo.privsep.daemon [-] privsep: reply[0587e065-3bbc-43ec-b98f-c2cd368d802f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.225 252257 INFO nova.virt.libvirt.driver [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Deleting instance files /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef_del#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.226 252257 INFO nova.virt.libvirt.driver [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Deletion of /var/lib/nova/instances/ab760f9d-43e3-4bec-9987-df02dc30b9ef_del complete#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.405 252257 DEBUG nova.compute.manager [req-ff78b6ee-22ad-4f6a-88ce-0684f8fe1d1a req-cdc8cb77-0961-4849-b938-6296573f994e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-vif-unplugged-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.406 252257 DEBUG oslo_concurrency.lockutils [req-ff78b6ee-22ad-4f6a-88ce-0684f8fe1d1a req-cdc8cb77-0961-4849-b938-6296573f994e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.406 252257 DEBUG oslo_concurrency.lockutils [req-ff78b6ee-22ad-4f6a-88ce-0684f8fe1d1a req-cdc8cb77-0961-4849-b938-6296573f994e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.406 252257 DEBUG oslo_concurrency.lockutils [req-ff78b6ee-22ad-4f6a-88ce-0684f8fe1d1a req-cdc8cb77-0961-4849-b938-6296573f994e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.407 252257 DEBUG nova.compute.manager [req-ff78b6ee-22ad-4f6a-88ce-0684f8fe1d1a req-cdc8cb77-0961-4849-b938-6296573f994e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] No waiting events found dispatching network-vif-unplugged-680dff60-ed18-4961-84dd-043dd06abd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.407 252257 DEBUG nova.compute.manager [req-ff78b6ee-22ad-4f6a-88ce-0684f8fe1d1a req-cdc8cb77-0961-4849-b938-6296573f994e 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-vif-unplugged-680dff60-ed18-4961-84dd-043dd06abd06 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.415 252257 INFO nova.compute.manager [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.415 252257 DEBUG oslo.service.loopingcall [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.416 252257 DEBUG nova.compute.manager [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 04:16:04 np0005539563 nova_compute[252253]: 2025-11-29 09:16:04.416 252257 DEBUG nova.network.neutron [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 04:16:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:04.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.992 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.993 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:16:04.993 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.337 252257 DEBUG nova.network.neutron [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.355 252257 INFO nova.compute.manager [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Took 0.94 seconds to deallocate network for instance.#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.469 252257 DEBUG nova.network.neutron [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updated VIF entry in instance network info cache for port 680dff60-ed18-4961-84dd-043dd06abd06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.470 252257 DEBUG nova.network.neutron [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [{"id": "680dff60-ed18-4961-84dd-043dd06abd06", "address": "fa:16:3e:7b:67:2a", "network": {"id": "8aaf4606-9df9-4ad5-9ade-f48fdc6cfaad", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1879328059-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "51af0a2ee11a460ab825a484e5c6f4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap680dff60-ed", "ovs_interfaceid": "680dff60-ed18-4961-84dd-043dd06abd06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.487 252257 DEBUG oslo_concurrency.lockutils [req-20db89a6-c5f7-4d96-ac71-da65f828b4d1 req-190e8f78-4e29-4dad-80cc-d67712981c2d 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Releasing lock "refresh_cache-ab760f9d-43e3-4bec-9987-df02dc30b9ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 04:16:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:05.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.523 252257 INFO nova.compute.manager [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.556 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.557 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.601 252257 DEBUG oslo_concurrency.processutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.666 252257 DEBUG nova.compute.manager [req-1df54e2f-dad2-4918-b8d6-3b045345706d req-60f5cad7-512c-48eb-9f22-5a5c5bb2a2ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-vif-deleted-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.667 252257 INFO nova.compute.manager [req-1df54e2f-dad2-4918-b8d6-3b045345706d req-60f5cad7-512c-48eb-9f22-5a5c5bb2a2ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Neutron deleted interface 680dff60-ed18-4961-84dd-043dd06abd06; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.667 252257 DEBUG nova.network.neutron [req-1df54e2f-dad2-4918-b8d6-3b045345706d req-60f5cad7-512c-48eb-9f22-5a5c5bb2a2ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 04:16:05 np0005539563 nova_compute[252253]: 2025-11-29 09:16:05.695 252257 DEBUG nova.compute.manager [req-1df54e2f-dad2-4918-b8d6-3b045345706d req-60f5cad7-512c-48eb-9f22-5a5c5bb2a2ff 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Detach interface failed, port_id=680dff60-ed18-4961-84dd-043dd06abd06, reason: Instance ab760f9d-43e3-4bec-9987-df02dc30b9ef could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 04:16:05 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 302 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Nov 29 04:16:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:16:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278760975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.025 252257 DEBUG oslo_concurrency.processutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.032 252257 DEBUG nova.compute.provider_tree [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.054 252257 DEBUG nova.scheduler.client.report [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.106 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.145 252257 INFO nova.scheduler.client.report [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Deleted allocations for instance ab760f9d-43e3-4bec-9987-df02dc30b9ef#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.200 252257 DEBUG oslo_concurrency.lockutils [None req-ef0b6bbd-136c-4a8e-b53c-9f6b5af5ef69 5ff561a95dc44b9fb9f7fd8fee80f589 51af0a2ee11a460ab825a484e5c6f4a3 - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.337 252257 DEBUG nova.compute.manager [req-05a23602-02d9-4a8e-bf78-21f6355e9f8d req-a857df86-7c47-4b48-b773-e7ba8809a707 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.337 252257 DEBUG oslo_concurrency.lockutils [req-05a23602-02d9-4a8e-bf78-21f6355e9f8d req-a857df86-7c47-4b48-b773-e7ba8809a707 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Acquiring lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.338 252257 DEBUG oslo_concurrency.lockutils [req-05a23602-02d9-4a8e-bf78-21f6355e9f8d req-a857df86-7c47-4b48-b773-e7ba8809a707 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.338 252257 DEBUG oslo_concurrency.lockutils [req-05a23602-02d9-4a8e-bf78-21f6355e9f8d req-a857df86-7c47-4b48-b773-e7ba8809a707 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] Lock "ab760f9d-43e3-4bec-9987-df02dc30b9ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.338 252257 DEBUG nova.compute.manager [req-05a23602-02d9-4a8e-bf78-21f6355e9f8d req-a857df86-7c47-4b48-b773-e7ba8809a707 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] No waiting events found dispatching network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 04:16:06 np0005539563 nova_compute[252253]: 2025-11-29 09:16:06.338 252257 WARNING nova.compute.manager [req-05a23602-02d9-4a8e-bf78-21f6355e9f8d req-a857df86-7c47-4b48-b773-e7ba8809a707 5a8f6e09d9d9417f99023c9d150e8a2f ad029182ad604de9a9f142e9cb3c3eec - - default default] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Received unexpected event network-vif-plugged-680dff60-ed18-4961-84dd-043dd06abd06 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 04:16:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:06.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Nov 29 04:16:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Nov 29 04:16:07 np0005539563 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Nov 29 04:16:07 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 151 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Nov 29 04:16:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:08 np0005539563 nova_compute[252253]: 2025-11-29 09:16:08.674 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:08 np0005539563 nova_compute[252253]: 2025-11-29 09:16:08.958 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:09 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Nov 29 04:16:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:16:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:16:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:10.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:10 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d1ea0d08-4da5-4598-852b-a0e36f65ba85 does not exist
Nov 29 04:16:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 667cf4dc-cd66-40a7-8608-9a745d2b9e72 does not exist
Nov 29 04:16:11 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 9ef41e2f-3016-4707-b311-5410801c1b78 does not exist
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:16:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:11.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.734299454 +0000 UTC m=+0.037664623 container create b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:11 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:16:11 np0005539563 systemd[1]: Started libpod-conmon-b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82.scope.
Nov 29 04:16:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.809263517 +0000 UTC m=+0.112628666 container init b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.716752568 +0000 UTC m=+0.020117717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.816498044 +0000 UTC m=+0.119863173 container start b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.820302777 +0000 UTC m=+0.123667926 container attach b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:16:11 np0005539563 recursing_archimedes[424615]: 167 167
Nov 29 04:16:11 np0005539563 systemd[1]: libpod-b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82.scope: Deactivated successfully.
Nov 29 04:16:11 np0005539563 conmon[424615]: conmon b77d17c1d95ebbe3d63a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82.scope/container/memory.events
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.823722469 +0000 UTC m=+0.127087598 container died b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:16:11 np0005539563 systemd[1]: var-lib-containers-storage-overlay-47ee13e8258feb491d04d16e80deefc63f75f627ae1c2593616a2efdb905ac85-merged.mount: Deactivated successfully.
Nov 29 04:16:11 np0005539563 podman[424598]: 2025-11-29 09:16:11.85950138 +0000 UTC m=+0.162866509 container remove b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:16:11 np0005539563 systemd[1]: libpod-conmon-b77d17c1d95ebbe3d63a21ec0d7fb22bb2ea26af5c3c461174a70502d9ddee82.scope: Deactivated successfully.
Nov 29 04:16:11 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 149 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 60 op/s
Nov 29 04:16:12 np0005539563 podman[424639]: 2025-11-29 09:16:11.999859287 +0000 UTC m=+0.035082802 container create a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sanderson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 04:16:12 np0005539563 systemd[1]: Started libpod-conmon-a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759.scope.
Nov 29 04:16:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:16:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc0b9e952ff3cb211422055e6b1e7b7df1714d6dda7103c3668e86b0ee8f429/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc0b9e952ff3cb211422055e6b1e7b7df1714d6dda7103c3668e86b0ee8f429/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc0b9e952ff3cb211422055e6b1e7b7df1714d6dda7103c3668e86b0ee8f429/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc0b9e952ff3cb211422055e6b1e7b7df1714d6dda7103c3668e86b0ee8f429/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:12 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc0b9e952ff3cb211422055e6b1e7b7df1714d6dda7103c3668e86b0ee8f429/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:12 np0005539563 podman[424639]: 2025-11-29 09:16:12.069023453 +0000 UTC m=+0.104246998 container init a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:16:12 np0005539563 podman[424639]: 2025-11-29 09:16:12.07737489 +0000 UTC m=+0.112598405 container start a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:16:12 np0005539563 podman[424639]: 2025-11-29 09:16:12.080305759 +0000 UTC m=+0.115529314 container attach a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sanderson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:16:12 np0005539563 podman[424639]: 2025-11-29 09:16:11.984978554 +0000 UTC m=+0.020202079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:16:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:12.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:12 np0005539563 sleepy_sanderson[424655]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:16:12 np0005539563 sleepy_sanderson[424655]: --> relative data size: 1.0
Nov 29 04:16:12 np0005539563 sleepy_sanderson[424655]: --> All data devices are unavailable
Nov 29 04:16:12 np0005539563 systemd[1]: libpod-a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759.scope: Deactivated successfully.
Nov 29 04:16:12 np0005539563 podman[424670]: 2025-11-29 09:16:12.896527309 +0000 UTC m=+0.030394894 container died a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:16:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cbc0b9e952ff3cb211422055e6b1e7b7df1714d6dda7103c3668e86b0ee8f429-merged.mount: Deactivated successfully.
Nov 29 04:16:12 np0005539563 podman[424670]: 2025-11-29 09:16:12.947815051 +0000 UTC m=+0.081682636 container remove a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 04:16:12 np0005539563 systemd[1]: libpod-conmon-a12852c791e5813f5e99fa62175240b4e591df06fc18440b4125814294021759.scope: Deactivated successfully.
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:16:13
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root']
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:16:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:13.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.553619983 +0000 UTC m=+0.037655692 container create e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 04:16:13 np0005539563 systemd[1]: Started libpod-conmon-e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd.scope.
Nov 29 04:16:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.623283413 +0000 UTC m=+0.107319142 container init e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_archimedes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.629361898 +0000 UTC m=+0.113397607 container start e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_archimedes, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.632130423 +0000 UTC m=+0.116166132 container attach e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.537439344 +0000 UTC m=+0.021475073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:16:13 np0005539563 gallant_archimedes[424845]: 167 167
Nov 29 04:16:13 np0005539563 systemd[1]: libpod-e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd.scope: Deactivated successfully.
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.634340303 +0000 UTC m=+0.118376032 container died e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:16:13 np0005539563 systemd[1]: var-lib-containers-storage-overlay-fb42bd59bd9c28ac25e60df2d17d54f63816a22dc656a1aacbbfc460547195fa-merged.mount: Deactivated successfully.
Nov 29 04:16:13 np0005539563 podman[424829]: 2025-11-29 09:16:13.667164793 +0000 UTC m=+0.151200502 container remove e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:16:13 np0005539563 nova_compute[252253]: 2025-11-29 09:16:13.675 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:13 np0005539563 systemd[1]: libpod-conmon-e02421dd182a7c467e4a4ab236feb7f60d3eaa8409260441d7f0c6dbb67724cd.scope: Deactivated successfully.
Nov 29 04:16:13 np0005539563 podman[424868]: 2025-11-29 09:16:13.827961245 +0000 UTC m=+0.035247897 container create 2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_nobel, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:16:13 np0005539563 systemd[1]: Started libpod-conmon-2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486.scope.
Nov 29 04:16:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:16:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93da6b72c3f03dd43f515ae7de1f1699df8bb7bd2a5eaf874da3280c4d04802e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93da6b72c3f03dd43f515ae7de1f1699df8bb7bd2a5eaf874da3280c4d04802e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93da6b72c3f03dd43f515ae7de1f1699df8bb7bd2a5eaf874da3280c4d04802e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93da6b72c3f03dd43f515ae7de1f1699df8bb7bd2a5eaf874da3280c4d04802e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:13 np0005539563 podman[424868]: 2025-11-29 09:16:13.812623669 +0000 UTC m=+0.019910351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:16:13 np0005539563 podman[424868]: 2025-11-29 09:16:13.91628283 +0000 UTC m=+0.123569502 container init 2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 04:16:13 np0005539563 podman[424868]: 2025-11-29 09:16:13.92437082 +0000 UTC m=+0.131657472 container start 2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 04:16:13 np0005539563 podman[424868]: 2025-11-29 09:16:13.927357751 +0000 UTC m=+0.134644403 container attach 2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:16:13 np0005539563 nova_compute[252253]: 2025-11-29 09:16:13.961 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:13 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 KiB/s wr, 53 op/s
Nov 29 04:16:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:14.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]: {
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:    "0": [
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:        {
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "devices": [
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "/dev/loop3"
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            ],
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "lv_name": "ceph_lv0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "lv_size": "7511998464",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "name": "ceph_lv0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "tags": {
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.cluster_name": "ceph",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.crush_device_class": "",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.encrypted": "0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.osd_id": "0",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.type": "block",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:                "ceph.vdo": "0"
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            },
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "type": "block",
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:            "vg_name": "ceph_vg0"
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:        }
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]:    ]
Nov 29 04:16:14 np0005539563 romantic_nobel[424885]: }
Nov 29 04:16:14 np0005539563 systemd[1]: libpod-2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486.scope: Deactivated successfully.
Nov 29 04:16:14 np0005539563 podman[424868]: 2025-11-29 09:16:14.688485327 +0000 UTC m=+0.895772009 container died 2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:16:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-93da6b72c3f03dd43f515ae7de1f1699df8bb7bd2a5eaf874da3280c4d04802e-merged.mount: Deactivated successfully.
Nov 29 04:16:14 np0005539563 podman[424868]: 2025-11-29 09:16:14.745545144 +0000 UTC m=+0.952831796 container remove 2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_nobel, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 04:16:14 np0005539563 systemd[1]: libpod-conmon-2c00ae5e7be310b2210b46afb4f4b794c0f9f6b295e7f5f29ef8cda1775a3486.scope: Deactivated successfully.
Nov 29 04:16:15 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:16:15 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:16:15 np0005539563 nova_compute[252253]: 2025-11-29 09:16:15.233 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.33524034 +0000 UTC m=+0.038913927 container create 0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 04:16:15 np0005539563 nova_compute[252253]: 2025-11-29 09:16:15.386 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:15 np0005539563 systemd[1]: Started libpod-conmon-0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d.scope.
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.320135031 +0000 UTC m=+0.023808648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:16:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.436311432 +0000 UTC m=+0.139985039 container init 0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.442356115 +0000 UTC m=+0.146029702 container start 0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.446061016 +0000 UTC m=+0.149734623 container attach 0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 04:16:15 np0005539563 eager_booth[425068]: 167 167
Nov 29 04:16:15 np0005539563 systemd[1]: libpod-0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d.scope: Deactivated successfully.
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.447801873 +0000 UTC m=+0.151475470 container died 0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 04:16:15 np0005539563 systemd[1]: var-lib-containers-storage-overlay-1c42992015fa8d684d315ea1f1d3669d245c6569d79fc721a428aa6c9638b972-merged.mount: Deactivated successfully.
Nov 29 04:16:15 np0005539563 podman[425051]: 2025-11-29 09:16:15.483612705 +0000 UTC m=+0.187286292 container remove 0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:16:15 np0005539563 systemd[1]: libpod-conmon-0e02a65de580a4545584d173117f44a51c7c52939fee4123fbf672bcc2836e2d.scope: Deactivated successfully.
Nov 29 04:16:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:15 np0005539563 podman[425092]: 2025-11-29 09:16:15.638495476 +0000 UTC m=+0.034691012 container create 32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keller, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:16:15 np0005539563 systemd[1]: Started libpod-conmon-32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea.scope.
Nov 29 04:16:15 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:16:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814cb95ab6a19bff3a5980791b0221cb4189eba822c8fd180dc143bbee5931a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814cb95ab6a19bff3a5980791b0221cb4189eba822c8fd180dc143bbee5931a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814cb95ab6a19bff3a5980791b0221cb4189eba822c8fd180dc143bbee5931a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:15 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6814cb95ab6a19bff3a5980791b0221cb4189eba822c8fd180dc143bbee5931a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:16:15 np0005539563 podman[425092]: 2025-11-29 09:16:15.716713278 +0000 UTC m=+0.112908894 container init 32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:16:15 np0005539563 podman[425092]: 2025-11-29 09:16:15.623368486 +0000 UTC m=+0.019564042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:16:15 np0005539563 podman[425092]: 2025-11-29 09:16:15.723818241 +0000 UTC m=+0.120013767 container start 32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keller, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:16:15 np0005539563 podman[425092]: 2025-11-29 09:16:15.726223705 +0000 UTC m=+0.122419251 container attach 32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:16:15 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 716 B/s wr, 22 op/s
Nov 29 04:16:16 np0005539563 nice_keller[425108]: {
Nov 29 04:16:16 np0005539563 nice_keller[425108]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:16:16 np0005539563 nice_keller[425108]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:16:16 np0005539563 nice_keller[425108]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:16:16 np0005539563 nice_keller[425108]:        "osd_id": 0,
Nov 29 04:16:16 np0005539563 nice_keller[425108]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:16:16 np0005539563 nice_keller[425108]:        "type": "bluestore"
Nov 29 04:16:16 np0005539563 nice_keller[425108]:    }
Nov 29 04:16:16 np0005539563 nice_keller[425108]: }
Nov 29 04:16:16 np0005539563 systemd[1]: libpod-32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea.scope: Deactivated successfully.
Nov 29 04:16:16 np0005539563 podman[425092]: 2025-11-29 09:16:16.638335037 +0000 UTC m=+1.034530603 container died 32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:16:16 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6814cb95ab6a19bff3a5980791b0221cb4189eba822c8fd180dc143bbee5931a-merged.mount: Deactivated successfully.
Nov 29 04:16:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:16 np0005539563 podman[425092]: 2025-11-29 09:16:16.693643057 +0000 UTC m=+1.089838593 container remove 32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:16:16 np0005539563 systemd[1]: libpod-conmon-32be339a7226c3af6ec8accebb18c8e4710142d6d55189e3d564ef61650c08ea.scope: Deactivated successfully.
Nov 29 04:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:16 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:16:16 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b079d309-4207-4d84-a63c-cbf256c4f474 does not exist
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0461a379-cec1-4e43-a054-510c7ed298b0 does not exist
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev cb0be618-3f8b-4217-a107-2ac659096295 does not exist
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:16:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:16:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:17 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:16:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:17 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 696 B/s wr, 21 op/s
Nov 29 04:16:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:18.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:18 np0005539563 nova_compute[252253]: 2025-11-29 09:16:18.678 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:18 np0005539563 nova_compute[252253]: 2025-11-29 09:16:18.934 252257 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764407763.9324684, ab760f9d-43e3-4bec-9987-df02dc30b9ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 04:16:18 np0005539563 nova_compute[252253]: 2025-11-29 09:16:18.934 252257 INFO nova.compute.manager [-] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] VM Stopped (Lifecycle Event)#033[00m
Nov 29 04:16:18 np0005539563 nova_compute[252253]: 2025-11-29 09:16:18.966 252257 DEBUG nova.compute.manager [None req-a15d12da-477f-4a9a-8068-46385b584079 - - - - - -] [instance: ab760f9d-43e3-4bec-9987-df02dc30b9ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 04:16:18 np0005539563 nova_compute[252253]: 2025-11-29 09:16:18.967 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:19.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:19 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 597 B/s wr, 18 op/s
Nov 29 04:16:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:20.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:21.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:21 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 14 op/s
Nov 29 04:16:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:22.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:23 np0005539563 nova_compute[252253]: 2025-11-29 09:16:23.679 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:23 np0005539563 nova_compute[252253]: 2025-11-29 09:16:23.969 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:23 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4246: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:16:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:16:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:24.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:25.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:25 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:26 np0005539563 nova_compute[252253]: 2025-11-29 09:16:26.146 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:26.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:27 np0005539563 podman[425199]: 2025-11-29 09:16:27.499202188 +0000 UTC m=+0.054888800 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 04:16:27 np0005539563 podman[425200]: 2025-11-29 09:16:27.504758989 +0000 UTC m=+0.058769716 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:16:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:27 np0005539563 podman[425201]: 2025-11-29 09:16:27.564255522 +0000 UTC m=+0.113986622 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 04:16:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:27 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:28 np0005539563 nova_compute[252253]: 2025-11-29 09:16:28.681 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:28 np0005539563 nova_compute[252253]: 2025-11-29 09:16:28.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:29.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:29 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:30.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:31 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:32.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:16:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:33.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:16:33 np0005539563 nova_compute[252253]: 2025-11-29 09:16:33.683 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:33 np0005539563 nova_compute[252253]: 2025-11-29 09:16:33.973 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:33 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:35 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4252: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:37.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:37 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:38 np0005539563 nova_compute[252253]: 2025-11-29 09:16:38.672 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:38 np0005539563 nova_compute[252253]: 2025-11-29 09:16:38.687 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:38 np0005539563 nova_compute[252253]: 2025-11-29 09:16:38.976 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:39.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:39 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:41.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:41 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:16:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:43.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:43 np0005539563 nova_compute[252253]: 2025-11-29 09:16:43.688 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:43 np0005539563 nova_compute[252253]: 2025-11-29 09:16:43.978 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:43 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:45 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:47 np0005539563 nova_compute[252253]: 2025-11-29 09:16:47.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:47 np0005539563 nova_compute[252253]: 2025-11-29 09:16:47.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:16:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:48 np0005539563 nova_compute[252253]: 2025-11-29 09:16:48.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:48 np0005539563 nova_compute[252253]: 2025-11-29 09:16:48.690 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:48.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:48 np0005539563 nova_compute[252253]: 2025-11-29 09:16:48.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:49.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:50 np0005539563 nova_compute[252253]: 2025-11-29 09:16:50.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:50 np0005539563 nova_compute[252253]: 2025-11-29 09:16:50.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:51.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:53.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:53 np0005539563 nova_compute[252253]: 2025-11-29 09:16:53.692 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:53 np0005539563 nova_compute[252253]: 2025-11-29 09:16:53.982 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:54 np0005539563 ovn_controller[148841]: 2025-11-29T09:16:54Z|00970|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Nov 29 04:16:54 np0005539563 nova_compute[252253]: 2025-11-29 09:16:54.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:54.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:56 np0005539563 nova_compute[252253]: 2025-11-29 09:16:56.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:16:56 np0005539563 nova_compute[252253]: 2025-11-29 09:16:56.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:16:56 np0005539563 nova_compute[252253]: 2025-11-29 09:16:56.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:16:56 np0005539563 nova_compute[252253]: 2025-11-29 09:16:56.699 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:16:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:56.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:16:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:16:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:16:58 np0005539563 podman[425376]: 2025-11-29 09:16:58.496576168 +0000 UTC m=+0.050654165 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 04:16:58 np0005539563 podman[425377]: 2025-11-29 09:16:58.502398796 +0000 UTC m=+0.053809760 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:16:58 np0005539563 podman[425378]: 2025-11-29 09:16:58.527189029 +0000 UTC m=+0.074071810 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 04:16:58 np0005539563 nova_compute[252253]: 2025-11-29 09:16:58.694 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:16:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:16:58.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:16:58 np0005539563 nova_compute[252253]: 2025-11-29 09:16:58.984 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:16:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:16:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:16:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:16:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:00 np0005539563 nova_compute[252253]: 2025-11-29 09:17:00.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:00.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:02.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.696 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.718 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.719 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.719 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.719 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.719 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:17:03 np0005539563 nova_compute[252253]: 2025-11-29 09:17:03.986 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:17:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176026491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.145 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.306 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.308 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4099MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.309 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.309 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:17:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:04.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.757 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.757 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:17:04 np0005539563 nova_compute[252253]: 2025-11-29 09:17:04.788 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:17:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:17:04.993 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:17:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:17:04.994 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:17:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:17:04.994 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:17:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:17:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643408493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:17:05 np0005539563 nova_compute[252253]: 2025-11-29 09:17:05.217 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:17:05 np0005539563 nova_compute[252253]: 2025-11-29 09:17:05.223 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:17:05 np0005539563 nova_compute[252253]: 2025-11-29 09:17:05.253 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:17:05 np0005539563 nova_compute[252253]: 2025-11-29 09:17:05.280 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:17:05 np0005539563 nova_compute[252253]: 2025-11-29 09:17:05.281 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:17:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.707946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827707982, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 1535, "num_deletes": 252, "total_data_size": 2624466, "memory_usage": 2676712, "flush_reason": "Manual Compaction"}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827719331, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 1604665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86249, "largest_seqno": 87783, "table_properties": {"data_size": 1599186, "index_size": 2682, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14502, "raw_average_key_size": 21, "raw_value_size": 1587073, "raw_average_value_size": 2310, "num_data_blocks": 119, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407692, "oldest_key_time": 1764407692, "file_creation_time": 1764407827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 11413 microseconds, and 4384 cpu microseconds.
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.719362) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 1604665 bytes OK
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.719380) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.720684) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.720696) EVENT_LOG_v1 {"time_micros": 1764407827720692, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.720713) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 2617933, prev total WAL file size 2617933, number of live WAL files 2.
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.721571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323638' seq:72057594037927935, type:22 .. '6D6772737461740033353230' seq:0, type:0; will stop at (end)
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(1567KB)], [197(13MB)]
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827721614, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 15700664, "oldest_snapshot_seqno": -1}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11765 keys, 12760412 bytes, temperature: kUnknown
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827886464, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 12760412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12688481, "index_size": 41418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29445, "raw_key_size": 311462, "raw_average_key_size": 26, "raw_value_size": 12486779, "raw_average_value_size": 1061, "num_data_blocks": 1562, "num_entries": 11765, "num_filter_entries": 11765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.886845) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12760412 bytes
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.890924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.2 rd, 77.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(17.7) write-amplify(8.0) OK, records in: 12228, records dropped: 463 output_compression: NoCompression
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.890947) EVENT_LOG_v1 {"time_micros": 1764407827890936, "job": 124, "event": "compaction_finished", "compaction_time_micros": 164971, "compaction_time_cpu_micros": 31081, "output_level": 6, "num_output_files": 1, "total_output_size": 12760412, "num_input_records": 12228, "num_output_records": 11765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827891390, "job": 124, "event": "table_file_deletion", "file_number": 199}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407827896225, "job": 124, "event": "table_file_deletion", "file_number": 197}
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.721424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.896330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.896335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.896337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.896338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:07 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:17:07.896340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:17:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:08 np0005539563 nova_compute[252253]: 2025-11-29 09:17:08.699 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:08 np0005539563 nova_compute[252253]: 2025-11-29 09:17:08.989 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:17:13
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'images', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:17:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:17:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:13.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:13 np0005539563 nova_compute[252253]: 2025-11-29 09:17:13.700 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:13 np0005539563 nova_compute[252253]: 2025-11-29 09:17:13.991 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:17:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:17:17 np0005539563 systemd-logind[785]: New session 70 of user zuul.
Nov 29 04:17:17 np0005539563 systemd[1]: Started Session 70 of User zuul.
Nov 29 04:17:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2a1344f3-da65-43e8-ad09-b0a152f4401d does not exist
Nov 29 04:17:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d4d3d9ec-0260-4b3d-8dee-76b2dc514243 does not exist
Nov 29 04:17:18 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 13efaefb-becc-4e89-903e-6f51fd4d487b does not exist
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:18 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:17:18 np0005539563 nova_compute[252253]: 2025-11-29 09:17:18.702 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:18 np0005539563 podman[425898]: 2025-11-29 09:17:18.829020277 +0000 UTC m=+0.047809068 container create dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hugle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:17:18 np0005539563 systemd[1]: Started libpod-conmon-dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c.scope.
Nov 29 04:17:18 np0005539563 podman[425898]: 2025-11-29 09:17:18.80255293 +0000 UTC m=+0.021341741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:17:18 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:17:18 np0005539563 podman[425898]: 2025-11-29 09:17:18.933305396 +0000 UTC m=+0.152094207 container init dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 04:17:18 np0005539563 podman[425898]: 2025-11-29 09:17:18.943349449 +0000 UTC m=+0.162138250 container start dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 04:17:18 np0005539563 podman[425898]: 2025-11-29 09:17:18.947020728 +0000 UTC m=+0.165809539 container attach dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hugle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 04:17:18 np0005539563 gracious_hugle[425915]: 167 167
Nov 29 04:17:18 np0005539563 systemd[1]: libpod-dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c.scope: Deactivated successfully.
Nov 29 04:17:18 np0005539563 conmon[425915]: conmon dcb55293495253096557 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c.scope/container/memory.events
Nov 29 04:17:18 np0005539563 nova_compute[252253]: 2025-11-29 09:17:18.992 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:19 np0005539563 podman[425935]: 2025-11-29 09:17:19.001667191 +0000 UTC m=+0.029823311 container died dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hugle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 04:17:19 np0005539563 systemd[1]: var-lib-containers-storage-overlay-866cd6b365da52959d2ac1f11340fe4cc9ccca3e39dc3b3e1dd1a6f92a154979-merged.mount: Deactivated successfully.
Nov 29 04:17:19 np0005539563 podman[425935]: 2025-11-29 09:17:19.043534976 +0000 UTC m=+0.071691066 container remove dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:17:19 np0005539563 systemd[1]: libpod-conmon-dcb55293495253096557d1467783da453e548aa8584643ac22800f695723280c.scope: Deactivated successfully.
Nov 29 04:17:19 np0005539563 podman[425964]: 2025-11-29 09:17:19.220115496 +0000 UTC m=+0.044963301 container create 5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 04:17:19 np0005539563 systemd[1]: Started libpod-conmon-5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a.scope.
Nov 29 04:17:19 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:17:19 np0005539563 podman[425964]: 2025-11-29 09:17:19.200589186 +0000 UTC m=+0.025437021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:17:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3265a6546deaa8a6be53520f72840f79548b09f02e407351efddc2863c5d58db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3265a6546deaa8a6be53520f72840f79548b09f02e407351efddc2863c5d58db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3265a6546deaa8a6be53520f72840f79548b09f02e407351efddc2863c5d58db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3265a6546deaa8a6be53520f72840f79548b09f02e407351efddc2863c5d58db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:19 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3265a6546deaa8a6be53520f72840f79548b09f02e407351efddc2863c5d58db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:19 np0005539563 podman[425964]: 2025-11-29 09:17:19.312006458 +0000 UTC m=+0.136854293 container init 5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:17:19 np0005539563 podman[425964]: 2025-11-29 09:17:19.322487222 +0000 UTC m=+0.147335037 container start 5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:17:19 np0005539563 podman[425964]: 2025-11-29 09:17:19.325490654 +0000 UTC m=+0.150338469 container attach 5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 04:17:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:19.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:19 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43683 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:20 np0005539563 infallible_chebyshev[425985]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:17:20 np0005539563 infallible_chebyshev[425985]: --> relative data size: 1.0
Nov 29 04:17:20 np0005539563 infallible_chebyshev[425985]: --> All data devices are unavailable
Nov 29 04:17:20 np0005539563 systemd[1]: libpod-5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a.scope: Deactivated successfully.
Nov 29 04:17:20 np0005539563 podman[425964]: 2025-11-29 09:17:20.131113257 +0000 UTC m=+0.955961072 container died 5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:17:20 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43689 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:20 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50432 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:21 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50438 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:21.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:21 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47428 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:21 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:17:21 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523648440' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:17:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:22 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47437 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:22 np0005539563 systemd[1]: var-lib-containers-storage-overlay-3265a6546deaa8a6be53520f72840f79548b09f02e407351efddc2863c5d58db-merged.mount: Deactivated successfully.
Nov 29 04:17:22 np0005539563 podman[425964]: 2025-11-29 09:17:22.510666633 +0000 UTC m=+3.335514468 container remove 5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:17:22 np0005539563 systemd[1]: libpod-conmon-5ff5525ba33ae7a556a4346c5abda8668db818c1c16ecd036ea9e4640702f10a.scope: Deactivated successfully.
Nov 29 04:17:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.158867516 +0000 UTC m=+0.047465128 container create be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 04:17:23 np0005539563 systemd[1]: Started libpod-conmon-be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de.scope.
Nov 29 04:17:23 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.140464087 +0000 UTC m=+0.029061719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.313682425 +0000 UTC m=+0.202280067 container init be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.322533915 +0000 UTC m=+0.211131527 container start be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.327163781 +0000 UTC m=+0.215761393 container attach be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:17:23 np0005539563 clever_mayer[426319]: 167 167
Nov 29 04:17:23 np0005539563 systemd[1]: libpod-be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de.scope: Deactivated successfully.
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.332058753 +0000 UTC m=+0.220656375 container died be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:17:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:23.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:23 np0005539563 nova_compute[252253]: 2025-11-29 09:17:23.706 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:23 np0005539563 systemd[1]: var-lib-containers-storage-overlay-42ec4dd2779209785864b38ec363e6074194aa7c365490b45ed4ec21da6f763c-merged.mount: Deactivated successfully.
Nov 29 04:17:23 np0005539563 podman[426301]: 2025-11-29 09:17:23.887884041 +0000 UTC m=+0.776481653 container remove be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 04:17:23 np0005539563 nova_compute[252253]: 2025-11-29 09:17:23.997 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:24 np0005539563 systemd[1]: libpod-conmon-be02324607a4ae6aa45e02880f97b32fc44d1cc8101ad3b6cecd1b0b6c1a83de.scope: Deactivated successfully.
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:24 np0005539563 podman[426345]: 2025-11-29 09:17:24.051758395 +0000 UTC m=+0.043052788 container create 74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:17:24 np0005539563 systemd[1]: Started libpod-conmon-74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8.scope.
Nov 29 04:17:24 np0005539563 podman[426345]: 2025-11-29 09:17:24.033241814 +0000 UTC m=+0.024536227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:17:24 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:17:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a640744042898946106123ada7f525892ec5f1a93e3a78e0aecec0275f9d2bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a640744042898946106123ada7f525892ec5f1a93e3a78e0aecec0275f9d2bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a640744042898946106123ada7f525892ec5f1a93e3a78e0aecec0275f9d2bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:24 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a640744042898946106123ada7f525892ec5f1a93e3a78e0aecec0275f9d2bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:24 np0005539563 podman[426345]: 2025-11-29 09:17:24.169344395 +0000 UTC m=+0.160638808 container init 74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:17:24 np0005539563 podman[426345]: 2025-11-29 09:17:24.178538465 +0000 UTC m=+0.169832858 container start 74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:17:24 np0005539563 podman[426345]: 2025-11-29 09:17:24.18243214 +0000 UTC m=+0.173726613 container attach 74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:17:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:24.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]: {
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:    "0": [
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:        {
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "devices": [
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "/dev/loop3"
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            ],
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "lv_name": "ceph_lv0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "lv_size": "7511998464",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "name": "ceph_lv0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "tags": {
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.cluster_name": "ceph",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.crush_device_class": "",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.encrypted": "0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.osd_id": "0",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.type": "block",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:                "ceph.vdo": "0"
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            },
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "type": "block",
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:            "vg_name": "ceph_vg0"
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:        }
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]:    ]
Nov 29 04:17:24 np0005539563 happy_wescoff[426362]: }
Nov 29 04:17:25 np0005539563 systemd[1]: libpod-74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8.scope: Deactivated successfully.
Nov 29 04:17:25 np0005539563 podman[426345]: 2025-11-29 09:17:25.015678552 +0000 UTC m=+1.006972965 container died 74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:17:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6a640744042898946106123ada7f525892ec5f1a93e3a78e0aecec0275f9d2bd-merged.mount: Deactivated successfully.
Nov 29 04:17:25 np0005539563 podman[426345]: 2025-11-29 09:17:25.078463775 +0000 UTC m=+1.069758168 container remove 74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:17:25 np0005539563 systemd[1]: libpod-conmon-74c74b4db50fadc4bed46a19f56c92feeeb5f2c9dcdc4e02412f8d1d43836db8.scope: Deactivated successfully.
Nov 29 04:17:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:25.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:25 np0005539563 ovs-vsctl[426551]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.695335718 +0000 UTC m=+0.052199687 container create 778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:17:25 np0005539563 systemd[1]: Started libpod-conmon-778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d.scope.
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.66482924 +0000 UTC m=+0.021693239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:17:25 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.794768166 +0000 UTC m=+0.151632165 container init 778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.806421002 +0000 UTC m=+0.163284971 container start 778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.809033723 +0000 UTC m=+0.165897712 container attach 778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jones, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 04:17:25 np0005539563 amazing_jones[426582]: 167 167
Nov 29 04:17:25 np0005539563 systemd[1]: libpod-778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d.scope: Deactivated successfully.
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.814718047 +0000 UTC m=+0.171582016 container died 778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:17:25 np0005539563 systemd[1]: var-lib-containers-storage-overlay-faa3d76edb0ea21c803b4f0fd1adcd98874e54b7f2de88442e25c411e2ea451c-merged.mount: Deactivated successfully.
Nov 29 04:17:25 np0005539563 podman[426553]: 2025-11-29 09:17:25.852785829 +0000 UTC m=+0.209649798 container remove 778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:17:25 np0005539563 systemd[1]: libpod-conmon-778343ae1f2a0708a2b3696c7e498b06ee43d39d967a1d0ecc2d8946ccc2a32d.scope: Deactivated successfully.
Nov 29 04:17:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:26 np0005539563 podman[426632]: 2025-11-29 09:17:26.026162772 +0000 UTC m=+0.048763464 container create 97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:17:26 np0005539563 systemd[1]: Started libpod-conmon-97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a.scope.
Nov 29 04:17:26 np0005539563 podman[426632]: 2025-11-29 09:17:26.005325107 +0000 UTC m=+0.027925779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:17:26 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:17:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4542a635df57c6c936c600c368c0aaa0d95b4d07b163494676fbf12b092a612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4542a635df57c6c936c600c368c0aaa0d95b4d07b163494676fbf12b092a612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4542a635df57c6c936c600c368c0aaa0d95b4d07b163494676fbf12b092a612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:26 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4542a635df57c6c936c600c368c0aaa0d95b4d07b163494676fbf12b092a612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:17:26 np0005539563 podman[426632]: 2025-11-29 09:17:26.120886981 +0000 UTC m=+0.143487653 container init 97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:17:26 np0005539563 podman[426632]: 2025-11-29 09:17:26.128698943 +0000 UTC m=+0.151299595 container start 97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jemison, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:17:26 np0005539563 podman[426632]: 2025-11-29 09:17:26.131249353 +0000 UTC m=+0.153849995 container attach 97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 04:17:26 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:17:26 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:17:26 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:17:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:26.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]: {
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:        "osd_id": 0,
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:        "type": "bluestore"
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]:    }
Nov 29 04:17:27 np0005539563 jolly_jemison[426649]: }
Nov 29 04:17:27 np0005539563 systemd[1]: libpod-97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a.scope: Deactivated successfully.
Nov 29 04:17:27 np0005539563 podman[426632]: 2025-11-29 09:17:27.055261847 +0000 UTC m=+1.077862499 container died 97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jemison, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 04:17:27 np0005539563 systemd[1]: var-lib-containers-storage-overlay-a4542a635df57c6c936c600c368c0aaa0d95b4d07b163494676fbf12b092a612-merged.mount: Deactivated successfully.
Nov 29 04:17:27 np0005539563 podman[426632]: 2025-11-29 09:17:27.119204531 +0000 UTC m=+1.141805183 container remove 97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:17:27 np0005539563 systemd[1]: libpod-conmon-97c92071462330a1c0679e290e7169d6d278b5c7fc31caff46ac1fca030f831a.scope: Deactivated successfully.
Nov 29 04:17:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:17:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:17:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a443ecd8-7ad8-4688-bbcb-cc630561bedd does not exist
Nov 29 04:17:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev dd248b0e-ff14-4056-950a-7a8c9439158a does not exist
Nov 29 04:17:27 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 1aa82c97-5c6b-47fe-9fc5-a85afaefd3e7 does not exist
Nov 29 04:17:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:17:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:27 np0005539563 lvm[427011]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:17:27 np0005539563 lvm[427011]: VG ceph_vg0 finished
Nov 29 04:17:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:17:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:27 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50450 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:27 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43710 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50459 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491600817' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:17:28 np0005539563 nova_compute[252253]: 2025-11-29 09:17:28.276 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47449 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43728 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218446617' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47479 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:17:28 np0005539563 nova_compute[252253]: 2025-11-29 09:17:28.708 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:28.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:28 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:28.850+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:17:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264757028' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:17:28 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:29 np0005539563 nova_compute[252253]: 2025-11-29 09:17:29.002 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43782 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:29 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:29.080+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:29 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:29 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:17:29 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:29 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: ops {prefix=ops} (starting...)
Nov 29 04:17:29 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:29 np0005539563 podman[427332]: 2025-11-29 09:17:29.435715987 +0000 UTC m=+0.090283010 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 04:17:29 np0005539563 podman[427333]: 2025-11-29 09:17:29.44063397 +0000 UTC m=+0.096878249 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:17:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:29 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:29.447+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:29 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:29 np0005539563 podman[427334]: 2025-11-29 09:17:29.473492752 +0000 UTC m=+0.125785933 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 04:17:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:17:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941305374' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:17:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:17:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589653559' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:17:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43806 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50567 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:17:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709486746' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:17:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:30 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:17:30 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:17:30 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: status {prefix=status} (starting...)
Nov 29 04:17:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43824 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50579 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47545 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:17:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292861131' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:17:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:30.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:17:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:17:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:17:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114203625' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548445841' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470528491' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57399009' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:17:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:31.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50630 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:31 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:31.770+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:17:31 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:17:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43884 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:31 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:31.776+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:17:31 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:17:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589149424' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:17:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47605 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:32 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:17:32 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:32.048+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243181954' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390438926' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:17:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875275919' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:17:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50684 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47635 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43947 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:17:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862402459' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50699 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47647 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50720 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:17:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938565693' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:17:33 np0005539563 nova_compute[252253]: 2025-11-29 09:17:33.710 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.43995 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 83918848 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373555200 unmapped: 83910656 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59ee000/0x0/0x1bfc00000, data 0x2a8fe3e/0x2cb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373555200 unmapped: 83910656 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4178720 data_alloc: 218103808 data_used: 12288000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373555200 unmapped: 83910656 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59ee000/0x0/0x1bfc00000, data 0x2a8fe3e/0x2cb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be1b44800 session 0x561be0196b40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373555200 unmapped: 83910656 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be221dc00 session 0x561be2b5a3c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373555200 unmapped: 83910656 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be2e82000 session 0x561be01ea000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.543971062s of 10.676105499s, submitted: 37
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be344f800 session 0x561be0db9860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180600 data_alloc: 218103808 data_used: 12288000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59ca000/0x0/0x1bfc00000, data 0x2ab3e3e/0x2cd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59ca000/0x0/0x1bfc00000, data 0x2ab3e3e/0x2cd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4218360 data_alloc: 218103808 data_used: 17575936
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59ca000/0x0/0x1bfc00000, data 0x2ab3e3e/0x2cd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 83902464 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373571584 unmapped: 83894272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4218360 data_alloc: 218103808 data_used: 17575936
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 373579776 unmapped: 83886080 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.729844093s of 12.738764763s, submitted: 2
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375128064 unmapped: 82337792 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375136256 unmapped: 82329600 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a533e000/0x0/0x1bfc00000, data 0x3137e3e/0x3358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4278202 data_alloc: 218103808 data_used: 17883136
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a533c000/0x0/0x1bfc00000, data 0x3139e3e/0x335a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 375152640 unmapped: 82313216 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4278218 data_alloc: 218103808 data_used: 17883136
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be1b44800 session 0x561bdfccb0e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be221dc00 session 0x561be18214a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374480896 unmapped: 82984960 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374480896 unmapped: 82984960 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374480896 unmapped: 82984960 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a5344000/0x0/0x1bfc00000, data 0x3139e3e/0x335a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374480896 unmapped: 82984960 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374480896 unmapped: 82984960 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273106 data_alloc: 218103808 data_used: 17887232
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 82976768 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 82976768 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a5344000/0x0/0x1bfc00000, data 0x3139e3e/0x335a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 82976768 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374489088 unmapped: 82976768 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.839481354s of 18.039916992s, submitted: 71
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374497280 unmapped: 82968576 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273238 data_alloc: 218103808 data_used: 17887232
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374497280 unmapped: 82968576 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374497280 unmapped: 82968576 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a5344000/0x0/0x1bfc00000, data 0x3139e3e/0x335a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be32a2000 session 0x561be1b6eb40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be2ec6c00 session 0x561be35b8000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be221c400 session 0x561be01950e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be1b44800 session 0x561be2878f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374497280 unmapped: 82968576 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be221dc00 session 0x561be315a5a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be2ec6c00 session 0x561be3109860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374571008 unmapped: 82894848 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be24bbc00 session 0x561be35b81e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be2e82000 session 0x561be35b83c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be1b44800 session 0x561be2effa40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4188087 data_alloc: 218103808 data_used: 12288000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a5619000/0x0/0x1bfc00000, data 0x2ab9e3e/0x2cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a5619000/0x0/0x1bfc00000, data 0x2ab9e3e/0x2cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be221dc00 session 0x561be1edb0e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4188044 data_alloc: 218103808 data_used: 12288000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 82870272 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59c3000/0x0/0x1bfc00000, data 0x2ab9e61/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226284 data_alloc: 218103808 data_used: 17633280
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59c3000/0x0/0x1bfc00000, data 0x2ab9e61/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a59c3000/0x0/0x1bfc00000, data 0x2ab9e61/0x2cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226284 data_alloc: 218103808 data_used: 17633280
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 374611968 unmapped: 82853888 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.906864166s of 22.182964325s, submitted: 68
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a539c000/0x0/0x1bfc00000, data 0x30e0e61/0x3302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [0,1])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376725504 unmapped: 80740352 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376733696 unmapped: 80732160 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376954880 unmapped: 80510976 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376954880 unmapped: 80510976 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4289848 data_alloc: 218103808 data_used: 17879040
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376954880 unmapped: 80510976 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376963072 unmapped: 80502784 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4e6f000/0x0/0x1bfc00000, data 0x31fde61/0x341f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376963072 unmapped: 80502784 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376963072 unmapped: 80502784 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4287780 data_alloc: 218103808 data_used: 17883136
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376963072 unmapped: 80502784 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 376963072 unmapped: 80502784 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.867714882s of 10.099704742s, submitted: 75
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 377044992 unmapped: 80420864 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4e31000/0x0/0x1bfc00000, data 0x323ae61/0x345c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 378093568 unmapped: 79372288 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 378093568 unmapped: 79372288 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 ms_handle_reset con 0x561be24bbc00 session 0x561be01974a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290440 data_alloc: 218103808 data_used: 17891328
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 377044992 unmapped: 80420864 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 377044992 unmapped: 80420864 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 377044992 unmapped: 80420864 heap: 457465856 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 411 ms_handle_reset con 0x561be2e82000 session 0x561be2854b40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 411 ms_handle_reset con 0x561be2ec6c00 session 0x561be28e45a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 411 ms_handle_reset con 0x561be1b44800 session 0x561be090bc20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 73949184 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a492d000/0x0/0x1bfc00000, data 0x373daba/0x3960000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 412 ms_handle_reset con 0x561be221dc00 session 0x561be2830f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 412 ms_handle_reset con 0x561be24bbc00 session 0x561be1e7f4a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 412 ms_handle_reset con 0x561be32a2000 session 0x561be2effe00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387842048 unmapped: 73826304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be2e82000 session 0x561be01ea780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4448044 data_alloc: 234881024 data_used: 23351296
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387874816 unmapped: 73793536 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be1b44800 session 0x561be01f41e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381378560 unmapped: 80289792 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be221dc00 session 0x561be28763c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be24bbc00 session 0x561bdfccb680
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a415d000/0x0/0x1bfc00000, data 0x3f0843e/0x412e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381378560 unmapped: 80289792 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be32a2000 session 0x561be1e7f860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381386752 unmapped: 80281600 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be21a1800 session 0x561be0191c20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381386752 unmapped: 80281600 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be21a1800 session 0x561be358eb40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.427036285s of 12.761595726s, submitted: 85
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be1b44800 session 0x561be1820000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4275824 data_alloc: 218103808 data_used: 12300288
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 82018304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 ms_handle_reset con 0x561be221dc00 session 0x561be0de6b40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 82018304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 82018304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4e2d000/0x0/0x1bfc00000, data 0x32353c9/0x345a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 82018304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4e30000/0x0/0x1bfc00000, data 0x3236f08/0x345d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 82018304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281974 data_alloc: 218103808 data_used: 12832768
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379650048 unmapped: 82018304 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 82010112 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 82010112 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 82010112 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 82010112 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4e30000/0x0/0x1bfc00000, data 0x3236f08/0x345d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281974 data_alloc: 218103808 data_used: 12832768
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 82010112 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be85aa400 session 0x561be01eab40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be31da400 session 0x561be289e000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be1b44800 session 0x561bdfcf70e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be21a1800 session 0x561be1e5a3c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.444080353s of 11.659396172s, submitted: 39
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be85aa400 session 0x561be18201e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be221dc00 session 0x561be1eef2c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be0ad5800 session 0x561be2f06000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be1b44800 session 0x561be2830780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be21a1800 session 0x561be2f06f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379658240 unmapped: 82010112 heap: 461668352 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be221dc00 session 0x561be18210e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 ms_handle_reset con 0x561be85aa400 session 0x561be01970e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387670016 unmapped: 82108416 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388980736 unmapped: 80797696 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 384761856 unmapped: 85016576 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3f82000/0x0/0x1bfc00000, data 0x40e5f08/0x430c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3f82000/0x0/0x1bfc00000, data 0x40e5f08/0x430c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4430894 data_alloc: 234881024 data_used: 22597632
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 384761856 unmapped: 85016576 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380321792 unmapped: 89456640 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 415 ms_handle_reset con 0x561be462b400 session 0x561be090bc20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380321792 unmapped: 89456640 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380321792 unmapped: 89456640 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4350214 data_alloc: 218103808 data_used: 17235968
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a5aec000/0x0/0x1bfc00000, data 0x35babb5/0x37e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a5aec000/0x0/0x1bfc00000, data 0x35babb5/0x37e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4350214 data_alloc: 218103808 data_used: 17235968
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.794631958s of 14.172966957s, submitted: 109
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a5aea000/0x0/0x1bfc00000, data 0x35bcbb5/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380329984 unmapped: 89448448 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359692 data_alloc: 218103808 data_used: 17498112
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ae6000/0x0/0x1bfc00000, data 0x35be6f4/0x37e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ae6000/0x0/0x1bfc00000, data 0x35be6f4/0x37e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561bdf4912c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 89440256 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be21a1800 session 0x561be0de72c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be221dc00 session 0x561be1820f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be85aa400 session 0x561be2b5b860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ae5000/0x0/0x1bfc00000, data 0x35be704/0x37e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,2,4])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3446800 session 0x561be35b83c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561bdfccb680
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be21a1800 session 0x561be1e7f860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be221dc00 session 0x561be1bd6780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3446800 session 0x561be0a3e5a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420902 data_alloc: 218103808 data_used: 17498112
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380485632 unmapped: 89292800 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3343000 session 0x561be2854d20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380485632 unmapped: 89292800 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.311850548s of 11.430087090s, submitted: 30
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3343000 session 0x561be0a3e3c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 89980928 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 89980928 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 89980928 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddd000/0x0/0x1bfc00000, data 0x32c7704/0x34f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4309440 data_alloc: 218103808 data_used: 12705792
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 89980928 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be1bd7a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be21a1800 session 0x561be1bd6f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 89980928 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddd000/0x0/0x1bfc00000, data 0x32c7704/0x34f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379797504 unmapped: 89980928 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be221dc00 session 0x561be1e7e1e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3446800 session 0x561be27b5680
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddd000/0x0/0x1bfc00000, data 0x32c7704/0x34f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 379805696 unmapped: 89972736 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 378986496 unmapped: 90791936 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4356562 data_alloc: 218103808 data_used: 18677760
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380239872 unmapped: 89538560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddc000/0x0/0x1bfc00000, data 0x32c7727/0x34f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380239872 unmapped: 89538560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 380239872 unmapped: 89538560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddc000/0x0/0x1bfc00000, data 0x32c7727/0x34f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.744662285s of 10.805893898s, submitted: 20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be350f400 session 0x561be1e7e3c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be2f07c20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381747200 unmapped: 88031232 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381779968 unmapped: 87998464 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be344fc00 session 0x561be2854f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362484 data_alloc: 234881024 data_used: 22085632
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddc000/0x0/0x1bfc00000, data 0x32c7727/0x34f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381779968 unmapped: 87998464 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381779968 unmapped: 87998464 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddc000/0x0/0x1bfc00000, data 0x32c7727/0x34f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5ddc000/0x0/0x1bfc00000, data 0x32c7727/0x34f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be2770960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be2e82800 session 0x561be2855860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381788160 unmapped: 87990272 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be2e82800 session 0x561be2854d20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 87834624 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5961000/0x0/0x1bfc00000, data 0x3742727/0x396d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 87834624 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404585 data_alloc: 234881024 data_used: 22085632
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 87834624 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385622016 unmapped: 84156416 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5961000/0x0/0x1bfc00000, data 0x3742727/0x396d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385540096 unmapped: 84238336 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385540096 unmapped: 84238336 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4aae000/0x0/0x1bfc00000, data 0x45ec727/0x4817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385540096 unmapped: 84238336 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525351 data_alloc: 234881024 data_used: 22355968
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385540096 unmapped: 84238336 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385540096 unmapped: 84238336 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.466876030s of 14.956824303s, submitted: 168
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385556480 unmapped: 84221952 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4a93000/0x0/0x1bfc00000, data 0x4610727/0x483b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385556480 unmapped: 84221952 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be35b83c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be2b5b860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 385556480 unmapped: 84221952 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be27714a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be21a1800 session 0x561be01952c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be3108f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be2786d20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be2e82800 session 0x561be1bd72c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4314071 data_alloc: 218103808 data_used: 16113664
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 384786432 unmapped: 84992000 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 384868352 unmapped: 84910080 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a55b7000/0x0/0x1bfc00000, data 0x304f704/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386383872 unmapped: 83394560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386383872 unmapped: 83394560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386383872 unmapped: 83394560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a55b7000/0x0/0x1bfc00000, data 0x304f704/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4347511 data_alloc: 234881024 data_used: 20643840
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386383872 unmapped: 83394560 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be1eee780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be315b2c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be0191c20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a61d2000/0x0/0x1bfc00000, data 0x29f2692/0x2c1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4287006 data_alloc: 234881024 data_used: 19734528
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a61d2000/0x0/0x1bfc00000, data 0x29f2692/0x2c1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386400256 unmapped: 83378176 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.669616699s of 15.872285843s, submitted: 63
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388177920 unmapped: 81600512 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390914048 unmapped: 78864384 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4347766 data_alloc: 234881024 data_used: 20967424
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391208960 unmapped: 78569472 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391208960 unmapped: 78569472 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6098000/0x0/0x1bfc00000, data 0x2ff8692/0x3220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391208960 unmapped: 78569472 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391208960 unmapped: 78569472 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391208960 unmapped: 78569472 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4347782 data_alloc: 234881024 data_used: 20967424
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391208960 unmapped: 78569472 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6098000/0x0/0x1bfc00000, data 0x2ff8692/0x3220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 77135872 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 77135872 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 77135872 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be0de7a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be344fc00 session 0x561be358e3c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a60ae000/0x0/0x1bfc00000, data 0x2ff8692/0x3220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 77135872 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4340646 data_alloc: 234881024 data_used: 20963328
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 77135872 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 77135872 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 77127680 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 77127680 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a60ae000/0x0/0x1bfc00000, data 0x2ff8692/0x3220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.103209496s of 15.745775223s, submitted: 122
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be09acf00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 77119488 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a60af000/0x0/0x1bfc00000, data 0x2ff8682/0x321f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4334929 data_alloc: 234881024 data_used: 20963328
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 77119488 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 77119488 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a60af000/0x0/0x1bfc00000, data 0x2ff8682/0x321f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 77119488 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561bdfcf65a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be0fab860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be315be00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be1b6e780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 67526656 heap: 469778432 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be1e5ad20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be344fc00 session 0x561be2771a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be1e7e5a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be0fabc20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be01e5860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392781824 unmapped: 84344832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4411798 data_alloc: 234881024 data_used: 20967424
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392781824 unmapped: 84344832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392781824 unmapped: 84344832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56c0000/0x0/0x1bfc00000, data 0x39e6692/0x3c0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392781824 unmapped: 84344832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392781824 unmapped: 84344832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392781824 unmapped: 84344832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4411798 data_alloc: 234881024 data_used: 20967424
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392790016 unmapped: 84336640 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be2876000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392790016 unmapped: 84336640 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be2e82800 session 0x561be1edb680
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56c0000/0x0/0x1bfc00000, data 0x39e6692/0x3c0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392790016 unmapped: 84336640 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be1bd7a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be1e5b860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.942764282s of 14.179515839s, submitted: 23
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be1edad20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be1bd6d20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392945664 unmapped: 84180992 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be315b680
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be0db92c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392953856 unmapped: 84172800 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4432326 data_alloc: 234881024 data_used: 22822912
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393289728 unmapped: 83836928 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 82714624 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5699000/0x0/0x1bfc00000, data 0x3a0a6d5/0x3c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5699000/0x0/0x1bfc00000, data 0x3a0a6d5/0x3c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4486406 data_alloc: 234881024 data_used: 30388224
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5699000/0x0/0x1bfc00000, data 0x3a0a6d5/0x3c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4486406 data_alloc: 234881024 data_used: 30388224
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 82649088 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.509922981s of 13.589276314s, submitted: 8
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394485760 unmapped: 82640896 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5699000/0x0/0x1bfc00000, data 0x3a0a6d5/0x3c35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,2,1])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395329536 unmapped: 81797120 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395386880 unmapped: 81739776 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f9c000/0x0/0x1bfc00000, data 0x41066d5/0x4331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4566000 data_alloc: 234881024 data_used: 31338496
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f9c000/0x0/0x1bfc00000, data 0x41066d5/0x4331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f9c000/0x0/0x1bfc00000, data 0x41066d5/0x4331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4564834 data_alloc: 234881024 data_used: 31342592
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f96000/0x0/0x1bfc00000, data 0x410d6d5/0x4338000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 81715200 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.165871620s of 12.419549942s, submitted: 80
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395419648 unmapped: 81707008 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f95000/0x0/0x1bfc00000, data 0x410e6d5/0x4339000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4565062 data_alloc: 234881024 data_used: 31342592
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395419648 unmapped: 81707008 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395419648 unmapped: 81707008 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395419648 unmapped: 81707008 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395419648 unmapped: 81707008 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 81698816 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4564626 data_alloc: 234881024 data_used: 31338496
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 81698816 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f93000/0x0/0x1bfc00000, data 0x410f6d5/0x433a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 81698816 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395436032 unmapped: 81690624 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395436032 unmapped: 81690624 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395436032 unmapped: 81690624 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be0196d20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be0a3e960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4565994 data_alloc: 234881024 data_used: 31617024
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395436032 unmapped: 81690624 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.543982506s of 11.849514008s, submitted: 17
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be350f400 session 0x561be35b9860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be1b44800 session 0x561be28552c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be0fabe00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f92000/0x0/0x1bfc00000, data 0x41116d5/0x433c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395386880 unmapped: 81739776 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be2878000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395403264 unmapped: 81723392 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be1bd74a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231374 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 56K writes, 214K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
Cumulative WAL: 56K writes, 20K syncs, 2.72 writes per sync, written: 0.20 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4444 writes, 16K keys, 4444 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s
Interval WAL: 4444 writes, 1785 syncs, 2.49 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561bde6ad610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231374 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231374 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383541248 unmapped: 93585408 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231374 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231374 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383557632 unmapped: 93569024 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383565824 unmapped: 93560832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383565824 unmapped: 93560832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.602245331s of 29.721258163s, submitted: 49
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4257125 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be0a3e1e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be350f400 session 0x561be1eefc20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be0024b40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a6b0b000/0x0/0x1bfc00000, data 0x2577682/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be2b5a780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be0de72c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a64ff000/0x0/0x1bfc00000, data 0x2ba8682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a64ff000/0x0/0x1bfc00000, data 0x2ba8682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4284949 data_alloc: 218103808 data_used: 15212544
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a64ff000/0x0/0x1bfc00000, data 0x2ba8682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320921 data_alloc: 234881024 data_used: 20193280
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a64ff000/0x0/0x1bfc00000, data 0x2ba8682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320921 data_alloc: 234881024 data_used: 20193280
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 383860736 unmapped: 93265920 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a64ff000/0x0/0x1bfc00000, data 0x2ba8682/0x2dcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.667137146s of 16.768726349s, submitted: 26
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be350f800 session 0x561be3108780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be221c000 session 0x561be2831a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be0196960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be35b92c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be32a2000 session 0x561be35b9e00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 384024576 unmapped: 93102080 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 384024576 unmapped: 93102080 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386809856 unmapped: 90316800 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5d2a000/0x0/0x1bfc00000, data 0x337d682/0x35a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4427188 data_alloc: 234881024 data_used: 21217280
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 386818048 unmapped: 90308608 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387268608 unmapped: 89858048 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56e4000/0x0/0x1bfc00000, data 0x39c2682/0x3be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387268608 unmapped: 89858048 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be350f800 session 0x561be01eaf00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be8611800 session 0x561be0de94a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387268608 unmapped: 89858048 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be24bbc00 session 0x561be0de9e00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be2f06b40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56e4000/0x0/0x1bfc00000, data 0x39c2682/0x3be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387268608 unmapped: 89858048 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442476 data_alloc: 234881024 data_used: 21303296
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 387268608 unmapped: 89858048 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389611520 unmapped: 87515136 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56e4000/0x0/0x1bfc00000, data 0x39c2692/0x3bea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389709824 unmapped: 87416832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56c3000/0x0/0x1bfc00000, data 0x39e3692/0x3c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56c3000/0x0/0x1bfc00000, data 0x39e3692/0x3c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389709824 unmapped: 87416832 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.462005615s of 11.840446472s, submitted: 93
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3443000 session 0x561be1821860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a56c3000/0x0/0x1bfc00000, data 0x39e3692/0x3c0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be3440c00 session 0x561be2effe00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362686 data_alloc: 234881024 data_used: 23339008
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a635a000/0x0/0x1bfc00000, data 0x2d4c692/0x2f74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a635a000/0x0/0x1bfc00000, data 0x2d4c692/0x2f74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362686 data_alloc: 234881024 data_used: 23339008
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389734400 unmapped: 87392256 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a635a000/0x0/0x1bfc00000, data 0x2d4c692/0x2f74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390029312 unmapped: 87097344 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.094202042s of 10.275886536s, submitted: 51
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391380992 unmapped: 85745664 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4386390 data_alloc: 234881024 data_used: 23982080
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 85639168 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 85639168 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 85639168 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 85639168 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 85639168 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4388506 data_alloc: 234881024 data_used: 24064000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 85639168 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4388506 data_alloc: 234881024 data_used: 24064000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4388506 data_alloc: 234881024 data_used: 24064000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.060701370s of 20.124479294s, submitted: 21
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be262fc00 session 0x561be01a0780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391536640 unmapped: 85590016 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4415844 data_alloc: 234881024 data_used: 24064000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391536640 unmapped: 85590016 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391536640 unmapped: 85590016 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5cec000/0x0/0x1bfc00000, data 0x33ba692/0x35e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 85581824 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be262fc00 session 0x561be1b70780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 85565440 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be31081e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be3440c00 session 0x561be3108960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be24bbc00 session 0x561be27712c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be3443000 session 0x561be358ed20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391577600 unmapped: 85549056 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be24bbc00 session 0x561be35b8780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be31d3400 session 0x561be0dd7a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be3440c00 session 0x561be1f06f00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 418 ms_handle_reset con 0x561be262fc00 session 0x561be3109a40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4573910 data_alloc: 234881024 data_used: 26492928
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389677056 unmapped: 92717056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be3448800 session 0x561be1b6e960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389718016 unmapped: 92676096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be262fc00 session 0x561be35b8780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390823936 unmapped: 91570176 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be31d3400 session 0x561be358ed20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be3440c00 session 0x561be3108960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a4bbe000/0x0/0x1bfc00000, data 0x44e0c7f/0x470e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390823936 unmapped: 91570176 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604972 data_alloc: 234881024 data_used: 30162944
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a4bbe000/0x0/0x1bfc00000, data 0x44e0c7f/0x470e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.804007530s of 14.155201912s, submitted: 47
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4bbc000/0x0/0x1bfc00000, data 0x44e27be/0x4711000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b42800 session 0x561be01a0780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b43000 session 0x561bdfcf7c20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31d9400 session 0x561be2effe00
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b43000 session 0x561be1821860
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b42800 session 0x561be0de94a0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be262fc00 session 0x561be35b92c0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599404 data_alloc: 234881024 data_used: 30167040
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389816320 unmapped: 92577792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4737000/0x0/0x1bfc00000, data 0x49687be/0x4b97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389816320 unmapped: 92577792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4636508 data_alloc: 234881024 data_used: 30179328
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389816320 unmapped: 92577792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31d3400 session 0x561be3108780
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389832704 unmapped: 92561408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389832704 unmapped: 92561408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391233536 unmapped: 91160576 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393994240 unmapped: 88399872 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691725 data_alloc: 251658240 data_used: 36134912
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393994240 unmapped: 88399872 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691725 data_alloc: 251658240 data_used: 36134912
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.524549484s of 21.680967331s, submitted: 45
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4723424 data_alloc: 251658240 data_used: 40607744
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397352960 unmapped: 85041152 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a456c000/0x0/0x1bfc00000, data 0x4b337be/0x4d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397418496 unmapped: 84975616 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397418496 unmapped: 84975616 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be262fc00 session 0x561be0196b40
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31d9400 session 0x561be35b90e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be3440c00 session 0x561be1b6f0e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31ce800 session 0x561be315a960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be3450c00 session 0x561be2771c20
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x4cc9820/0x4ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4746368 data_alloc: 251658240 data_used: 40787968
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x4cc9820/0x4ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397434880 unmapped: 84959232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x4cc9820/0x4ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397434880 unmapped: 84959232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be262fc00 session 0x561be0dd70e0
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397434880 unmapped: 84959232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4754181 data_alloc: 251658240 data_used: 41328640
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397885440 unmapped: 84508672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397918208 unmapped: 84475904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.194417953s of 12.317700386s, submitted: 47
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3211000/0x0/0x1bfc00000, data 0x4ced820/0x4f1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397918208 unmapped: 84475904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398065664 unmapped: 84328448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399171584 unmapped: 83222528 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4754489 data_alloc: 251658240 data_used: 41328640
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2dff000/0x0/0x1bfc00000, data 0x4cee820/0x4f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398180352 unmapped: 84213760 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4754665 data_alloc: 251658240 data_used: 41328640
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398180352 unmapped: 84213760 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398180352 unmapped: 84213760 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.183119774s of 10.062936783s, submitted: 351
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2e00000/0x0/0x1bfc00000, data 0x4cee820/0x4f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 83681280 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4780741 data_alloc: 251658240 data_used: 41664512
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2bf7000/0x0/0x1bfc00000, data 0x4ef6820/0x5126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be3440c00 session 0x561be315a000
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be31d7400 session 0x561be2f06960
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399015936 unmapped: 83378176 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:33 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be8611400 session 0x561be2f061e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be3451c00 session 0x561be1b6f4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bf3000/0x0/0x1bfc00000, data 0x4ef84db/0x512a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be3451c00 session 0x561be2854960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a10dd000/0x0/0x1bfc00000, data 0x6a0d4db/0x6c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407691264 unmapped: 74702848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 422 ms_handle_reset con 0x561be262fc00 session 0x561be1edb2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5016406 data_alloc: 251658240 data_used: 46891008
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407715840 unmapped: 74678272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a10d9000/0x0/0x1bfc00000, data 0x6a0f188/0x6c42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31d7400 session 0x561be0dd63c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31ce800 session 0x561be01912c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 77750272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31d9400 session 0x561be01ea780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.674444199s of 10.145226479s, submitted: 141
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be262fc00 session 0x561be01941e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405741568 unmapped: 76652544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31ce800 session 0x561be358f860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31d7400 session 0x561be0de6000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be3451c00 session 0x561be1f074a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405741568 unmapped: 76652544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405766144 unmapped: 76627968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 424 ms_handle_reset con 0x561be3440c00 session 0x561be2f06f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a1498000/0x0/0x1bfc00000, data 0x6650a64/0x6885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783118 data_alloc: 251658240 data_used: 46018560
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405774336 unmapped: 76619776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 424 ms_handle_reset con 0x561be1b42800 session 0x561be1eefc20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 424 ms_handle_reset con 0x561be1b43000 session 0x561be0190000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be24bbc00 session 0x561be1b6f2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be31d0400 session 0x561be01a0f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405782528 unmapped: 76611584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be262fc00 session 0x561be01e8780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be31ce800 session 0x561be1edb860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3982000/0x0/0x1bfc00000, data 0x416854d/0x439c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4676221 data_alloc: 251658240 data_used: 40472576
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3982000/0x0/0x1bfc00000, data 0x416854d/0x439c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 76595200 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.905318260s of 10.264042854s, submitted: 144
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 76554240 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be1b42800 session 0x561be315b4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 80142336 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be32a2000 session 0x561be01e4b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be350f800 session 0x561be1edba40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 80134144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321899 data_alloc: 218103808 data_used: 12640256
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be1b43000 session 0x561be0b66960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a555e000/0x0/0x1bfc00000, data 0x258acd7/0x27c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a555f000/0x0/0x1bfc00000, data 0x258acc7/0x27bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321174 data_alloc: 218103808 data_used: 12640256
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.625246048s of 30.002656937s, submitted: 61
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be0faa780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0dd7c20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be1b6e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be28794a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be24bbc00 session 0x561be2eff2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353269 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5192000/0x0/0x1bfc00000, data 0x2956806/0x2b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5192000/0x0/0x1bfc00000, data 0x2956806/0x2b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5192000/0x0/0x1bfc00000, data 0x2956806/0x2b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388833280 unmapped: 93560832 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be1edad20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4355054 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5191000/0x0/0x1bfc00000, data 0x2956829/0x2b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5191000/0x0/0x1bfc00000, data 0x2956829/0x2b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383374 data_alloc: 218103808 data_used: 16621568
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383374 data_alloc: 218103808 data_used: 16621568
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5191000/0x0/0x1bfc00000, data 0x2956829/0x2b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.264902115s of 19.366209030s, submitted: 18
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 90537984 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 90537984 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415578 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4efb000/0x0/0x1bfc00000, data 0x2bec829/0x2e23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4efb000/0x0/0x1bfc00000, data 0x2bec829/0x2e23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 90660864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 90660864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 90660864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 nova_compute[252253]: 2025-11-29 09:17:34.004 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.788619995s of 30.887470245s, submitted: 33
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef7000/0x0/0x1bfc00000, data 0x2bf0829/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413870 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 90619904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391782400 unmapped: 90611712 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef6000/0x0/0x1bfc00000, data 0x2bf0829/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be1e7f4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be2eff4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be31092c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d7400 session 0x561be1e7ed20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be28e50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be28550e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be31081e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be1e5be00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3451c00 session 0x561be1820f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x3147839/0x337f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461293 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 90292224 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461293 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be01e5860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 90292224 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.990257263s of 14.080326080s, submitted: 23
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2f072c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 90292224 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be1edb680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be01e5a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392118272 unmapped: 90275840 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392118272 unmapped: 90275840 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499c000/0x0/0x1bfc00000, data 0x314886c/0x3382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 90251264 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506529 data_alloc: 234881024 data_used: 22142976
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 90251264 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 90251264 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314886c/0x3382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314886c/0x3382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506305 data_alloc: 234881024 data_used: 22147072
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.336379051s of 11.382768631s, submitted: 15
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314986c/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314986c/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506309 data_alloc: 234881024 data_used: 22147072
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314986c/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392912896 unmapped: 89481216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395935744 unmapped: 86458368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396345344 unmapped: 86048768 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3077000/0x0/0x1bfc00000, data 0x38cd86c/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4571905 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3077000/0x0/0x1bfc00000, data 0x38cd86c/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.473201752s of 12.682118416s, submitted: 75
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570341 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38ce86c/0x3b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570005 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38ce86c/0x3b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570005 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.998168945s of 11.008939743s, submitted: 2
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570169 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570169 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.078990936s of 14.082680702s, submitted: 1
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570169 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570349 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396451840 unmapped: 85942272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc ms_handle_reset ms_handle_reset con 0x561be2d36000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be219e800 session 0x561be1e5ab40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570013 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3071000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.624737740s of 13.645763397s, submitted: 4
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3071000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569821 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570001 data_alloc: 234881024 data_used: 23420928
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38d186c/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be0b66780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0faa5a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396574720 unmapped: 85819392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1e7e5a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d50000/0x0/0x1bfc00000, data 0x2bf6829/0x2e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425272 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d50000/0x0/0x1bfc00000, data 0x2bf6829/0x2e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425272 data_alloc: 218103808 data_used: 16625664
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d50000/0x0/0x1bfc00000, data 0x2bf6829/0x2e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.537462234s of 18.683736801s, submitted: 54
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be2854b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392421376 unmapped: 89972736 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be0db94a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392486912 unmapped: 89907200 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392503296 unmapped: 89890816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392503296 unmapped: 89890816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47686 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.838653564s of 57.928359985s, submitted: 34
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be0197680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be01941e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368505 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 89841664 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 89841664 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368505 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 89817088 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 89817088 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 89817088 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be28312c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392880128 unmapped: 89513984 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369901 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392888320 unmapped: 89505792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a420c000/0x0/0x1bfc00000, data 0x273b868/0x2972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4381741 data_alloc: 218103808 data_used: 14258176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be01e4b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be289e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.230293274s of 16.324338913s, submitted: 29
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be0b67a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4350169 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4350169 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393928704 unmapped: 88465408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393928704 unmapped: 88465408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.276370049s of 12.325051308s, submitted: 16
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be1820000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395001856 unmapped: 87392256 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be2efed20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b45400 session 0x561be0b672c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4380280 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4380280 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394846208 unmapped: 87547904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403640 data_alloc: 218103808 data_used: 15921152
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394846208 unmapped: 87547904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394846208 unmapped: 87547904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be01963c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.046278954s of 14.122239113s, submitted: 26
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be358ed20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394870784 unmapped: 87523328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394870784 unmapped: 87523328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394870784 unmapped: 87523328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394911744 unmapped: 87482368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394911744 unmapped: 87482368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394911744 unmapped: 87482368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2effa40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be28765a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be28e50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be315a000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.228950500s of 35.272441864s, submitted: 14
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a4400 session 0x561be1e7e3c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be1edb2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be358e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1b6fe00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be28e4f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4423858 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be2830000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be0196f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395124736 unmapped: 87269376 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2b5b4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba1000/0x0/0x1bfc00000, data 0x2da6816/0x2fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0dd65a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395157504 unmapped: 87236608 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395157504 unmapped: 87236608 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4427197 data_alloc: 218103808 data_used: 12763136
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 87318528 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba0000/0x0/0x1bfc00000, data 0x2da6826/0x2fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba0000/0x0/0x1bfc00000, data 0x2da6826/0x2fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4485917 data_alloc: 234881024 data_used: 20946944
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba0000/0x0/0x1bfc00000, data 0x2da6826/0x2fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4485917 data_alloc: 234881024 data_used: 20946944
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.410863876s of 19.642406464s, submitted: 23
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397975552 unmapped: 84418560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3491000/0x0/0x1bfc00000, data 0x34b5826/0x36ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398442496 unmapped: 83951616 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4557351 data_alloc: 234881024 data_used: 21196800
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3316000/0x0/0x1bfc00000, data 0x3628826/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3316000/0x0/0x1bfc00000, data 0x3628826/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4552427 data_alloc: 234881024 data_used: 21196800
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a32fe000/0x0/0x1bfc00000, data 0x3648826/0x3880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4552747 data_alloc: 234881024 data_used: 21204992
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.202736855s of 13.543350220s, submitted: 103
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a32fe000/0x0/0x1bfc00000, data 0x3648826/0x3880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be0a3ef00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be090bc20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399638528 unmapped: 82755584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4366991 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be28e5a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.063098907s of 43.144638062s, submitted: 31
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0db8000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396288000 unmapped: 86106112 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405626 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396288000 unmapped: 86106112 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ec0000/0x0/0x1bfc00000, data 0x2a88806/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ec0000/0x0/0x1bfc00000, data 0x2a88806/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405626 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be358f680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396451840 unmapped: 85942272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396460032 unmapped: 85934080 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 85622784 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e9c000/0x0/0x1bfc00000, data 0x2aac806/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436798 data_alloc: 218103808 data_used: 16662528
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e9c000/0x0/0x1bfc00000, data 0x2aac806/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436798 data_alloc: 218103808 data_used: 16662528
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e9c000/0x0/0x1bfc00000, data 0x2aac806/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.750879288s of 19.841560364s, submitted: 10
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be01941e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397500416 unmapped: 84893696 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399392768 unmapped: 83001344 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a384d000/0x0/0x1bfc00000, data 0x30f3806/0x3329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4502954 data_alloc: 218103808 data_used: 16797696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399450112 unmapped: 82944000 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399450112 unmapped: 82944000 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361e400 session 0x561be01a0780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be28310e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399450112 unmapped: 82944000 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be01e50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 83615744 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2831a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3829000/0x0/0x1bfc00000, data 0x3116806/0x334c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 83615744 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4502371 data_alloc: 218103808 data_used: 16797696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398974976 unmapped: 83419136 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4522931 data_alloc: 234881024 data_used: 19496960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4522931 data_alloc: 234881024 data_used: 19496960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.129539490s of 19.362030029s, submitted: 55
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a375e000/0x0/0x1bfc00000, data 0x31e9829/0x3420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400285696 unmapped: 82108416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400318464 unmapped: 82075648 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 81027072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4603363 data_alloc: 234881024 data_used: 20443136
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fcc000/0x0/0x1bfc00000, data 0x3972829/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4603683 data_alloc: 234881024 data_used: 20451328
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fcc000/0x0/0x1bfc00000, data 0x3972829/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fcc000/0x0/0x1bfc00000, data 0x3972829/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4599779 data_alloc: 234881024 data_used: 20520960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 6600.1 total, 600.0 interval
                                              Cumulative writes: 59K writes, 223K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                              Cumulative WAL: 59K writes, 21K syncs, 2.70 writes per sync, written: 0.21 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 2965 writes, 9665 keys, 2965 commit groups, 1.0 writes per commit group, ingest: 9.56 MB, 0.02 MB/s
                                              Interval WAL: 2965 writes, 1282 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.186190605s of 13.778625488s, submitted: 90
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be27714a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d9800 session 0x561be2efe3c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474385 data_alloc: 218103808 data_used: 16793600
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3bd4000/0x0/0x1bfc00000, data 0x2d73806/0x2fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be1f06d20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2efe1e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0197c20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400670720 unmapped: 81723392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400670720 unmapped: 81723392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400670720 unmapped: 81723392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 81698816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 81698816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 81698816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.503349304s of 34.588817596s, submitted: 27
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be2876b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be315ab40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 81518592 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1b70780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be1eee000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2830000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8b000/0x0/0x1bfc00000, data 0x2bbc868/0x2df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8b000/0x0/0x1bfc00000, data 0x2bbc868/0x2df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50747 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4439475 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be28e50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 81551360 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 81551360 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8a000/0x0/0x1bfc00000, data 0x2bbc88b/0x2df4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483884 data_alloc: 218103808 data_used: 18837504
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8a000/0x0/0x1bfc00000, data 0x2bbc88b/0x2df4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483884 data_alloc: 218103808 data_used: 18837504
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.304382324s of 18.628723145s, submitted: 53
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8a000/0x0/0x1bfc00000, data 0x2bbc88b/0x2df4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406970368 unmapped: 75423744 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530962 data_alloc: 218103808 data_used: 19914752
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a372c000/0x0/0x1bfc00000, data 0x321288b/0x344a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4544630 data_alloc: 218103808 data_used: 20172800
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3712000/0x0/0x1bfc00000, data 0x323488b/0x346c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537502 data_alloc: 218103808 data_used: 20176896
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.273756981s of 12.800662994s, submitted: 117
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be1e5ab40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be2b5b4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.497726440s of 30.619930267s, submitted: 44
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be27712c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0de6000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be1eef2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be0a3e3c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 76316672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2854d20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 76316672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a425f000/0x0/0x1bfc00000, data 0x26e9806/0x291f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 76316672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408727 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a425f000/0x0/0x1bfc00000, data 0x26e9806/0x291f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a425f000/0x0/0x1bfc00000, data 0x26e9806/0x291f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2786b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408727 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be315b4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0de9a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be2771a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422558 data_alloc: 218103808 data_used: 13991936
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.007145882s of 13.094347000s, submitted: 25
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be01f43c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0db9860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422426 data_alloc: 218103808 data_used: 13991936
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 76275712 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 76275712 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422494 data_alloc: 218103808 data_used: 14000128
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051295280s of 11.088788033s, submitted: 2
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0dd61e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2b5b2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422362 data_alloc: 218103808 data_used: 14000128
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be1821c20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be28e41e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 76242944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395271 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be1b70d20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 78569472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c839/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.025921822s of 10.001517296s, submitted: 215
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 78438400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394225 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394225 data_alloc: 218103808 data_used: 12648448
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404004864 unmapped: 78389248 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.646228790s of 46.096721649s, submitted: 168
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be0de9e00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 82542592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 82542592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 82542592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4468125 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3613000/0x0/0x1bfc00000, data 0x2f25806/0x315b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4468125 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3613000/0x0/0x1bfc00000, data 0x2f25806/0x315b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be01961e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404070400 unmapped: 82526208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404070400 unmapped: 82526208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540130 data_alloc: 234881024 data_used: 22704128
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540130 data_alloc: 234881024 data_used: 22704128
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540610 data_alloc: 234881024 data_used: 22716416
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.410284042s of 23.519502640s, submitted: 12
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 79241216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408231936 unmapped: 78364672 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408231936 unmapped: 78364672 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408231936 unmapped: 78364672 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3011000/0x0/0x1bfc00000, data 0x3526829/0x375d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408240128 unmapped: 78356480 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4588054 data_alloc: 234881024 data_used: 23121920
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 77201408 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 77201408 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 77201408 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be35b9680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 77152256 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.932518005s of 36.101718903s, submitted: 51
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 84320256 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 84320256 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c829/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403310 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2f061e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.870681763s of 60.922084808s, submitted: 18
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be28e50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 84279296 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 84279296 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 84271104 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414748 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be2f06d20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be35b8f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be3108960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be289fe00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e92000/0x0/0x1bfc00000, data 0x26a6806/0x28dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be28e45a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 84271104 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be358e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3da9000/0x0/0x1bfc00000, data 0x278f806/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be28761e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 84271104 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be28e5680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be2876780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2f07a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be2855a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3343c00 session 0x561be2771a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3440800 session 0x561be1edb860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0194f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402522112 unmapped: 84074496 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3343c00 session 0x561be0faa780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402464768 unmapped: 84131840 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4472787 data_alloc: 218103808 data_used: 12685312
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.276145935s of 11.987798691s, submitted: 47
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402481152 unmapped: 84115456 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477851 data_alloc: 218103808 data_used: 13340672
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be344a400 session 0x561be2830f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be28314a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444757 data_alloc: 218103808 data_used: 13340672
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be1e7f0e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be0de8000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be2f06780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3a80000/0x0/0x1bfc00000, data 0x2ab7816/0x2cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460859 data_alloc: 218103808 data_used: 12853248
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be01e4780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1eefc20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.695828438s of 12.155271530s, submitted: 88
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be3109680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3f88000/0x0/0x1bfc00000, data 0x25b0806/0x27e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 76.208839417s of 76.231025696s, submitted: 7
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0194f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be2f07a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2876780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be28761e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.476503372s of 22.541004181s, submitted: 5
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420901 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be28e45a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x258c889/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4421061 data_alloc: 218103808 data_used: 12656640
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x258c889/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be289fe00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be0a3e3c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be35b8f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419175 data_alloc: 218103808 data_used: 12660736
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419175 data_alloc: 218103808 data_used: 12660736
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be35b9680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.364324570s of 17.376232147s, submitted: 3
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be01961e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403103744 unmapped: 83492864 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.079629898s of 27.113702774s, submitted: 11
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be358e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2771a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417263 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417263 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 80723968 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 80723968 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405880832 unmapped: 80715776 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405889024 unmapped: 80707584 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417263 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.729580879s of 11.746542931s, submitted: 5
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2b5b4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 80674816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 80674816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.560823441s of 22.608398438s, submitted: 1
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405946368 unmapped: 80650240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be289e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 80642048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505089 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3343c00 session 0x561be28e43c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be2876b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f3782f/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f3782f/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504810 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f37868/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504810 data_alloc: 218103808 data_used: 15273984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f37868/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.720861435s of 12.936441422s, submitted: 57
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407240704 unmapped: 79355904 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdfcb4c00 session 0x561be1bd6f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407248896 unmapped: 79347712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407248896 unmapped: 79347712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407248896 unmapped: 79347712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.834756851s of 25.870422363s, submitted: 12
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be3447000 session 0x561be2854000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be0ad4400 session 0x561be0faa780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 79265792 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.212320328s of 18.217164993s, submitted: 1
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be344a400 session 0x561be09ac000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 79265792 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4507533 data_alloc: 218103808 data_used: 15282176
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 79265792 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407838720 unmapped: 78757888 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394e4/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570413 data_alloc: 234881024 data_used: 24080384
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdfcb2800 session 0x561be28e4d20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394e4/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394e4/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570429 data_alloc: 234881024 data_used: 24080384
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.502777100s of 10.719768524s, submitted: 12
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdfcb4c00 session 0x561be0196b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569493 data_alloc: 234881024 data_used: 24080384
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569493 data_alloc: 234881024 data_used: 24080384
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be0ad4400 session 0x561be01a1680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.821796417s of 11.829506874s, submitted: 2
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be3447000 session 0x561be08b2780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdf4fdc00 session 0x561be01a0f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408666112 unmapped: 77930496 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569964 data_alloc: 234881024 data_used: 24080384
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 430 ms_handle_reset con 0x561bdf4fdc00 session 0x561be0de6780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408690688 unmapped: 77905920 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 430 ms_handle_reset con 0x561bdfcb2800 session 0x561be358fa40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405831680 unmapped: 80764928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 430 ms_handle_reset con 0x561bdfcb4c00 session 0x561be1b6e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a3fa4000/0x0/0x1bfc00000, data 0x259010c/0x27c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4435347 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405848064 unmapped: 80748544 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be3451000 session 0x561be2eff0e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.886808395s of 10.118666649s, submitted: 77
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be31d7000 session 0x561be1821e00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3fa2000/0x0/0x1bfc00000, data 0x2591c4b/0x27cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdf4fdc00 session 0x561be1821a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443469 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3fa2000/0x0/0x1bfc00000, data 0x2591c4b/0x27cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdfcb2800 session 0x561be1821e00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdfcb4c00 session 0x561be358fa40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 81059840 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be3451000 session 0x561be289e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdfcb5c00 session 0x561be35b9680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 61K writes, 229K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
Cumulative WAL: 61K writes, 22K syncs, 2.68 writes per sync, written: 0.21 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2012 writes, 5478 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 5.20 MB, 0.01 MB/s
Interval WAL: 2012 writes, 901 syncs, 2.23 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 80871424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 80871424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 80871424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc ms_handle_reset ms_handle_reset con 0x561be31d7400
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be1b42800 session 0x561be315a960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be2ed8800 session 0x561be01950e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be43d7400 session 0x561be01970e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be21a5000 session 0x561be0197a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.847000122s of 36.072757721s, submitted: 51
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be361e400 session 0x561be0b67a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404742144 unmapped: 81854464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404750336 unmapped: 81846272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519378 data_alloc: 218103808 data_used: 15298560
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4578738 data_alloc: 234881024 data_used: 22994944
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 81952768 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 81952768 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4578738 data_alloc: 234881024 data_used: 22994944
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 81952768 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.508459091s of 13.522068024s, submitted: 4
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407961600 unmapped: 78635008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409141248 unmapped: 77455360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bfd000/0x0/0x1bfc00000, data 0x38f9c7e/0x3b35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4677088 data_alloc: 234881024 data_used: 24748032
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bf4000/0x0/0x1bfc00000, data 0x3935c7e/0x3b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bf4000/0x0/0x1bfc00000, data 0x3935c7e/0x3b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668696 data_alloc: 234881024 data_used: 24748032
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bdb000/0x0/0x1bfc00000, data 0x3957c7e/0x3b93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bdb000/0x0/0x1bfc00000, data 0x3957c7e/0x3b93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bdb000/0x0/0x1bfc00000, data 0x3957c7e/0x3b93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668696 data_alloc: 234881024 data_used: 24748032
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.023251534s of 14.699027061s, submitted: 103
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be0187800 session 0x561be01e50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be32a5c00 session 0x561be2f07a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd7000/0x0/0x1bfc00000, data 0x3959a47/0x3b97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4688561 data_alloc: 234881024 data_used: 24756224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4688561 data_alloc: 234881024 data_used: 24756224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.624474525s of 12.263740540s, submitted: 11
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4690389 data_alloc: 234881024 data_used: 24756224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd1000/0x0/0x1bfc00000, data 0x3b13a57/0x3b9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 77750272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be2159000 session 0x561be2efed20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be343c800 session 0x561be315b680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 77750272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 77750272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 77733888 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd1000/0x0/0x1bfc00000, data 0x3b13a57/0x3b9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4714863 data_alloc: 234881024 data_used: 24756224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4714863 data_alloc: 234881024 data_used: 24756224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be3443400 session 0x561be01e4780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be2ec7000 session 0x561be28783c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be2159000 session 0x561be358e000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.022949219s of 16.393350601s, submitted: 5
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be32a5c00 session 0x561be28303c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 77692928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4719323 data_alloc: 234881024 data_used: 24756224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 77684736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 77684736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408920064 unmapped: 77676544 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718175 data_alloc: 234881024 data_used: 24981504
Nov 29 04:17:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700122359' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718175 data_alloc: 234881024 data_used: 24981504
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718975 data_alloc: 234881024 data_used: 25001984
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.700752258s of 15.737976074s, submitted: 11
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410075136 unmapped: 76521472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410288128 unmapped: 76308480 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4758865 data_alloc: 234881024 data_used: 28110848
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410320896 unmapped: 76275712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759569 data_alloc: 234881024 data_used: 28110848
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.377988815s of 12.515168190s, submitted: 7
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759217 data_alloc: 234881024 data_used: 28110848
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759041 data_alloc: 234881024 data_used: 28110848
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759041 data_alloc: 234881024 data_used: 28110848
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be32a2000 session 0x561be2b5be00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409755648 unmapped: 76840960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.402155876s of 14.417620659s, submitted: 4
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be343c800 session 0x561bdf491a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be3443400 session 0x561be1f06f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409755648 unmapped: 76840960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 75792384 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,1])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be344dc00 session 0x561be01ea000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 75792384 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4756864 data_alloc: 234881024 data_used: 28114944
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 75792384 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410845184 unmapped: 75751424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2962000/0x0/0x1bfc00000, data 0x4089a57/0x3e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411910144 unmapped: 74686464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411910144 unmapped: 74686464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410886144 unmapped: 75710464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4757168 data_alloc: 234881024 data_used: 28127232
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be462a800 session 0x561be0de8b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be01c8400 session 0x561be0de74a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 73490432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be43d7800 session 0x561be358e960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413712384 unmapped: 72884224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a27be000/0x0/0x1bfc00000, data 0x3e1ea47/0x3ba0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be344b000 session 0x561be01f50e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.620646477s of 10.166932106s, submitted: 426
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 72859648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be3444000 session 0x561be0dd7c20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 72859648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be01c8400 session 0x561be0b665a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 72851456 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717364 data_alloc: 234881024 data_used: 28061696
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 72851456 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a27bf000/0x0/0x1bfc00000, data 0x39606d1/0x3b9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be67b4c00 session 0x561be315b2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 72835072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be3447000 session 0x561be0de7a40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4472960 data_alloc: 218103808 data_used: 15327232
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3b8c000/0x0/0x1bfc00000, data 0x25956c1/0x27d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.126425743s of 10.281791687s, submitted: 58
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3b88000/0x0/0x1bfc00000, data 0x2597200/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 handle_osd_map epochs [435,435], i have 435, src has [1,435]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 ms_handle_reset con 0x561be0e6a800 session 0x561be315be00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 handle_osd_map epochs [435,435], i have 435, src has [1,435]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 ms_handle_reset con 0x561be24bbc00 session 0x561be31094a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479122 data_alloc: 218103808 data_used: 15278080
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a3b86000/0x0/0x1bfc00000, data 0x2598e8f/0x27d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a3b86000/0x0/0x1bfc00000, data 0x2598e8f/0x27d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479122 data_alloc: 218103808 data_used: 15278080
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b83000/0x0/0x1bfc00000, data 0x259a9ce/0x27da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482096 data_alloc: 218103808 data_used: 15278080
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.350548744s of 13.543437004s, submitted: 44
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be75abc00 session 0x561be1e5ad20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be01c8400 session 0x561be1e7f680
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b83000/0x0/0x1bfc00000, data 0x259a9ce/0x27da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b84000/0x0/0x1bfc00000, data 0x259a9ce/0x27da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be350ec00 session 0x561be01963c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4484438 data_alloc: 218103808 data_used: 15343616
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b83000/0x0/0x1bfc00000, data 0x259aa30/0x27db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be8535800 session 0x561be2830b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be221d000 session 0x561be0de6960
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be1b45000 session 0x561be0a3e1e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4523936 data_alloc: 218103808 data_used: 15343616
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4523936 data_alloc: 218103808 data_used: 15343616
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4523936 data_alloc: 218103808 data_used: 15343616
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be361f800 session 0x561be2eff860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be344a400 session 0x561be0197c20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410705920 unmapped: 75890688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448000 session 0x561be1e7e780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.480686188s of 21.662887573s, submitted: 32
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be35d0800 session 0x561be1bd6d20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410705920 unmapped: 75890688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556242 data_alloc: 234881024 data_used: 19640320
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556242 data_alloc: 234881024 data_used: 19640320
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 75948032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 75948032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556242 data_alloc: 234881024 data_used: 19640320
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 75948032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.010921478s of 13.021390915s, submitted: 2
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411713536 unmapped: 74883072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411721728 unmapped: 74874880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2ed7000/0x0/0x1bfc00000, data 0x32469de/0x3487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e56000/0x0/0x1bfc00000, data 0x32c79de/0x3508000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e56000/0x0/0x1bfc00000, data 0x32c79de/0x3508000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628596 data_alloc: 234881024 data_used: 19865600
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411983872 unmapped: 74612736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be0ad4400 session 0x561be358f2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448400 session 0x561be3109860
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e23000/0x0/0x1bfc00000, data 0x32fa9de/0x353b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412065792 unmapped: 74530816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448000 session 0x561be3109e00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e23000/0x0/0x1bfc00000, data 0x32fa9de/0x353b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627979 data_alloc: 234881024 data_used: 19873792
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412065792 unmapped: 74530816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412073984 unmapped: 74522624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627979 data_alloc: 234881024 data_used: 19873792
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627979 data_alloc: 234881024 data_used: 19873792
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be67b4400 session 0x561be1e5ab40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be361f800 session 0x561be1f072c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561bdfcb3400 session 0x561be0b663c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448000 session 0x561be0de8000
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.182992935s of 23.472333908s, submitted: 77
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628591 data_alloc: 234881024 data_used: 19927040
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628911 data_alloc: 234881024 data_used: 19939328
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628911 data_alloc: 234881024 data_used: 19939328
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.554869652s of 11.559130669s, submitted: 1
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4636651 data_alloc: 234881024 data_used: 20586496
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 74481664 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639499 data_alloc: 234881024 data_used: 20586496
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639499 data_alloc: 234881024 data_used: 20586496
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.222394943s of 18.255271912s, submitted: 11
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639775 data_alloc: 234881024 data_used: 20586496
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e17000/0x0/0x1bfc00000, data 0x33079ce/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639775 data_alloc: 234881024 data_used: 20586496
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e17000/0x0/0x1bfc00000, data 0x33079ce/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.630817413s of 10.641638756s, submitted: 3
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be0186400 session 0x561be2b5b2c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4647517 data_alloc: 234881024 data_used: 21667840
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e13000/0x0/0x1bfc00000, data 0x3309627/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be3344800 session 0x561be28790e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be24bb400 session 0x561be2830780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659621 data_alloc: 234881024 data_used: 21667840
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659621 data_alloc: 234881024 data_used: 21667840
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be2e82000 session 0x561be31085a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659621 data_alloc: 234881024 data_used: 21667840
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be0186400 session 0x561be28e4b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be344a000 session 0x561be315a1e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412278784 unmapped: 74317824 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be343d400 session 0x561be315ad20
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 74309632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 74309632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659781 data_alloc: 234881024 data_used: 21671936
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659781 data_alloc: 234881024 data_used: 21671936
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.242610931s of 30.420869827s, submitted: 8
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709844 data_alloc: 234881024 data_used: 23199744
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 417529856 unmapped: 72261632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2941000/0x0/0x1bfc00000, data 0x3848699/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2941000/0x0/0x1bfc00000, data 0x3848699/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 417529856 unmapped: 72261632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 417529856 unmapped: 72261632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717626 data_alloc: 234881024 data_used: 23224320
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717658 data_alloc: 234881024 data_used: 23220224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717658 data_alloc: 234881024 data_used: 23220224
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.433406830s of 20.506797791s, submitted: 13
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4728630 data_alloc: 234881024 data_used: 23396352
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a28d0000/0x0/0x1bfc00000, data 0x3a7c699/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a28d0000/0x0/0x1bfc00000, data 0x3a7c699/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be24ba000 session 0x561be0194f00
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729998 data_alloc: 234881024 data_used: 23433216
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be2d36400 session 0x561be0190780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 75603968 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 75603968 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a28d0000/0x0/0x1bfc00000, data 0x3a7c699/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be31d0400 session 0x561be3108780
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be3440c00 session 0x561be090b4a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414212096 unmapped: 75579392 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be344b400 session 0x561be28761e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 ms_handle_reset con 0x561be361e400 session 0x561be0dd63c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x353d627/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x353d627/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672608 data_alloc: 234881024 data_used: 23343104
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e11000/0x0/0x1bfc00000, data 0x330b2d4/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280055046s of 11.439131737s, submitted: 51
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 ms_handle_reset con 0x561be3448400 session 0x561be28545a0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 ms_handle_reset con 0x561be31d0400 session 0x561be2854b40
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e11000/0x0/0x1bfc00000, data 0x330b2d4/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672384 data_alloc: 234881024 data_used: 23343104
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a2e0d000/0x0/0x1bfc00000, data 0x330ce13/0x3550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 ms_handle_reset con 0x561be32a4c00 session 0x561bdfcf72c0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414277632 unmapped: 75513856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 ms_handle_reset con 0x561be31d2800 session 0x561be2f061e0
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413237248 unmapped: 76554240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413335552 unmapped: 76455936 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'config show' '{prefix=config show}'
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413237248 unmapped: 76554240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 76685312 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:17:34 np0005539563 ceph-osd[84724]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44016 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47704 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50759 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:17:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639215689' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44037 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:34.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47725 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50771 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44055 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:17:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611074507' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:17:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47740 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50783 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:17:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750011019' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:17:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47758 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:35.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50807 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44091 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50822 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47773 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:17:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845288947' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44118 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:36 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:36.459+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 04:17:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784398411' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47794 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:36 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:36.763+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:36.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:17:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722044676' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.50864 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:36 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:36 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:17:36.966+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031095561' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/61628711' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677743725' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:17:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272515252' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:17:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816638030' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:17:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375629223' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/892480265' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38409193' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42238607' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:17:38 np0005539563 nova_compute[252253]: 2025-11-29 09:17:38.711 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:38.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:38 np0005539563 systemd[1]: Starting Hostname Service...
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:17:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308801323' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:17:39 np0005539563 nova_compute[252253]: 2025-11-29 09:17:39.006 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:39 np0005539563 systemd[1]: Started Hostname Service.
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765251769' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167337033' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980321960' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:17:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:39.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:17:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1848883205' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:17:39 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44277 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:39 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51017 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47926 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44301 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44295 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47935 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51029 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47941 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44310 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51038 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51056 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44328 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:17:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315486681' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47965 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51068 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44352 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.47986 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:41.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:17:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496590787' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51083 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44382 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48001 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51098 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248766985' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44403 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506427084' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51119 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48034 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:17:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51134 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:17:43 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 04:17:43 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562493909' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 04:17:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:43 np0005539563 nova_compute[252253]: 2025-11-29 09:17:43.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:43 np0005539563 nova_compute[252253]: 2025-11-29 09:17:43.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:17:43 np0005539563 nova_compute[252253]: 2025-11-29 09:17:43.712 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:43 np0005539563 nova_compute[252253]: 2025-11-29 09:17:43.727 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48088 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51191 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:43 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44490 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:44 np0005539563 nova_compute[252253]: 2025-11-29 09:17:44.009 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 04:17:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1865210687' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 04:17:44 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 04:17:44 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920761496' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 04:17:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:44.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 04:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146904133' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 04:17:45 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 04:17:45 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849550459' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 04:17:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:45.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:45 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51230 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:45 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44541 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:46 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48133 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 04:17:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432428896' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 04:17:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:46.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:46 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 04:17:46 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043708518' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 04:17:47 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51257 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:47 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44574 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:47 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48163 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 04:17:47 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328211359' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 04:17:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4288: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44592 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51275 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48175 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44598 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48181 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:48 np0005539563 nova_compute[252253]: 2025-11-29 09:17:48.715 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:48 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51284 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:48.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:49 np0005539563 nova_compute[252253]: 2025-11-29 09:17:49.011 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:49 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 04:17:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930279285' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44622 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:49 np0005539563 nova_compute[252253]: 2025-11-29 09:17:49.727 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:49 np0005539563 nova_compute[252253]: 2025-11-29 09:17:49.727 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:49 np0005539563 nova_compute[252253]: 2025-11-29 09:17:49.728 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44634 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:49 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4289: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48214 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51323 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48220 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 nova_compute[252253]: 2025-11-29 09:17:50.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51329 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:17:50 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:17:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:17:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:50.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:17:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:51.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:51 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51356 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:51 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 04:17:51 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2964408225' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 04:17:51 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44664 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:51 np0005539563 ovs-appctl[431140]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 04:17:51 np0005539563 ovs-appctl[431143]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 04:17:51 np0005539563 ovs-appctl[431147]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 04:17:52 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48250 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:52 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44670 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:52 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51374 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:52 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44679 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:17:52 np0005539563 nova_compute[252253]: 2025-11-29 09:17:52.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:52.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:17:52 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024385563' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:17:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 04:17:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476151050' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 04:17:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:53.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:53 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821901786' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:53 np0005539563 nova_compute[252253]: 2025-11-29 09:17:53.716 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:53 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48283 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:53 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51422 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:54 np0005539563 nova_compute[252253]: 2025-11-29 09:17:54.012 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:54 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51428 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:17:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1601568759' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:17:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:54 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 04:17:54 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3682814035' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 04:17:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152834789' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:55.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:55 np0005539563 nova_compute[252253]: 2025-11-29 09:17:55.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:55 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 04:17:55 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854898143' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 04:17:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:56 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44757 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:56 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 04:17:56 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753974095' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 04:17:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:17:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:17:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232979909' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:57 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44784 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:57.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 04:17:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1953938530' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 04:17:57 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51488 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:57 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48328 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:17:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:17:58 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44799 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:58 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:58 np0005539563 nova_compute[252253]: 2025-11-29 09:17:58.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:17:58 np0005539563 nova_compute[252253]: 2025-11-29 09:17:58.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:17:58 np0005539563 nova_compute[252253]: 2025-11-29 09:17:58.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:17:58 np0005539563 nova_compute[252253]: 2025-11-29 09:17:58.719 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:58 np0005539563 nova_compute[252253]: 2025-11-29 09:17:58.776 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:17:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:17:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:59 np0005539563 nova_compute[252253]: 2025-11-29 09:17:59.014 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:17:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 04:17:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248783452' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 04:17:59 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48358 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:59 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51512 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:59 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 04:17:59 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2211685371' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 04:17:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:17:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:17:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:17:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:17:59 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44841 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:17:59 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44844 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44853 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51539 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48385 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:18:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/724362206' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539563 podman[432694]: 2025-11-29 09:18:00.505059209 +0000 UTC m=+0.056520244 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:18:00 np0005539563 podman[432695]: 2025-11-29 09:18:00.51244423 +0000 UTC m=+0.063174355 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:18:00 np0005539563 podman[432696]: 2025-11-29 09:18:00.54451725 +0000 UTC m=+0.091173884 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:18:00 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51548 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:00 np0005539563 nova_compute[252253]: 2025-11-29 09:18:00.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:00.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:00 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 04:18:00 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933268894' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44880 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.44886 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:01.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48412 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:18:01 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51578 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 04:18:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985417287' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51584 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:02 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:18:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 04:18:02 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135483182' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 04:18:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:02.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:03 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:18:03 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51602 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:03 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51608 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:03 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.720 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.737 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.738 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.738 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.738 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:18:03 np0005539563 nova_compute[252253]: 2025-11-29 09:18:03.739 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:18:03 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51617 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.015 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:18:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068616332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.205 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.359 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.361 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3831MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.361 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.362 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.443 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.443 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.475 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:18:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 04:18:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1920344964' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 04:18:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:04.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:04 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:18:04 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000300130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.933 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:18:04 np0005539563 nova_compute[252253]: 2025-11-29 09:18:04.940 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:18:04.995 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:18:04.995 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:18:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:18:04.995 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:18:05 np0005539563 systemd[1]: Starting Time & Date Service...
Nov 29 04:18:05 np0005539563 systemd[1]: Started Time & Date Service.
Nov 29 04:18:05 np0005539563 nova_compute[252253]: 2025-11-29 09:18:05.307 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:18:05 np0005539563 nova_compute[252253]: 2025-11-29 09:18:05.309 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:18:05 np0005539563 nova_compute[252253]: 2025-11-29 09:18:05.309 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:18:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:06.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:07.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:08 np0005539563 nova_compute[252253]: 2025-11-29 09:18:08.722 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:09 np0005539563 nova_compute[252253]: 2025-11-29 09:18:09.018 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.296373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890296483, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1193, "num_deletes": 251, "total_data_size": 1497618, "memory_usage": 1521200, "flush_reason": "Manual Compaction"}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890320450, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 1482108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87784, "largest_seqno": 88976, "table_properties": {"data_size": 1475611, "index_size": 3314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 19076, "raw_average_key_size": 22, "raw_value_size": 1461003, "raw_average_value_size": 1743, "num_data_blocks": 143, "num_entries": 838, "num_filter_entries": 838, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407828, "oldest_key_time": 1764407828, "file_creation_time": 1764407890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 24087 microseconds, and 5304 cpu microseconds.
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.320509) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 1482108 bytes OK
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.320542) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323086) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323110) EVENT_LOG_v1 {"time_micros": 1764407890323105, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323129) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 1491080, prev total WAL file size 1491080, number of live WAL files 2.
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(1447KB)], [200(12MB)]
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890323971, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 14242520, "oldest_snapshot_seqno": -1}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 12088 keys, 12290079 bytes, temperature: kUnknown
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890483640, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 12290079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12216641, "index_size": 42093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 321076, "raw_average_key_size": 26, "raw_value_size": 12009971, "raw_average_value_size": 993, "num_data_blocks": 1580, "num_entries": 12088, "num_filter_entries": 12088, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764407890, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.483986) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 12290079 bytes
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.485342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.1 rd, 76.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(17.9) write-amplify(8.3) OK, records in: 12603, records dropped: 515 output_compression: NoCompression
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.485358) EVENT_LOG_v1 {"time_micros": 1764407890485350, "job": 126, "event": "compaction_finished", "compaction_time_micros": 159795, "compaction_time_cpu_micros": 33548, "output_level": 6, "num_output_files": 1, "total_output_size": 12290079, "num_input_records": 12603, "num_output_records": 12088, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890485725, "job": 126, "event": "table_file_deletion", "file_number": 202}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407890487856, "job": 126, "event": "table_file_deletion", "file_number": 200}
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.323658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.487891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.487896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.487897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.487898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:18:10.487900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:18:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:10.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:11.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:12.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:18:13
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:18:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:18:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:13 np0005539563 nova_compute[252253]: 2025-11-29 09:18:13.725 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:14 np0005539563 nova_compute[252253]: 2025-11-29 09:18:14.020 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:15.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:16.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:18:16 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.0 total, 600.0 interval
Cumulative writes: 20K writes, 88K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1539 writes, 7168 keys, 1537 commit groups, 1.0 writes per commit group, ingest: 10.77 MB, 0.02 MB/s
Interval WAL: 1539 writes, 1537 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     18.4      6.57              0.42        63    0.104       0      0       0.0       0.0
  L6      1/0   11.72 MB   0.0      0.8     0.1      0.6       0.7      0.0       0.0   5.6     42.7     36.8     18.29              2.03        62    0.295    535K    33K       0.0       0.0
 Sum      1/0   11.72 MB   0.0      0.8     0.1      0.6       0.8      0.1       0.0   6.6     31.5     31.9     24.86              2.45       125    0.199    535K    33K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     45.0     44.0      1.92              0.23        12    0.160     72K   3043       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.8     0.1      0.6       0.7      0.0       0.0   0.0     42.7     36.8     18.29              2.03        62    0.295    535K    33K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     18.4      6.56              0.42        62    0.106       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.6      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7800.0 total, 600.0 interval
Flush(GB): cumulative 0.118, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.77 GB write, 0.10 MB/s write, 0.76 GB read, 0.10 MB/s read, 24.9 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564872d991f0#2 capacity: 304.00 MB usage: 89.16 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000628 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5364,85.41 MB,28.0938%) FilterBlock(126,1.44 MB,0.474082%) IndexBlock(126,2.31 MB,0.759742%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:18:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:18:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:17.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:18 np0005539563 nova_compute[252253]: 2025-11-29 09:18:18.729 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:18.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:19 np0005539563 nova_compute[252253]: 2025-11-29 09:18:19.022 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:19.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:21.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:23 np0005539563 nova_compute[252253]: 2025-11-29 09:18:23.747 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:24 np0005539563 nova_compute[252253]: 2025-11-29 09:18:24.024 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:18:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:18:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:25.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:27.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875438823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:18:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875438823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:18:28 np0005539563 nova_compute[252253]: 2025-11-29 09:18:28.751 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:18:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:29 np0005539563 nova_compute[252253]: 2025-11-29 09:18:29.026 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e5ebfa08-da91-4c67-b5e9-eb5fa31bc7b0 does not exist
Nov 29 04:18:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev f99b07ec-37b3-4544-bef5-ab0927d5e23f does not exist
Nov 29 04:18:29 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0f727392-4e3d-4c56-ae27-a967a95530a7 does not exist
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 04:18:29 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:18:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:29.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:29 np0005539563 podman[433724]: 2025-11-29 09:18:29.843557472 +0000 UTC m=+0.050912602 container create b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_newton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 04:18:29 np0005539563 systemd[1]: Started libpod-conmon-b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75.scope.
Nov 29 04:18:29 np0005539563 podman[433724]: 2025-11-29 09:18:29.819068168 +0000 UTC m=+0.026423318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:18:29 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:18:29 np0005539563 podman[433724]: 2025-11-29 09:18:29.942437244 +0000 UTC m=+0.149792394 container init b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 29 04:18:29 np0005539563 podman[433724]: 2025-11-29 09:18:29.950821441 +0000 UTC m=+0.158176571 container start b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:29 np0005539563 podman[433724]: 2025-11-29 09:18:29.955237492 +0000 UTC m=+0.162592622 container attach b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:18:29 np0005539563 naughty_newton[433783]: 167 167
Nov 29 04:18:29 np0005539563 systemd[1]: libpod-b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75.scope: Deactivated successfully.
Nov 29 04:18:29 np0005539563 podman[433724]: 2025-11-29 09:18:29.959076485 +0000 UTC m=+0.166431615 container died b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 04:18:29 np0005539563 systemd[1]: var-lib-containers-storage-overlay-34ecc6c242663ef6e763995f9cbef7e4e372aa9ec5cbfd996d2080f4307ed9fa-merged.mount: Deactivated successfully.
Nov 29 04:18:30 np0005539563 podman[433724]: 2025-11-29 09:18:30.003687805 +0000 UTC m=+0.211042945 container remove b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 04:18:30 np0005539563 systemd[1]: libpod-conmon-b11c621a0f23101d7eca463a33ec14368e43b2e58b0ad57ede1c036792c38b75.scope: Deactivated successfully.
Nov 29 04:18:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:30 np0005539563 podman[433808]: 2025-11-29 09:18:30.17269346 +0000 UTC m=+0.047717526 container create 16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:18:30 np0005539563 systemd[1]: Started libpod-conmon-16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820.scope.
Nov 29 04:18:30 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:18:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e887cfbec23fd6c16be022707ce87ce8030bd1c80ab2fd5ec41811abccb14e0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e887cfbec23fd6c16be022707ce87ce8030bd1c80ab2fd5ec41811abccb14e0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e887cfbec23fd6c16be022707ce87ce8030bd1c80ab2fd5ec41811abccb14e0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e887cfbec23fd6c16be022707ce87ce8030bd1c80ab2fd5ec41811abccb14e0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:30 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e887cfbec23fd6c16be022707ce87ce8030bd1c80ab2fd5ec41811abccb14e0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:30 np0005539563 podman[433808]: 2025-11-29 09:18:30.155063692 +0000 UTC m=+0.030087778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:18:30 np0005539563 podman[433808]: 2025-11-29 09:18:30.260257875 +0000 UTC m=+0.135281941 container init 16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mayer, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:30 np0005539563 podman[433808]: 2025-11-29 09:18:30.267622165 +0000 UTC m=+0.142646231 container start 16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mayer, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 04:18:30 np0005539563 podman[433808]: 2025-11-29 09:18:30.271621324 +0000 UTC m=+0.146645420 container attach 16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:18:30 np0005539563 nova_compute[252253]: 2025-11-29 09:18:30.303 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:31 np0005539563 serene_mayer[433824]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:18:31 np0005539563 serene_mayer[433824]: --> relative data size: 1.0
Nov 29 04:18:31 np0005539563 serene_mayer[433824]: --> All data devices are unavailable
Nov 29 04:18:31 np0005539563 systemd[1]: libpod-16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820.scope: Deactivated successfully.
Nov 29 04:18:31 np0005539563 podman[433808]: 2025-11-29 09:18:31.119938334 +0000 UTC m=+0.994962400 container died 16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mayer, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:18:31 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e887cfbec23fd6c16be022707ce87ce8030bd1c80ab2fd5ec41811abccb14e0b-merged.mount: Deactivated successfully.
Nov 29 04:18:31 np0005539563 podman[433808]: 2025-11-29 09:18:31.390476742 +0000 UTC m=+1.265500808 container remove 16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mayer, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 04:18:31 np0005539563 systemd[1]: libpod-conmon-16dc07be4fc274fd233177444e6382a19192fbf122c0c152960c1c011ba77820.scope: Deactivated successfully.
Nov 29 04:18:31 np0005539563 podman[433847]: 2025-11-29 09:18:31.476617319 +0000 UTC m=+0.327189946 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:18:31 np0005539563 podman[433840]: 2025-11-29 09:18:31.488863621 +0000 UTC m=+0.339385307 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:31 np0005539563 podman[433848]: 2025-11-29 09:18:31.520568131 +0000 UTC m=+0.368954599 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:18:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:31.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.056008876 +0000 UTC m=+0.050636565 container create 26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_albattani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 04:18:32 np0005539563 systemd[1]: Started libpod-conmon-26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66.scope.
Nov 29 04:18:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:32 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:18:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.033945587 +0000 UTC m=+0.028573296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.139314735 +0000 UTC m=+0.133942444 container init 26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_albattani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.145658807 +0000 UTC m=+0.140286486 container start 26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.149211934 +0000 UTC m=+0.143839653 container attach 26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:18:32 np0005539563 hardcore_albattani[434072]: 167 167
Nov 29 04:18:32 np0005539563 systemd[1]: libpod-26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66.scope: Deactivated successfully.
Nov 29 04:18:32 np0005539563 conmon[434072]: conmon 26e3de7057c06669bc60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66.scope/container/memory.events
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.151714152 +0000 UTC m=+0.146341841 container died 26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_albattani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 04:18:32 np0005539563 systemd[1]: var-lib-containers-storage-overlay-321a2207146b8c5ff87cf87ae4ff1a65bdde2cf4a06c969a0afec3df64a4608f-merged.mount: Deactivated successfully.
Nov 29 04:18:32 np0005539563 podman[434055]: 2025-11-29 09:18:32.198951183 +0000 UTC m=+0.193578872 container remove 26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 04:18:32 np0005539563 systemd[1]: libpod-conmon-26e3de7057c06669bc6066df51c333be2bfe2d7547b098757c3346053d5e0f66.scope: Deactivated successfully.
Nov 29 04:18:32 np0005539563 podman[434096]: 2025-11-29 09:18:32.370873926 +0000 UTC m=+0.055229409 container create 66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:18:32 np0005539563 systemd[1]: Started libpod-conmon-66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5.scope.
Nov 29 04:18:32 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:18:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a0595cee99d4f7ee102f2d2deb542a4695a5369874559c07eba383b8af0594/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:32 np0005539563 podman[434096]: 2025-11-29 09:18:32.353206567 +0000 UTC m=+0.037562070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:18:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a0595cee99d4f7ee102f2d2deb542a4695a5369874559c07eba383b8af0594/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a0595cee99d4f7ee102f2d2deb542a4695a5369874559c07eba383b8af0594/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:32 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a0595cee99d4f7ee102f2d2deb542a4695a5369874559c07eba383b8af0594/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:32 np0005539563 podman[434096]: 2025-11-29 09:18:32.448448741 +0000 UTC m=+0.132804544 container init 66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gates, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:18:32 np0005539563 podman[434096]: 2025-11-29 09:18:32.456942661 +0000 UTC m=+0.141298154 container start 66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:18:32 np0005539563 podman[434096]: 2025-11-29 09:18:32.463784406 +0000 UTC m=+0.148139909 container attach 66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:18:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:32.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:33 np0005539563 kind_gates[434113]: {
Nov 29 04:18:33 np0005539563 kind_gates[434113]:    "0": [
Nov 29 04:18:33 np0005539563 kind_gates[434113]:        {
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "devices": [
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "/dev/loop3"
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            ],
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "lv_name": "ceph_lv0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "lv_size": "7511998464",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "name": "ceph_lv0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "tags": {
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.cluster_name": "ceph",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.crush_device_class": "",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.encrypted": "0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.osd_id": "0",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.type": "block",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:                "ceph.vdo": "0"
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            },
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "type": "block",
Nov 29 04:18:33 np0005539563 kind_gates[434113]:            "vg_name": "ceph_vg0"
Nov 29 04:18:33 np0005539563 kind_gates[434113]:        }
Nov 29 04:18:33 np0005539563 kind_gates[434113]:    ]
Nov 29 04:18:33 np0005539563 kind_gates[434113]: }
Nov 29 04:18:33 np0005539563 systemd[1]: libpod-66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5.scope: Deactivated successfully.
Nov 29 04:18:33 np0005539563 podman[434096]: 2025-11-29 09:18:33.221552211 +0000 UTC m=+0.905907694 container died 66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gates, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 04:18:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-47a0595cee99d4f7ee102f2d2deb542a4695a5369874559c07eba383b8af0594-merged.mount: Deactivated successfully.
Nov 29 04:18:33 np0005539563 podman[434096]: 2025-11-29 09:18:33.273004637 +0000 UTC m=+0.957360120 container remove 66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_gates, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:18:33 np0005539563 systemd[1]: libpod-conmon-66b2334c389924282ea43fbee7d0ed9cb240b188a4443f7fcc3fb82dbe6676d5.scope: Deactivated successfully.
Nov 29 04:18:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:18:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:18:33 np0005539563 nova_compute[252253]: 2025-11-29 09:18:33.752 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.869697112 +0000 UTC m=+0.040772418 container create 01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:33 np0005539563 systemd[1]: Started libpod-conmon-01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca.scope.
Nov 29 04:18:33 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.942504157 +0000 UTC m=+0.113579493 container init 01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.850966203 +0000 UTC m=+0.022041569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.950652678 +0000 UTC m=+0.121727994 container start 01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.953878015 +0000 UTC m=+0.124953361 container attach 01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:18:33 np0005539563 systemd[1]: libpod-01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca.scope: Deactivated successfully.
Nov 29 04:18:33 np0005539563 eloquent_kilby[434295]: 167 167
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.957507093 +0000 UTC m=+0.128582409 container died 01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 04:18:33 np0005539563 systemd[1]: var-lib-containers-storage-overlay-336e39c140c9f71a5b4a9cdb44767a30d5f8b9a3fc37dbe8673fa3c24029f6d2-merged.mount: Deactivated successfully.
Nov 29 04:18:33 np0005539563 podman[434278]: 2025-11-29 09:18:33.996157292 +0000 UTC m=+0.167232608 container remove 01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:18:34 np0005539563 systemd[1]: libpod-conmon-01f470386ab7c74a03a36b1cbd433ae9c2bc1a66404cff235cc7d440cefdb5ca.scope: Deactivated successfully.
Nov 29 04:18:34 np0005539563 nova_compute[252253]: 2025-11-29 09:18:34.028 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:34 np0005539563 podman[434318]: 2025-11-29 09:18:34.144963598 +0000 UTC m=+0.042315388 container create b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 04:18:34 np0005539563 systemd[1]: Started libpod-conmon-b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed.scope.
Nov 29 04:18:34 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:18:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8f80f45497d30bf8bfb4fcabe64252b37d37f42158296693b9e4e71aa3a1ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8f80f45497d30bf8bfb4fcabe64252b37d37f42158296693b9e4e71aa3a1ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8f80f45497d30bf8bfb4fcabe64252b37d37f42158296693b9e4e71aa3a1ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:34 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8f80f45497d30bf8bfb4fcabe64252b37d37f42158296693b9e4e71aa3a1ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:18:34 np0005539563 podman[434318]: 2025-11-29 09:18:34.218662888 +0000 UTC m=+0.116014708 container init b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hodgkin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:18:34 np0005539563 podman[434318]: 2025-11-29 09:18:34.124894164 +0000 UTC m=+0.022245984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:18:34 np0005539563 podman[434318]: 2025-11-29 09:18:34.225784211 +0000 UTC m=+0.123136001 container start b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hodgkin, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 04:18:34 np0005539563 podman[434318]: 2025-11-29 09:18:34.22979097 +0000 UTC m=+0.127142800 container attach b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:18:34 np0005539563 nova_compute[252253]: 2025-11-29 09:18:34.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:34 np0005539563 nova_compute[252253]: 2025-11-29 09:18:34.680 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 04:18:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:34.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]: {
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:        "osd_id": 0,
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:        "type": "bluestore"
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]:    }
Nov 29 04:18:35 np0005539563 wizardly_hodgkin[434334]: }
Nov 29 04:18:35 np0005539563 systemd[1]: libpod-b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed.scope: Deactivated successfully.
Nov 29 04:18:35 np0005539563 podman[434318]: 2025-11-29 09:18:35.067342639 +0000 UTC m=+0.964694429 container died b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:18:35 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0d8f80f45497d30bf8bfb4fcabe64252b37d37f42158296693b9e4e71aa3a1ef-merged.mount: Deactivated successfully.
Nov 29 04:18:35 np0005539563 podman[434318]: 2025-11-29 09:18:35.12198752 +0000 UTC m=+1.019339310 container remove b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 04:18:35 np0005539563 systemd[1]: libpod-conmon-b9eb92fc0a2ad2f1dc8b1d0e7a983a208f696323f86818a60d0343e7310970ed.scope: Deactivated successfully.
Nov 29 04:18:35 np0005539563 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 04:18:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:18:35 np0005539563 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 04:18:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:18:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 51cea090-1b1e-43c9-af0c-e4f9bb507568 does not exist
Nov 29 04:18:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev bc444ede-593a-4e1a-be71-eea552b992f5 does not exist
Nov 29 04:18:35 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev b73f9799-76e0-4f50-b1e0-520a0b18227e does not exist
Nov 29 04:18:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:36 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:18:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:36.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:38 np0005539563 nova_compute[252253]: 2025-11-29 09:18:38.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:38 np0005539563 nova_compute[252253]: 2025-11-29 09:18:38.793 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:38.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:39 np0005539563 nova_compute[252253]: 2025-11-29 09:18:39.030 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:40.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:18:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:18:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:43 np0005539563 nova_compute[252253]: 2025-11-29 09:18:43.742 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:43 np0005539563 nova_compute[252253]: 2025-11-29 09:18:43.796 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:44 np0005539563 nova_compute[252253]: 2025-11-29 09:18:44.032 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:44 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:44 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:44 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:44.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:45.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:46 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:46 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:46 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:46.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:47.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:48 np0005539563 nova_compute[252253]: 2025-11-29 09:18:48.796 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:48 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:48 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:48 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:48.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:49 np0005539563 nova_compute[252253]: 2025-11-29 09:18:49.035 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:49.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:50 np0005539563 nova_compute[252253]: 2025-11-29 09:18:50.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:50 np0005539563 nova_compute[252253]: 2025-11-29 09:18:50.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:18:50 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:50 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:50 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:50.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:51 np0005539563 nova_compute[252253]: 2025-11-29 09:18:51.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:51 np0005539563 nova_compute[252253]: 2025-11-29 09:18:51.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:51.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:52 np0005539563 systemd[1]: session-70.scope: Deactivated successfully.
Nov 29 04:18:52 np0005539563 systemd[1]: session-70.scope: Consumed 2min 51.553s CPU time, 1.0G memory peak, read 388.8M from disk, written 419.1M to disk.
Nov 29 04:18:52 np0005539563 systemd-logind[785]: Session 70 logged out. Waiting for processes to exit.
Nov 29 04:18:52 np0005539563 systemd-logind[785]: Removed session 70.
Nov 29 04:18:52 np0005539563 systemd-logind[785]: New session 71 of user zuul.
Nov 29 04:18:52 np0005539563 systemd[1]: Started Session 71 of User zuul.
Nov 29 04:18:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:52 np0005539563 systemd[1]: session-71.scope: Deactivated successfully.
Nov 29 04:18:52 np0005539563 systemd-logind[785]: Session 71 logged out. Waiting for processes to exit.
Nov 29 04:18:52 np0005539563 systemd-logind[785]: Removed session 71.
Nov 29 04:18:52 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:52 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:52 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:52.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:52 np0005539563 systemd-logind[785]: New session 72 of user zuul.
Nov 29 04:18:53 np0005539563 systemd[1]: Started Session 72 of User zuul.
Nov 29 04:18:53 np0005539563 systemd[1]: session-72.scope: Deactivated successfully.
Nov 29 04:18:53 np0005539563 systemd-logind[785]: Session 72 logged out. Waiting for processes to exit.
Nov 29 04:18:53 np0005539563 systemd-logind[785]: Removed session 72.
Nov 29 04:18:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:53.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:53 np0005539563 nova_compute[252253]: 2025-11-29 09:18:53.842 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:54 np0005539563 nova_compute[252253]: 2025-11-29 09:18:54.037 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:54 np0005539563 nova_compute[252253]: 2025-11-29 09:18:54.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:54 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:54 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:54 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:55 np0005539563 nova_compute[252253]: 2025-11-29 09:18:55.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:56 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:56 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:18:56 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:18:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:18:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:18:58 np0005539563 nova_compute[252253]: 2025-11-29 09:18:58.843 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:58 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:58 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:58 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:18:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:18:59 np0005539563 nova_compute[252253]: 2025-11-29 09:18:59.039 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:18:59 np0005539563 nova_compute[252253]: 2025-11-29 09:18:59.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:18:59 np0005539563 nova_compute[252253]: 2025-11-29 09:18:59.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:18:59 np0005539563 nova_compute[252253]: 2025-11-29 09:18:59.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:18:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:18:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:18:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:18:59.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:00 np0005539563 nova_compute[252253]: 2025-11-29 09:19:00.196 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:19:00 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:00 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:00 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:01 np0005539563 nova_compute[252253]: 2025-11-29 09:19:01.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:02 np0005539563 podman[434544]: 2025-11-29 09:19:02.503685935 +0000 UTC m=+0.056846543 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 04:19:02 np0005539563 podman[434545]: 2025-11-29 09:19:02.503775717 +0000 UTC m=+0.056644317 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:19:02 np0005539563 podman[434546]: 2025-11-29 09:19:02.555217533 +0000 UTC m=+0.105811651 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:19:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:02 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:02 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:02 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:02.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:03.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:03 np0005539563 nova_compute[252253]: 2025-11-29 09:19:03.845 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.065 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.873 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.873 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.874 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.874 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:19:04 np0005539563 nova_compute[252253]: 2025-11-29 09:19:04.875 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:19:04 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:04 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:04 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:04.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:19:04.996 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:19:04.996 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:19:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:19:04.996 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:19:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:19:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117766560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:19:05 np0005539563 nova_compute[252253]: 2025-11-29 09:19:05.323 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:19:05 np0005539563 nova_compute[252253]: 2025-11-29 09:19:05.479 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:19:05 np0005539563 nova_compute[252253]: 2025-11-29 09:19:05.480 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4046MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:19:05 np0005539563 nova_compute[252253]: 2025-11-29 09:19:05.480 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:19:05 np0005539563 nova_compute[252253]: 2025-11-29 09:19:05.480 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:19:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:05.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:06 np0005539563 nova_compute[252253]: 2025-11-29 09:19:06.059 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:19:06 np0005539563 nova_compute[252253]: 2025-11-29 09:19:06.060 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:19:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4327: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:06 np0005539563 nova_compute[252253]: 2025-11-29 09:19:06.092 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:19:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:19:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622949996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:19:06 np0005539563 nova_compute[252253]: 2025-11-29 09:19:06.499 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:19:06 np0005539563 nova_compute[252253]: 2025-11-29 09:19:06.505 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:19:06 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:06 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:06 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:06.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:07 np0005539563 nova_compute[252253]: 2025-11-29 09:19:07.637 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:19:07 np0005539563 nova_compute[252253]: 2025-11-29 09:19:07.640 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:19:07 np0005539563 nova_compute[252253]: 2025-11-29 09:19:07.641 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:19:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:08 np0005539563 nova_compute[252253]: 2025-11-29 09:19:08.848 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:08 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:08 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:08 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:08.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:09 np0005539563 nova_compute[252253]: 2025-11-29 09:19:09.067 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:09.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:10.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:12.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:19:13
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.meta']
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:19:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:19:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:13 np0005539563 nova_compute[252253]: 2025-11-29 09:19:13.849 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:14 np0005539563 nova_compute[252253]: 2025-11-29 09:19:14.068 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:14.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4332: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:19:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:16.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:19:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:19:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:17.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:18 np0005539563 nova_compute[252253]: 2025-11-29 09:19:18.850 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:18.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:19 np0005539563 nova_compute[252253]: 2025-11-29 09:19:19.070 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:19.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:21.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4335: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:23 np0005539563 nova_compute[252253]: 2025-11-29 09:19:23.851 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:24 np0005539563 nova_compute[252253]: 2025-11-29 09:19:24.071 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:19:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:19:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:24.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:25.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:26.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:27.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:28 np0005539563 nova_compute[252253]: 2025-11-29 09:19:28.853 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:29 np0005539563 nova_compute[252253]: 2025-11-29 09:19:29.073 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:29.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4339: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:30.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:31 np0005539563 nova_compute[252253]: 2025-11-29 09:19:31.638 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:31.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:33 np0005539563 podman[434769]: 2025-11-29 09:19:33.502078887 +0000 UTC m=+0.055754414 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 04:19:33 np0005539563 podman[434770]: 2025-11-29 09:19:33.502759795 +0000 UTC m=+0.056399621 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:19:33 np0005539563 podman[434771]: 2025-11-29 09:19:33.539791519 +0000 UTC m=+0.091391829 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:19:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:33 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:19:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:33 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:19:33 np0005539563 nova_compute[252253]: 2025-11-29 09:19:33.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:34 np0005539563 nova_compute[252253]: 2025-11-29 09:19:34.075 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:34.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:19:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:19:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:19:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:19:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:19:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:36.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev eb8f3d6e-dee8-4cec-8c12-fefd9650f6c9 does not exist
Nov 29 04:19:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 4e6f11d3-676e-40ca-b99a-9f4497d2a772 does not exist
Nov 29 04:19:37 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev baafa3cf-574a-431f-8455-7fca42277b25 does not exist
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:19:37 np0005539563 podman[435111]: 2025-11-29 09:19:37.697280152 +0000 UTC m=+0.111818504 container create 6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 04:19:37 np0005539563 podman[435111]: 2025-11-29 09:19:37.606071718 +0000 UTC m=+0.020610080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:19:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:37 np0005539563 systemd[1]: Started libpod-conmon-6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e.scope.
Nov 29 04:19:37 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:19:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:38 np0005539563 podman[435111]: 2025-11-29 09:19:38.192706891 +0000 UTC m=+0.607245243 container init 6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:19:38 np0005539563 podman[435111]: 2025-11-29 09:19:38.20371677 +0000 UTC m=+0.618255102 container start 6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 04:19:38 np0005539563 brave_germain[435127]: 167 167
Nov 29 04:19:38 np0005539563 systemd[1]: libpod-6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e.scope: Deactivated successfully.
Nov 29 04:19:38 np0005539563 podman[435111]: 2025-11-29 09:19:38.41275772 +0000 UTC m=+0.827296052 container attach 6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:19:38 np0005539563 podman[435111]: 2025-11-29 09:19:38.413206852 +0000 UTC m=+0.827745194 container died 6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:19:38 np0005539563 nova_compute[252253]: 2025-11-29 09:19:38.857 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:38 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ffe94bd9fd0b8bd791e7be6303b725e308db1036f722a4118d4360afa256c894-merged.mount: Deactivated successfully.
Nov 29 04:19:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:39.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:39 np0005539563 podman[435111]: 2025-11-29 09:19:39.072855485 +0000 UTC m=+1.487393827 container remove 6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:19:39 np0005539563 nova_compute[252253]: 2025-11-29 09:19:39.076 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:39 np0005539563 systemd[1]: libpod-conmon-6ed14a3f20e3464a779eac778f9b61abda69aa798f4eb352cb91dd799f4a9a8e.scope: Deactivated successfully.
Nov 29 04:19:39 np0005539563 podman[435153]: 2025-11-29 09:19:39.235338932 +0000 UTC m=+0.023696333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:19:39 np0005539563 podman[435153]: 2025-11-29 09:19:39.481308684 +0000 UTC m=+0.269666106 container create 73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:19:39 np0005539563 systemd[1]: Started libpod-conmon-73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c.scope.
Nov 29 04:19:39 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:19:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46474b8d97cfbb918af7088ac83b1ded925c7497bc5208974322ff3edbc9b75b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46474b8d97cfbb918af7088ac83b1ded925c7497bc5208974322ff3edbc9b75b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46474b8d97cfbb918af7088ac83b1ded925c7497bc5208974322ff3edbc9b75b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46474b8d97cfbb918af7088ac83b1ded925c7497bc5208974322ff3edbc9b75b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:39 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46474b8d97cfbb918af7088ac83b1ded925c7497bc5208974322ff3edbc9b75b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:39 np0005539563 podman[435153]: 2025-11-29 09:19:39.621082516 +0000 UTC m=+0.409439927 container init 73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:19:39 np0005539563 podman[435153]: 2025-11-29 09:19:39.627841489 +0000 UTC m=+0.416198870 container start 73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:19:39 np0005539563 podman[435153]: 2025-11-29 09:19:39.653240279 +0000 UTC m=+0.441597750 container attach 73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:19:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:40 np0005539563 adoring_chebyshev[435171]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:19:40 np0005539563 adoring_chebyshev[435171]: --> relative data size: 1.0
Nov 29 04:19:40 np0005539563 adoring_chebyshev[435171]: --> All data devices are unavailable
Nov 29 04:19:40 np0005539563 systemd[1]: libpod-73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c.scope: Deactivated successfully.
Nov 29 04:19:40 np0005539563 podman[435153]: 2025-11-29 09:19:40.41826425 +0000 UTC m=+1.206621631 container died 73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 04:19:40 np0005539563 systemd[1]: var-lib-containers-storage-overlay-46474b8d97cfbb918af7088ac83b1ded925c7497bc5208974322ff3edbc9b75b-merged.mount: Deactivated successfully.
Nov 29 04:19:40 np0005539563 podman[435153]: 2025-11-29 09:19:40.638682728 +0000 UTC m=+1.427040149 container remove 73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chebyshev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:19:40 np0005539563 systemd[1]: libpod-conmon-73663e70592d1dbf5b376ad7194db77d8b906c9e55e1c506f7dc77bcea349c2c.scope: Deactivated successfully.
Nov 29 04:19:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:41.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.182307924 +0000 UTC m=+0.021889804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.281969148 +0000 UTC m=+0.121551018 container create 4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:19:41 np0005539563 systemd[1]: Started libpod-conmon-4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf.scope.
Nov 29 04:19:41 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.464568021 +0000 UTC m=+0.304149891 container init 4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.473661347 +0000 UTC m=+0.313243217 container start 4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.47818183 +0000 UTC m=+0.317763720 container attach 4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 04:19:41 np0005539563 suspicious_satoshi[435358]: 167 167
Nov 29 04:19:41 np0005539563 systemd[1]: libpod-4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf.scope: Deactivated successfully.
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.479862906 +0000 UTC m=+0.319444766 container died 4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:19:41 np0005539563 systemd[1]: var-lib-containers-storage-overlay-702a847fb43e666a13306c3de5682dacb89de9806d5bf0b3ec9b18a1b83d5c06-merged.mount: Deactivated successfully.
Nov 29 04:19:41 np0005539563 podman[435341]: 2025-11-29 09:19:41.517911388 +0000 UTC m=+0.357493258 container remove 4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:19:41 np0005539563 systemd[1]: libpod-conmon-4110c98b163d89cae99654a25f0876c00ed652ca65f1fa792e577d2a6aae30cf.scope: Deactivated successfully.
Nov 29 04:19:41 np0005539563 podman[435382]: 2025-11-29 09:19:41.645627523 +0000 UTC m=+0.021482885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:19:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:42 np0005539563 podman[435382]: 2025-11-29 09:19:42.046887907 +0000 UTC m=+0.422743279 container create e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 04:19:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:42 np0005539563 systemd[1]: Started libpod-conmon-e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1.scope.
Nov 29 04:19:42 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:19:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a56e46b15d33ebf7f8d95a6f3f3d6da66c4b8afdac32b943dcf9df4fc670c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a56e46b15d33ebf7f8d95a6f3f3d6da66c4b8afdac32b943dcf9df4fc670c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a56e46b15d33ebf7f8d95a6f3f3d6da66c4b8afdac32b943dcf9df4fc670c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:42 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13a56e46b15d33ebf7f8d95a6f3f3d6da66c4b8afdac32b943dcf9df4fc670c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:42 np0005539563 podman[435382]: 2025-11-29 09:19:42.200971476 +0000 UTC m=+0.576826838 container init e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:19:42 np0005539563 podman[435382]: 2025-11-29 09:19:42.213424723 +0000 UTC m=+0.589280085 container start e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:19:42 np0005539563 podman[435382]: 2025-11-29 09:19:42.217826923 +0000 UTC m=+0.593682265 container attach e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pare, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:19:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:42 np0005539563 agitated_pare[435398]: {
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:    "0": [
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:        {
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "devices": [
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "/dev/loop3"
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            ],
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "lv_name": "ceph_lv0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "lv_size": "7511998464",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "name": "ceph_lv0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "tags": {
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.cluster_name": "ceph",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.crush_device_class": "",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.encrypted": "0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.osd_id": "0",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.type": "block",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:                "ceph.vdo": "0"
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            },
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "type": "block",
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:            "vg_name": "ceph_vg0"
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:        }
Nov 29 04:19:42 np0005539563 agitated_pare[435398]:    ]
Nov 29 04:19:42 np0005539563 agitated_pare[435398]: }
Nov 29 04:19:43 np0005539563 systemd[1]: libpod-e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1.scope: Deactivated successfully.
Nov 29 04:19:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:43 np0005539563 podman[435382]: 2025-11-29 09:19:43.01107268 +0000 UTC m=+1.386928022 container died e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:19:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:43.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:43 np0005539563 systemd[1]: var-lib-containers-storage-overlay-d13a56e46b15d33ebf7f8d95a6f3f3d6da66c4b8afdac32b943dcf9df4fc670c-merged.mount: Deactivated successfully.
Nov 29 04:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:19:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:19:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:43 np0005539563 nova_compute[252253]: 2025-11-29 09:19:43.859 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:19:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:44 np0005539563 nova_compute[252253]: 2025-11-29 09:19:44.116 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:19:44 np0005539563 podman[435382]: 2025-11-29 09:19:44.562949485 +0000 UTC m=+2.938804867 container remove e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pare, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 04:19:44 np0005539563 systemd[1]: libpod-conmon-e4d4143e0e446e1eb57347d77ff0319b40f16e6d88fc32a50121d713de1f70a1.scope: Deactivated successfully.
Nov 29 04:19:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:45.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:45 np0005539563 podman[435559]: 2025-11-29 09:19:45.191504865 +0000 UTC m=+0.019950393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:19:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:45 np0005539563 podman[435559]: 2025-11-29 09:19:45.87259455 +0000 UTC m=+0.701040118 container create 1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:19:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:46 np0005539563 systemd[1]: Started libpod-conmon-1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba.scope.
Nov 29 04:19:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:19:46 np0005539563 podman[435559]: 2025-11-29 09:19:46.521589384 +0000 UTC m=+1.350034912 container init 1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 04:19:46 np0005539563 podman[435559]: 2025-11-29 09:19:46.52919384 +0000 UTC m=+1.357639368 container start 1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:19:46 np0005539563 trusting_leavitt[435576]: 167 167
Nov 29 04:19:46 np0005539563 systemd[1]: libpod-1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba.scope: Deactivated successfully.
Nov 29 04:19:46 np0005539563 podman[435559]: 2025-11-29 09:19:46.535197683 +0000 UTC m=+1.363643231 container attach 1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:19:46 np0005539563 podman[435559]: 2025-11-29 09:19:46.535437799 +0000 UTC m=+1.363883327 container died 1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leavitt, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:19:46 np0005539563 systemd[1]: var-lib-containers-storage-overlay-b80cc2bfb9909f9d2148f28cf7092c6cc3d0ad8028dd89ffba26f3d13f187fc6-merged.mount: Deactivated successfully.
Nov 29 04:19:46 np0005539563 podman[435559]: 2025-11-29 09:19:46.576069312 +0000 UTC m=+1.404514840 container remove 1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:19:46 np0005539563 systemd[1]: libpod-conmon-1467fb7b9ec01dd0bad6936c2c35adcbfa5c5c2ed116e7c5bdb6e88d232c59ba.scope: Deactivated successfully.
Nov 29 04:19:46 np0005539563 podman[435601]: 2025-11-29 09:19:46.713621733 +0000 UTC m=+0.020961040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:19:46 np0005539563 podman[435601]: 2025-11-29 09:19:46.802700239 +0000 UTC m=+0.110039536 container create ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:19:46 np0005539563 systemd[1]: Started libpod-conmon-ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f.scope.
Nov 29 04:19:46 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:19:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97479a44cd71b863fb6f3eeb164e96426251914128ea3ad8ae0bd03fd4f2452/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97479a44cd71b863fb6f3eeb164e96426251914128ea3ad8ae0bd03fd4f2452/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97479a44cd71b863fb6f3eeb164e96426251914128ea3ad8ae0bd03fd4f2452/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:46 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97479a44cd71b863fb6f3eeb164e96426251914128ea3ad8ae0bd03fd4f2452/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:19:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:19:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:47.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:19:47 np0005539563 podman[435601]: 2025-11-29 09:19:47.375034554 +0000 UTC m=+0.682373871 container init ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:19:47 np0005539563 podman[435601]: 2025-11-29 09:19:47.383407091 +0000 UTC m=+0.690746388 container start ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:19:47 np0005539563 podman[435601]: 2025-11-29 09:19:47.401047909 +0000 UTC m=+0.708387226 container attach ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 29 04:19:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:48 np0005539563 magical_yalow[435618]: {
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:        "osd_id": 0,
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:        "type": "bluestore"
Nov 29 04:19:48 np0005539563 magical_yalow[435618]:    }
Nov 29 04:19:48 np0005539563 magical_yalow[435618]: }
Nov 29 04:19:48 np0005539563 systemd[1]: libpod-ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f.scope: Deactivated successfully.
Nov 29 04:19:48 np0005539563 podman[435601]: 2025-11-29 09:19:48.263839203 +0000 UTC m=+1.571178500 container died ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:19:48 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e97479a44cd71b863fb6f3eeb164e96426251914128ea3ad8ae0bd03fd4f2452-merged.mount: Deactivated successfully.
Nov 29 04:19:48 np0005539563 podman[435601]: 2025-11-29 09:19:48.325474505 +0000 UTC m=+1.632813802 container remove ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:19:48 np0005539563 systemd[1]: libpod-conmon-ef88b46648badbc2c1d1650a16da5460bc72e3cacbb0f154e7bb21416a93468f.scope: Deactivated successfully.
Nov 29 04:19:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:19:48 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:19:48 np0005539563 nova_compute[252253]: 2025-11-29 09:19:48.861 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:49.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:49 np0005539563 nova_compute[252253]: 2025-11-29 09:19:49.119 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:49 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 90fa4ea4-bff5-45c8-93ca-615e1adbf193 does not exist
Nov 29 04:19:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ffcfecf2-4b40-4853-9dad-8b1f699c2c31 does not exist
Nov 29 04:19:49 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e6685370-fe40-4a35-b30e-498df329125c does not exist
Nov 29 04:19:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:50 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:19:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:51.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:51.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:52 np0005539563 nova_compute[252253]: 2025-11-29 09:19:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:52 np0005539563 nova_compute[252253]: 2025-11-29 09:19:52.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:19:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:53.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:53 np0005539563 nova_compute[252253]: 2025-11-29 09:19:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:53 np0005539563 nova_compute[252253]: 2025-11-29 09:19:53.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:53 np0005539563 nova_compute[252253]: 2025-11-29 09:19:53.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:54 np0005539563 nova_compute[252253]: 2025-11-29 09:19:54.121 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:55.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:56 np0005539563 nova_compute[252253]: 2025-11-29 09:19:56.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:19:57 np0005539563 nova_compute[252253]: 2025-11-29 09:19:57.548 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:19:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:57.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:19:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:19:58 np0005539563 nova_compute[252253]: 2025-11-29 09:19:58.866 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:19:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:19:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:19:59 np0005539563 nova_compute[252253]: 2025-11-29 09:19:59.123 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:19:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:19:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:19:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:19:59.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:00 np0005539563 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 04:20:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:01.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:02 np0005539563 nova_compute[252253]: 2025-11-29 09:20:02.791 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:02 np0005539563 nova_compute[252253]: 2025-11-29 09:20:02.792 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:20:02 np0005539563 nova_compute[252253]: 2025-11-29 09:20:02.792 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:20:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:03.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:03 np0005539563 ceph-mon[74338]: overall HEALTH_OK
Nov 29 04:20:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:03.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:03 np0005539563 nova_compute[252253]: 2025-11-29 09:20:03.868 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:04 np0005539563 nova_compute[252253]: 2025-11-29 09:20:04.125 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:04 np0005539563 podman[435761]: 2025-11-29 09:20:04.502798078 +0000 UTC m=+0.057401889 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 04:20:04 np0005539563 podman[435762]: 2025-11-29 09:20:04.510890347 +0000 UTC m=+0.064641074 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Nov 29 04:20:04 np0005539563 podman[435763]: 2025-11-29 09:20:04.555714523 +0000 UTC m=+0.093282591 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:20:04 np0005539563 nova_compute[252253]: 2025-11-29 09:20:04.767 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:20:04 np0005539563 nova_compute[252253]: 2025-11-29 09:20:04.767 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:04 np0005539563 nova_compute[252253]: 2025-11-29 09:20:04.768 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:20:04.997 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:20:04.998 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:20:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:20:04.998 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:20:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:05.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:05 np0005539563 nova_compute[252253]: 2025-11-29 09:20:05.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:05 np0005539563 nova_compute[252253]: 2025-11-29 09:20:05.713 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:20:05 np0005539563 nova_compute[252253]: 2025-11-29 09:20:05.713 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:20:05 np0005539563 nova_compute[252253]: 2025-11-29 09:20:05.713 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:20:05 np0005539563 nova_compute[252253]: 2025-11-29 09:20:05.714 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:20:05 np0005539563 nova_compute[252253]: 2025-11-29 09:20:05.714 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:20:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:06 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:20:06 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812326397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:20:06 np0005539563 nova_compute[252253]: 2025-11-29 09:20:06.234 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:20:06 np0005539563 nova_compute[252253]: 2025-11-29 09:20:06.375 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:20:06 np0005539563 nova_compute[252253]: 2025-11-29 09:20:06.376 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4048MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:20:06 np0005539563 nova_compute[252253]: 2025-11-29 09:20:06.376 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:20:06 np0005539563 nova_compute[252253]: 2025-11-29 09:20:06.377 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:20:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:07.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:08 np0005539563 nova_compute[252253]: 2025-11-29 09:20:08.869 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:08 np0005539563 nova_compute[252253]: 2025-11-29 09:20:08.900 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:20:08 np0005539563 nova_compute[252253]: 2025-11-29 09:20:08.901 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:20:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.060 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing inventories for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.078 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating ProviderTree inventory for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.078 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Updating inventory in ProviderTree for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.117 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing aggregate associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.127 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.178 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Refreshing trait associations for resource provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.198 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:20:09 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:20:09 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008334526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.643 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.650 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:20:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:09.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.846 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.848 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:20:09 np0005539563 nova_compute[252253]: 2025-11-29 09:20:09.848 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:20:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:11.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:11.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:13.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:20:13
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:20:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:20:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:13 np0005539563 nova_compute[252253]: 2025-11-29 09:20:13.871 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:14 np0005539563 nova_compute[252253]: 2025-11-29 09:20:14.179 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:15.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:15.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:20:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:20:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:17.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:17.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:18 np0005539563 nova_compute[252253]: 2025-11-29 09:20:18.901 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:19 np0005539563 nova_compute[252253]: 2025-11-29 09:20:19.180 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:19.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:21.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:21.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:23.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:23 np0005539563 nova_compute[252253]: 2025-11-29 09:20:23.902 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:24 np0005539563 nova_compute[252253]: 2025-11-29 09:20:24.184 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:20:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:20:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:25.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:27.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:27.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:28 np0005539563 nova_compute[252253]: 2025-11-29 09:20:28.904 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:29.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:29 np0005539563 nova_compute[252253]: 2025-11-29 09:20:29.187 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:29.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:31.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:33.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:33 np0005539563 nova_compute[252253]: 2025-11-29 09:20:33.844 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:33 np0005539563 nova_compute[252253]: 2025-11-29 09:20:33.907 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:34 np0005539563 nova_compute[252253]: 2025-11-29 09:20:34.190 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:35.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:35 np0005539563 podman[435985]: 2025-11-29 09:20:35.502367241 +0000 UTC m=+0.054063138 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:20:35 np0005539563 podman[435986]: 2025-11-29 09:20:35.513862903 +0000 UTC m=+0.058958950 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:20:35 np0005539563 podman[435987]: 2025-11-29 09:20:35.544614357 +0000 UTC m=+0.089073957 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:20:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:37.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:38 np0005539563 nova_compute[252253]: 2025-11-29 09:20:38.908 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:39.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:39 np0005539563 nova_compute[252253]: 2025-11-29 09:20:39.235 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:39.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:41.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:20:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:20:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:43.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:43 np0005539563 nova_compute[252253]: 2025-11-29 09:20:43.951 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:20:44 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 62K writes, 234K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 62K writes, 23K syncs, 2.67 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1660 writes, 4795 keys, 1660 commit groups, 1.0 writes per commit group, ingest: 3.52 MB, 0.01 MB/s#012Interval WAL: 1660 writes, 747 syncs, 2.22 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:20:44 np0005539563 nova_compute[252253]: 2025-11-29 09:20:44.236 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:47.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:47 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Nov 29 04:20:47 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:47.976128) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:20:47 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Nov 29 04:20:47 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408047976415, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1508, "num_deletes": 256, "total_data_size": 2670845, "memory_usage": 2717160, "flush_reason": "Manual Compaction"}
Nov 29 04:20:47 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048027693, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 2629222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88977, "largest_seqno": 90484, "table_properties": {"data_size": 2622190, "index_size": 4102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14395, "raw_average_key_size": 19, "raw_value_size": 2608156, "raw_average_value_size": 3582, "num_data_blocks": 181, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407891, "oldest_key_time": 1764407891, "file_creation_time": 1764408047, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 51583 microseconds, and 11502 cpu microseconds.
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.027827) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 2629222 bytes OK
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.027875) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.030476) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.030507) EVENT_LOG_v1 {"time_micros": 1764408048030499, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.030530) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 2664473, prev total WAL file size 2664473, number of live WAL files 2.
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.031447) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373734' seq:72057594037927935, type:22 .. '6C6F676D0034303237' seq:0, type:0; will stop at (end)
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(2567KB)], [203(11MB)]
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048031544, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 14919301, "oldest_snapshot_seqno": -1}
Nov 29 04:20:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 12289 keys, 14789422 bytes, temperature: kUnknown
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048213147, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 14789422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14711857, "index_size": 45708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30789, "raw_key_size": 326194, "raw_average_key_size": 26, "raw_value_size": 14499024, "raw_average_value_size": 1179, "num_data_blocks": 1733, "num_entries": 12289, "num_filter_entries": 12289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764408048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.213479) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14789422 bytes
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.215481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.1 rd, 81.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.7 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 12816, records dropped: 527 output_compression: NoCompression
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.215501) EVENT_LOG_v1 {"time_micros": 1764408048215492, "job": 128, "event": "compaction_finished", "compaction_time_micros": 181724, "compaction_time_cpu_micros": 35344, "output_level": 6, "num_output_files": 1, "total_output_size": 14789422, "num_input_records": 12816, "num_output_records": 12289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048216335, "job": 128, "event": "table_file_deletion", "file_number": 205}
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408048219393, "job": 128, "event": "table_file_deletion", "file_number": 203}
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.031284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.219545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.219551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.219553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.219556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:20:48.219558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:20:48 np0005539563 nova_compute[252253]: 2025-11-29 09:20:48.673 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:48 np0005539563 nova_compute[252253]: 2025-11-29 09:20:48.953 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:49.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:49 np0005539563 nova_compute[252253]: 2025-11-29 09:20:49.238 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev a6fbece6-9537-402f-97bc-49bc02d1db5b does not exist
Nov 29 04:20:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d932b02a-ef3d-448e-ba95-7bbe086fcb74 does not exist
Nov 29 04:20:50 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 10e2ed0c-6b07-4fa3-9f7f-127d202643a0 does not exist
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:20:50 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:20:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:20:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.297217317 +0000 UTC m=+0.041351323 container create 917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lewin, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:20:51 np0005539563 systemd[1]: Started libpod-conmon-917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4.scope.
Nov 29 04:20:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.374174614 +0000 UTC m=+0.118308640 container init 917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.280858283 +0000 UTC m=+0.024992299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.382667226 +0000 UTC m=+0.126801232 container start 917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.386036747 +0000 UTC m=+0.130170773 container attach 917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lewin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 04:20:51 np0005539563 sharp_lewin[436395]: 167 167
Nov 29 04:20:51 np0005539563 systemd[1]: libpod-917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4.scope: Deactivated successfully.
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.390713363 +0000 UTC m=+0.134847369 container died 917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 04:20:51 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ed06b75526b517590aba4b478b954b99cf0088a48d768144efecd6a4c2baf874-merged.mount: Deactivated successfully.
Nov 29 04:20:51 np0005539563 podman[436379]: 2025-11-29 09:20:51.428414696 +0000 UTC m=+0.172548702 container remove 917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:20:51 np0005539563 systemd[1]: libpod-conmon-917ce29c43aeb3e88831c70fbe09aa1e53c770af9a4f1777154e707559502ab4.scope: Deactivated successfully.
Nov 29 04:20:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:20:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:20:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:51 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:20:51 np0005539563 podman[436421]: 2025-11-29 09:20:51.614240746 +0000 UTC m=+0.052074963 container create 4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:20:51 np0005539563 systemd[1]: Started libpod-conmon-4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224.scope.
Nov 29 04:20:51 np0005539563 podman[436421]: 2025-11-29 09:20:51.595570681 +0000 UTC m=+0.033404918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:20:51 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:20:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c648267c2ffb26f129e71bb114630abe5e5484b8af13ed4a6d81c9bf273a93b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c648267c2ffb26f129e71bb114630abe5e5484b8af13ed4a6d81c9bf273a93b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c648267c2ffb26f129e71bb114630abe5e5484b8af13ed4a6d81c9bf273a93b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c648267c2ffb26f129e71bb114630abe5e5484b8af13ed4a6d81c9bf273a93b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:51 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c648267c2ffb26f129e71bb114630abe5e5484b8af13ed4a6d81c9bf273a93b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:51 np0005539563 podman[436421]: 2025-11-29 09:20:51.72752523 +0000 UTC m=+0.165359467 container init 4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:20:51 np0005539563 podman[436421]: 2025-11-29 09:20:51.738053975 +0000 UTC m=+0.175888182 container start 4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:20:51 np0005539563 podman[436421]: 2025-11-29 09:20:51.742127246 +0000 UTC m=+0.179961483 container attach 4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:20:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:52 np0005539563 tender_zhukovsky[436437]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:20:52 np0005539563 tender_zhukovsky[436437]: --> relative data size: 1.0
Nov 29 04:20:52 np0005539563 tender_zhukovsky[436437]: --> All data devices are unavailable
Nov 29 04:20:52 np0005539563 systemd[1]: libpod-4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224.scope: Deactivated successfully.
Nov 29 04:20:52 np0005539563 conmon[436437]: conmon 4df2dcfcd3cc975f729b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224.scope/container/memory.events
Nov 29 04:20:52 np0005539563 podman[436421]: 2025-11-29 09:20:52.580369513 +0000 UTC m=+1.018203720 container died 4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_zhukovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:20:52 np0005539563 systemd[1]: var-lib-containers-storage-overlay-6c648267c2ffb26f129e71bb114630abe5e5484b8af13ed4a6d81c9bf273a93b-merged.mount: Deactivated successfully.
Nov 29 04:20:52 np0005539563 podman[436421]: 2025-11-29 09:20:52.631180861 +0000 UTC m=+1.069015068 container remove 4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_zhukovsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 04:20:52 np0005539563 systemd[1]: libpod-conmon-4df2dcfcd3cc975f729b8c6a64ddcba9ce96b208699005ce45a3cb8e33651224.scope: Deactivated successfully.
Nov 29 04:20:52 np0005539563 nova_compute[252253]: 2025-11-29 09:20:52.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:52 np0005539563 nova_compute[252253]: 2025-11-29 09:20:52.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:20:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:53.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.164816847 +0000 UTC m=+0.036319137 container create d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:20:53 np0005539563 systemd[1]: Started libpod-conmon-d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d.scope.
Nov 29 04:20:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.236788908 +0000 UTC m=+0.108291218 container init d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.24385111 +0000 UTC m=+0.115353410 container start d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.149277205 +0000 UTC m=+0.020779515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.247715055 +0000 UTC m=+0.119217345 container attach d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:20:53 np0005539563 condescending_franklin[436620]: 167 167
Nov 29 04:20:53 np0005539563 systemd[1]: libpod-d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d.scope: Deactivated successfully.
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.249651968 +0000 UTC m=+0.121154258 container died d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:20:53 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4bb212d364c443ab0e762fa4c43040f78270e747e3284775f398fff628386137-merged.mount: Deactivated successfully.
Nov 29 04:20:53 np0005539563 podman[436603]: 2025-11-29 09:20:53.283554397 +0000 UTC m=+0.155056687 container remove d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:20:53 np0005539563 systemd[1]: libpod-conmon-d253f4072072c160cffcae87be2f211aa31eaed313124d55440b9c59c115665d.scope: Deactivated successfully.
Nov 29 04:20:53 np0005539563 podman[436646]: 2025-11-29 09:20:53.484466837 +0000 UTC m=+0.069094845 container create 336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shaw, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 04:20:53 np0005539563 systemd[1]: Started libpod-conmon-336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287.scope.
Nov 29 04:20:53 np0005539563 podman[436646]: 2025-11-29 09:20:53.453875297 +0000 UTC m=+0.038503405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:20:53 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:20:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b062e625b0244ef25b66c83bb8547105bc102ce069b378a90e00786b7a34e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b062e625b0244ef25b66c83bb8547105bc102ce069b378a90e00786b7a34e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b062e625b0244ef25b66c83bb8547105bc102ce069b378a90e00786b7a34e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:53 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b062e625b0244ef25b66c83bb8547105bc102ce069b378a90e00786b7a34e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:53 np0005539563 podman[436646]: 2025-11-29 09:20:53.565138985 +0000 UTC m=+0.149767034 container init 336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shaw, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 04:20:53 np0005539563 podman[436646]: 2025-11-29 09:20:53.571615591 +0000 UTC m=+0.156243609 container start 336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shaw, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:20:53 np0005539563 podman[436646]: 2025-11-29 09:20:53.574369186 +0000 UTC m=+0.158997204 container attach 336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shaw, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:20:53 np0005539563 nova_compute[252253]: 2025-11-29 09:20:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:53 np0005539563 nova_compute[252253]: 2025-11-29 09:20:53.967 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:54 np0005539563 nova_compute[252253]: 2025-11-29 09:20:54.239 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]: {
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:    "0": [
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:        {
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "devices": [
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "/dev/loop3"
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            ],
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "lv_name": "ceph_lv0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "lv_size": "7511998464",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "name": "ceph_lv0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "tags": {
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.cluster_name": "ceph",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.crush_device_class": "",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.encrypted": "0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.osd_id": "0",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.type": "block",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:                "ceph.vdo": "0"
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            },
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "type": "block",
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:            "vg_name": "ceph_vg0"
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:        }
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]:    ]
Nov 29 04:20:54 np0005539563 flamboyant_shaw[436662]: }
Nov 29 04:20:54 np0005539563 systemd[1]: libpod-336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287.scope: Deactivated successfully.
Nov 29 04:20:54 np0005539563 podman[436646]: 2025-11-29 09:20:54.375831375 +0000 UTC m=+0.960459433 container died 336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shaw, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:20:54 np0005539563 systemd[1]: var-lib-containers-storage-overlay-f1b062e625b0244ef25b66c83bb8547105bc102ce069b378a90e00786b7a34e7-merged.mount: Deactivated successfully.
Nov 29 04:20:54 np0005539563 podman[436646]: 2025-11-29 09:20:54.43167918 +0000 UTC m=+1.016307198 container remove 336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:20:54 np0005539563 systemd[1]: libpod-conmon-336b7638ca3a61e1ce6b78feba06928a0b9a848ef9e80f950e87ccf9a8d99287.scope: Deactivated successfully.
Nov 29 04:20:54 np0005539563 podman[436825]: 2025-11-29 09:20:54.996777129 +0000 UTC m=+0.038886636 container create 7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:20:55 np0005539563 systemd[1]: Started libpod-conmon-7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c.scope.
Nov 29 04:20:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:20:55 np0005539563 podman[436825]: 2025-11-29 09:20:54.98023213 +0000 UTC m=+0.022341657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:20:55 np0005539563 podman[436825]: 2025-11-29 09:20:55.080609023 +0000 UTC m=+0.122718550 container init 7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:20:55 np0005539563 podman[436825]: 2025-11-29 09:20:55.088077815 +0000 UTC m=+0.130187322 container start 7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_yalow, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:20:55 np0005539563 podman[436825]: 2025-11-29 09:20:55.09192805 +0000 UTC m=+0.134037577 container attach 7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:20:55 np0005539563 cool_yalow[436841]: 167 167
Nov 29 04:20:55 np0005539563 systemd[1]: libpod-7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c.scope: Deactivated successfully.
Nov 29 04:20:55 np0005539563 podman[436825]: 2025-11-29 09:20:55.094379126 +0000 UTC m=+0.136488633 container died 7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_yalow, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 04:20:55 np0005539563 systemd[1]: var-lib-containers-storage-overlay-60ffe017e063f6aaf5cb9f2cf2cc391bf04839aabf4a976a74688d7624e0833f-merged.mount: Deactivated successfully.
Nov 29 04:20:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:55.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:20:55 np0005539563 podman[436825]: 2025-11-29 09:20:55.129017696 +0000 UTC m=+0.171127213 container remove 7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_yalow, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 04:20:55 np0005539563 systemd[1]: libpod-conmon-7f31bc8b619e666dda36d44750f8773d556052685c053aa68e66ff19217fda1c.scope: Deactivated successfully.
Nov 29 04:20:55 np0005539563 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 29 04:20:55 np0005539563 podman[436867]: 2025-11-29 09:20:55.261789987 +0000 UTC m=+0.022540352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:20:55 np0005539563 podman[436867]: 2025-11-29 09:20:55.43368801 +0000 UTC m=+0.194438365 container create 522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 04:20:55 np0005539563 nova_compute[252253]: 2025-11-29 09:20:55.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:55 np0005539563 systemd[1]: Started libpod-conmon-522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a.scope.
Nov 29 04:20:55 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:20:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb55ee93cce7d00f82ed2b06217efeec8604f83f53837d2949ec4b01d37f653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb55ee93cce7d00f82ed2b06217efeec8604f83f53837d2949ec4b01d37f653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb55ee93cce7d00f82ed2b06217efeec8604f83f53837d2949ec4b01d37f653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:55 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb55ee93cce7d00f82ed2b06217efeec8604f83f53837d2949ec4b01d37f653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:20:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:55.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:56 np0005539563 podman[436867]: 2025-11-29 09:20:56.588966027 +0000 UTC m=+1.349716382 container init 522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:20:56 np0005539563 podman[436867]: 2025-11-29 09:20:56.597707904 +0000 UTC m=+1.358458249 container start 522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:20:56 np0005539563 podman[436867]: 2025-11-29 09:20:56.601770935 +0000 UTC m=+1.362521300 container attach 522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:20:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:57.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]: {
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:        "osd_id": 0,
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:        "type": "bluestore"
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]:    }
Nov 29 04:20:57 np0005539563 fervent_ptolemy[436884]: }
Nov 29 04:20:57 np0005539563 systemd[1]: libpod-522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a.scope: Deactivated successfully.
Nov 29 04:20:57 np0005539563 podman[436867]: 2025-11-29 09:20:57.455010359 +0000 UTC m=+2.215760704 container died 522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:20:57 np0005539563 systemd[1]: var-lib-containers-storage-overlay-4eb55ee93cce7d00f82ed2b06217efeec8604f83f53837d2949ec4b01d37f653-merged.mount: Deactivated successfully.
Nov 29 04:20:57 np0005539563 podman[436867]: 2025-11-29 09:20:57.504851481 +0000 UTC m=+2.265601826 container remove 522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:20:57 np0005539563 systemd[1]: libpod-conmon-522643acf9a4934dd857b0062b85df5458efbb8010a9a7021b403c51c5052d4a.scope: Deactivated successfully.
Nov 29 04:20:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:20:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:20:57 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 3c7a2331-e1c6-41ca-b641-18b28ef1f10f does not exist
Nov 29 04:20:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 5b104a10-b6d9-4c97-9a60-3ab406c527a2 does not exist
Nov 29 04:20:57 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev df0144f6-35fb-45bc-b3a8-180e2e69e414 does not exist
Nov 29 04:20:57 np0005539563 nova_compute[252253]: 2025-11-29 09:20:57.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:57.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:20:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:20:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:58 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:20:58 np0005539563 nova_compute[252253]: 2025-11-29 09:20:58.970 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:20:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:20:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:20:59 np0005539563 nova_compute[252253]: 2025-11-29 09:20:59.240 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:20:59 np0005539563 nova_compute[252253]: 2025-11-29 09:20:59.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:20:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:20:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:20:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:20:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:00 np0005539563 nova_compute[252253]: 2025-11-29 09:21:00.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:00 np0005539563 nova_compute[252253]: 2025-11-29 09:21:00.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:21:00 np0005539563 nova_compute[252253]: 2025-11-29 09:21:00.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:21:00 np0005539563 nova_compute[252253]: 2025-11-29 09:21:00.694 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:21:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:01.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:01.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:03 np0005539563 nova_compute[252253]: 2025-11-29 09:21:03.971 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:04 np0005539563 nova_compute[252253]: 2025-11-29 09:21:04.241 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:04 np0005539563 nova_compute[252253]: 2025-11-29 09:21:04.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:04 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:21:04.998 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:21:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:21:04.999 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:21:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:21:04.999 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:21:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:05.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:06 np0005539563 podman[436971]: 2025-11-29 09:21:06.528055116 +0000 UTC m=+0.069926328 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 04:21:06 np0005539563 podman[436972]: 2025-11-29 09:21:06.534602743 +0000 UTC m=+0.076250049 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:21:06 np0005539563 podman[436973]: 2025-11-29 09:21:06.558599875 +0000 UTC m=+0.101756962 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:21:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:07 np0005539563 nova_compute[252253]: 2025-11-29 09:21:07.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:07 np0005539563 nova_compute[252253]: 2025-11-29 09:21:07.704 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:21:07 np0005539563 nova_compute[252253]: 2025-11-29 09:21:07.704 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:21:07 np0005539563 nova_compute[252253]: 2025-11-29 09:21:07.704 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:21:07 np0005539563 nova_compute[252253]: 2025-11-29 09:21:07.704 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:21:07 np0005539563 nova_compute[252253]: 2025-11-29 09:21:07.705 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:21:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:21:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738230363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:21:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.126 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.273 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.275 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4053MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.275 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.276 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.504 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.504 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.528 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:21:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:21:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2992879480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.952 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.959 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.973 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.977 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.978 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:21:08 np0005539563 nova_compute[252253]: 2025-11-29 09:21:08.979 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:21:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:09.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:09 np0005539563 nova_compute[252253]: 2025-11-29 09:21:09.241 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:21:13
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'backups']
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:21:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:21:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:21:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:13 np0005539563 nova_compute[252253]: 2025-11-29 09:21:13.974 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:14 np0005539563 nova_compute[252253]: 2025-11-29 09:21:14.242 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:15.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:15.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:21:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:21:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:17.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:18 np0005539563 nova_compute[252253]: 2025-11-29 09:21:18.980 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:19 np0005539563 nova_compute[252253]: 2025-11-29 09:21:19.244 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:21.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:23.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:23 np0005539563 nova_compute[252253]: 2025-11-29 09:21:23.983 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:24 np0005539563 nova_compute[252253]: 2025-11-29 09:21:24.246 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:21:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:21:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:25.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:25.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:27.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:28 np0005539563 nova_compute[252253]: 2025-11-29 09:21:28.985 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:29.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:29 np0005539563 nova_compute[252253]: 2025-11-29 09:21:29.247 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:29.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:32 np0005539563 nova_compute[252253]: 2025-11-29 09:21:32.974 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:33.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:33 np0005539563 nova_compute[252253]: 2025-11-29 09:21:33.985 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:34 np0005539563 nova_compute[252253]: 2025-11-29 09:21:34.249 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:35.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:37.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:37 np0005539563 podman[437196]: 2025-11-29 09:21:37.494131738 +0000 UTC m=+0.053047370 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:21:37 np0005539563 podman[437197]: 2025-11-29 09:21:37.519986499 +0000 UTC m=+0.067798009 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 04:21:37 np0005539563 podman[437198]: 2025-11-29 09:21:37.527426182 +0000 UTC m=+0.080172987 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:21:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:39 np0005539563 nova_compute[252253]: 2025-11-29 09:21:39.031 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:39.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:39 np0005539563 nova_compute[252253]: 2025-11-29 09:21:39.250 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:41.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:43.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:21:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:21:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:44 np0005539563 nova_compute[252253]: 2025-11-29 09:21:44.033 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4406: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:44 np0005539563 nova_compute[252253]: 2025-11-29 09:21:44.252 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:45.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:45.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:47.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:47.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:47 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:49 np0005539563 nova_compute[252253]: 2025-11-29 09:21:49.036 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:49.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:49 np0005539563 nova_compute[252253]: 2025-11-29 09:21:49.254 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:51.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:21:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:21:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:52 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:53.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:53 np0005539563 nova_compute[252253]: 2025-11-29 09:21:53.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:53 np0005539563 nova_compute[252253]: 2025-11-29 09:21:53.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:21:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:21:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:21:54 np0005539563 nova_compute[252253]: 2025-11-29 09:21:54.068 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:54 np0005539563 nova_compute[252253]: 2025-11-29 09:21:54.257 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:55.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:55 np0005539563 nova_compute[252253]: 2025-11-29 09:21:55.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:55.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:57.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:57 np0005539563 nova_compute[252253]: 2025-11-29 09:21:57.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:57 np0005539563 nova_compute[252253]: 2025-11-29 09:21:57.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:21:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:57.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:57 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:21:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:21:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 2f5019a5-96ea-4d75-ae9e-5e31b17d9638 does not exist
Nov 29 04:21:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev babd80b1-ed5c-4d7c-bdae-153c35000ef0 does not exist
Nov 29 04:21:58 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ce2a5e07-f10b-4703-ab30-f0044d8f9001 does not exist
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:21:58 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:21:59 np0005539563 nova_compute[252253]: 2025-11-29 09:21:59.146 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:21:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:21:59 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:21:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:21:59.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:59 np0005539563 nova_compute[252253]: 2025-11-29 09:21:59.259 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.591602327 +0000 UTC m=+0.050101511 container create 361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 04:21:59 np0005539563 systemd[1]: Started libpod-conmon-361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a.scope.
Nov 29 04:21:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.569263831 +0000 UTC m=+0.027763065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.666331073 +0000 UTC m=+0.124830227 container init 361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.674725231 +0000 UTC m=+0.133224375 container start 361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.679135521 +0000 UTC m=+0.137634695 container attach 361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:21:59 np0005539563 great_mendel[437604]: 167 167
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.682314827 +0000 UTC m=+0.140813971 container died 361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:21:59 np0005539563 systemd[1]: libpod-361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a.scope: Deactivated successfully.
Nov 29 04:21:59 np0005539563 systemd[1]: var-lib-containers-storage-overlay-cdce9de0a46027cc915280ef186586e068bbce291a75c4fd14302448cd935ab9-merged.mount: Deactivated successfully.
Nov 29 04:21:59 np0005539563 podman[437587]: 2025-11-29 09:21:59.720221316 +0000 UTC m=+0.178720460 container remove 361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:21:59 np0005539563 systemd[1]: libpod-conmon-361e4f318cac45c70ee5389f116a320bfdaf14dfcc66e35cf690a715a253f62a.scope: Deactivated successfully.
Nov 29 04:21:59 np0005539563 podman[437628]: 2025-11-29 09:21:59.868090927 +0000 UTC m=+0.040775338 container create c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:21:59 np0005539563 systemd[1]: Started libpod-conmon-c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2.scope.
Nov 29 04:21:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:21:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:21:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:21:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:21:59 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:21:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77bbf9bad307f80d805ffbddbc10d039e3881103c71d94403c74a999aeb53aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:21:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77bbf9bad307f80d805ffbddbc10d039e3881103c71d94403c74a999aeb53aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:21:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77bbf9bad307f80d805ffbddbc10d039e3881103c71d94403c74a999aeb53aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:21:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77bbf9bad307f80d805ffbddbc10d039e3881103c71d94403c74a999aeb53aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:21:59 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c77bbf9bad307f80d805ffbddbc10d039e3881103c71d94403c74a999aeb53aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:21:59 np0005539563 podman[437628]: 2025-11-29 09:21:59.939320158 +0000 UTC m=+0.112004569 container init c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rubin, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:21:59 np0005539563 podman[437628]: 2025-11-29 09:21:59.851140916 +0000 UTC m=+0.023825347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:21:59 np0005539563 podman[437628]: 2025-11-29 09:21:59.948323022 +0000 UTC m=+0.121007433 container start c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:22:00 np0005539563 podman[437628]: 2025-11-29 09:22:00.096692757 +0000 UTC m=+0.269377168 container attach c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rubin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:22:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.242191) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120242238, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 859, "num_deletes": 251, "total_data_size": 1276599, "memory_usage": 1302144, "flush_reason": "Manual Compaction"}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120253501, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 1262501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90485, "largest_seqno": 91343, "table_properties": {"data_size": 1258121, "index_size": 2031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9610, "raw_average_key_size": 19, "raw_value_size": 1249427, "raw_average_value_size": 2560, "num_data_blocks": 90, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764408048, "oldest_key_time": 1764408048, "file_creation_time": 1764408120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 11403 microseconds, and 5090 cpu microseconds.
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253582) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 1262501 bytes OK
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.253618) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.255903) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.255919) EVENT_LOG_v1 {"time_micros": 1764408120255913, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.255948) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1272471, prev total WAL file size 1272471, number of live WAL files 2.
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.256653) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(1232KB)], [206(14MB)]
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120256727, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 16051923, "oldest_snapshot_seqno": -1}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 12262 keys, 14013808 bytes, temperature: kUnknown
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120352313, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 14013808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13937163, "index_size": 44916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30661, "raw_key_size": 326342, "raw_average_key_size": 26, "raw_value_size": 13725236, "raw_average_value_size": 1119, "num_data_blocks": 1696, "num_entries": 12262, "num_filter_entries": 12262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400093, "oldest_key_time": 0, "file_creation_time": 1764408120, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c1f3df60-9c98-4a69-b455-f1b44f64a88d", "db_session_id": "JQEAAIL9DQ5VZJBMNK3S", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.352605) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 14013808 bytes
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.354628) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.8 rd, 146.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(23.8) write-amplify(11.1) OK, records in: 12777, records dropped: 515 output_compression: NoCompression
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.354645) EVENT_LOG_v1 {"time_micros": 1764408120354637, "job": 130, "event": "compaction_finished", "compaction_time_micros": 95673, "compaction_time_cpu_micros": 31670, "output_level": 6, "num_output_files": 1, "total_output_size": 14013808, "num_input_records": 12777, "num_output_records": 12262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120354931, "job": 130, "event": "table_file_deletion", "file_number": 208}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764408120356929, "job": 130, "event": "table_file_deletion", "file_number": 206}
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.256505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.357147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.357155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.357157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.357159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539563 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/29-09:22:00.357161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:22:00 np0005539563 nifty_rubin[437644]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:22:00 np0005539563 nifty_rubin[437644]: --> relative data size: 1.0
Nov 29 04:22:00 np0005539563 nifty_rubin[437644]: --> All data devices are unavailable
Nov 29 04:22:00 np0005539563 systemd[1]: libpod-c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2.scope: Deactivated successfully.
Nov 29 04:22:00 np0005539563 podman[437628]: 2025-11-29 09:22:00.754301685 +0000 UTC m=+0.926986096 container died c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:22:00 np0005539563 systemd[1]: var-lib-containers-storage-overlay-c77bbf9bad307f80d805ffbddbc10d039e3881103c71d94403c74a999aeb53aa-merged.mount: Deactivated successfully.
Nov 29 04:22:00 np0005539563 podman[437628]: 2025-11-29 09:22:00.811494056 +0000 UTC m=+0.984178467 container remove c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:22:00 np0005539563 systemd[1]: libpod-conmon-c773c14d3e1e7187e1aa12b9aed0753be7399613539059625f9729850bbac3f2.scope: Deactivated successfully.
Nov 29 04:22:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:01.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.487317669 +0000 UTC m=+0.047615313 container create 3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:22:01 np0005539563 systemd[1]: Started libpod-conmon-3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79.scope.
Nov 29 04:22:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.46933112 +0000 UTC m=+0.029628774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.582266593 +0000 UTC m=+0.142564247 container init 3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.591520405 +0000 UTC m=+0.151818069 container start 3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.595178144 +0000 UTC m=+0.155475798 container attach 3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 04:22:01 np0005539563 nostalgic_hugle[437831]: 167 167
Nov 29 04:22:01 np0005539563 systemd[1]: libpod-3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79.scope: Deactivated successfully.
Nov 29 04:22:01 np0005539563 conmon[437831]: conmon 3835c4b10d0907bd1787 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79.scope/container/memory.events
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.600164929 +0000 UTC m=+0.160462593 container died 3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:22:01 np0005539563 systemd[1]: var-lib-containers-storage-overlay-83fdc8b70656defe4f82d288593f85b5790964e01e19b6b5f47c1ff00df98390-merged.mount: Deactivated successfully.
Nov 29 04:22:01 np0005539563 podman[437815]: 2025-11-29 09:22:01.644625716 +0000 UTC m=+0.204923390 container remove 3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:22:01 np0005539563 systemd[1]: libpod-conmon-3835c4b10d0907bd1787e1e74c8ec41c20ff3060793eb7622b831e148e0f7e79.scope: Deactivated successfully.
Nov 29 04:22:01 np0005539563 nova_compute[252253]: 2025-11-29 09:22:01.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:01 np0005539563 nova_compute[252253]: 2025-11-29 09:22:01.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:22:01 np0005539563 nova_compute[252253]: 2025-11-29 09:22:01.677 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:22:01 np0005539563 nova_compute[252253]: 2025-11-29 09:22:01.696 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:22:01 np0005539563 nova_compute[252253]: 2025-11-29 09:22:01.696 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:01 np0005539563 podman[437854]: 2025-11-29 09:22:01.856775979 +0000 UTC m=+0.057476049 container create f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:22:01 np0005539563 podman[437854]: 2025-11-29 09:22:01.824180526 +0000 UTC m=+0.024880646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:22:01 np0005539563 systemd[1]: Started libpod-conmon-f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3.scope.
Nov 29 04:22:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:01 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:22:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e03617be8403c005f8ea69dba770c1290fc22d25319e48ca4e5a21397e389f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e03617be8403c005f8ea69dba770c1290fc22d25319e48ca4e5a21397e389f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e03617be8403c005f8ea69dba770c1290fc22d25319e48ca4e5a21397e389f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:01 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e03617be8403c005f8ea69dba770c1290fc22d25319e48ca4e5a21397e389f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:01 np0005539563 podman[437854]: 2025-11-29 09:22:01.972345605 +0000 UTC m=+0.173045665 container init f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hypatia, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 04:22:01 np0005539563 podman[437854]: 2025-11-29 09:22:01.984086463 +0000 UTC m=+0.184786503 container start f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 04:22:01 np0005539563 podman[437854]: 2025-11-29 09:22:01.98840945 +0000 UTC m=+0.189109520 container attach f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 04:22:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]: {
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:    "0": [
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:        {
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "devices": [
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "/dev/loop3"
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            ],
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "lv_name": "ceph_lv0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "lv_size": "7511998464",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "name": "ceph_lv0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "tags": {
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.cluster_name": "ceph",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.crush_device_class": "",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.encrypted": "0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.osd_id": "0",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.type": "block",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:                "ceph.vdo": "0"
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            },
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "type": "block",
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:            "vg_name": "ceph_vg0"
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:        }
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]:    ]
Nov 29 04:22:02 np0005539563 practical_hypatia[437871]: }
Nov 29 04:22:02 np0005539563 systemd[1]: libpod-f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3.scope: Deactivated successfully.
Nov 29 04:22:02 np0005539563 podman[437880]: 2025-11-29 09:22:02.775620313 +0000 UTC m=+0.024741861 container died f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hypatia, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:22:02 np0005539563 systemd[1]: var-lib-containers-storage-overlay-e0e03617be8403c005f8ea69dba770c1290fc22d25319e48ca4e5a21397e389f-merged.mount: Deactivated successfully.
Nov 29 04:22:02 np0005539563 podman[437880]: 2025-11-29 09:22:02.827729227 +0000 UTC m=+0.076850775 container remove f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:22:02 np0005539563 systemd[1]: libpod-conmon-f640503f2024d25013d3ebdc9403a7aa6c3b7371bf6a9b2f50b0c98a7a68f1f3.scope: Deactivated successfully.
Nov 29 04:22:02 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:03.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.445666389 +0000 UTC m=+0.041087316 container create 4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:22:03 np0005539563 systemd[1]: Started libpod-conmon-4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530.scope.
Nov 29 04:22:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.427203948 +0000 UTC m=+0.022624895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.534286472 +0000 UTC m=+0.129707419 container init 4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.548097957 +0000 UTC m=+0.143518884 container start 4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.551489409 +0000 UTC m=+0.146910336 container attach 4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 04:22:03 np0005539563 dreamy_wilbur[438053]: 167 167
Nov 29 04:22:03 np0005539563 systemd[1]: libpod-4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530.scope: Deactivated successfully.
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.555260722 +0000 UTC m=+0.150681649 container died 4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 04:22:03 np0005539563 systemd[1]: var-lib-containers-storage-overlay-9b8b6a348e05649b6dbb2d7de949e94d88ef3fb1f6e1cfd84cbba742e44a97d3-merged.mount: Deactivated successfully.
Nov 29 04:22:03 np0005539563 podman[438036]: 2025-11-29 09:22:03.595902274 +0000 UTC m=+0.191323201 container remove 4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 04:22:03 np0005539563 systemd[1]: libpod-conmon-4ea7833153a0a1d20ceb19d77e7617c0f7eef4fbc29985780f70428303972530.scope: Deactivated successfully.
Nov 29 04:22:03 np0005539563 podman[438076]: 2025-11-29 09:22:03.78672905 +0000 UTC m=+0.049324958 container create f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:22:03 np0005539563 systemd[1]: Started libpod-conmon-f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5.scope.
Nov 29 04:22:03 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:22:03 np0005539563 podman[438076]: 2025-11-29 09:22:03.763456849 +0000 UTC m=+0.026052777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:22:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf5637db7c86f5f15b3b3c7553f69434da9f4d01f1e01eea021e4ab97302009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf5637db7c86f5f15b3b3c7553f69434da9f4d01f1e01eea021e4ab97302009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf5637db7c86f5f15b3b3c7553f69434da9f4d01f1e01eea021e4ab97302009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:03 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf5637db7c86f5f15b3b3c7553f69434da9f4d01f1e01eea021e4ab97302009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:22:03 np0005539563 podman[438076]: 2025-11-29 09:22:03.876170266 +0000 UTC m=+0.138766184 container init f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 04:22:03 np0005539563 podman[438076]: 2025-11-29 09:22:03.883939047 +0000 UTC m=+0.146534955 container start f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:22:03 np0005539563 podman[438076]: 2025-11-29 09:22:03.887824162 +0000 UTC m=+0.150420090 container attach f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:22:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:04 np0005539563 nova_compute[252253]: 2025-11-29 09:22:04.149 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:04 np0005539563 nova_compute[252253]: 2025-11-29 09:22:04.260 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:04 np0005539563 nova_compute[252253]: 2025-11-29 09:22:04.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]: {
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:        "osd_id": 0,
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:        "type": "bluestore"
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]:    }
Nov 29 04:22:04 np0005539563 stupefied_sammet[438092]: }
Nov 29 04:22:04 np0005539563 systemd[1]: libpod-f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5.scope: Deactivated successfully.
Nov 29 04:22:04 np0005539563 podman[438113]: 2025-11-29 09:22:04.932390706 +0000 UTC m=+0.041955259 container died f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 04:22:04 np0005539563 systemd[1]: var-lib-containers-storage-overlay-abf5637db7c86f5f15b3b3c7553f69434da9f4d01f1e01eea021e4ab97302009-merged.mount: Deactivated successfully.
Nov 29 04:22:04 np0005539563 podman[438113]: 2025-11-29 09:22:04.989448795 +0000 UTC m=+0.099013308 container remove f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:22:04 np0005539563 systemd[1]: libpod-conmon-f8a30e98092222c5390a67736253c70be6d7999d38e582a57eb274355e1c66c5.scope: Deactivated successfully.
Nov 29 04:22:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:22:05.000 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:22:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:22:05.000 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:22:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:22:05.000 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:22:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:22:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:22:05 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:22:05 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:22:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 8e52483a-1a22-4e0a-8b40-ed6a829daeee does not exist
Nov 29 04:22:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ee40e48b-5e0c-422e-85c4-15509a563ece does not exist
Nov 29 04:22:05 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e0607594-737c-4c88-a598-864e94c23e8f does not exist
Nov 29 04:22:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:05.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:22:05 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:22:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:05.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:07.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:08 np0005539563 podman[438181]: 2025-11-29 09:22:08.529603882 +0000 UTC m=+0.077489063 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:22:08 np0005539563 podman[438180]: 2025-11-29 09:22:08.555521785 +0000 UTC m=+0.103976011 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 04:22:08 np0005539563 podman[438182]: 2025-11-29 09:22:08.563953134 +0000 UTC m=+0.111811654 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.151 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:09.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.261 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.771 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.772 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.772 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.772 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:22:09 np0005539563 nova_compute[252253]: 2025-11-29 09:22:09.772 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:22:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:09.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
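Annotation: the radosgw "beast" access lines above follow a fixed layout (request pointer, client IP, user, timestamp, request line, HTTP status, bytes, latency). A minimal parser sketch for these haproxy-style health probes; the field names and regex are illustrative, not part of radosgw itself:

```python
import re

# Illustrative pattern for the beast access lines seen in this journal:
# beast: 0x...: <ip> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
BEAST_RE = re.compile(
    r'beast: (?P<ptr>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line: str) -> dict:
    """Extract the access-log fields from one beast line, or raise ValueError."""
    m = BEAST_RE.search(line)
    if not m:
        raise ValueError("not a beast access line")
    d = m.groupdict()
    d["status"] = int(d["status"])
    d["bytes"] = int(d["bytes"])
    d["latency"] = float(d["latency"])
    return d
```

Applied to the lines above, this shows two load-balancer front ends (192.168.122.100 and .102) probing `HEAD /` every two seconds and getting 200s back.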
Nov 29 04:22:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:22:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288147277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.257 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.411 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.412 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4057MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.412 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.413 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.548 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.549 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:22:10 np0005539563 nova_compute[252253]: 2025-11-29 09:22:10.590 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:22:10 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:22:10 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179083472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
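Annotation: each resource-audit pass above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (the mon audit channel logs the matching `{"prefix": "df"}` dispatch). A hedged sketch of issuing that query and reading cluster totals from the JSON; the `stats.total_*_bytes` keys reflect recent Ceph releases and should be treated as an assumption, and the sample payload is synthetic, mirroring the pgmap figures above:

```python
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", user="openstack"):
    """Run `ceph df --format=json` the way the nova audit above does."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
    return json.loads(out)

def free_gib(df: dict) -> float:
    """Cluster-wide available capacity in GiB from the top-level stats block."""
    return df["stats"]["total_avail_bytes"] / 2**30

# Synthetic payload matching the pgmap lines (19 GiB avail of 21 GiB total):
sample = {"stats": {"total_bytes": 21 * 2**30,
                    "total_used_bytes": int(1.6 * 2**30),
                    "total_avail_bytes": 19 * 2**30}}
```

Nova's libvirt driver uses this figure to report `free_disk` for RBD-backed ephemeral storage, which is why the call recurs twice per periodic task.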
Nov 29 04:22:11 np0005539563 nova_compute[252253]: 2025-11-29 09:22:11.007 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:22:11 np0005539563 nova_compute[252253]: 2025-11-29 09:22:11.013 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:22:11 np0005539563 nova_compute[252253]: 2025-11-29 09:22:11.068 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:22:11 np0005539563 nova_compute[252253]: 2025-11-29 09:22:11.070 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:22:11 np0005539563 nova_compute[252253]: 2025-11-29 09:22:11.070 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
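Annotation: the acquire/audit/release trace above is oslo_concurrency's named-lock pattern (`lockutils.synchronized`), which records how long each caller waited for and then held the `compute_resources` lock. A stdlib-only approximation of its log shape; the real decorator lives in oslo_concurrency, and this sketch only mimics the "waited"/"held" messages seen in the journal:

```python
import threading
import time
from contextlib import contextmanager

_locks = {}         # named locks, keyed by string, as in lockutils
messages = []       # stands in for the DEBUG log channel above

@contextmanager
def synchronized(name):
    """Named-lock context manager mimicking oslo's waited/held DEBUG lines."""
    lock = _locks.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    with lock:
        waited = time.monotonic() - t0
        messages.append(f'Lock "{name}" acquired :: waited {waited:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            held = time.monotonic() - t1
            messages.append(f'Lock "{name}" "released" :: held {held:.3f}s')

with synchronized("compute_resources"):
    pass  # the resource-tracker audit would run here (0.658s in the log above)
```

The 0.658s hold above is dominated by the two `ceph df` subprocess calls, which run while the lock is held.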
Nov 29 04:22:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:11.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:11.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:22:13
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:22:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:13.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:22:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:22:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:14 np0005539563 nova_compute[252253]: 2025-11-29 09:22:14.154 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:14 np0005539563 nova_compute[252253]: 2025-11-29 09:22:14.262 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:15.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:22:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:22:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:17.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:17 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:19 np0005539563 nova_compute[252253]: 2025-11-29 09:22:19.158 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:19.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:19 np0005539563 nova_compute[252253]: 2025-11-29 09:22:19.264 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:21.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:21.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:22 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:23.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:24 np0005539563 nova_compute[252253]: 2025-11-29 09:22:24.161 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:24 np0005539563 nova_compute[252253]: 2025-11-29 09:22:24.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:22:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
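Annotation: the pg_autoscaler lines above are reproducible arithmetic: raw PG target = capacity ratio × bias × a cluster-wide PG budget, then rounded to a power of two. The budget of 300 used below is inferred from the logged values (it would correspond to `mon_target_pg_per_osd` × OSD count on this cluster) and is an assumption; the real mgr module additionally keeps the current `pg_num` unless the target is off by a large factor, which is why tiny pools above stay at 32:

```python
def pg_target(capacity_ratio, bias, pg_budget=300):
    """Raw PG target before quantization, as printed by pg_autoscaler above.
    pg_budget=300 is inferred from the logged ratios, not read from config."""
    return capacity_ratio * bias * pg_budget

def quantize(target, minimum=1):
    """Round up to the next power of two, with a floor (simplified: the real
    autoscaler also refuses small adjustments around the current pg_num)."""
    n = max(minimum, 1)
    while n < target:
        n *= 2
    return n

# Reproduces the '.mgr' line: ratio 2.0538e-05, bias 1.0 -> target ~0.00616 -> 1
t_mgr = pg_target(2.0538165363856318e-05, 1.0)
```

The same formula with bias 4.0 reproduces the `cephfs.cephfs.meta` target of ~0.0017448, confirming the bias is a plain multiplier on metadata pools.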
Nov 29 04:22:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:25.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:27.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:29 np0005539563 nova_compute[252253]: 2025-11-29 09:22:29.163 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:29 np0005539563 nova_compute[252253]: 2025-11-29 09:22:29.267 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:29.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:31.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:31.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:33.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:33.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:34 np0005539563 nova_compute[252253]: 2025-11-29 09:22:34.066 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:34 np0005539563 nova_compute[252253]: 2025-11-29 09:22:34.165 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:34 np0005539563 nova_compute[252253]: 2025-11-29 09:22:34.268 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:35.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:35.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:37.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:37.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4433: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:39 np0005539563 nova_compute[252253]: 2025-11-29 09:22:39.168 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:39 np0005539563 nova_compute[252253]: 2025-11-29 09:22:39.269 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:39 np0005539563 podman[438403]: 2025-11-29 09:22:39.528551244 +0000 UTC m=+0.075575461 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 04:22:39 np0005539563 podman[438404]: 2025-11-29 09:22:39.532144091 +0000 UTC m=+0.083610859 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 04:22:39 np0005539563 podman[438405]: 2025-11-29 09:22:39.593250359 +0000 UTC m=+0.138174829 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 04:22:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:22:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:39.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:22:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:41.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:22:43 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:22:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:43 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:43 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:43 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:43.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:44 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:44 np0005539563 nova_compute[252253]: 2025-11-29 09:22:44.197 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:44 np0005539563 nova_compute[252253]: 2025-11-29 09:22:44.271 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:45.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:45 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:45 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:45 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:45.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:46 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:47 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:47 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:47 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:48 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:48 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:49 np0005539563 nova_compute[252253]: 2025-11-29 09:22:49.199 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:49 np0005539563 nova_compute[252253]: 2025-11-29 09:22:49.273 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:49 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:49 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:49 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:49.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:50 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:51 np0005539563 nova_compute[252253]: 2025-11-29 09:22:51.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:51 np0005539563 nova_compute[252253]: 2025-11-29 09:22:51.679 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 04:22:51 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:51 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:51 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:51.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:52 np0005539563 nova_compute[252253]: 2025-11-29 09:22:52.038 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 04:22:52 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:53 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:53 np0005539563 nova_compute[252253]: 2025-11-29 09:22:53.033 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:53 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:53 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:53 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:54 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:22:54 np0005539563 nova_compute[252253]: 2025-11-29 09:22:54.201 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:54 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 04:22:54 np0005539563 nova_compute[252253]: 2025-11-29 09:22:54.274 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:54 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 04:22:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:55 np0005539563 radosgw[93236]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 04:22:55 np0005539563 nova_compute[252253]: 2025-11-29 09:22:55.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:55 np0005539563 nova_compute[252253]: 2025-11-29 09:22:55.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 04:22:55 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:55 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:55 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:55.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:56 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 04:22:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:22:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:57.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:22:57 np0005539563 nova_compute[252253]: 2025-11-29 09:22:57.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:57 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:57 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:57 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:57.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:58 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:22:58 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 04:22:58 np0005539563 nova_compute[252253]: 2025-11-29 09:22:58.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:58 np0005539563 nova_compute[252253]: 2025-11-29 09:22:58.679 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:22:59 np0005539563 nova_compute[252253]: 2025-11-29 09:22:59.203 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:59 np0005539563 nova_compute[252253]: 2025-11-29 09:22:59.276 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:22:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:22:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:22:59 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:22:59 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:22:59 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:22:59.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:00 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 04:23:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:01 np0005539563 nova_compute[252253]: 2025-11-29 09:23:01.676 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:01 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:01 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:01 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:01.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:02 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 29 04:23:02 np0005539563 nova_compute[252253]: 2025-11-29 09:23:02.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:02 np0005539563 nova_compute[252253]: 2025-11-29 09:23:02.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:23:02 np0005539563 nova_compute[252253]: 2025-11-29 09:23:02.678 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:23:03 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:03 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:03 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:03 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:03.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:04 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 29 04:23:04 np0005539563 nova_compute[252253]: 2025-11-29 09:23:04.205 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:04 np0005539563 nova_compute[252253]: 2025-11-29 09:23:04.278 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:23:05.000 158990 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:23:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:23:05.001 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:23:05 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:23:05.001 158990 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:23:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:05 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:05 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:05 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:05.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:06 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 29 04:23:06 np0005539563 podman[438700]: 2025-11-29 09:23:06.195120573 +0000 UTC m=+0.054672244 container exec ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:23:06 np0005539563 podman[438700]: 2025-11-29 09:23:06.294105918 +0000 UTC m=+0.153657569 container exec_died ed38017505eaa645144043f7ad50efda382473a87b148704c14fc0126fdc15f4 (image=quay.io/ceph/ceph:v18, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:23:06 np0005539563 podman[438852]: 2025-11-29 09:23:06.862497706 +0000 UTC m=+0.049191755 container exec fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 04:23:06 np0005539563 podman[438852]: 2025-11-29 09:23:06.871986304 +0000 UTC m=+0.058680363 container exec_died fa77fab940461604ceeb8fb05f4f79c5b10f2db3fef867d2e0f8cfcd449c02b9 (image=quay.io/ceph/haproxy:2.3, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-haproxy-rgw-default-compute-0-aoijdn)
Nov 29 04:23:07 np0005539563 podman[438919]: 2025-11-29 09:23:07.053900109 +0000 UTC m=+0.046820042 container exec 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Nov 29 04:23:07 np0005539563 podman[438919]: 2025-11-29 09:23:07.06725882 +0000 UTC m=+0.060178733 container exec_died 3efea28609b54de48be203e5b83d8b3d8551e4a54a7a2ebbc89a1912a74b724e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-keepalived-rgw-default-compute-0-uxbosd, build-date=2023-02-22T09:23:20, architecture=x86_64, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Nov 29 04:23:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:23:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:07 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:23:07 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:07 np0005539563 nova_compute[252253]: 2025-11-29 09:23:07.702 252257 DEBUG nova.compute.manager [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:23:07 np0005539563 nova_compute[252253]: 2025-11-29 09:23:07.704 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:07 np0005539563 nova_compute[252253]: 2025-11-29 09:23:07.797 252257 DEBUG oslo_concurrency.processutils [None req-bfbc2e22-d063-44d1-a227-b6cb69f51f4b 7d32840c789849a29c7630e25f803b3c 532b69b8d9eb42e8a1aed36b5ddb038a - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:23:07 np0005539563 nova_compute[252253]: 2025-11-29 09:23:07.830 252257 DEBUG oslo_concurrency.processutils [None req-bfbc2e22-d063-44d1-a227-b6cb69f51f4b 7d32840c789849a29c7630e25f803b3c 532b69b8d9eb42e8a1aed36b5ddb038a - - default default] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:23:07 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:07 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:07 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:07.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:08 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev de1f1683-8cbb-43fe-9fd3-bdf38f13e465 does not exist
Nov 29 04:23:08 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 0750223e-ec8f-47b9-9e2d-afe641581f92 does not exist
Nov 29 04:23:08 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev ebd2c7ae-d305-446f-bbc2-c9dc6544784a does not exist
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:08 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:23:08 np0005539563 podman[439221]: 2025-11-29 09:23:08.968497762 +0000 UTC m=+0.037767705 container create cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 04:23:09 np0005539563 systemd[1]: Started libpod-conmon-cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251.scope.
Nov 29 04:23:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:23:09 np0005539563 podman[439221]: 2025-11-29 09:23:08.952516538 +0000 UTC m=+0.021786501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:23:09 np0005539563 podman[439221]: 2025-11-29 09:23:09.048829032 +0000 UTC m=+0.118098995 container init cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:23:09 np0005539563 podman[439221]: 2025-11-29 09:23:09.056834239 +0000 UTC m=+0.126104182 container start cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 04:23:09 np0005539563 podman[439221]: 2025-11-29 09:23:09.061069433 +0000 UTC m=+0.130339396 container attach cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:23:09 np0005539563 gracious_matsumoto[439237]: 167 167
Nov 29 04:23:09 np0005539563 systemd[1]: libpod-cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251.scope: Deactivated successfully.
Nov 29 04:23:09 np0005539563 podman[439221]: 2025-11-29 09:23:09.063057577 +0000 UTC m=+0.132327530 container died cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:23:09 np0005539563 systemd[1]: var-lib-containers-storage-overlay-448afd7664c405a7dceb570a7eb40d244cdc3a2d14fc8ea25b216a70642463bf-merged.mount: Deactivated successfully.
Nov 29 04:23:09 np0005539563 podman[439221]: 2025-11-29 09:23:09.11365396 +0000 UTC m=+0.182923903 container remove cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 04:23:09 np0005539563 systemd[1]: libpod-conmon-cb47f1ec94ca0787b1f2ba461964fedc65913c56fd20bdf18c1dd8b87bf36251.scope: Deactivated successfully.
Nov 29 04:23:09 np0005539563 nova_compute[252253]: 2025-11-29 09:23:09.207 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:09 np0005539563 nova_compute[252253]: 2025-11-29 09:23:09.279 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:09 np0005539563 podman[439265]: 2025-11-29 09:23:09.284824453 +0000 UTC m=+0.039253556 container create 48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:23:09 np0005539563 systemd[1]: Started libpod-conmon-48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712.scope.
Nov 29 04:23:09 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:09 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:09 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:09 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:23:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f90df9ecca40797db3c01d24adc350907db5befab92620a8c2efe2d6a013a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:09 np0005539563 podman[439265]: 2025-11-29 09:23:09.268672055 +0000 UTC m=+0.023101188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:23:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f90df9ecca40797db3c01d24adc350907db5befab92620a8c2efe2d6a013a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f90df9ecca40797db3c01d24adc350907db5befab92620a8c2efe2d6a013a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f90df9ecca40797db3c01d24adc350907db5befab92620a8c2efe2d6a013a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:09 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f90df9ecca40797db3c01d24adc350907db5befab92620a8c2efe2d6a013a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:09 np0005539563 podman[439265]: 2025-11-29 09:23:09.378865134 +0000 UTC m=+0.133294257 container init 48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:23:09 np0005539563 podman[439265]: 2025-11-29 09:23:09.385277848 +0000 UTC m=+0.139706951 container start 48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 04:23:09 np0005539563 podman[439265]: 2025-11-29 09:23:09.390044236 +0000 UTC m=+0.144473359 container attach 48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:23:10 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:10 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:23:10 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:09.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:23:10 np0005539563 loving_brattain[439281]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:23:10 np0005539563 loving_brattain[439281]: --> relative data size: 1.0
Nov 29 04:23:10 np0005539563 loving_brattain[439281]: --> All data devices are unavailable
Nov 29 04:23:10 np0005539563 systemd[1]: libpod-48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712.scope: Deactivated successfully.
Nov 29 04:23:10 np0005539563 podman[439265]: 2025-11-29 09:23:10.172143872 +0000 UTC m=+0.926572975 container died 48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:23:10 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Nov 29 04:23:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-99f90df9ecca40797db3c01d24adc350907db5befab92620a8c2efe2d6a013a7-merged.mount: Deactivated successfully.
Nov 29 04:23:10 np0005539563 podman[439265]: 2025-11-29 09:23:10.266650305 +0000 UTC m=+1.021079408 container remove 48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:23:10 np0005539563 systemd[1]: libpod-conmon-48164085bd2fdd1ed08dd67772c27a6f428ccb4f1eb36b79257e4461657e3712.scope: Deactivated successfully.
Nov 29 04:23:10 np0005539563 podman[439298]: 2025-11-29 09:23:10.28194693 +0000 UTC m=+0.073067483 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:23:10 np0005539563 podman[439309]: 2025-11-29 09:23:10.28861031 +0000 UTC m=+0.076467435 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:23:10 np0005539563 podman[439311]: 2025-11-29 09:23:10.365425826 +0000 UTC m=+0.147074452 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:23:10 np0005539563 nova_compute[252253]: 2025-11-29 09:23:10.677 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.874146513 +0000 UTC m=+0.037631971 container create c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 04:23:10 np0005539563 systemd[1]: Started libpod-conmon-c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7.scope.
Nov 29 04:23:10 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.944492911 +0000 UTC m=+0.107978379 container init c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_northcutt, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.952453817 +0000 UTC m=+0.115939265 container start c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_northcutt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.856636388 +0000 UTC m=+0.020121856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.955330385 +0000 UTC m=+0.118815853 container attach c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_northcutt, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:23:10 np0005539563 elegant_northcutt[439525]: 167 167
Nov 29 04:23:10 np0005539563 systemd[1]: libpod-c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7.scope: Deactivated successfully.
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.95917804 +0000 UTC m=+0.122663488 container died c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_northcutt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 04:23:10 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ce10d9962ffa98648b2bf6e0c070d16b71f6523f1af4f6958c6d4b508a53fa23-merged.mount: Deactivated successfully.
Nov 29 04:23:10 np0005539563 podman[439509]: 2025-11-29 09:23:10.998712062 +0000 UTC m=+0.162197510 container remove c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_northcutt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 04:23:11 np0005539563 systemd[1]: libpod-conmon-c96f9ba3c9d6985520df3d6002ec350aae157d740698b1de5a334b52b513f5d7.scope: Deactivated successfully.
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.089 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.090 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.091 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.091 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.092 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:23:11 np0005539563 podman[439550]: 2025-11-29 09:23:11.153726097 +0000 UTC m=+0.036056399 container create 8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:23:11 np0005539563 systemd[1]: Started libpod-conmon-8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1.scope.
Nov 29 04:23:11 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:23:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0752d419ba77e465a0c90c35b28d07ea4a91acb95f52bbbe7a00b9a4c71282c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0752d419ba77e465a0c90c35b28d07ea4a91acb95f52bbbe7a00b9a4c71282c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0752d419ba77e465a0c90c35b28d07ea4a91acb95f52bbbe7a00b9a4c71282c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:11 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0752d419ba77e465a0c90c35b28d07ea4a91acb95f52bbbe7a00b9a4c71282c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:11 np0005539563 podman[439550]: 2025-11-29 09:23:11.139053099 +0000 UTC m=+0.021383301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:23:11 np0005539563 podman[439550]: 2025-11-29 09:23:11.248991431 +0000 UTC m=+0.131321633 container init 8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_germain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:23:11 np0005539563 podman[439550]: 2025-11-29 09:23:11.255260461 +0000 UTC m=+0.137590643 container start 8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:23:11 np0005539563 podman[439550]: 2025-11-29 09:23:11.258607292 +0000 UTC m=+0.140937494 container attach 8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 04:23:11 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:11 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:11 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:11.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:11 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:23:11 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786460659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.511 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.678 252257 WARNING nova.virt.libvirt.driver [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.680 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4008MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.680 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:23:11 np0005539563 nova_compute[252253]: 2025-11-29 09:23:11.681 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:23:12 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:12 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:12 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:12.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:12 np0005539563 friendly_germain[439568]: {
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:    "0": [
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:        {
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "devices": [
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "/dev/loop3"
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            ],
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "lv_name": "ceph_lv0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "lv_size": "7511998464",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=38a37ed2-442a-5e0d-a69a-881fdd186450,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=975ae8ee-a376-4084-87fd-232acabcaa54,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "lv_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "name": "ceph_lv0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "tags": {
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.block_uuid": "3ahbwq-mZLq-yTdm-TNAm-D2Xu-22sR-NXZc4A",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.cluster_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.cluster_name": "ceph",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.crush_device_class": "",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.encrypted": "0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.osd_fsid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.osd_id": "0",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.type": "block",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:                "ceph.vdo": "0"
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            },
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "type": "block",
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:            "vg_name": "ceph_vg0"
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:        }
Nov 29 04:23:12 np0005539563 friendly_germain[439568]:    ]
Nov 29 04:23:12 np0005539563 friendly_germain[439568]: }
Nov 29 04:23:12 np0005539563 systemd[1]: libpod-8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1.scope: Deactivated successfully.
Nov 29 04:23:12 np0005539563 conmon[439568]: conmon 8d6fdebdbb8dbccc4d23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1.scope/container/memory.events
Nov 29 04:23:12 np0005539563 podman[439550]: 2025-11-29 09:23:12.076087196 +0000 UTC m=+0.958417378 container died 8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:23:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-0752d419ba77e465a0c90c35b28d07ea4a91acb95f52bbbe7a00b9a4c71282c0-merged.mount: Deactivated successfully.
Nov 29 04:23:12 np0005539563 podman[439550]: 2025-11-29 09:23:12.126708869 +0000 UTC m=+1.009039051 container remove 8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_germain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:23:12 np0005539563 systemd[1]: libpod-conmon-8d6fdebdbb8dbccc4d23ec3a397ff5bbe623800cd20b5fc54f8137fce74c79d1.scope: Deactivated successfully.
Nov 29 04:23:12 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 0 B/s wr, 76 op/s
Nov 29 04:23:12 np0005539563 nova_compute[252253]: 2025-11-29 09:23:12.273 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:23:12 np0005539563 nova_compute[252253]: 2025-11-29 09:23:12.275 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:23:12 np0005539563 nova_compute[252253]: 2025-11-29 09:23:12.320 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.665147545 +0000 UTC m=+0.036683436 container create fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:23:12 np0005539563 systemd[1]: Started libpod-conmon-fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2.scope.
Nov 29 04:23:12 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.739276585 +0000 UTC m=+0.110812496 container init fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.649063148 +0000 UTC m=+0.020599069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.746857711 +0000 UTC m=+0.118393602 container start fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:23:12 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:23:12 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685331155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.749505403 +0000 UTC m=+0.121041294 container attach fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:23:12 np0005539563 stoic_proskuriakova[439837]: 167 167
Nov 29 04:23:12 np0005539563 systemd[1]: libpod-fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2.scope: Deactivated successfully.
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.751941539 +0000 UTC m=+0.123477430 container died fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:23:12 np0005539563 nova_compute[252253]: 2025-11-29 09:23:12.765 252257 DEBUG oslo_concurrency.processutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:23:12 np0005539563 systemd[1]: var-lib-containers-storage-overlay-af23ba59f63af95d71136d6651479c958d32fc2210a1461b4ca2ec679378aa54-merged.mount: Deactivated successfully.
Nov 29 04:23:12 np0005539563 nova_compute[252253]: 2025-11-29 09:23:12.771 252257 DEBUG nova.compute.provider_tree [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed in ProviderTree for provider: 190eff98-dce8-46c0-8a7d-870d6fa5cbbd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:23:12 np0005539563 podman[439820]: 2025-11-29 09:23:12.786874396 +0000 UTC m=+0.158410287 container remove fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:23:12 np0005539563 systemd[1]: libpod-conmon-fd9bade299407b7c850c4b2410946b59cd1ead794a57f0bf0e68565b533223e2.scope: Deactivated successfully.
Nov 29 04:23:12 np0005539563 podman[439863]: 2025-11-29 09:23:12.953487646 +0000 UTC m=+0.046785160 container create d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:23:12 np0005539563 systemd[1]: Started libpod-conmon-d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91.scope.
Nov 29 04:23:13 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:13 np0005539563 nova_compute[252253]: 2025-11-29 09:23:13.011 252257 DEBUG nova.scheduler.client.report [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Inventory has not changed for provider 190eff98-dce8-46c0-8a7d-870d6fa5cbbd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:23:13 np0005539563 nova_compute[252253]: 2025-11-29 09:23:13.012 252257 DEBUG nova.compute.resource_tracker [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:23:13 np0005539563 nova_compute[252253]: 2025-11-29 09:23:13.012 252257 DEBUG oslo_concurrency.lockutils [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:23:13 np0005539563 systemd[1]: Started libcrun container.
Nov 29 04:23:13 np0005539563 podman[439863]: 2025-11-29 09:23:12.936022402 +0000 UTC m=+0.029319916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:23:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba96cd1b861d9b50e5909fa2432459fc1cec486ebafe80844d31e5777b7c698e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba96cd1b861d9b50e5909fa2432459fc1cec486ebafe80844d31e5777b7c698e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba96cd1b861d9b50e5909fa2432459fc1cec486ebafe80844d31e5777b7c698e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:13 np0005539563 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba96cd1b861d9b50e5909fa2432459fc1cec486ebafe80844d31e5777b7c698e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:23:13 np0005539563 podman[439863]: 2025-11-29 09:23:13.047847365 +0000 UTC m=+0.141144879 container init d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_volhard, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:23:13 np0005539563 podman[439863]: 2025-11-29 09:23:13.054874296 +0000 UTC m=+0.148171780 container start d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_volhard, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:23:13 np0005539563 podman[439863]: 2025-11-29 09:23:13.057796496 +0000 UTC m=+0.151094000 container attach d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_volhard, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-29_09:23:13
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'backups', 'volumes']
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:23:13 np0005539563 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:23:13 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:13 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:13 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:13.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]: {
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:    "975ae8ee-a376-4084-87fd-232acabcaa54": {
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:        "ceph_fsid": "38a37ed2-442a-5e0d-a69a-881fdd186450",
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:        "osd_id": 0,
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:        "osd_uuid": "975ae8ee-a376-4084-87fd-232acabcaa54",
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:        "type": "bluestore"
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]:    }
Nov 29 04:23:13 np0005539563 jovial_volhard[439880]: }
Nov 29 04:23:14 np0005539563 systemd[1]: libpod-d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91.scope: Deactivated successfully.
Nov 29 04:23:14 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:14 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:14 np0005539563 conmon[439880]: conmon d388fe1039aa36355d17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91.scope/container/memory.events
Nov 29 04:23:14 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:14.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:14 np0005539563 podman[439863]: 2025-11-29 09:23:14.006233552 +0000 UTC m=+1.099531076 container died d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_volhard, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 04:23:14 np0005539563 systemd[1]: var-lib-containers-storage-overlay-ba96cd1b861d9b50e5909fa2432459fc1cec486ebafe80844d31e5777b7c698e-merged.mount: Deactivated successfully.
Nov 29 04:23:14 np0005539563 podman[439863]: 2025-11-29 09:23:14.08025916 +0000 UTC m=+1.173556654 container remove d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 04:23:14 np0005539563 systemd[1]: libpod-conmon-d388fe1039aa36355d17c4ee7ec93660c0f5886a3e30c72add0326e61380dd91.scope: Deactivated successfully.
Nov 29 04:23:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:23:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:14 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:23:14 np0005539563 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev d157f2e0-03eb-46de-99f4-ffd7b22868a2 does not exist
Nov 29 04:23:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev 63043152-893a-4602-a09f-affa210ff4d4 does not exist
Nov 29 04:23:14 np0005539563 ceph-mgr[74636]: [progress WARNING root] complete: ev e727aa91-51c2-4197-b4ef-e6dedc0feac5 does not exist
Nov 29 04:23:14 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 114 op/s
Nov 29 04:23:14 np0005539563 nova_compute[252253]: 2025-11-29 09:23:14.210 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:14 np0005539563 nova_compute[252253]: 2025-11-29 09:23:14.281 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:15 np0005539563 ceph-mon[74338]: from='mgr.14132 192.168.122.100:0/1723792043' entity='mgr.compute-0.rotard' 
Nov 29 04:23:15 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:15 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:15 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:16 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:16 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:23:16 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 125 op/s
Nov 29 04:23:16 np0005539563 systemd-logind[785]: New session 73 of user zuul.
Nov 29 04:23:16 np0005539563 systemd[1]: Started Session 73 of User zuul.
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:23:16 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:23:17 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:23:17 np0005539563 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:23:17 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:17 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:17 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:17.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:18 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:18 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:18 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:18 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:18.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:18 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 04:23:19 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51773 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:19 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45093 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:19 np0005539563 nova_compute[252253]: 2025-11-29 09:23:19.145 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:23:19.146 158990 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:ae:08', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:b6:42:a1:03:61'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 04:23:19 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:23:19.147 158990 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 04:23:19 np0005539563 nova_compute[252253]: 2025-11-29 09:23:19.211 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:19 np0005539563 nova_compute[252253]: 2025-11-29 09:23:19.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:19 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:19 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:19 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:19.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:19 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51788 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:19 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45102 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:20 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:20 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:20 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:20.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:20 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:23:20 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4180975379' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:23:20 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 04:23:21 np0005539563 ovn_metadata_agent[158985]: 2025-11-29 09:23:21.150 158990 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cb98fb5a-8fde-4aab-9a19-a76cfc927075, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 04:23:21 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:21 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:21 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:21.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:21 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48616 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:22 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:22 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:22 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:22.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:22 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 29 04:23:22 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48622 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:23 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:23 np0005539563 ovs-vsctl[440257]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 04:23:23 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:23 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:23 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:24 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:24 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:24 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:24.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:24 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 29 04:23:24 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:23:24 np0005539563 virtqemud[251807]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:23:24 np0005539563 nova_compute[252253]: 2025-11-29 09:23:24.261 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:24 np0005539563 nova_compute[252253]: 2025-11-29 09:23:24.284 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:23:24 np0005539563 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:23:24 np0005539563 lvm[440560]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:23:24 np0005539563 lvm[440560]: VG ceph_vg0 finished
Nov 29 04:23:24 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:23:24 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:24 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:23:24 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51815 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:25 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:25 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:25 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:25 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:25.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:23:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194017178' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:23:25 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51833 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:23:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:23:25 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45138 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:23:25 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:25 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:23:25 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759730485' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:23:26 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:26 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:26 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:23:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263854760' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:23:26 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51866 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:26 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:26.412+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:26 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:26 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45168 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:26 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:26.681+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:26 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:26 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:23:26 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/111379511' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: ops {prefix=ops} (starting...)
Nov 29 04:23:26 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:23:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037240265' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:23:27 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45192 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:27 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:27 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:23:27 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:23:27 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51908 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:27 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:23:27 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831362939' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:23:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:23:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt Can't run that command on an inactive MDS!
Nov 29 04:23:27 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45207 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:27 np0005539563 ceph-mds[93638]: mds.cephfs.compute-0.msknqt asok_command: status {prefix=status} (starting...)
Nov 29 04:23:27 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51932 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:28 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:28 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:28 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:28.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735868659' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48667 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3628530622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3628530622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48694 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105236450' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532790915' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.51998 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:28 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:28.913+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:23:28 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:23:28 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147489542' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:23:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52010 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:29 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:29.066+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:23:29 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:23:29 np0005539563 nova_compute[252253]: 2025-11-29 09:23:29.262 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:29 np0005539563 nova_compute[252253]: 2025-11-29 09:23:29.285 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/493985352' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:23:29 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:29 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:29 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:29.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724313417' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:23:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48724 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:29 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:29.559+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:29 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248402355' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:23:29 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1922656788' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:23:29 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52049 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:30 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:30 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:30 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:23:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598535447' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45312 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52067 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48751 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45321 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52085 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:23:30 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352921293' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:23:30 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48760 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45342 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52103 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4388506 data_alloc: 234881024 data_used: 24064000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391495680 unmapped: 85630976 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4388506 data_alloc: 234881024 data_used: 24064000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a606f000/0x0/0x1bfc00000, data 0x3037692/0x325f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391503872 unmapped: 85622784 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.060701370s of 20.124479294s, submitted: 21
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be262fc00 session 0x561be01a0780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391536640 unmapped: 85590016 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4415844 data_alloc: 234881024 data_used: 24064000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391536640 unmapped: 85590016 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391536640 unmapped: 85590016 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5cec000/0x0/0x1bfc00000, data 0x33ba692/0x35e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 85581824 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be262fc00 session 0x561be1b70780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 85565440 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 ms_handle_reset con 0x561be31d3400 session 0x561be31081e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be3440c00 session 0x561be3108960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be24bbc00 session 0x561be27712c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be3443000 session 0x561be358ed20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391577600 unmapped: 85549056 heap: 477126656 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be24bbc00 session 0x561be35b8780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be31d3400 session 0x561be0dd7a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 ms_handle_reset con 0x561be3440c00 session 0x561be1f06f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 418 ms_handle_reset con 0x561be262fc00 session 0x561be3109a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4573910 data_alloc: 234881024 data_used: 26492928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389677056 unmapped: 92717056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be3448800 session 0x561be1b6e960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389718016 unmapped: 92676096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be262fc00 session 0x561be35b8780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390823936 unmapped: 91570176 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be31d3400 session 0x561be358ed20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 ms_handle_reset con 0x561be3440c00 session 0x561be3108960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a4bbe000/0x0/0x1bfc00000, data 0x44e0c7f/0x470e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390823936 unmapped: 91570176 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604972 data_alloc: 234881024 data_used: 30162944
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 390856704 unmapped: 91537408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a4bbe000/0x0/0x1bfc00000, data 0x44e0c7f/0x470e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.804007530s of 14.155201912s, submitted: 47
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4bbc000/0x0/0x1bfc00000, data 0x44e27be/0x4711000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b42800 session 0x561be01a0780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b43000 session 0x561bdfcf7c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31d9400 session 0x561be2effe00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b43000 session 0x561be1821860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be1b42800 session 0x561be0de94a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be262fc00 session 0x561be35b92c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599404 data_alloc: 234881024 data_used: 30167040
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388038656 unmapped: 94355456 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389816320 unmapped: 92577792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4737000/0x0/0x1bfc00000, data 0x49687be/0x4b97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389816320 unmapped: 92577792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4636508 data_alloc: 234881024 data_used: 30179328
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389816320 unmapped: 92577792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31d3400 session 0x561be3108780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389832704 unmapped: 92561408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 389832704 unmapped: 92561408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391233536 unmapped: 91160576 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393994240 unmapped: 88399872 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691725 data_alloc: 251658240 data_used: 36134912
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393994240 unmapped: 88399872 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691725 data_alloc: 251658240 data_used: 36134912
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4733000/0x0/0x1bfc00000, data 0x496c7be/0x4b9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394002432 unmapped: 88391680 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.524549484s of 21.680967331s, submitted: 45
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4723424 data_alloc: 251658240 data_used: 40607744
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397352960 unmapped: 85041152 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a456c000/0x0/0x1bfc00000, data 0x4b337be/0x4d62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397418496 unmapped: 84975616 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397418496 unmapped: 84975616 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be262fc00 session 0x561be0196b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31d9400 session 0x561be35b90e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be3440c00 session 0x561be1b6f0e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be31ce800 session 0x561be315a960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be3450c00 session 0x561be2771c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x4cc9820/0x4ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4746368 data_alloc: 251658240 data_used: 40787968
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397426688 unmapped: 84967424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x4cc9820/0x4ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397434880 unmapped: 84959232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x4cc9820/0x4ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397434880 unmapped: 84959232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be262fc00 session 0x561be0dd70e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397434880 unmapped: 84959232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4754181 data_alloc: 251658240 data_used: 41328640
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397885440 unmapped: 84508672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397918208 unmapped: 84475904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.194417953s of 12.317700386s, submitted: 47
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3211000/0x0/0x1bfc00000, data 0x4ced820/0x4f1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397918208 unmapped: 84475904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398065664 unmapped: 84328448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399171584 unmapped: 83222528 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4754489 data_alloc: 251658240 data_used: 41328640
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2dff000/0x0/0x1bfc00000, data 0x4cee820/0x4f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398172160 unmapped: 84221952 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398180352 unmapped: 84213760 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4754665 data_alloc: 251658240 data_used: 41328640
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398180352 unmapped: 84213760 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398180352 unmapped: 84213760 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.183119774s of 10.062936783s, submitted: 351
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2e00000/0x0/0x1bfc00000, data 0x4cee820/0x4f1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 83681280 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4780741 data_alloc: 251658240 data_used: 41664512
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2bf7000/0x0/0x1bfc00000, data 0x4ef6820/0x5126000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398958592 unmapped: 83435520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 ms_handle_reset con 0x561be3440c00 session 0x561be315a000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be31d7400 session 0x561be2f06960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399015936 unmapped: 83378176 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be8611400 session 0x561be2f061e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be3451c00 session 0x561be1b6f4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bf3000/0x0/0x1bfc00000, data 0x4ef84db/0x512a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 ms_handle_reset con 0x561be3451c00 session 0x561be2854960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a10dd000/0x0/0x1bfc00000, data 0x6a0d4db/0x6c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407691264 unmapped: 74702848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 422 ms_handle_reset con 0x561be262fc00 session 0x561be1edb2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5016406 data_alloc: 251658240 data_used: 46891008
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407715840 unmapped: 74678272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a10d9000/0x0/0x1bfc00000, data 0x6a0f188/0x6c42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31d7400 session 0x561be0dd63c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31ce800 session 0x561be01912c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 77750272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31d9400 session 0x561be01ea780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.674444199s of 10.145226479s, submitted: 141
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be262fc00 session 0x561be01941e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405741568 unmapped: 76652544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31ce800 session 0x561be358f860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be31d7400 session 0x561be0de6000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 ms_handle_reset con 0x561be3451c00 session 0x561be1f074a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405741568 unmapped: 76652544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405766144 unmapped: 76627968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 424 ms_handle_reset con 0x561be3440c00 session 0x561be2f06f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a1498000/0x0/0x1bfc00000, data 0x6650a64/0x6885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783118 data_alloc: 251658240 data_used: 46018560
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405774336 unmapped: 76619776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 424 ms_handle_reset con 0x561be1b42800 session 0x561be1eefc20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 424 ms_handle_reset con 0x561be1b43000 session 0x561be0190000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be24bbc00 session 0x561be1b6f2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be31d0400 session 0x561be01a0f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405782528 unmapped: 76611584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be262fc00 session 0x561be01e8780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 ms_handle_reset con 0x561be31ce800 session 0x561be1edb860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3982000/0x0/0x1bfc00000, data 0x416854d/0x439c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4676221 data_alloc: 251658240 data_used: 40472576
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405790720 unmapped: 76603392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3982000/0x0/0x1bfc00000, data 0x416854d/0x439c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405798912 unmapped: 76595200 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.905318260s of 10.264042854s, submitted: 144
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 76554240 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be1b42800 session 0x561be315b4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 80142336 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be32a2000 session 0x561be01e4b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be350f800 session 0x561be1edba40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 80134144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321899 data_alloc: 218103808 data_used: 12640256
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 ms_handle_reset con 0x561be1b43000 session 0x561be0b66960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a555e000/0x0/0x1bfc00000, data 0x258acd7/0x27c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a555f000/0x0/0x1bfc00000, data 0x258acc7/0x27bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321174 data_alloc: 218103808 data_used: 12640256
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388808704 unmapped: 93585408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325348 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a555b000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388816896 unmapped: 93577216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.625246048s of 30.002656937s, submitted: 61
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be0faa780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0dd7c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be1b6e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be28794a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be24bbc00 session 0x561be2eff2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353269 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5192000/0x0/0x1bfc00000, data 0x2956806/0x2b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5192000/0x0/0x1bfc00000, data 0x2956806/0x2b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388825088 unmapped: 93569024 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5192000/0x0/0x1bfc00000, data 0x2956806/0x2b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388833280 unmapped: 93560832 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be1edad20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4355054 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5191000/0x0/0x1bfc00000, data 0x2956829/0x2b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5191000/0x0/0x1bfc00000, data 0x2956829/0x2b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383374 data_alloc: 218103808 data_used: 16621568
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383374 data_alloc: 218103808 data_used: 16621568
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a5191000/0x0/0x1bfc00000, data 0x2956829/0x2b8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.264902115s of 19.366209030s, submitted: 18
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 388841472 unmapped: 93552640 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 90537984 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 90537984 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415578 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4efb000/0x0/0x1bfc00000, data 0x2bec829/0x2e23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 90529792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4efb000/0x0/0x1bfc00000, data 0x2bec829/0x2e23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 90660864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 90660864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 90660864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391749632 unmapped: 90644480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413850 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef8000/0x0/0x1bfc00000, data 0x2bef829/0x2e26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.788619995s of 30.887470245s, submitted: 33
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef7000/0x0/0x1bfc00000, data 0x2bf0829/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413870 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 90628096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 90619904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 391782400 unmapped: 90611712 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4ef6000/0x0/0x1bfc00000, data 0x2bf0829/0x2e27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be1e7f4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be2eff4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be31092c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d7400 session 0x561be1e7ed20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be28e50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be28550e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be31081e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be1e5be00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3451c00 session 0x561be1820f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x3147839/0x337f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461293 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392093696 unmapped: 90300416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 90292224 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461293 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b42800 session 0x561be01e5860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 90292224 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.990257263s of 14.080326080s, submitted: 23
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2f072c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392101888 unmapped: 90292224 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be1edb680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be01e5a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499e000/0x0/0x1bfc00000, data 0x3148839/0x3380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392118272 unmapped: 90275840 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392118272 unmapped: 90275840 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499c000/0x0/0x1bfc00000, data 0x314886c/0x3382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 90251264 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506529 data_alloc: 234881024 data_used: 22142976
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 90251264 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 90251264 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314886c/0x3382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314886c/0x3382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506305 data_alloc: 234881024 data_used: 22147072
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.336379051s of 11.382768631s, submitted: 15
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314986c/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392151040 unmapped: 90243072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314986c/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506309 data_alloc: 234881024 data_used: 22147072
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a499b000/0x0/0x1bfc00000, data 0x314986c/0x3383000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392912896 unmapped: 89481216 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395935744 unmapped: 86458368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396345344 unmapped: 86048768 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3077000/0x0/0x1bfc00000, data 0x38cd86c/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4571905 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 85991424 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3077000/0x0/0x1bfc00000, data 0x38cd86c/0x3b07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.473201752s of 12.682118416s, submitted: 75
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570341 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38ce86c/0x3b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570005 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 85983232 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38ce86c/0x3b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570005 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.998168945s of 11.008939743s, submitted: 2
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 85975040 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570169 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570169 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 85966848 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.078990936s of 14.082680702s, submitted: 1
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570169 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3075000/0x0/0x1bfc00000, data 0x38cf86c/0x3b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396443648 unmapped: 85950464 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570349 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396451840 unmapped: 85942272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc ms_handle_reset ms_handle_reset con 0x561be2d36000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3074000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be219e800 session 0x561be1e5ab40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570013 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3071000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396541952 unmapped: 85852160 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.624737740s of 13.645763397s, submitted: 4
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3071000/0x0/0x1bfc00000, data 0x38d086c/0x3b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569821 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570001 data_alloc: 234881024 data_used: 23420928
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3073000/0x0/0x1bfc00000, data 0x38d186c/0x3b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396550144 unmapped: 85843968 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be0b66780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0faa5a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396574720 unmapped: 85819392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1e7e5a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d50000/0x0/0x1bfc00000, data 0x2bf6829/0x2e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425272 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d50000/0x0/0x1bfc00000, data 0x2bf6829/0x2e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425272 data_alloc: 218103808 data_used: 16625664
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395763712 unmapped: 86630400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d50000/0x0/0x1bfc00000, data 0x2bf6829/0x2e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.537462234s of 18.683736801s, submitted: 54
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be2854b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392421376 unmapped: 89972736 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be0db94a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 89964544 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392437760 unmapped: 89956352 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 89931776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 89923584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392486912 unmapped: 89907200 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 89899008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392503296 unmapped: 89890816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392503296 unmapped: 89890816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344660 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 89882624 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 57.838653564s of 57.928359985s, submitted: 34
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a2000 session 0x561be0197680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be01941e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368505 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 89841664 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 89841664 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368505 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 89833472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1578149916' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 89817088 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 89817088 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 89817088 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4230000/0x0/0x1bfc00000, data 0x2717868/0x294e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be28312c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392880128 unmapped: 89513984 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369901 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392888320 unmapped: 89505792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a420c000/0x0/0x1bfc00000, data 0x273b868/0x2972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4381741 data_alloc: 218103808 data_used: 14258176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 392847360 unmapped: 89546752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be01e4b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be289e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.230293274s of 16.324338913s, submitted: 29
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be0b67a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4350169 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43b5000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4350169 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 88481792 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393928704 unmapped: 88465408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 393928704 unmapped: 88465408 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.276370049s of 12.325051308s, submitted: 16
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be1820000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395001856 unmapped: 87392256 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be2efed20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be1b45400 session 0x561be0b672c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4380280 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4380280 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394846208 unmapped: 87547904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403640 data_alloc: 218103808 data_used: 15921152
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394846208 unmapped: 87547904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394846208 unmapped: 87547904 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be01963c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4055000/0x0/0x1bfc00000, data 0x28f2868/0x2b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.046278954s of 14.122239113s, submitted: 26
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be350f800 session 0x561be358ed20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394862592 unmapped: 87531520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394870784 unmapped: 87523328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394870784 unmapped: 87523328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394870784 unmapped: 87523328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394887168 unmapped: 87506944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 87498752 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394903552 unmapped: 87490560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394911744 unmapped: 87482368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4353966 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394911744 unmapped: 87482368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 394911744 unmapped: 87482368 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2effa40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be28765a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be28e50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be315a000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.228950500s of 35.272441864s, submitted: 14
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be32a4400 session 0x561be1e7e3c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bb000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be1edb2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be358e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1b6fe00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be28e4f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4423858 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be2830000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395116544 unmapped: 87277568 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be0196f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395124736 unmapped: 87269376 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2b5b4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba1000/0x0/0x1bfc00000, data 0x2da6816/0x2fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0dd65a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395157504 unmapped: 87236608 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395157504 unmapped: 87236608 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4427197 data_alloc: 218103808 data_used: 12763136
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 87318528 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba0000/0x0/0x1bfc00000, data 0x2da6826/0x2fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba0000/0x0/0x1bfc00000, data 0x2da6826/0x2fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4485917 data_alloc: 234881024 data_used: 20946944
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ba0000/0x0/0x1bfc00000, data 0x2da6826/0x2fde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4485917 data_alloc: 234881024 data_used: 20946944
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 87302144 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.410863876s of 19.642406464s, submitted: 23
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397975552 unmapped: 84418560 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3491000/0x0/0x1bfc00000, data 0x34b5826/0x36ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398442496 unmapped: 83951616 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4557351 data_alloc: 234881024 data_used: 21196800
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3316000/0x0/0x1bfc00000, data 0x3628826/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3316000/0x0/0x1bfc00000, data 0x3628826/0x3860000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4552427 data_alloc: 234881024 data_used: 21196800
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a32fe000/0x0/0x1bfc00000, data 0x3648826/0x3880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4552747 data_alloc: 234881024 data_used: 21204992
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.202736855s of 13.543350220s, submitted: 103
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a32fe000/0x0/0x1bfc00000, data 0x3648826/0x3880000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399630336 unmapped: 82763776 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be0a3ef00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be8611400 session 0x561be090bc20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399638528 unmapped: 82755584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4366991 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be28e5a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4365450 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.063098907s of 43.144638062s, submitted: 31
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396279808 unmapped: 86114304 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0db8000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396288000 unmapped: 86106112 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405626 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396288000 unmapped: 86106112 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ec0000/0x0/0x1bfc00000, data 0x2a88806/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3ec0000/0x0/0x1bfc00000, data 0x2a88806/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405626 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396296192 unmapped: 86097920 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be358f680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396451840 unmapped: 85942272 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396460032 unmapped: 85934080 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 396771328 unmapped: 85622784 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e9c000/0x0/0x1bfc00000, data 0x2aac806/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436798 data_alloc: 218103808 data_used: 16662528
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e9c000/0x0/0x1bfc00000, data 0x2aac806/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436798 data_alloc: 218103808 data_used: 16662528
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e9c000/0x0/0x1bfc00000, data 0x2aac806/0x2ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397492224 unmapped: 84901888 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.750879288s of 19.841560364s, submitted: 10
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be01941e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 397500416 unmapped: 84893696 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399392768 unmapped: 83001344 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a384d000/0x0/0x1bfc00000, data 0x30f3806/0x3329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4502954 data_alloc: 218103808 data_used: 16797696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399450112 unmapped: 82944000 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399450112 unmapped: 82944000 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361e400 session 0x561be01a0780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be28310e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399450112 unmapped: 82944000 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be01e50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 83615744 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2831a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3829000/0x0/0x1bfc00000, data 0x3116806/0x334c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 83615744 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4502371 data_alloc: 218103808 data_used: 16797696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 398974976 unmapped: 83419136 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4522931 data_alloc: 234881024 data_used: 19496960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4522931 data_alloc: 234881024 data_used: 19496960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a382e000/0x0/0x1bfc00000, data 0x3119829/0x3350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 399794176 unmapped: 82599936 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.129539490s of 19.362030029s, submitted: 55
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a375e000/0x0/0x1bfc00000, data 0x31e9829/0x3420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400285696 unmapped: 82108416 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400318464 unmapped: 82075648 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 81027072 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4603363 data_alloc: 234881024 data_used: 20443136
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fcc000/0x0/0x1bfc00000, data 0x3972829/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4603683 data_alloc: 234881024 data_used: 20451328
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fcc000/0x0/0x1bfc00000, data 0x3972829/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fcc000/0x0/0x1bfc00000, data 0x3972829/0x3ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4599779 data_alloc: 234881024 data_used: 20520960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 59K writes, 223K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.70 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2965 writes, 9665 keys, 2965 commit groups, 1.0 writes per commit group, ingest: 9.56 MB, 0.02 MB/s#012Interval WAL: 2965 writes, 1282 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.186190605s of 13.778625488s, submitted: 90
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be27714a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x3975829/0x3bac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 81010688 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d9800 session 0x561be2efe3c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474385 data_alloc: 218103808 data_used: 16793600
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3bd4000/0x0/0x1bfc00000, data 0x2d73806/0x2fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 401391616 unmapped: 81002496 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be1f06d20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2efe1e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0197c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400662528 unmapped: 81731584 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400670720 unmapped: 81723392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400670720 unmapped: 81723392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400670720 unmapped: 81723392 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400687104 unmapped: 81707008 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 81698816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 81698816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 81698816 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 81682432 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379151 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43bc000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.503349304s of 34.588817596s, submitted: 27
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be2876b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be315ab40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 81518592 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1b70780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be1eee000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2830000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8b000/0x0/0x1bfc00000, data 0x2bbc868/0x2df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8b000/0x0/0x1bfc00000, data 0x2bbc868/0x2df3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4439475 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 81494016 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be28e50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 81551360 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 81551360 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8a000/0x0/0x1bfc00000, data 0x2bbc88b/0x2df4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483884 data_alloc: 218103808 data_used: 18837504
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8a000/0x0/0x1bfc00000, data 0x2bbc88b/0x2df4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483884 data_alloc: 218103808 data_used: 18837504
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402579456 unmapped: 79814656 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.304382324s of 18.628723145s, submitted: 53
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3d8a000/0x0/0x1bfc00000, data 0x2bbc88b/0x2df4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406970368 unmapped: 75423744 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530962 data_alloc: 218103808 data_used: 19914752
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a372c000/0x0/0x1bfc00000, data 0x321288b/0x344a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4544630 data_alloc: 218103808 data_used: 20172800
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3712000/0x0/0x1bfc00000, data 0x323488b/0x346c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537502 data_alloc: 218103808 data_used: 20176896
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408305664 unmapped: 74088448 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.273756981s of 12.800662994s, submitted: 117
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be1e5ab40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be2b5b4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a43ba000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4389066 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 76324864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.497726440s of 30.619930267s, submitted: 44
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be27712c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0de6000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be1eef2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be0a3e3c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 76316672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2854d20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 76316672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a425f000/0x0/0x1bfc00000, data 0x26e9806/0x291f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 76316672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408727 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a425f000/0x0/0x1bfc00000, data 0x26e9806/0x291f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a425f000/0x0/0x1bfc00000, data 0x26e9806/0x291f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2786b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408727 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be315b4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0de9a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3449000 session 0x561be2771a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 76308480 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422558 data_alloc: 218103808 data_used: 13991936
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.007145882s of 13.094347000s, submitted: 25
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be01f43c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0db9860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422426 data_alloc: 218103808 data_used: 13991936
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 76292096 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 76275712 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 76275712 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422494 data_alloc: 218103808 data_used: 14000128
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051295280s of 11.088788033s, submitted: 2
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be0dd61e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be2b5b2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406126592 unmapped: 76267520 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422362 data_alloc: 218103808 data_used: 14000128
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e4d000/0x0/0x1bfc00000, data 0x26e9839/0x2921000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be1821c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be28e41e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 76259328 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 76242944 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395271 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be1b70d20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 78569472 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c839/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.025921822s of 10.001517296s, submitted: 215
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 78438400 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394225 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394225 data_alloc: 218103808 data_used: 12648448
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 78397440 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404004864 unmapped: 78389248 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 78381056 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404021248 unmapped: 78372864 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394385 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 78364672 heap: 482394112 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.646228790s of 46.096721649s, submitted: 168
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be0de9e00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 82542592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 82542592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 82542592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4468125 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3613000/0x0/0x1bfc00000, data 0x2f25806/0x315b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4468125 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3613000/0x0/0x1bfc00000, data 0x2f25806/0x315b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be01961e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 82534400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404070400 unmapped: 82526208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404070400 unmapped: 82526208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540130 data_alloc: 234881024 data_used: 22704128
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406724608 unmapped: 79872000 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540130 data_alloc: 234881024 data_used: 22704128
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 406732800 unmapped: 79863808 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540610 data_alloc: 234881024 data_used: 22716416
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.410284042s of 23.519502640s, submitted: 12
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 79241216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3612000/0x0/0x1bfc00000, data 0x2f25829/0x315c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408231936 unmapped: 78364672 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408231936 unmapped: 78364672 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408231936 unmapped: 78364672 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3011000/0x0/0x1bfc00000, data 0x3526829/0x375d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408240128 unmapped: 78356480 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4588054 data_alloc: 234881024 data_used: 23121920
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409387008 unmapped: 77209600 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 77201408 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 77201408 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 77201408 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 77193216 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 77185024 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31ce800 session 0x561be35b9680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fc4000/0x0/0x1bfc00000, data 0x356b829/0x37a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409427968 unmapped: 77168640 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593528 data_alloc: 234881024 data_used: 22941696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 77152256 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.932518005s of 36.101718903s, submitted: 51
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 84320256 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 84320256 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c829/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403310 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2f061e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402284544 unmapped: 84312064 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 84295680 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402550 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 84287488 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.870681763s of 60.922084808s, submitted: 18
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be28e50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 84279296 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 84279296 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 84271104 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414748 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be2f06d20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be35b8f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be3108960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be289fe00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3e92000/0x0/0x1bfc00000, data 0x26a6806/0x28dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be28e45a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 84271104 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be358e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3da9000/0x0/0x1bfc00000, data 0x278f806/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be28761e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 84271104 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be28e5680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be2876780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2f07a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be2855a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3343c00 session 0x561be2771a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3440800 session 0x561be1edb860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0194f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402522112 unmapped: 84074496 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3343c00 session 0x561be0faa780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402464768 unmapped: 84131840 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4472787 data_alloc: 218103808 data_used: 12685312
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 84123648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.276145935s of 11.987798691s, submitted: 47
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402481152 unmapped: 84115456 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477851 data_alloc: 218103808 data_used: 13340672
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be344a400 session 0x561be2830f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3927000/0x0/0x1bfc00000, data 0x2c0f839/0x2e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be28314a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444757 data_alloc: 218103808 data_used: 13340672
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 84246528 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be361fc00 session 0x561be1e7f0e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be0de8000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be2f06780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3a80000/0x0/0x1bfc00000, data 0x2ab7816/0x2cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460859 data_alloc: 218103808 data_used: 12853248
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be01e4780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be1eefc20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.695828438s of 12.155271530s, submitted: 88
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be3109680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3f88000/0x0/0x1bfc00000, data 0x25b0806/0x27e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402915328 unmapped: 83681280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402923520 unmapped: 83673088 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 83664896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402939904 unmapped: 83656704 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402956288 unmapped: 83640320 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402964480 unmapped: 83632128 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402972672 unmapped: 83623936 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415582 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 76.208839417s of 76.231025696s, submitted: 7
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402980864 unmapped: 83615744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be0194f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 83607552 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 402997248 unmapped: 83599360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be2f07a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419063 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2876780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403013632 unmapped: 83582976 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be28761e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.476503372s of 22.541004181s, submitted: 5
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420901 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be28e45a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x258c889/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4421061 data_alloc: 218103808 data_used: 12656640
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x258c889/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403030016 unmapped: 83566592 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be289fe00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be0a3e3c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be35b8f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419175 data_alloc: 218103808 data_used: 12660736
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403046400 unmapped: 83550208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419175 data_alloc: 218103808 data_used: 12660736
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3faa000/0x0/0x1bfc00000, data 0x258c879/0x27c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be35b9680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.364324570s of 17.376232147s, submitted: 3
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be31d0400 session 0x561be01961e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403054592 unmapped: 83542016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403070976 unmapped: 83525632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403079168 unmapped: 83517440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417599 data_alloc: 218103808 data_used: 12652544
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 83501056 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 403103744 unmapped: 83492864 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.079629898s of 27.113702774s, submitted: 11
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be358e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb4c00 session 0x561be2771a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417263 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417263 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405864448 unmapped: 80732160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 80723968 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 80723968 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405880832 unmapped: 80715776 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405889024 unmapped: 80707584 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417263 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.729580879s of 11.746542931s, submitted: 5
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be0ad4400 session 0x561be2b5b4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fac000/0x0/0x1bfc00000, data 0x258c806/0x27c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 80699392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 80683008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 80674816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 80674816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419045 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3fab000/0x0/0x1bfc00000, data 0x258c868/0x27c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 80666624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.560823441s of 22.608398438s, submitted: 1
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405946368 unmapped: 80650240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3447000 session 0x561be289e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 80642048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505089 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561be3343c00 session 0x561be28e43c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 ms_handle_reset con 0x561bdfcb2800 session 0x561be2876b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f3782f/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f3782f/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504810 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f37868/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 79273984 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504810 data_alloc: 218103808 data_used: 15273984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3600000/0x0/0x1bfc00000, data 0x2f37868/0x316e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.720861435s of 12.936441422s, submitted: 57
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407240704 unmapped: 79355904 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdfcb4c00 session 0x561be1bd6f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407248896 unmapped: 79347712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407248896 unmapped: 79347712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407248896 unmapped: 79347712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 79331328 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 79323136 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.834756851s of 25.870422363s, submitted: 12
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be3447000 session 0x561be2854000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be0ad4400 session 0x561be0faa780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 79314944 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 79306752 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 79298560 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506760 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 79282176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 79265792 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.212320328s of 18.217164993s, submitted: 1
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be344a400 session 0x561be09ac000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 79265792 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4507533 data_alloc: 218103808 data_used: 15282176
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 79265792 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407838720 unmapped: 78757888 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394e4/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570413 data_alloc: 234881024 data_used: 24080384
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdfcb2800 session 0x561be28e4d20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394e4/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408641536 unmapped: 77955072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x2f394e4/0x3172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570429 data_alloc: 234881024 data_used: 24080384
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.502777100s of 10.719768524s, submitted: 12
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdfcb4c00 session 0x561be0196b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408649728 unmapped: 77946880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569493 data_alloc: 234881024 data_used: 24080384
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569493 data_alloc: 234881024 data_used: 24080384
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be0ad4400 session 0x561be01a1680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.821796417s of 11.829506874s, submitted: 2
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561be3447000 session 0x561be08b2780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408657920 unmapped: 77938688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 ms_handle_reset con 0x561bdf4fdc00 session 0x561be01a0f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408666112 unmapped: 77930496 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569964 data_alloc: 234881024 data_used: 24080384
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 430 ms_handle_reset con 0x561bdf4fdc00 session 0x561be0de6780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a35fd000/0x0/0x1bfc00000, data 0x2f394c1/0x3171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408690688 unmapped: 77905920 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 430 ms_handle_reset con 0x561bdfcb2800 session 0x561be358fa40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405831680 unmapped: 80764928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 430 ms_handle_reset con 0x561bdfcb4c00 session 0x561be1b6e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a3fa4000/0x0/0x1bfc00000, data 0x259010c/0x27c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4435347 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405839872 unmapped: 80756736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405848064 unmapped: 80748544 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be3451000 session 0x561be2eff0e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.886808395s of 10.118666649s, submitted: 77
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be31d7000 session 0x561be1821e00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3fa2000/0x0/0x1bfc00000, data 0x2591c4b/0x27cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdf4fdc00 session 0x561be1821a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443469 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3fa2000/0x0/0x1bfc00000, data 0x2591c4b/0x27cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 81068032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdfcb2800 session 0x561be1821e00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdfcb4c00 session 0x561be358fa40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 81059840 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be3451000 session 0x561be289e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561bdfcb5c00 session 0x561be35b9680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 61K writes, 229K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 61K writes, 22K syncs, 2.68 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2012 writes, 5478 keys, 2012 commit groups, 1.0 writes per commit group, ingest: 5.20 MB, 0.01 MB/s#012Interval WAL: 2012 writes, 901 syncs, 2.23 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 80879616 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 80871424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 80871424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 80871424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc ms_handle_reset ms_handle_reset con 0x561be31d7400
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2945860420
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2945860420,v1:192.168.122.100:6801/2945860420]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be1b42800 session 0x561be315a960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be2ed8800 session 0x561be01950e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518461 data_alloc: 218103808 data_used: 15290368
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3653000/0x0/0x1bfc00000, data 0x2ee0c5b/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be43d7400 session 0x561be01970e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be21a5000 session 0x561be0197a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404873216 unmapped: 81723392 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.847000122s of 36.072757721s, submitted: 51
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 ms_handle_reset con 0x561be361e400 session 0x561be0b67a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404742144 unmapped: 81854464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404750336 unmapped: 81846272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519378 data_alloc: 218103808 data_used: 15298560
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4578738 data_alloc: 234881024 data_used: 22994944
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a3652000/0x0/0x1bfc00000, data 0x2ee0c7e/0x311c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404635648 unmapped: 81960960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 81952768 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 81952768 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4578738 data_alloc: 234881024 data_used: 22994944
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 404643840 unmapped: 81952768 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.508459091s of 13.522068024s, submitted: 4
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 407961600 unmapped: 78635008 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409141248 unmapped: 77455360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bfd000/0x0/0x1bfc00000, data 0x38f9c7e/0x3b35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4677088 data_alloc: 234881024 data_used: 24748032
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bf4000/0x0/0x1bfc00000, data 0x3935c7e/0x3b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409362432 unmapped: 77234176 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bf4000/0x0/0x1bfc00000, data 0x3935c7e/0x3b71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668696 data_alloc: 234881024 data_used: 24748032
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bdb000/0x0/0x1bfc00000, data 0x3957c7e/0x3b93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bdb000/0x0/0x1bfc00000, data 0x3957c7e/0x3b93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2bdb000/0x0/0x1bfc00000, data 0x3957c7e/0x3b93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4668696 data_alloc: 234881024 data_used: 24748032
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.023251534s of 14.699027061s, submitted: 103
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be0187800 session 0x561be01e50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be32a5c00 session 0x561be2f07a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd7000/0x0/0x1bfc00000, data 0x3959a47/0x3b97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4688561 data_alloc: 234881024 data_used: 24756224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408829952 unmapped: 77766656 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4688561 data_alloc: 234881024 data_used: 24756224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.624474525s of 12.263740540s, submitted: 11
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd2000/0x0/0x1bfc00000, data 0x3b13a47/0x3b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408838144 unmapped: 77758464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4690389 data_alloc: 234881024 data_used: 24756224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd1000/0x0/0x1bfc00000, data 0x3b13a57/0x3b9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 77750272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be2159000 session 0x561be2efed20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be343c800 session 0x561be315b680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 77750272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408846336 unmapped: 77750272 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408862720 unmapped: 77733888 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bd1000/0x0/0x1bfc00000, data 0x3b13a57/0x3b9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4714863 data_alloc: 234881024 data_used: 24756224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408870912 unmapped: 77725696 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4714863 data_alloc: 234881024 data_used: 24756224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be3443400 session 0x561be01e4780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be2ec7000 session 0x561be28783c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcd000/0x0/0x1bfc00000, data 0x3e1ea57/0x3ba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be2159000 session 0x561be358e000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408887296 unmapped: 77709312 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.022949219s of 16.393350601s, submitted: 5
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be32a5c00 session 0x561be28303c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 77692928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4719323 data_alloc: 234881024 data_used: 24756224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 77684736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 77684736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408920064 unmapped: 77676544 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718175 data_alloc: 234881024 data_used: 24981504
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718175 data_alloc: 234881024 data_used: 24981504
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 408936448 unmapped: 77660160 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718975 data_alloc: 234881024 data_used: 25001984
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.700752258s of 15.737976074s, submitted: 11
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2bcb000/0x0/0x1bfc00000, data 0x3e1ea8a/0x3ba3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410075136 unmapped: 76521472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410288128 unmapped: 76308480 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4758865 data_alloc: 234881024 data_used: 28110848
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410304512 unmapped: 76292096 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410320896 unmapped: 76275712 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759569 data_alloc: 234881024 data_used: 28110848
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 75997184 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.377988815s of 12.515168190s, submitted: 7
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759217 data_alloc: 234881024 data_used: 28110848
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759041 data_alloc: 234881024 data_used: 28110848
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759041 data_alloc: 234881024 data_used: 28110848
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be32a2000 session 0x561be2b5be00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409739264 unmapped: 76857344 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409755648 unmapped: 76840960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.402155876s of 14.417620659s, submitted: 4
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be343c800 session 0x561bdf491a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be3443400 session 0x561be1f06f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 409755648 unmapped: 76840960 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 75792384 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2960000/0x0/0x1bfc00000, data 0x4089a8a/0x3e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,1])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be344dc00 session 0x561be01ea000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 75792384 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4756864 data_alloc: 234881024 data_used: 28114944
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 75792384 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410845184 unmapped: 75751424 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2962000/0x0/0x1bfc00000, data 0x4089a57/0x3e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411910144 unmapped: 74686464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411910144 unmapped: 74686464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410886144 unmapped: 75710464 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4757168 data_alloc: 234881024 data_used: 28127232
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be462a800 session 0x561be0de8b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be01c8400 session 0x561be0de74a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 73490432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be43d7800 session 0x561be358e960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413712384 unmapped: 72884224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a27be000/0x0/0x1bfc00000, data 0x3e1ea47/0x3ba0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 ms_handle_reset con 0x561be344b000 session 0x561be01f50e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.620646477s of 10.166932106s, submitted: 426
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 72859648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be3444000 session 0x561be0dd7c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 72859648 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be01c8400 session 0x561be0b665a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 72851456 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717364 data_alloc: 234881024 data_used: 28061696
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413745152 unmapped: 72851456 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a27bf000/0x0/0x1bfc00000, data 0x39606d1/0x3b9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be67b4c00 session 0x561be315b2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 72835072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 433 ms_handle_reset con 0x561be3447000 session 0x561be0de7a40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4472960 data_alloc: 218103808 data_used: 15327232
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3b8c000/0x0/0x1bfc00000, data 0x25956c1/0x27d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412196864 unmapped: 74399744 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.126425743s of 10.281791687s, submitted: 58
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3b88000/0x0/0x1bfc00000, data 0x2597200/0x27d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 handle_osd_map epochs [435,435], i have 435, src has [1,435]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 ms_handle_reset con 0x561be0e6a800 session 0x561be315be00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 handle_osd_map epochs [435,435], i have 435, src has [1,435]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 ms_handle_reset con 0x561be24bbc00 session 0x561be31094a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479122 data_alloc: 218103808 data_used: 15278080
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a3b86000/0x0/0x1bfc00000, data 0x2598e8f/0x27d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a3b86000/0x0/0x1bfc00000, data 0x2598e8f/0x27d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4479122 data_alloc: 218103808 data_used: 15278080
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412254208 unmapped: 74342400 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b83000/0x0/0x1bfc00000, data 0x259a9ce/0x27da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482096 data_alloc: 218103808 data_used: 15278080
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.350548744s of 13.543437004s, submitted: 44
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be75abc00 session 0x561be1e5ad20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be01c8400 session 0x561be1e7f680
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b83000/0x0/0x1bfc00000, data 0x259a9ce/0x27da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b84000/0x0/0x1bfc00000, data 0x259a9ce/0x27da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be350ec00 session 0x561be01963c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4484438 data_alloc: 218103808 data_used: 15343616
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410681344 unmapped: 75915264 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3b83000/0x0/0x1bfc00000, data 0x259aa30/0x27db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be8535800 session 0x561be2830b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be221d000 session 0x561be0de6960
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be1b45000 session 0x561be0a3e1e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4523936 data_alloc: 218103808 data_used: 15343616
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410689536 unmapped: 75907072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4523936 data_alloc: 218103808 data_used: 15343616
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4523936 data_alloc: 218103808 data_used: 15343616
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 75898880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be361f800 session 0x561be2eff860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be344a400 session 0x561be0197c20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410705920 unmapped: 75890688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a373a000/0x0/0x1bfc00000, data 0x29e49ce/0x2c24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448000 session 0x561be1e7e780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.480686188s of 21.662887573s, submitted: 32
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be35d0800 session 0x561be1bd6d20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410705920 unmapped: 75890688 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556242 data_alloc: 234881024 data_used: 19640320
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556242 data_alloc: 234881024 data_used: 19640320
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410640384 unmapped: 75956224 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 75948032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 75948032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a3739000/0x0/0x1bfc00000, data 0x29e49de/0x2c25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556242 data_alloc: 234881024 data_used: 19640320
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 410648576 unmapped: 75948032 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.010921478s of 13.021390915s, submitted: 2
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411713536 unmapped: 74883072 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411721728 unmapped: 74874880 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2ed7000/0x0/0x1bfc00000, data 0x32469de/0x3487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e56000/0x0/0x1bfc00000, data 0x32c79de/0x3508000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e56000/0x0/0x1bfc00000, data 0x32c79de/0x3508000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628596 data_alloc: 234881024 data_used: 19865600
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411975680 unmapped: 74620928 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 411983872 unmapped: 74612736 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be0ad4400 session 0x561be358f2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448400 session 0x561be3109860
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e23000/0x0/0x1bfc00000, data 0x32fa9de/0x353b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412065792 unmapped: 74530816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448000 session 0x561be3109e00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e23000/0x0/0x1bfc00000, data 0x32fa9de/0x353b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627979 data_alloc: 234881024 data_used: 19873792
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412065792 unmapped: 74530816 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412073984 unmapped: 74522624 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627979 data_alloc: 234881024 data_used: 19873792
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412082176 unmapped: 74514432 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4627979 data_alloc: 234881024 data_used: 19873792
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be67b4400 session 0x561be1e5ab40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be361f800 session 0x561be1f072c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561bdfcb3400 session 0x561be0b663c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 ms_handle_reset con 0x561be3448000 session 0x561be0de8000
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.182992935s of 23.472333908s, submitted: 77
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412090368 unmapped: 74506240 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628591 data_alloc: 234881024 data_used: 19927040
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628911 data_alloc: 234881024 data_used: 19939328
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412098560 unmapped: 74498048 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4628911 data_alloc: 234881024 data_used: 19939328
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.554869652s of 11.559130669s, submitted: 1
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4636651 data_alloc: 234881024 data_used: 20586496
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 74489856 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e24000/0x0/0x1bfc00000, data 0x32fa9ce/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 74481664 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639499 data_alloc: 234881024 data_used: 20586496
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639499 data_alloc: 234881024 data_used: 20586496
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 74473472 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.222394943s of 18.255271912s, submitted: 11
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e18000/0x0/0x1bfc00000, data 0x33069ce/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639775 data_alloc: 234881024 data_used: 20586496
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 74465280 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e17000/0x0/0x1bfc00000, data 0x33079ce/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4639775 data_alloc: 234881024 data_used: 20586496
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a2e17000/0x0/0x1bfc00000, data 0x33079ce/0x3547000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 74448896 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.630817413s of 10.641638756s, submitted: 3
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be0186400 session 0x561be2b5b2c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4647517 data_alloc: 234881024 data_used: 21667840
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e13000/0x0/0x1bfc00000, data 0x3309627/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be3344800 session 0x561be28790e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be24bb400 session 0x561be2830780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412213248 unmapped: 74383360 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659621 data_alloc: 234881024 data_used: 21667840
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412262400 unmapped: 74334208 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659621 data_alloc: 234881024 data_used: 21667840
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be2e82000 session 0x561be31085a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659621 data_alloc: 234881024 data_used: 21667840
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be0186400 session 0x561be28e4b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412270592 unmapped: 74326016 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be344a000 session 0x561be315a1e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412278784 unmapped: 74317824 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be343d400 session 0x561be315ad20
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 74309632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 74309632 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659781 data_alloc: 234881024 data_used: 21671936
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4659781 data_alloc: 234881024 data_used: 21671936
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x3378699/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 412295168 unmapped: 74301440 heap: 486596608 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.242610931s of 30.420869827s, submitted: 8
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709844 data_alloc: 234881024 data_used: 23199744
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 417529856 unmapped: 72261632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2941000/0x0/0x1bfc00000, data 0x3848699/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a2941000/0x0/0x1bfc00000, data 0x3848699/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 417529856 unmapped: 72261632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 417529856 unmapped: 72261632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717626 data_alloc: 234881024 data_used: 23224320
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717658 data_alloc: 234881024 data_used: 23220224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717658 data_alloc: 234881024 data_used: 23220224
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x3a0d699/0x3a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 75587584 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.433406830s of 20.506797791s, submitted: 13
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4728630 data_alloc: 234881024 data_used: 23396352
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a28d0000/0x0/0x1bfc00000, data 0x3a7c699/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a28d0000/0x0/0x1bfc00000, data 0x3a7c699/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be24ba000 session 0x561be0194f00
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729998 data_alloc: 234881024 data_used: 23433216
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 75145216 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be2d36400 session 0x561be0190780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 75603968 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 75603968 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a28d0000/0x0/0x1bfc00000, data 0x3a7c699/0x3a8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be31d0400 session 0x561be3108780
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be3440c00 session 0x561be090b4a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414212096 unmapped: 75579392 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 ms_handle_reset con 0x561be344b400 session 0x561be28761e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 ms_handle_reset con 0x561be361e400 session 0x561be0dd63c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x353d627/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e10000/0x0/0x1bfc00000, data 0x353d627/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672608 data_alloc: 234881024 data_used: 23343104
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e11000/0x0/0x1bfc00000, data 0x330b2d4/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280055046s of 11.439131737s, submitted: 51
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 ms_handle_reset con 0x561be3448400 session 0x561be28545a0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 ms_handle_reset con 0x561be31d0400 session 0x561be2854b40
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a2e11000/0x0/0x1bfc00000, data 0x330b2d4/0x354d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:31 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:31 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45357 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672384 data_alloc: 234881024 data_used: 23343104
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 75571200 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a2e0d000/0x0/0x1bfc00000, data 0x330ce13/0x3550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 ms_handle_reset con 0x561be32a4c00 session 0x561bdfcf72c0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414277632 unmapped: 75513856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 ms_handle_reset con 0x561be31d2800 session 0x561be2f061e0
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413237248 unmapped: 76554240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413245440 unmapped: 76546048 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 76521472 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413286400 unmapped: 76505088 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413319168 unmapped: 76472320 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413335552 unmapped: 76455936 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413376512 unmapped: 76414976 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config show' '{prefix=config show}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413237248 unmapped: 76554240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 76685312 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 424099840 unmapped: 65691648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413237248 unmapped: 76554240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 76537856 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 76529664 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 76513280 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413302784 unmapped: 76488704 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413310976 unmapped: 76480512 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 76464128 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413343744 unmapped: 76447744 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413351936 unmapped: 76439552 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413360128 unmapped: 76431360 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413360128 unmapped: 76431360 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413360128 unmapped: 76431360 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413360128 unmapped: 76431360 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 76423168 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413384704 unmapped: 76406784 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413392896 unmapped: 76398592 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413409280 unmapped: 76382208 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 76357632 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413442048 unmapped: 76349440 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413458432 unmapped: 76333056 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413474816 unmapped: 76316672 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413483008 unmapped: 76308480 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413499392 unmapped: 76292096 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 76267520 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 76242944 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413556736 unmapped: 76234752 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:23:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413573120 unmapped: 76218368 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413581312 unmapped: 76210176 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.1 total, 600.0 interval
Cumulative writes: 62K writes, 234K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
Cumulative WAL: 62K writes, 23K syncs, 2.67 writes per sync, written: 0.21 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1660 writes, 4795 keys, 1660 commit groups, 1.0 writes per commit group, ingest: 3.52 MB, 0.01 MB/s
Interval WAL: 1660 writes, 747 syncs, 2.22 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413614080 unmapped: 76177408 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 76161024 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 76152832 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413663232 unmapped: 76128256 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413663232 unmapped: 76128256 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413663232 unmapped: 76128256 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413663232 unmapped: 76128256 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413671424 unmapped: 76120064 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413671424 unmapped: 76120064 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413671424 unmapped: 76120064 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413671424 unmapped: 76120064 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413687808 unmapped: 76103680 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 76087296 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413720576 unmapped: 76070912 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413736960 unmapped: 76054528 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 76038144 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413753344 unmapped: 76038144 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 76029952 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 76029952 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413761536 unmapped: 76029952 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 76021760 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 76021760 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413769728 unmapped: 76021760 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 76013568 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 75997184 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 75997184 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 75997184 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 75997184 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 75988992 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 75988992 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 75988992 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413802496 unmapped: 75988992 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413818880 unmapped: 75972608 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413835264 unmapped: 75956224 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413835264 unmapped: 75956224 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 75948032 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 75948032 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 75948032 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 75948032 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 75948032 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 75948032 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 75931648 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 75915264 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413884416 unmapped: 75907072 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413892608 unmapped: 75898880 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413892608 unmapped: 75898880 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413892608 unmapped: 75898880 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413892608 unmapped: 75898880 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413900800 unmapped: 75890688 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413908992 unmapped: 75882496 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413908992 unmapped: 75882496 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512722 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7a000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 409.743499756s of 409.842132568s, submitted: 42
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413933568 unmapped: 75857920 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413933568 unmapped: 75857920 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413933568 unmapped: 75857920 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413933568 unmapped: 75857920 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413949952 unmapped: 75841536 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413949952 unmapped: 75841536 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413949952 unmapped: 75841536 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413958144 unmapped: 75833344 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413958144 unmapped: 75833344 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.389981270s of 10.015169144s, submitted: 92
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413990912 unmapped: 75800576 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413990912 unmapped: 75800576 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413990912 unmapped: 75800576 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414007296 unmapped: 75784192 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414015488 unmapped: 75776000 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414089216 unmapped: 75702272 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 75612160 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 75546624 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 75546624 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 75546624 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 75546624 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 75546624 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 75546624 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: osd.0 439 heartbeat osd_stat(store_statfs(0x1a3b7b000/0x0/0x1bfc00000, data 0x259fe13/0x27e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414261248 unmapped: 75530240 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414040064 unmapped: 75751424 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config show' '{prefix=config show}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: bluestore.MempoolThread(0x561bde78bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 218103808 data_used: 15368192
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 414015488 unmapped: 75776000 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: prioritycache tune_memory target: 4294967296 mapped: 413925376 unmapped: 75866112 heap: 489791488 old mem: 2845415832 new mem: 2845415832
Nov 29 04:23:31 np0005539563 ceph-osd[84724]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:23:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52118 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:23:31 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861266032' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45375 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:31 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52139 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:32 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:32 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:32 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:23:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913738379' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45396 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52157 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45402 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:32.473+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:23:32 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:23:32 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3944346860' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52181 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:32 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45435 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52187 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45450 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:33 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:33 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:33 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:33.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:33 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:23:33 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3530326166' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:23:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48844 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52211 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:33 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:33.824+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:33 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:33 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:34 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:34 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:34 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45480 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:34 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:34.137+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:34 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:34 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:23:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329831851' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:23:34 np0005539563 nova_compute[252253]: 2025-11-29 09:23:34.265 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:34 np0005539563 nova_compute[252253]: 2025-11-29 09:23:34.286 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48886 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:23:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985060618' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:23:34 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:23:34 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271726264' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:23:34 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48901 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269100547' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874298122' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205457577' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:23:35 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:35 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:35 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:35.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48931 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336183748' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847719017' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48946 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:23:35 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058393943' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539563 nova_compute[252253]: 2025-11-29 09:23:36.008 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:36 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:36 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 04:23:36 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:36.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573856101' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2157463288' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48964 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539563 systemd[1]: Starting Hostname Service...
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006769114' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539563 systemd[1]: Started Hostname Service.
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552108203' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 04:23:36 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.48982 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182765302' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:23:36 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3972491086' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45621 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:23:37 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850719826' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52367 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:37 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:37 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:37 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:37.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45639 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52379 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49006 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-38a37ed2-442a-5e0d-a69a-881fdd186450-mgr-compute-0-rotard[74632]: 2025-11-29T09:23:37.502+0000 7f872ae53640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45651 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52388 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:37 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52394 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:23:38 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:38 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:38 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:38.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45675 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52403 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:23:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1110412571' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45705 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52424 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:23:38 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104169900' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:23:38 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52439 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:39 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45729 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:39 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52457 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:39 np0005539563 nova_compute[252253]: 2025-11-29 09:23:39.266 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:39 np0005539563 nova_compute[252253]: 2025-11-29 09:23:39.287 252257 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:23:39 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:39 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:39 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:39 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52472 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:39 np0005539563 nova_compute[252253]: 2025-11-29 09:23:39.678 252257 DEBUG oslo_service.periodic_task [None req-601b9849-cf69-4ee2-b2eb-be5eb5aa70a3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:23:39 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:23:39 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859689210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:40 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:40 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:40.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45753 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1150461807' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459416611' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:23:40 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:23:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45768 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539563 podman[443032]: 2025-11-29 09:23:40.520035675 +0000 UTC m=+0.068191660 container health_status 5df2122fba2b3521317efe8b65995701d778e53d12b22f2f141e86e42c1290fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Nov 29 04:23:40 np0005539563 podman[443036]: 2025-11-29 09:23:40.527789156 +0000 UTC m=+0.076218230 container health_status 95387ae314acae77a2a107c9e044a8e80bf5ee6b31181ee5064051a0241c500a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 04:23:40 np0005539563 podman[443038]: 2025-11-29 09:23:40.577844453 +0000 UTC m=+0.123395209 container health_status a3da0c465333459e3f729dcc8031997a2ef67859b4d551a1e9b726e518e789bb (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:23:40 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49132 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:40 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:23:40 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:23:40 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:23:40 np0005539563 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:23:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 04:23:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156689358' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 04:23:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49153 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49159 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.52556 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.45822 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:41 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:41 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:41 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.102 - anonymous [29/Nov/2025:09:23:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49174 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:41 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49168 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:23:41 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 04:23:41 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165798035' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 04:23:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49180 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:23:42 np0005539563 radosgw[93236]: ====== starting new request req=0x7efd68aaf6f0 =====
Nov 29 04:23:42 np0005539563 radosgw[93236]: ====== req done req=0x7efd68aaf6f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:23:42 np0005539563 radosgw[93236]: beast: 0x7efd68aaf6f0: 192.168.122.100 - anonymous [29/Nov/2025:09:23:42.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:23:42 np0005539563 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:23:42 np0005539563 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 04:23:42 np0005539563 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778411828' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 04:23:42 np0005539563 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.49192 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
